id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2310.13584 | Upper bounds for the blow-up time of a system of fractional differential
equations with Caputo derivatives and a numerical scheme for the solution of
the system | The article provides upper bounds for the blow-up time of a system of
fractional differential equations in the Caputo sense. Furthermore, concrete
examples of blow-up time estimation are given using a numerical algorithm of
the predictor-corrector type. | José Villa-Morales | 2023-10-20T15:27:53Z | http://arxiv.org/abs/2310.13584v1 | Upper bounds for the blow-up time of a system of fractional differential equations with Caputo derivatives and a numerical scheme for the solution of the system
###### Abstract
The article provides upper bounds for the blow-up time of a system of fractional differential equations in the Caputo sense. Furthermore, concrete examples of blow-up time estimation are given using a numerical algorithm of the predictor-corrector type.
Keywords: Blow-up time, upper bounds, Caputo's fractional differential equations, predictor-corrector. MSC: 26A33, 34A08, 65R20, 34A40
## 1 Introduction
The study of fractional differential equations is a current topic in pure and applied mathematics. This is due in large part to the need for new techniques for their analysis and to the many applications of these equations (see, for example, Kilbas et al. (2006), Zhan et al. (2014), Villa-Morales et al. (2022) and the references therein). Quite concisely, we can say that the theoretical study of fractional differential equations has focused primarily on proving existence and uniqueness theorems (see Zhou and Chua (2012), Bai and Qiu (2009), Sun et al. (2012), Furati and Tatar (2004), Messaoudi et al. (2007), Aldawish and Samet (2022)) or on giving criteria for the non-existence of solutions (see Kirane et al. (2014), Laskri and Tatar (2010), Kerbal (2010), Jleli et al. (2019)). Something similar occurs in the context of fractional partial differential equations (see Kirane et al. (2005), Cabral-Garcia and Villa-Morales (2019)). This work is related to the second topic, that is, criteria for the non-existence of global solutions.
Let us assume that \(\alpha\in(0,1)\) and \(q_{1}\), \(q_{2}\), \(p_{11}\), \(p_{12}\), \(p_{21}\), \(p_{22}\) are non-negative real numbers. In the paper, we will consider the system of Caputo's fractional differential equations,
\[{}^{C}\!D^{\alpha}_{0+}x(t) = t^{q_{1}}x^{p_{11}}(t)y^{p_{12}}(t),\ \ t>0, \tag{1.1}\] \[{}^{C}\!D^{\alpha}_{0+}y(t) = t^{q_{2}}y^{p_{21}}(t)x^{p_{22}}(t),\ \ t>0, \tag{1.2}\]
with initial conditions
\[x(0)>0,\ \ y(0)>0. \tag{1.3}\]
We will say that the solution \((x(t),y(t))\) of the system (1.1)-(1.3) blows up (or explodes) in finite time if there is a positive real number \(\tau_{xy}\) such that the conditions (1.1)-(1.3) are satisfied for all \(0\leq t<\tau_{xy}\) and
\[\lim_{t\uparrow\tau_{xy}}\,\sup\{|x(s)|+|y(s)|:s\in[0,t]\}=\infty. \tag{1.4}\]
The time \(\tau_{xy}\) is called blow-up (or explosion) time. The objective of this work is to give some conditions on the parameters of the system (1.1)-(1.2) in order to obtain upper bounds of the explosion time \(\tau_{xy}\), as a consequence conditions under which the system has no global solution are obtained, see Theorem 3.2.
It should be noted that, although it is a bit tedious, when \(\alpha=1\) and \(q_{1}=q_{2}\) the explosion time of the system of ordinary differential equations (1.1)-(1.2) can be given explicitly. To the best of our knowledge, there are no non-trivial fractional differential equations whose explosion time is explicitly known. In addition, it is convenient to point out that there are practical applications of modeling with fractional differential equations where the explosion occurs in a finite time. For example, this occurs in the study of the fracture of materials or in the study of the combustion of gases, see for example Zhan et al. (2014) or Bebernes and Eberly (1989); in these cases, abrupt changes occur in the phenomenon under consideration. Due to this, when modeling, it is convenient to have numerical and mathematical methods that allow us to bound the blow-up time of a system of interest.
Roughly speaking, the difficulty of finding upper bounds for the blow-up time of the system \((x(t),y(t))\), solution of (1.1)-(1.2), lies in comparing the growth of the components of the system, that is, in verifying, for example, that after a certain time the component \(x(t)\) is greater than or equal to the component \(y(t)\). When this occurs, the problem reduces, in a sense, to the study of a single fractional inequality, see Proposition 3.1. We overcome this difficulty by introducing a new inequality related to the difference of the components of the system; determining its sign allows us to compare the components wherever the system has a solution, see the inequality (3.20). As far as we know, there are no comparison theorems for the solutions of a system of fractional differential equations that can be applied to this type of system; for this reason, both the technique for studying the blow-up time and the upper bounds obtained appear to be new.
Since, in general, it is rare to obtain explicit solutions of a system of differential equations, it is necessary to consider a numerical scheme to give an idea of its solution. In our case, we adapt a numerical scheme that is a modification of the fractional Euler method incorporating a corrector term. Using this numerical algorithm, it is possible to estimate the blow-up time of the system (1.1)-(1.2) graphically. Three illustrative numerical examples are discussed.
The article is organized as follows. We present some definitions and certain preliminary results in Section 2; there it is worth highlighting the important Jensen-type inequality of Proposition 2.3. In Section 3, we study a fractional inequality and prove the main result. Finally, in Section 4, we give a numerical algorithm for solving a general system of fractional differential equations and consider three illustrative numerical examples.
## 2 Preliminaries
In what follows we will assume that \(\alpha\in(0,1)\) and \(a\) will be a real number. We will begin by recalling some definitions and basic properties that will be useful to us. If \(u(t)\) is an absolutely continuous function, then the left-sided Caputo fractional derivative is defined as
\[{}^{C}\!D^{\alpha}_{a+}u(t)=\frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}\frac{u^{ \prime}(s)ds}{(t-s)^{\alpha}},\ \ t>a, \tag{2.1}\]
where \(\Gamma\) is the usual gamma function.
On the other hand, the right-sided Riemann-Liouville fractional derivative of a function \(u(t)\) is defined as
\[{}^{RL}\!D^{\alpha}_{a-}u(t)=-\frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{t}^ {a}\frac{u(s)ds}{(s-t)^{\alpha}},\ \ t<a.\]
Another concept that we will use is the following: the left-sided Riemann-Liouville fractional integral of a function \(u(t)\) is defined as
\[I^{\alpha}_{a+}u(t)=\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\frac{u(s)ds}{(t-s)^{ 1-\alpha}},\ \ t>a. \tag{2.2}\]
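For readers who wish to experiment numerically, the two definitions above can be evaluated directly by quadrature. The following is a minimal Python sketch (the function names are ours, not from the paper); it uses SciPy's algebraic-weight quadrature to absorb the weak singularity \((t-s)^{-\alpha}\), and checks the Caputo derivative of \(u(t)=t^{2}\) against the known closed form \(2t^{2-\alpha}/\Gamma(3-\alpha)\).

```python
from math import gamma
from scipy.integrate import quad

def caputo_derivative(du, t, alpha, a=0.0):
    """Caputo derivative (2.1): (1/Gamma(1-alpha)) * int_a^t u'(s) (t-s)^(-alpha) ds.
    The singular factor (t-s)^(-alpha) is passed to quad as an algebraic weight."""
    val, _ = quad(du, a, t, weight='alg', wvar=(0.0, -alpha))
    return val / gamma(1.0 - alpha)

def rl_integral(u, t, alpha, a=0.0):
    """Riemann-Liouville fractional integral (2.2) of u at time t > a."""
    val, _ = quad(u, a, t, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

alpha, t = 0.5, 2.0
exact = 2.0 * t**(2.0 - alpha) / gamma(3.0 - alpha)      # known Caputo derivative of t^2
approx = caputo_derivative(lambda s: 2.0 * s, t, alpha)  # pass u'(s) = 2s
print(approx, exact)  # the two values should agree to quadrature accuracy
```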
We compile some basic properties in the next result.
**Lemma 2.1**.: _Let \(\beta\) be a real number, then_
\[\left({}^{RL}\!D^{\alpha}_{a-}(a-s)^{\beta-1}\right)(t) = \begin{cases}0,&\beta=\alpha,\\ \frac{\Gamma(\beta)}{\Gamma(\beta-\alpha)}(a-t)^{\beta-\alpha-1},&\beta\neq \alpha.\end{cases} \tag{2.3}\]
_If the function \(u(t)\) is continuous, then_
\[(I^{\alpha}_{a+}\,{}^{C}\!D^{\alpha}_{a+}u)(t)=u(t)-u(a),\ \ t>a. \tag{2.4}\]
Proof.: The derivative (2.3) is formulae (2.1.19) and (2.1.20) in Kilbas et al. (2006). On the other hand, Lemma 2.22 of Kilbas et al. (2006) contains formula (2.4).
For \(\alpha>0\) and \(\beta\in\mathbb{R}\) the Mittag-Leffler function \(E_{\alpha,\beta}\) is defined by
\[E_{\alpha,\beta}(t)=\sum_{k=0}^{\infty}\frac{t^{k}}{\Gamma(\alpha k+\beta)}, \ \ t\in\mathbb{R}. \tag{2.5}\]
In particular, \(E_{\alpha,1}\) will be denoted by \(E_{\alpha}\).
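Since the series (2.5) converges for every real \(t\), a truncated sum is adequate for the moderate arguments that appear in this paper. Below is a minimal sketch (our own, assuming \(\alpha,\beta>0\) as used here; large negative arguments call for dedicated algorithms). The check \(E_{1,1}(t)=e^{t}\) follows directly from the definition.

```python
from math import gamma, exp

def mittag_leffler(alpha, beta, t, kmax=200, tol=1e-15):
    """Truncated series (2.5): E_{alpha,beta}(t) = sum_k t^k / Gamma(alpha*k + beta)."""
    s, tk = 0.0, 1.0
    for k in range(kmax):
        x = alpha * k + beta
        if x > 170.0:   # math.gamma overflows past ~171; such terms are negligible anyway
            break
        term = tk / gamma(x)
        s += term
        if abs(term) < tol * max(1.0, abs(s)):
            break
        tk *= t
    return s

print(mittag_leffler(1.0, 1.0, 1.0), exp(1.0))   # E_{1,1}(t) = e^t
print(mittag_leffler(0.5, 0.5, -1.0))            # enters e_alpha^{lambda(a-t)} in (2.6)
```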
We have the following elementary result.
**Lemma 2.2**.: _Let us assume that \(\lambda\) and \(a\) are two real numbers and set_
\[e^{\lambda(a-t)}_{\alpha}:=(a-t)^{\alpha-1}E_{\alpha,\alpha}[\lambda(a-t)^{ \alpha}],\ \ t<a. \tag{2.6}\]
_If \(0<\alpha\leq 1\), then the function \(e^{\lambda(a-t)}_{\alpha}\) is positive and_
\[{}^{RL}\!D^{\alpha}_{a-}e^{\lambda(a-t)}_{\alpha}-\lambda e^{\lambda(a-t)}_{ \alpha}=0,\ \ t<a. \tag{2.7}\]
Proof.: To verify that \(e^{\lambda(a-t)}_{\alpha}\) is positive it suffices to prove that \(E_{\alpha,\alpha}\) is positive. It is clear that \(E_{\alpha,\alpha}(t)>0\) if \(t\geq 0\). On the other hand, the function \(E_{\alpha}(-t)\), with \(t\geq 0\), is completely monotonic, namely
\[E_{\alpha}(-t)=\int_{0}^{\infty}e^{-ts}dF_{\alpha}(s),\]
where \(F_{\alpha}(s)\) is a bounded function and \(F^{\prime}_{\alpha}(s)\geq 0\) (moreover, an explicit expression for \(F^{\prime}_{\alpha}(s)\) can be found in Pollard (1948)). With this we deduce
\[E_{\alpha,\alpha}(-t)=-\alpha\frac{d}{dt}E_{\alpha}(-t)=\alpha\int_{0}^{ \infty}se^{-ts}dF_{\alpha}(s)>0.\]
The equation (2.7) is easily established using the property (2.3) and the linearity of the fractional derivative.
**Proposition 2.3**.: _Let \(\gamma\geq 2\) and \(u(t)\) be an absolutely continuous function on \((a,b)\). If_
\[u(t)\geq u(0)\geq 0,\ \ t\in(a,b), \tag{2.8}\]
_then_
\[{}^{C}\!D^{\alpha}_{a+}u^{\gamma}(t)\leq\gamma u^{\gamma-1}(t)\,^{C}\!D^{ \alpha}_{a+}u(t),\ \ t\in(a,b). \tag{2.9}\]
Proof.: The Definition (2.1) and the use of Fubini's theorem imply
\[u^{\gamma-1}(t)\,^{C}\!D^{\alpha}_{a+}u(t)-\frac{1}{\gamma}\,^{ C}\!D^{\alpha}_{a+}u^{\gamma}(t) = \frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}\frac{1}{(t-r)^{\alpha}} \left[u^{\gamma-1}(t)u^{\prime}(r)-u^{\gamma-1}(r)u^{\prime}(r)\right]dr\] \[= \frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}\frac{u^{\prime}(r)}{(t-r) ^{\alpha}}\int_{r}^{t}\frac{du^{\gamma-1}}{ds}(s)dsdr\] \[= \frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}(\gamma-1)u^{\gamma-2}(s)u^ {\prime}(s)\int_{a}^{s}\frac{u^{\prime}(r)}{(t-r)^{\alpha}}drds.\]
Inasmuch as
\[\frac{\partial}{\partial s}\left(\int_{a}^{s}\frac{u^{\prime}(r)}{(t-r)^{ \alpha}}dr\right)^{2}=\frac{2u^{\prime}(s)}{(t-s)^{\alpha}}\int_{a}^{s}\frac{ u^{\prime}(r)}{(t-r)^{\alpha}}dr,\]
then
\[u^{\gamma-1}(t)\,^{C}\!D^{\alpha}_{a+}u(t)-\frac{1}{\gamma}\,^{ C}\!D^{\alpha}_{a+}u^{\gamma}(t) = \frac{\gamma-1}{2\Gamma(1-\alpha)}\int_{a}^{t}u^{\gamma-2}(s)(t-s )^{\alpha}\frac{\partial}{\partial s}\left(\int_{a}^{s}\frac{u^{\prime}(r)}{( t-r)^{\alpha}}dr\right)^{2}ds.\]
In this way, the usual integration by parts and the inequality (2.8) yield
\[u^{\gamma-1}(t)\,^{C}\!D^{\alpha}_{a+}u(t)-\frac{1}{\gamma}\,^{ C}\!D^{\alpha}_{a+}u^{\gamma}(t) \geq \frac{(\gamma-1)u^{\gamma-2}(0)}{2\Gamma(1-\alpha)}\int_{a}^{t}(t -s)^{\alpha}\frac{\partial}{\partial s}\left(\int_{a}^{s}\frac{u^{\prime}(r)}{ (t-r)^{\alpha}}dr\right)^{2}ds\] \[= \frac{\alpha(\gamma-1)u^{\gamma-2}(0)}{2\Gamma(1-\alpha)}\int_{a} ^{t}(t-s)^{\alpha-1}\left(\int_{a}^{s}\frac{u^{\prime}(r)}{(t-r)^{\alpha}}dr \right)^{2}ds.\]
From here, the result evidently follows.
The hypothesis (2.8) in Proposition 2.3 is not necessary when \(\gamma=2\). In this case, the following references Alikhanov (2010) or Aguila-Camacho et al. (2014) could be consulted.
## 3 Proof of the main result
Next, our first step will be to give an estimate of the blow-up time for a fractional inequality. As is common, for \(p\geq 1\) we denote by \(\tilde{p}\in(1,\infty]\) the conjugate index of \(p\) defined by means of
\[\frac{1}{p}+\frac{1}{\tilde{p}}=1.\]
**Proposition 3.1**.: _Let \(K>0\), \(q\geq 0\) and \(p\geq 1\). Let \(u(t)\) be a non-negative solution of the following Caputo's fractional inequality_
\[{}^{C}\!D^{\alpha}_{0+}u(t)\geq Kt^{q}u^{p}(t),\ \ t>0, \tag{3.1}\]
_with initial condition_
\[u(0)=u_{0}>0.\]
_If \(q+1>q\tilde{p}\), then \(u(t)\) blows up in finite time; moreover, the blow-up time \(\tau_{u}\) is less than or equal to_
\[\tau(u_{0},q,p):=\left(\frac{\Gamma(q(1-\tilde{p})+1)}{(u_{0})^{p}\Gamma(q+1) }B(\lambda_{m})\right)^{1/(\tilde{p}(\alpha+q))}, \tag{3.2}\]
_where \(B(\lambda_{m})=\min\{B(\lambda):\lambda>\alpha\tilde{p}-1\}\) with_
\[B(\lambda):=\frac{\Gamma(\lambda+1)^{\tilde{p}-1}\,\Gamma(\lambda+1-\alpha \tilde{p})\,\Gamma(q+\lambda+2)}{\Gamma(\lambda+1-\alpha)^{\tilde{p}}\,\Gamma (q+\lambda+2-\tilde{p}(q+\alpha))}. \tag{3.3}\]
Proof.: We will use the well-known capacity method, see for example Kassim et al. (2017). Let us assume that the solution \(u(t)\) of the inequality (3.1) is defined on \([0,s]\), for some
\[s>\tau(u_{0},q,p). \tag{3.4}\]
For \(\lambda>\alpha\tilde{p}-1\) let us introduce the test function
\[\varphi_{s}(t):=\frac{(s-t)^{\lambda}}{s^{\lambda}},\ \ t\in(0,s). \tag{3.5}\]
Multiplying the inequality (3.1) by \(\varphi_{s}\) and integrating from \(0\) to \(s\) gives
\[\int_{0}^{s}\varphi_{s}(t)Kt^{q}u^{p}(t)dt\leq\int_{0}^{s}\varphi_{s}(t)\,{}^ {C}\!D^{\alpha}_{0+}u(t)dt. \tag{3.6}\]
Using the integration by parts formula for the Caputo fractional derivative (see Agrawal (2007) or Kassim et al. (2017)) we have
\[\int_{0}^{s}\varphi_{s}(t)\,{}^{C}\!D^{\alpha}_{0+}u(t)dt=\int_{0}^{s}u(t)\,{ }^{RL}\!D^{\alpha}_{s-}\varphi_{s}(t)dt-\frac{\Gamma(\lambda+1)}{\Gamma( \lambda-\alpha+2)}s^{1-\alpha}u(0). \tag{3.7}\]
Using the formula
\[{}^{RL}\!D^{\alpha}_{s-}\varphi_{s}(t) = \frac{\Gamma(\lambda+1)}{\Gamma(\lambda+1-\alpha)}\cdot\frac{ \varphi_{s}(t)}{(s-t)^{\alpha}},\ \ 0\leq t<s, \tag{3.8}\]
and \(u(0)\geq 0\), we obtain
\[\int_{0}^{s}\varphi_{s}(t)Kt^{q}u^{p}(t)dt \leq \frac{\Gamma(\lambda+1)K}{\Gamma(\lambda+1-\alpha)}\int_{0}^{s}u(t) \frac{\varphi_{s}(t)}{(s-t)^{\alpha}}dt\] \[= \frac{\Gamma(\lambda+1)K}{\Gamma(\lambda+1-\alpha)}\int_{0}^{s} \varphi_{s}(t)^{\frac{1}{p}}t^{\frac{q}{p}}u(t)\cdot\frac{\varphi_{s}(t)^{1- \frac{1}{p}}t^{-\frac{q}{p}}}{(s-t)^{\alpha}}dt.\]
Employing Young's inequality with \(\varepsilon\), for \(\varepsilon\in(0,\Gamma(\lambda+1-\alpha)/\Gamma(\lambda+1)]\), see Appendix B in Evans (2010), we get
\[\int_{0}^{s}\varphi_{s}(t)Kt^{q}u^{p}(t)dt \leq \frac{\Gamma(\lambda+1)K}{\Gamma(\lambda+1-\alpha)}\left\{ \varepsilon\int_{0}^{s}\varphi_{s}(t)t^{q}u(t)^{p}dt+C(\varepsilon)\int_{0}^ {s}\frac{\varphi_{s}(t)^{\tilde{p}\left(1-\frac{1}{p}\right)}t^{-\frac{q \tilde{p}}{p}}}{(s-t)^{\alpha\tilde{p}}}dt\right\},\]
where
\[C(\varepsilon)=\frac{1}{(\varepsilon p)^{\tilde{p}/p}\,\tilde{p}}. \tag{3.9}\]
An elementary algebraic manipulation leads us to the inequality
\[\int_{0}^{s}\varphi_{s}(t)Kt^{q}u^{p}(t)dt\left(1-\frac{ \varepsilon\Gamma(\lambda+1)}{\Gamma(\lambda+1-\alpha)}\right) \leq \frac{C(\varepsilon)\Gamma(\lambda+1)K}{\Gamma(\lambda+1-\alpha)} \int_{0}^{s}\frac{\varphi_{s}(t)t^{q(1-\tilde{p})}}{(s-t)^{\alpha\tilde{p}}}dt \tag{3.10}\] \[= \frac{C(\varepsilon)\Gamma(\lambda+1)K}{\Gamma(\lambda+1-\alpha) }\cdot B(\lambda+1-\alpha\tilde{p},q(1-\tilde{p})+1)\] \[\cdot\;s^{1-\alpha\tilde{p}+q(1-\tilde{p})}.\]
On the other hand, applying the operator \(I_{0+}^{\alpha}\) to the inequality (3.1) we get, by (2.4),
\[u(t)\geq u(0)+I_{0+}^{\alpha}(Kt^{q}u(t)^{p})\geq u_{0},\;\;0<t<s, \tag{3.11}\]
then
\[\int_{0}^{s}\varphi_{s}(t)Kt^{q}u^{p}(t)dt \geq (u_{0})^{p}K\int_{0}^{s}\varphi_{s}(t)t^{q}dt\] \[= (u_{0})^{p}KB(q+1,\lambda+1)\,s^{1+q}.\]
This inequality and (3.10) imply
\[s^{\tilde{p}(\alpha+q)} \leq \frac{C(\varepsilon)\Gamma(\lambda+1)}{\Gamma(\lambda+1-\alpha)- \varepsilon\Gamma(\lambda+1)}\cdot\frac{B(\lambda+1-\alpha\tilde{p},q(1- \tilde{p})+1)}{u(0)^{p}B(q+1,\lambda+1)} \tag{3.12}\] \[= \frac{\Gamma(q(1-\tilde{p})+1)}{u(0)^{p}\Gamma(q+1)}\cdot\frac{ \Gamma(\lambda+1-\alpha\tilde{p})\Gamma(q+\lambda+2)}{\Gamma(q+\lambda+2- \tilde{p}(q+\alpha))}\cdot H(\varepsilon),\]
where
\[H(\varepsilon):=\frac{C(\varepsilon)}{\Gamma(\lambda+1-\alpha)-\varepsilon \Gamma(\lambda+1)}.\]
Since the right-hand side of inequality (3.12) is valid for any \(0<\varepsilon<\Gamma(\lambda+1-\alpha)/\Gamma(\lambda+1)\), we use the identity (3.9) to minimize the function \(H(\varepsilon)\) and get
\[s \leq \left(\frac{\Gamma(q(1-\tilde{p})+1)}{(u_{0})^{p}\Gamma(q+1)} \cdot B(\lambda)\right)^{1/(\tilde{p}(\alpha+q))},\;\;\lambda>\alpha\tilde{p}-1, \tag{3.13}\]
where \(B(\lambda)\) is defined in (3.3). Taking the infimum over \(\lambda\) in the above inequality we deduce that \(s\leq\tau(u_{0},q,p)\), contradicting the inequality (3.4). In this way, the desired result is achieved.
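In practice, the bound (3.2) is easy to evaluate: one minimizes \(B(\lambda)\) of (3.3) numerically over the admissible range and assembles \(\tau(u_{0},q,p)\). The sketch below is our own (it relies on the convexity of \(B\) observed in Section 4, and the search window is heuristic) and works with log-gamma functions for numerical stability. For the parameters of Example 1 below (\(\alpha=0.9\), \(u_{0}=1.2\), \(q=1.5\), \(p=5.42\)), it should return values close to the tabulated \(\tau_{ub}\approx 1.415\) and \(\lambda_{m}\approx 0.315\).

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def tau_upper_bound(u0, q, p, alpha):
    """Evaluate the bound (3.2): minimize B(lambda) of (3.3), then assemble tau."""
    pt = p / (p - 1.0)                                   # conjugate index, p > 1 assumed
    assert q + 1.0 > q * pt, "hypothesis q + 1 > q*p~ of Proposition 3.1 fails"

    def logB(lam):                                       # log of B(lambda) in (3.3)
        return ((pt - 1.0) * gammaln(lam + 1.0)
                + gammaln(lam + 1.0 - alpha * pt)
                + gammaln(q + lam + 2.0)
                - pt * gammaln(lam + 1.0 - alpha)
                - gammaln(q + lam + 2.0 - pt * (q + alpha)))

    # All gamma arguments must stay positive on the search interval.
    lo = max(alpha * pt - 1.0, pt * (q + alpha) - q - 2.0) + 1e-9
    res = minimize_scalar(logB, bounds=(lo, lo + 100.0), method='bounded')
    log_tau = (gammaln(q * (1.0 - pt) + 1.0) - p * np.log(u0)
               - gammaln(q + 1.0) + res.fun) / (pt * (alpha + q))
    return np.exp(log_tau), res.x                        # (tau, lambda_m)

print(tau_upper_bound(1.2, 1.5, 5.42, 0.9))
```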
We are now in a position to state and prove our main result.
**Theorem 3.2**.: _Let \(\alpha\in(0,1)\) and \(q_{1}\), \(q_{2}\), \(p_{11}\), \(p_{12}\), \(p_{21}\), \(p_{22}\) be non-negative real numbers. We will consider the system of Caputo's fractional differential equations,_
\[{}^{C}\!D^{\alpha}_{0+}x(t) = t^{q_{1}}x^{p_{11}}(t)y^{p_{12}}(t),\ \ t>0, \tag{3.14}\] \[{}^{C}\!D^{\alpha}_{0+}y(t) = t^{q_{2}}y^{p_{21}}(t)x^{p_{22}}(t),\ \ t>0, \tag{3.15}\]
_with initial conditions_
\[x(0)=x_{0}>0,\ \ y(0)=y_{0}>0.\]
**Case \(q_{1}\neq q_{2}\):** _Let us set \(q_{i}:=\min\{q_{1},q_{2}\}\), \(j:=3-i\) and_
\[p_{j}:=p_{ji}+p_{jj}\gamma_{j},\]
_where_
\[\gamma_{j}:=\frac{p_{ij}+1-p_{ji}}{2}.\]
_If_
\[p_{ij}\geq 3+p_{ji},\ \ p_{ii}+1\geq p_{jj},\ \ q_{j}+1>q_{j}\tilde{p}_{j}, \tag{3.16}\]
_then the solution of system (3.14)-(3.15) blows up in finite time. Moreover, the blow-up time, \(\tau_{xy}\), is less than or equal to \(\tau(u_{j},q_{j},p_{j})\), where \(\tau(u_{0},q,p)\) is defined in (3.2) and_
\[u_{j} = \begin{cases}x_{0},&j=1,\\ y_{0},&j=2.\end{cases}\]
**Case \(q_{1}=q_{2}\):** _In this case, if_
\[p_{22}\geq 3+p_{11},\ \ p_{21}+1\geq p_{12},\ \ q_{1}+1>q_{1}\tilde{p}_{1}, \tag{3.17}\]
_then the solution of system (3.14)-(3.15) blows up in finite time and the blow-up time \(\tau_{xy}\) is less than or equal to \(\tau(x_{0},q_{1},p_{1})\), where_
\[p_{1}:=p_{11}+p_{12}\gamma_{1},\ \ \gamma_{1}:=\frac{p_{22}+1-p_{11}}{2}.\]
_Or, if_
\[p_{12}\geq 3+p_{21},\ \ p_{11}+1\geq p_{22},\ \ q_{2}+1>q_{2}\tilde{p}_{2}, \tag{3.18}\]
_then the solution of system (3.14)-(3.15) blows up in finite time and the blow-up time \(\tau_{xy}\) is less than or equal to \(\tau(y_{0},q_{2},p_{2})\), where_
\[p_{2}:=p_{21}+p_{22}\gamma_{2},\ \ \gamma_{2}:=\frac{p_{12}+1-p_{21}}{2}.\]
Proof.: The local existence of a positive solution \((x(t),y(t))\) of system (3.14)-(3.15) is obtained by proceeding as in the classical case (i.e., the non-fractional one). Indeed, instead of treating the system of fractional differential equations, the associated system of integral equations is studied (see formula (3.5.4) in (Kilbas et al., 2006)). A first step consists of applying a version of Banach's contraction principle to a mapping, determined by the system of integral equations, in order to find a positive solution \((x(t),y(t))\) in a certain interval \([0,\delta)\), that is, we obtain a local solution (see Theorem 3.25 in (Kilbas et al., 2006), for example). For the second step, let \([0,t_{\max})\) be the maximum interval of existence of the solution \((x(t),y(t))\). If \(t_{\max}<\infty\), then \(\lim_{t\uparrow t_{\max}}\,\sup\{x(s)+y(s):s\in[0,t]\}=\infty\). If such a limit were finite, then we can proceed as in the first step and deduce that we can extend the solution to the time interval \([0,t_{\max}+\delta)\), which is absurd (see, for example, the proof of Theorem 1.4 in Chapter 6 of Pazy (1983)). Therefore, the maximum time of existence of the solution is precisely the blow-up (explosion) time.
Now, we will concentrate on finding upper bounds for the blow-up time \(\tau_{xy}\) of the system (3.14)-(3.15). Without loss of generality, we will suppose that we have \(q_{2}\geq q_{1}\), \(p_{12}\geq 3+p_{21}\), \(p_{11}+1\geq p_{22}\) and \(q_{2}+1>q_{2}\tilde{p}_{2}\); in the other cases the procedure is similar, therefore we omit them. We will see that the blow-up time \(\tau_{xy}\) of the system is less than or equal to \(\tau(y_{0},q_{2},p_{2})\). We proceed by contradiction, that is, \(\tau(y_{0},q_{2},p_{2})<\tau_{xy}\), then the system (3.14)-(3.15) has a solution \((x(s),y(s))\) in \([0,t]\), for some
\[\tau(y_{0},q_{2},p_{2})<t<\tau_{xy}. \tag{3.19}\]
Under this assumption it makes sense to introduce the following function
\[J(s)=Mx(s)-y^{\gamma_{2}}(s),\ \ 0\leq s\leq t,\]
where \(M\) is a positive constant that will be fixed later. Observe that the hypotheses imply \(\gamma_{2}\geq 2\), and this condition is necessary to apply Proposition 2.3. Using the linearity of the Caputo fractional derivative, (2.9), (3.14) and (3.15) we obtain
\[{}^{C}\!D^{\alpha}_{0+}J(s) = M\,{}^{C}\!D^{\alpha}_{0+}x(s)-\,{}^{C}\!D^{\alpha}_{0+}y^{ \gamma_{2}}(s)\] \[\geq M\,{}^{C}\!D^{\alpha}_{0+}x(s)-\gamma_{2}\,y^{\gamma_{2}-1}(s)\, {}^{C}\!D^{\alpha}_{0+}y(s)\] \[= Ms^{q_{1}}x^{p_{11}}(s)y^{p_{12}}(s)-\gamma_{2}\,s^{q_{2}}x^{p_{ 22}}(s)y^{\gamma_{2}-1+p_{21}}(s).\]
Setting
\[h(s):=s^{q_{2}}x^{p_{11}}(s)y^{\gamma_{2}-1+p_{21}}(s),\ \ 0\leq s\leq t,\]
the above inequality implies
\[{}^{C}\!D^{\alpha}_{0+}J(s)+h(s)J(s) \geq x^{p_{11}}(s)y^{p_{12}}(s)(Ms^{q_{1}}-s^{q_{2}})\] \[+\,s^{q_{2}}y^{\gamma_{2}-1+p_{21}}(s)(Mx^{p_{11}+1}(s)-\gamma_{2} \,x^{p_{22}}(s)).\]
Proceeding as in the proof of (3.11) we can verify that
\[x(s)\geq x_{0}>0,\ \ 0\leq s\leq t.\]
If we take
\[M > \max\left\{s^{q_{2}-q_{1}},\gamma_{2}\,x^{p_{22}-p_{11}-1}(s):0 \leq s\leq t\right\}\]
\[= \max\left\{t^{q_{2}-q_{1}},\gamma_{2}\left(x_{0}\right)^{p_{22}-p_{11} -1}\right\},\]
then
\[{}^{C}\!D_{0+}^{\alpha}J(s)+h(s)J(s)>0,\ \ 0\leq s\leq t.\]
Furthermore, if we take \(M>(y_{0})^{\gamma_{2}}(x_{0})^{-1}\), then \(J(0)>0\).
Let us set
\[A:=\{r\in[0,t]:J(s)>0,\mbox{ for all }s\in[0,r]\}.\]
Notice that \(0\in A\). Let us suppose that \(a:=\sup A<t\). This implies
\[{}^{C}\!D_{0+}^{\alpha}J(s)+||h||J(s)>0,\ \ 0\leq s<a, \tag{3.20}\]
where \(||h||:=\sup\{|h(s)|:0\leq s\leq t\}\). We will consider the function \(e_{\alpha}^{-||h||(a-s)}>0\) defined in (2.6). Multiplying by \(e_{\alpha}^{-||h||(a-s)}\) on both sides of the inequality (3.20) and integrating with respect to \(s\) we obtain
\[0 < \int_{0}^{a}e_{\alpha}^{-||h||(a-s)}\,\left[{}^{C}\!D_{0+}^{ \alpha}J(s)+||h||J(s)\right]ds\] \[= \int_{0}^{a}e_{\alpha}^{-||h||(a-s)}\,{}^{C}\!D_{0+}^{\alpha}J(s )ds+\int_{0}^{a}e_{\alpha}^{-||h||(a-s)}||h||J(s)ds\] \[\leq \int_{0}^{a}J(s)\,{}^{RL}\!D_{a-}^{\alpha}e_{\alpha}^{-||h||(a-s) }\,ds+\int_{0}^{a}J(s)||h||e_{\alpha}^{-||h||(a-s)}\,ds\] \[= \int_{0}^{a}J(s)\,\left[{}^{RL}\!D_{a-}^{\alpha}e_{\alpha}^{-||h ||(a-s)}+||h||\,e_{\alpha}^{-||h||(a-s)}\right]ds=0.\]
This contradiction yields \(a=t\), then \(J(s)\geq 0\), \(0\leq s\leq t\), namely
\[x(s)\geq\frac{1}{M}\,y^{\gamma_{2}}(s),\ \ 0\leq s\leq t.\]
From the above inequality and (3.15) we arrive at
\[{}^{C}\!D_{0+}^{\alpha}y(s)\geq\frac{1}{M^{p_{22}}}\,s^{q_{2}}y^{p_{21}+p_{22} \gamma_{2}}(s),\ \ 0\leq s\leq t.\]
If we take \(K=M^{-p_{22}}\), \(q=q_{2}\), \(p=p_{21}+p_{22}\gamma_{2}\) and \(u_{0}=y_{0}\) in Proposition 3.1, we conclude that the system (3.14)-(3.15) cannot have a solution on \([0,t]\); otherwise we would have the inequality \(t\leq\tau(u_{0},q,p)=\tau(y_{0},q_{2},p_{2})\), which contradicts (3.19). This completes the proof.
## 4 Numerical experiments
To find the solution of the system (3.14)-(3.15), in this section we are going to consider a numerical scheme. We will use the predictor-corrector approach introduced in Diethelm et al. (2002) to find numerical solutions. In fact, that paper also includes code that is very simple to implement. It is convenient to point out that in Diethelm et al. (2002) the algorithm is introduced to find the solution of a single fractional differential equation; in our case, we adapt their algorithm to systems of fractional equations. In addition, we present the algorithm for a more general system than the one considered in Section 3, since we believe that it may be of interest to a broader audience.
We are going to consider a numerical scheme for the system of fractional differential equations in Caputo's sense,
\[{}^{C}\!D_{0+}^{\alpha}x(t) = f(t,x(t),y(t)),\ \ t>0, \tag{4.1}\] \[{}^{C}\!D_{0+}^{\alpha}y(t) = g(t,x(t),y(t)),\ \ t>0, \tag{4.2}\]
where \(f(t,x,y)\) and \(g(t,x,y)\) are given functions together with the initial data
\[x(0)=x_{0},\ \ y(0)=y_{0}. \tag{4.3}\]
Let us look at a numerical solution in the time interval \([0,T]\). We denote by \(N\) the number of divisions of \([0,T]\) and let \(t_{n}:=nh\), \(n=0,1,...,N\), be the corresponding regular partition, where \(h:=T/N\). We denote by \((x_{n+1},y_{n+1})\) the numerical approximation of the solution \((x(t),y(t))\) of the system (4.1)-(4.2) at time \(t=t_{n+1}\). For \(j=0,1,...,n+1\), we set
\[x_{n+1}=x_{0}+\frac{h^{\alpha}}{\Gamma(\alpha+2)}\sum_{j=0}^{n} a_{j,n+1}f(t_{j},x_{j},y_{j})+\frac{h^{\alpha}}{\Gamma(\alpha+2)}f(t_{n+1},p_{ n+1},q_{n+1}),\] \[y_{n+1}=y_{0}+\frac{h^{\alpha}}{\Gamma(\alpha+2)}\sum_{j=0}^{n} a_{j,n+1}g(t_{j},x_{j},y_{j})+\frac{h^{\alpha}}{\Gamma(\alpha+2)}g(t_{n+1},p_{ n+1},q_{n+1}),\]
where
\[a_{j,n+1} = \begin{cases}n^{\alpha+1}-(n-\alpha)(n+1)^{\alpha},&j=0,\\ (n-j+2)^{\alpha+1}+(n-j)^{\alpha+1}-2(n-j+1)^{\alpha+1},&1\leq j\leq n,\end{cases}\]
and
\[p_{n+1}=x_{0}+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{n}b_{j,n+1}f(t_ {j},x_{j},y_{j}),\] \[q_{n+1}=y_{0}+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{n}b_{j,n+1}g(t _{j},x_{j},y_{j}),\]
here
\[b_{j,n+1}=\frac{h^{\alpha}}{\alpha}\left((n+1-j)^{\alpha}-(n-j)^{\alpha} \right).\]
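The scheme above translates almost line by line into the following Python sketch (the paper's own implementation is in R, per the acknowledgment; all names here are ours). The paper's predictor pair \((p_{n+1},q_{n+1})\) is renamed `(xp, yp)` to avoid clashing with the exponents \(p\) and \(q\).

```python
import numpy as np
from math import gamma

def frac_pc_system(f, g, x0, y0, alpha, T, N):
    """Fractional predictor-corrector of Diethelm et al. (2002), adapted to the
    two-component system (4.1)-(4.2) with the weights a_{j,n+1} and b_{j,n+1} above."""
    h = T / N
    t = h * np.arange(N + 1)
    x = np.empty(N + 1); y = np.empty(N + 1)
    Fx = np.empty(N + 1); Fy = np.empty(N + 1)
    x[0], y[0] = x0, y0
    Fx[0], Fy[0] = f(t[0], x0, y0), g(t[0], x0, y0)
    c_cor = h**alpha / gamma(alpha + 2.0)          # corrector prefactor h^a / Gamma(a+2)
    c_pre = h**alpha / gamma(alpha + 1.0)          # = (h^a/a)/Gamma(a), predictor prefactor
    for n in range(N):
        j = np.arange(n + 1)
        b = (n + 1.0 - j)**alpha - (n - j)**alpha  # b_{j,n+1} up to the common prefactor
        xp = x0 + c_pre * (b @ Fx[:n + 1])         # predictor (p_{n+1} in the text)
        yp = y0 + c_pre * (b @ Fy[:n + 1])         # predictor (q_{n+1} in the text)
        a = ((n - j + 2.0)**(alpha + 1) + (n - j)**(alpha + 1)
             - 2.0 * (n - j + 1.0)**(alpha + 1))   # a_{j,n+1} for 1 <= j <= n
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1.0)**alpha   # a_{0,n+1}
        x[n + 1] = x0 + c_cor * (a @ Fx[:n + 1] + f(t[n + 1], xp, yp))
        y[n + 1] = y0 + c_cor * (a @ Fy[:n + 1] + g(t[n + 1], xp, yp))
        Fx[n + 1] = f(t[n + 1], x[n + 1], y[n + 1])
        Fy[n + 1] = g(t[n + 1], x[n + 1], y[n + 1])
    return t, x, y

# Example 1 below: the solution blows up, so integrate only on a short interval.
f = lambda t, x, y: t**0.5 * x**1.5 * y**3.6
g = lambda t, x, y: t**1.5 * y**0.5 * x**2.4
t, x, y = frac_pc_system(f, g, 1.0, 1.2, alpha=0.9, T=0.6, N=3000)
```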
The blow-up time obtained from the numerical scheme is denoted by \(t_{\rm num}\). More precisely, this time is deduced from the graph of the numerical solution of the system of interest. It should be noted that the objective of this work is not to design a numerical scheme to determine the blow-up time; this is a wide topic of study in itself, see for example Perez and Villa-Morales (2022) and the references therein. Our objective here is to show that the explosion time obtained in Theorem 3.2, which we will denote by \(\tau_{ub}\), is an upper bound for the numerical blow-up time, \(t_{\rm num}\).
We are going to consider three examples, which correspond to the three possible cases that can occur in Theorem 3.2.
**Example 1**.: Let us examine the system of fractional differential equations
\[{}^{C}\!D_{0+}^{\alpha}x(t) = t^{0.5}x^{1.5}(t)y^{3.6}(t),\ \ t>0, \tag{4.4}\] \[{}^{C}\!D_{0+}^{\alpha}y(t) = t^{1.5}y^{0.5}(t)x^{2.4}(t),\ \ t>0, \tag{4.5}\]
with initial conditions
\[x(0)=1,\ \ y(0)=1.2.\]
We apply the predictor-corrector algorithm to system (4.4)-(4.5) to obtain the graph of its solution, which can be seen in Figure 1. From the graphical representation we obtain an estimate of the numerical explosion time \(t_{\rm num}\), for the fractional index \(\alpha=0.1,0.4,0.6,0.9\).
On the other hand, the parameters
\[q_{1}=0.5,\ \ p_{11}=1.5,\ \ p_{12}=3.6,\ \ q_{2}=1.5,\ \ p_{21}=0.5,\ \ p_{22}=2.4,\]
satisfy the conditions of (3.16), so we can apply Theorem 3.2. In order to do this, we need to know the minimum value of the function \(B\), defined in (3.3). Because such a function is quite complicated, it is not obvious that a minimum value exists. Fortunately, when graphing the function \(B\), see Figure 2, we note that this function is convex for each value of \(\alpha\); therefore the existence of the value sought follows. The function \(B\) has a similar behaviour in the following two examples, so we will omit its graph. With the minimum value of \(B\), we get the upper bound \(\tau_{ub}\) of the blow-up time \(\tau_{xy}\).

Figure 1: Graphs of the solutions of system (4.4)-(4.5).
We concentrate this information in the following table:

| \(\alpha\) | \(0.1\) | \(0.4\) | \(0.6\) | \(0.9\) |
|---|---|---|---|---|
| \(\lambda_{m}\) | \(-0.802...\) | \(-0.358...\) | \(-0.083...\) | \(0.315...\) |
| \(t_{\rm num}\) | \(0.085\) | \(0.28\) | \(0.44\) | \(0.66\) |
| \(\tau_{ub}\) | \(0.720...\) | \(0.998...\) | \(1.169...\) | \(1.415...\) |

From this table, we clearly see that upper bounds have been obtained for the blow-up (explosion) time. In graph (d) of Figure 1 it can be seen that the function \(y(t)\) begins to grow; in fact, if it is graphed separately, it can be seen that it also explodes in finite time. Its blow-up time is approximately \(0.69\).
Figure 2: Graphs of \(B\) given in Example 1.
**Example 2**.: Now let us consider the system of fractional differential equations
\[{}^{C}\!D^{\alpha}_{0+}x(t) = y^{3.2}(t),\ \ t>0, \tag{4.6}\] \[{}^{C}\!D^{\alpha}_{0+}y(t) = y^{0.2}(t)x^{0.5}(t),\ \ t>0, \tag{4.7}\]
with initial conditions
\[x(0)=0.5,\ \ y(0)=0.5.\]
Applying the predictor-corrector algorithm, the graphs of the solutions of system (4.6)-(4.7), for the parameter \(\alpha=0.1,0.4,0.6,0.9\), are obtained, and they appear in Figure 3. From here we estimate the numerical value of the explosion time, \(t_{\rm num}\).
We observe that the parameters

\[q_{1}=0,\ \ p_{11}=0,\ \ p_{12}=3.2,\ \ q_{2}=0,\ \ p_{21}=0.2,\ \ p_{22}=0.5,\]

of the system of equations (4.6)-(4.7) meet, in this case, conditions (3.18); therefore we can apply Theorem 3.2 to obtain an upper estimate \(\tau_{ub}\) of the blow-up time \(\tau_{xy}\) of this system. We summarize the results in the following table:
Figure 3: Graphs of the solutions of system (4.6)-(4.7).
| \(\alpha\) | \(0.1\) | \(0.4\) | \(0.6\) | \(0.9\) |
|---|---|---|---|---|
| \(t_{\rm num}\) | \(0.35\) | \(3.8\) | \(5.1\) | \(6.9\) |
| \(\tau_{ub}\) | \(8.899...\) | \(6.333...\) | \(7.297...\) | \(8.948...\) |
As a result, we can conclude that \(\tau_{ub}\) is an upper bound for the numerical blow-up time, \(t_{\rm num}\). As in the previous case, if we graph the function \(y(t)\) separately we can determine its explosion time, which in this case is approximately \(8.7\). Remember that the explosion time of a system is the minimum of the explosion times of each component.
**Example 3**: _Here we deal with the system of fractional differential equations_
\[{}^{C}\!D_{0+}^{\alpha}x(t) = t^{0.5}x(t)y^{3}(t),\ \ t>0, \tag{4.8}\] \[{}^{C}\!D_{0+}^{\alpha}y(t) = t^{0.5}y^{2}(t)x^{4}(t),\ \ t>0, \tag{4.9}\]
_with initial conditions_
\[x(0)=1,\ \ y(0)=1.\]
_As in the previous cases, we apply the predictor-corrector algorithm to obtain the graphical solutions of system (4.8)-(4.9); we present them in Figure 4. Numerical estimates for time \(t_{\rm num}\) are obtained from them. The parameters of the system (4.8)-(4.9) are_
\[q_{1}=0.5,\ \ p_{11}=1,\ \ p_{12}=3,\ \ q_{2}=0.5,\ \ p_{21}=2,\ \ p_{22}=4,\]
_and they meet the conditions imposed in (3.17); therefore we can apply Theorem 3.2 to obtain \(\tau_{ub}\), which provides us with an upper bound for the numerical explosion time, \(t_{\rm num}\). We put this information together in the following table:_
| \(\alpha\) | \(0.1\) | \(0.4\) | \(0.6\) | \(0.9\) |
|---|---|---|---|---|
| \(t_{\rm num}\) | \(0.019\) | \(0.11\) | \(0.21\) | \(0.42\) |
| \(\tau_{ub}\) | \(1.228...\) | \(1.551...\) | \(1.726...\) | \(1.967...\) |
_Here we again see that \(\tau_{ub}\) is an upper bound for \(t_{\rm num}\). In this case, both components have approximately the same explosion time._
## 5 Conclusions
In the present work, upper bounds have been obtained for the blow-up time of a system of fractional differential equations in the Caputo sense (1.1)-(1.2). As a consequence, sufficient conditions have been derived under which the system of fractional differential equations does not have a global solution. Furthermore, based on the predictor-corrector algorithm, a numerical scheme has been proposed to solve general systems of fractional differential equations. Using this numerical method, three illustrative examples have been presented that correspond to each of the cases of Theorem 3.2.
_Acknowledgment:_ The author was partially supported by the grant PIM22-1 of Universidad Autonoma de Aguascalientes. Thanks to Jose Miguel Villa-Ocampo for his help in the implementation of the numerical algorithm in R. |
2304.00301 | Investigation of Optical Pumping in Cesium Atoms with a Radio-Frequency
Field, Using Liouville Equation | Optical pumping is a technique for engineering atomic-sublevel population of
desired atoms. We investigate the population evolution of Cesium atoms by
employing Liouville equation. For this purpose, we apply a circularly polarized
light at a frequency suitable for electronic transition from ground states to
excited states and calculate the relaxation rate, repopulation, and population
evolution of the Cesium Zeeman sublevels. For engineering the sublevel
population after optical pumping, we employ a radiofrequency (RF) field and
consider the effect of RF field in Liouville equation. With this approach, we
are able to prepare desired distribution of the population in the atomic
sublevels with high efficiency, which can be employed in different optical
experiments. | Hossein Davoodi Yeganeh, Zahra Shaterzadeh-Yazdi | 2023-04-01T11:49:28Z | http://arxiv.org/abs/2304.00301v2 | Investigation of Optical Pumping in Cesium atoms with a Radio-Frequency field, Using Liouville equation
###### Abstract
Optical pumping is a technique for engineering atomic-sublevel population of desired atoms. We investigate the population evolution of Cesium atoms by employing Liouville equation. For this purpose, we apply a circularly polarized light at a frequency suitable for electronic transition from ground states to excited states and calculate the relaxation rate, repopulation, and population evolution of the Cesium Zeeman sublevels. For engineering the sublevel population after optical pumping, we employ a radiofrequency (RF) field and consider the effect of RF field in Liouville equation. With this approach, we are able to prepare desired distribution of the population in the atomic sublevels with high efficiency, which can be employed in different optical experiments.
**Keywords**: Optical pumping, Radio frequency field, Alkali atoms, Cesium atom
## 1 Introduction
Optical pumping is a process in which photons of light interact with the constituent atoms of a material. In an isolated collection of atoms in the form of a gas, the atoms occupy their energy states at a given temperature in the way predicted by standard statistical mechanics. When the atoms are exposed to a stream of photons, these photons play a key role in redistributing the states occupied by the atoms [1, 2]. Among the different methods used for engineering atomic-sublevel populations, such as chirped-laser-pulse population transfer [3] and laser-induced population transfer [4], optical pumping is the most practical one. This method has applications in many fields, such as magneto-optical trapping and laser cooling [5].
Population evolution of atomic sublevel states, caused by optical pumping, can be described by the evolution of the density matrix, by employing the Liouville equation. In this method, it is assumed that the system of interest is closed; therefore, interaction between the atomic system and its surrounding environment is neglected. However, in most cases, the interaction between the atomic system and the environment can be characterized phenomenologically as repopulation and relaxation of the atomic system, without explicitly considering the presence of the environment.
There are other methods to describe the population evolution of the sublevel states in the optical pumping process. One of these methods is the rate equations [6], in which the fast transient processes in the excited states can be neglected. Another method is the Lindblad master equation, by which the optical pumping is described by means of open quantum systems [7]. Alkali atoms are usually used in optical pumping because they have a simpler structure than other atoms [8, 9, 10, 11, 12, 13]. Generally speaking, alkali atoms, such as \({}^{7}\)Li, \({}^{23}\)Na, \({}^{39}\)K, \({}^{87}\)Rb, and \({}^{133}\)Cs, possess a single spin-half valence electron. They have hyperfine-doublet ground states \(n\,^{2}S_{1/2}\), with \(F=I-J,\ I+J\), and fine-doublet excited states \(n\,^{2}P_{1/2}\) and \(n\,^{2}P_{3/2}\). The excited states \(n\,^{2}P_{1/2}\) and \(n\,^{2}P_{3/2}\) further have hyperfine structure with \(F=I-J,\ldots,I+J\). Therefore, there are two hyperfine levels for \(J=1/2\) and four hyperfine levels for \(J=3/2\).
In this paper, we investigate the optical pumping of \({}^{133}\)Cs atoms by employing polarized light, and engineer their Zeeman sublevel populations using the Liouville equation. We use a semiclassical approach to describe the interaction between the optical light and the Cs atoms. In this process, the Cs atoms are first pumped to their excited states, and then a radiofrequency (RF) field is added to the system to engineer the sublevel populations. Manipulation of the atomic states is performed by employing a combination of an appropriate laser field, a magnetic field, and RF radiation [14, 15, 16, 17, 18]. The results show that by using the RF field, the excited sublevel populations return to the ground states, and the population gets distributed evenly among the ground-state sublevels.
This paper is structured as follows: in Sec. 2, we introduce a theoretical model for the process of optical pumping, using the Liouville equation approach. In Sec. 3, based on the introduced model, we present the results of calculating the population evolution of the Cesium sublevels, both at the presence and in the absence of the RF field, in the process of optical pumping. Finally, in Sec. 4 we end up with the concluding remarks.
## 2 Theoretical Model of Optical Pumping with Liouville Equation Approach
The state of an atomic system evolving in time is described by its density matrix. The time evolution of an atomic system is determined by the initial conditions of the system, the structure of the atoms, and external applied fields such as static electric and magnetic fields, or the optical electromagnetic field. Furthermore, the atomic system is usually not completely isolated from its surrounding environment, and the interaction with the environment needs to be modeled by including phenomenological terms in the evolution equations. These interactions often lead to relaxation and repopulation phenomena, such as radiative decay and collisions [5, 19, 20].
We aim to model a system that is composed of an ensemble of thermally distributed atoms with a range of velocities, located in a vapor cell. The time evolution of the density matrix \(\rho\), associated with the system of interest, is governed by [20]
\[i\frac{d}{dt}\rho=[H,\rho]-i\frac{1}{2}\{\Gamma,\rho\}+i\Lambda, \tag{1}\]
where \(H\) is the total Hamiltonian of the system of interest, given by \(H=H_{0}+H_{I}+H_{B}\), in which \(H_{0}\) is the Hamiltonian of the desired atoms, \(H_{I}\) is the light-atom interaction Hamiltonian, and \(H_{B}\) is the magnetic field-atom interaction Hamiltonian. The parameter \(\Gamma\) is the diagonal-form relaxation matrix, which shows the effect of relaxation on the time evolution of the density matrix. In the desired atomic system, each basis state \(|n\rangle\) relaxes with the rate \(\Gamma_{n}\). The density matrix \(\rho\) of the system satisfies \(Tr(\rho)=1\), indicating that the number of atoms in the system is conserved. Hence, there must be repopulation corresponding to the relaxation processes, in order to replenish the atoms. Repopulation is represented by the repopulation matrix \(\Lambda\) in Eq. 1.
In this research work, we consider all the important steps in optical pumping and employ polarized light for the light-matter interaction. We also take into account the selection rules governing atomic transitions. For the interaction Hamiltonian (\(H_{I}\)), we consider the interaction of the atomic system with static electric and magnetic fields, and with a radiofrequency (RF) field. All the states of the atomic system are accounted for through the density matrix associated with the system. By considering the selection rules, we can compute the repopulation and relaxation rates of the desired system.
It is worth noting that in the Liouville approach, which we use for modeling the optical pumping, the effect of external fields, e.g., RF fields, can easily be added to the Hamiltonian, whereas other methods, such as the rate equations, are less straightforward and have mathematical limitations. Therefore, we employ the Liouville equation for the optical pumping of Cesium atoms. By obtaining the time evolution of the density matrix, we obtain more information about the populations and coherent transitions of the desired atomic system. Therefore, by using this method, we can model the optical pumping, engineer the states of the system, and use the optical pumping of alkali atoms efficiently.
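To make Eq. 1 concrete, the following minimal sketch (our own toy code, not the authors' implementation) integrates the Liouville equation with fourth-order Runge-Kutta for a two-level stand-in; the actual Cs calculation of the next section replaces these \(2\times 2\) matrices with the \(32\times 32\) \(H\), \(\Gamma\), and \(\Lambda\) described there. With the repopulation term feeding the decayed population back into the ground state, the trace of \(\rho\) stays equal to 1.

```python
import numpy as np

def liouville_rhs(rho, H, Gamma, repop):
    """Eq. (1) rewritten as d(rho)/dt = -i[H,rho] - (1/2){Gamma,rho} + Lambda (hbar = 1);
    `repop` returns the repopulation matrix Lambda, which may depend on rho."""
    return (-1j * (H @ rho - rho @ H)
            - 0.5 * (Gamma @ rho + rho @ Gamma)
            + repop(rho))

def rk4_step(rho, dt, H, Gamma, repop):
    k1 = liouville_rhs(rho, H, Gamma, repop)
    k2 = liouville_rhs(rho + 0.5 * dt * k1, H, Gamma, repop)
    k3 = liouville_rhs(rho + 0.5 * dt * k2, H, Gamma, repop)
    k4 = liouville_rhs(rho + dt * k3, H, Gamma, repop)
    return rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy two-level system: ground |g>, excited |e>; Rabi frequency, detuning, decay rate.
Om, Delta, Gs = 1.0, 0.5, 0.3
H = np.array([[0.0, Om / 2], [Om / 2, -Delta]], dtype=complex)
Gamma = np.diag([0.0, Gs]).astype(complex)
repop = lambda rho: np.diag([Gs * rho[1, 1].real, 0.0]).astype(complex)  # decay feeds |g>

rho = np.diag([1.0, 0.0]).astype(complex)
for _ in range(5000):
    rho = rk4_step(rho, 0.01, H, Gamma, repop)
print(np.trace(rho).real)   # remains 1: relaxation is balanced by repopulation
```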
## 3 Results: Population evolution of the Cesium sublevels
Cesium atoms have been employed in various quantum optics experiments, such as cold atoms prepared by laser cooling and trapping [21, 22]. In Cs atoms, the two transitions \(6^{2}S_{1/2}\to 6^{2}P_{3/2}\) and \(6^{2}S_{1/2}\to 6^{2}P_{1/2}\) are the components of a fine-structure doublet, and each of these transitions additionally have hyperfine structures. The ground state of Cs has \(J=1/2\) and \(I=7/2\), so it has two hyperfine groundstate levels (\(F=3,4\)) and two excited hyperfine levels (\(f=3,4\)) in D1 transition line. In addition, each of these hyperfine levels has \(2F+1\) Zeeman sublevels [23]. Fine structure, hyperfine structure and Zeeman splitting of D1 lines in Cs atom are shown schematically in Fig. 1.
The Cesium atom has 16 ground-state sublevels, i.e. \(\{|F=3,M=3\rangle,\ldots,|3,-3\rangle,\ldots,|4,4\rangle,\ldots,|4,-4\rangle\}\), and 16 excited-state sublevels, i.e. \(\{|f=3,m=3\rangle,\ldots,|3,-3\rangle,\ldots,|4,4\rangle,\ldots,|4,-4\rangle\}\). The states \(\{|F,M\rangle\}\) and \(\{|f,m\rangle\}\) are the basis states of the system's Hilbert space. Consequently, the density matrix associated with the Cs atom is represented by a \(32\times 32\) matrix.
Assuming the energy of the ground-state sublevels to be zero and the energy of the excited-state sublevels to be \(\hbar\omega_{0}\), all the elements of the matrix \(H_{0}\) are zero except the diagonal elements corresponding to the excited-state sublevels, i.e.

\[\langle f,m|H_{0}|f,m\rangle=\langle 4,4|H_{0}|4,4\rangle=\ldots=\langle 3,-3|H_{0}|3,-3\rangle=\hbar\omega_{0}. \tag{2}\]
Furthermore, we assume that the right-circular polarized light \(\sigma^{+}\) and the left-circular polarized light \(\sigma^{-}\) are interacting with the Cs atom. Therefore, the light-atom interaction Hamiltonian is given by
\[H_{I}=-\mathbf{E}.\mathbf{\hat{d}}, \tag{3}\]
where \(\mathbf{E}\) is the optical electric field and \(\mathbf{\hat{d}}\) is the dipole operator representing the electric dipole moment corresponding to the Cs atom.
The electric field associated with the right-circular polarized light and the left-circular polarized light are assumed to be \(E^{+}=(E_{0}e^{i\omega t},iE_{0}e^{i\omega t},0)\) and \(E^{-}=(E_{0}e^{i\omega t},-iE_{0}e^{i\omega t},0)\), respectively. Also, we choose \(d_{x}=\frac{1}{\sqrt{2}}(d_{-1}-d_{+1})\) where \(d_{-1}\) and \(d_{+1}\) are the matrix elements of the dipole operator for the light \(\sigma^{+}\) and \(\sigma^{-}\), respectively. Using the Wigner-Eckart theorem, these matrix elements are given by,
\[\langle F_{1}m_{1}|d_{\pm}|F_{2}m_{2}\rangle=(-1)^{F_{1}-m_{1}}\langle F_{1}m _{1}|d|F_{2}m_{2}\rangle\left(\begin{array}{ccc}F_{1}&1&F_{2}\\ -m_{1}&\pm 1&m_{2}\end{array}\right), \tag{4}\]
where \(\left(\begin{array}{ccc}F_{1}&1&F_{2}\\ -m_{1}&\pm 1&m_{2}\end{array}\right)\) is the Wigner 3j symbol, closely related to the Clebsch-Gordan coefficients that describe how individual angular momentum states are coupled to yield the total angular momentum state of a system.
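The matrix elements in Eq. 4 are straightforward to tabulate. The sketch below (our own; the reduced matrix element \(\langle F_{1}||d||F_{2}\rangle\) is set to 1) uses SymPy's `wigner_3j` to build the coupling block between the \(F=4\) ground and \(f=3\) excited manifolds, and makes the \(\sigma^{\pm}\) selection rule \(\Delta m=\pm 1\) explicit.

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

def dipole_block(F1, F2, pol):
    """<F1 m1|d_pol|F2 m2> from Eq. (4), reduced matrix element set to 1;
    pol = +1 for sigma+ light, pol = -1 for sigma-."""
    M = np.zeros((2 * F1 + 1, 2 * F2 + 1))
    for i, m1 in enumerate(range(-F1, F1 + 1)):
        for k, m2 in enumerate(range(-F2, F2 + 1)):
            M[i, k] = (-1)**(F1 - m1) * float(wigner_3j(F1, 1, F2, -m1, pol, m2))
    return M

# Block coupling the ground F = 4 and excited f = 3 manifolds for sigma+ light.
Mp = dipole_block(4, 3, +1)
# The 3j symbol vanishes unless -m1 + 1 + m2 = 0, i.e. m1 = m2 + 1: sigma+ raises m.
print(np.nonzero(Mp))
```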
Figure 1: Schematic view of fine structure, hyperfine structure and Zeeman splitting of D1 line of Cs atom, employed in optical pumping process.
We are interested in investigating the optical pumping between \(F=4\) and \(f=3\) in the Cs atom. Optical pumping between these levels is used in many experiments and practical applications, such as quantum memories [24]. Hence, we assume that the polarized light interacts with these sublevels. In general, we can consider the polarized light to interact with the whole atomic system. Also, one should note that there is a similar formulation for the right-circular polarized light and the left-circular polarized light. For \(\sigma^{+}\), the light-atom interaction Hamiltonian is given by
\[\hat{H}_{I}^{+}=-\mathbf{E}.\hat{\mathbf{d}}=\Omega_{R}cos\omega t\ M^{+}, \tag{5}\]
where \(\Omega_{R}\) is the optical Rabi frequency given by
\[\Omega_{R}=E_{0}\langle F||d||F^{\prime}\rangle\left(\begin{array}{ccc}F&1& F^{\prime}\\ -m_{1}&\pm 1&m_{2}\end{array}\right). \tag{6}\]
In Eq. 5, \(M^{+}\) is a \(32\times 32\) matrix that, according to the selection rules, all its elements are zero except those elements that show coupling between sublevels \(F=4\) and \(f=3\). Similarly, for the light source \(\sigma^{-}\), we have
\[\hat{H}_{I}^{-}=-\mathbf{E}.\hat{\mathbf{d}}=\Omega_{R}cos\omega t\ M^{-}. \tag{7}\]
Transitions between the sublevels \(F=4\) and \(f=3\) with \(\sigma^{+}\) and \(\sigma^{-}\) are shown schematically in Fig. 2.
Finally, we consider applying the \(z\) component of a magnetic field for causing the Zeeman effect. The corresponding interaction Hamiltonian is
\[\hat{H}_{B}=\hat{\mu}.\mathbf{B}=g\mu_{0}F_{z}B_{z}=\hbar\Omega_{L}F_{z}, \tag{8}\]
where \(g\) is the \(g\)-factor, \(\mu\) is the magnetic dipole moment, \(\Omega_{L}=g\mu_{0}B_{z}/\hbar\) is the Larmor frequency and \(F_{z}\) is the total angular momentum matrix for the \(z\) direction.
The constituent terms of the total Hamiltonian of the system, i.e. \(H=H_{0}+H_{I}+H_{B}\), are now determined. Therefore, we can solve Eq. 1, in order to study the dynamic of the system of interest. The Hamiltonian has a time dependence at the optical frequency. We use rotating wave approximation and neglect the quickly oscillating terms in the Hamiltonian and only keep terms that are showing the detuning frequency, i.e. \(\Delta=\omega-\omega_{0}\).
Considering relaxation at a rate \(\gamma\) and spontaneous decay at a rate \(\Gamma_{s}\), the relaxation matrix is then given by a diagonal matrix \((32\times 32)\), in which the first 16 diagonal elements are \(\gamma\) and the other 16 diagonal elements are \(\gamma+\Gamma_{s}\). The repopulation matrix \(\Lambda\), in Eq. 1, is given by a \(32\times 32\) matrix that describes a process in which atoms leave the region of interest while other atoms, which may be polarized or unpolarized, enter it. In addition, in the repopulation matrix, the transition rates between various pairs of upper- and lower-state sublevels can take different values, and coherences between sublevels can also be transferred by spontaneous decay. By numerically solving Eq. 1, the time evolution of the density matrix of the system is obtained. Then, by applying \(Tr(\rho|F,M\rangle\langle F,M|)\), the population of the Zeeman sublevels is extracted. It should be noted that the rotating wave approximation does not affect the populations of the Zeeman sublevels in our calculations.

Figure 2: Schematic view of transitions between sublevels \(F=4\) and \(f=3\) with (a) polarized light \(\sigma^{+}\) and (b) polarized light \(\sigma^{-}\) in the D1 line of Cs atoms, which is employed in optical pumping processes.
Figures 3 and 4 demonstrate numerical results for the time evolution of the Zeeman sublevel populations of \(F=4\), caused by applying the polarized light \(\sigma^{+}\) and \(\sigma^{-}\), respectively. Here, we consider \(\Omega_{R}=11\times 10^{3}\) Hz, \(\Gamma=613\) MHz, \(\Omega_{L}=0.05\times\Gamma\) MHz, \(\Delta=0.5\times\Gamma\) MHz, \(\gamma=0.05\times\Gamma\) MHz and \(\omega_{0}=3351.21\) MHz. The values of these parameters are chosen based on the experimental results obtained for the Cs atom [23]. In Fig. 3, the populations of the sublevels \(F=4\) driven by the field associated with \(\sigma^{+}\) are shown. It can be seen that the populations of the Zeeman sublevels \(m_{f}=4\) and \(m_{f}=3\) are increased, compared to the other sublevels, during optical pumping. The increase of population in these sublevels follows from the use of \(\sigma^{+}\) light together with the atomic transition rules [25]. Similarly, the populations of the sublevels \(F=4\) driven by the field \(\sigma^{-}\) are plotted in Fig. 4; there the populations of the Zeeman sublevels \(m_{f}=-4\) and \(m_{f}=-3\) are increased compared to the other sublevels, as a consequence of the \(\sigma^{-}\) light. In Fig. 3 the ordering of the sublevel populations is \(P_{4,3}>P_{4,2}>P_{4,4}>P_{4,1}\), while in Fig. 4 it is \(P_{4,-3}>P_{4,-2}>P_{4,-4}>P_{4,-1}\), reflecting the use of \(\sigma^{+}\) and \(\sigma^{-}\) light, respectively; this behavior is caused by the energy differences between the sublevels of the cesium atoms in the two cases.
Figure 4: Time evolution of the population for the Zeeman sublevels \(F=4\). The applied polarized light, used for optical pumping is \(\sigma^{-}\).
Figure 3: Time evolution of the population for the Zeeman sublevels \(F=4\). The applied polarized light, used for optical pumping is \(\sigma^{+}\).
In addition, we investigate the effect of a magnetic-resonance RF field on the time evolution of the populations of the Zeeman sublevels. Before pumping, the atoms are distributed evenly between the ground-state Zeeman sublevels. After absorbing photons provided by a laser beam, atoms are raised to Zeeman sublevels of the excited states and then decay spontaneously back to the ground-state sublevels. During this optical process, all of the atoms end up distributed among the ground-state sublevels, but the populations of the sublevels are not equal. Pumping can be removed by an RF field; the RF field interacts with all the Zeeman sublevels and, through a relaxation mechanism, equalizes all the sublevel populations [25]. We consider applying an RF field to the Cs atoms, in the \(x\) direction and with the associated Rabi frequency \(\Omega_{RF}=g\mu_{B}B_{0RF}\),
\[B_{RF}=B_{0RF}\cos(\omega_{RF}t), \tag{9}\] \[\hat{H}_{RF}=\mu.\hat{\textbf{B}}=\Omega_{RF}\cos(\omega_{RF}t)F_{ RF},\]
where \(F_{RF}\) is a \((32\times 32)\) matrix whose elements describe the magnetic dipole transitions \(\Delta m=0,\ \pm 1\). Assuming optical pumping between \(F=4\) and \(f=3\), we consider magnetic dipole transitions between these sublevels. According to Eq. 1, the Hamiltonian associated with the RF field is added to the total Hamiltonian of the desired system. We employ the rotating wave approximation for the Hamiltonian of the RF field, and keep only the terms that involve the RF detuning (\(\Delta_{RF}=\omega_{RF}-\Omega_{L}\)).
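As an illustration of how such a \(\Delta m=\pm 1\) coupling can be assembled, the sketch below (our own construction, not the paper's code) builds \(F_{x}=(F_{+}+F_{-})/2\) within a single hyperfine manifold from the standard ladder-operator matrix elements; blocks of this kind, scaled by \(\Omega_{RF}/2\) after the rotating wave approximation, are the ingredients of the \(F_{RF}\) matrix.

```python
import numpy as np

def Fx_block(F):
    """F_x = (F_+ + F_-)/2 in the basis |F, m>, m = -F..F; its only nonzero elements
    connect Delta m = +/-1, as required for magnetic dipole transitions."""
    dim = 2 * F + 1
    m = np.arange(-F, F)                      # states that have an m -> m+1 partner
    c = np.sqrt(F * (F + 1) - m * (m + 1))    # <F, m+1|F_+|F, m>
    Fx = np.zeros((dim, dim))
    for k, ck in enumerate(c):
        Fx[k + 1, k] = Fx[k, k + 1] = 0.5 * ck
    return Fx

print(Fx_block(1))   # 3x3 check against the standard spin-1 F_x matrix
```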
Figures 5 and 6 show the results of applying an RF field in the process of optical pumping and its effect on the time evolution of the Zeeman sublevel populations. We choose \(\Omega_{RF}=2000\) MHz and \(\omega_{RF}=\Omega_{L}\) for our numerical simulation; that is, we consider the resonant case used in experiments [23, 24, 25]. In Figure 5, the populations of the sublevels \(F=4\) are plotted for the case of driving with \(\sigma^{+}\) light, in the presence of the applied RF field. As can be seen, with increasing time the populations of the Zeeman sublevels become equal. Similarly, in Fig. 6 the populations of the sublevels \(F=4\) are equalized for \(\sigma^{-}\) light by applying the RF field. In the graphs, the residual difference between the sublevel populations is at the level of \(0.01\), which is consistent with experimental results. Therefore, according to the figures, it can be concluded that the role of the RF field is to equalize all the sublevel populations.
In Fig. 9 we consider the time evolution of the atomic polarization for \(\sigma^{+}\) and \(\sigma^{-}\), in terms of the Rabi frequency of the RF field. The atomic polarization can be obtained from \(\langle F_{z}\rangle=\sum_{M}p_{M}M\), where \(M\) is the Zeeman sublevel projection number (i.e., the different \(m_{f}\)) and \(p_{M}\) is its population. It can be seen that with the increase of the Rabi frequency, the atomic polarization associated with the \(\sigma^{+}\) (\(\sigma^{-}\)) light decreases (increases). This result indicates that the imbalance between Zeeman sublevel populations induced by applying an RF field changes the polarization.
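Figure 5: Time evolution of the population of Zeeman sublevels \(F=4\), for the applied polarized light \(\sigma^{+}\), at the presence of a magnetic-resonance RF field.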
Figure 6: Time evolution of the population of Zeeman sublevels \(F=4\), for the applied polarized light \(\sigma^{-}\), at the presence of a magnetic-resonance RF field.
Figure 7: Time evolution of the population of Zeeman sublevels \(F=4\) for \(\sigma^{+}\) light, as a function of RF Rabi frequency.
## 4 Conclusion
In this research study, we investigated the optical pumping of Cs atoms by employing the Liouville equation approach. We also engineered the populations of the Cesium Zeeman-sublevel states by applying an RF field. For this purpose, a circularly polarized light was applied to Cesium atoms and the effects of relaxation and repopulation were studied. Then, by using an RF field and considering its effect in the Liouville equation, the engineering of the populations of the Cesium Zeeman sublevels was performed. Finally, the time evolution of the populations of the Zeeman sublevels was investigated in the presence and in the absence of the RF field, for optical pumping with both polarized lights, \(\sigma^{+}\) and \(\sigma^{-}\). The time evolution of the atomic polarization for \(\sigma^{+}\) and \(\sigma^{-}\) was also considered. This approach can be used for all alkali atoms and has many applications in different optical experiments.
Figure 8: Time evolution of the population of Zeeman sublevels \(F=4\) for \(\sigma^{-}\) light as a function of RF Rabi frequency.
Figure 9: Time evolution of the atomic polarization for the applied light \(\sigma^{+}\) and \(\sigma^{-}\). |
2308.07963 | Extended body dynamics in general relativity: hyperelastic models | We present a numerical framework for modeling extended hyperelastic bodies
based on a Lagrangian formulation of general relativistic elasticity theory. We
use finite element methods to discretize the body, then use the semi--discrete
action to derive ordinary differential equations of motion for the discrete
nodes. The nodes are evolved in time using fourth--order Runge--Kutta. We
validate our code against the normal modes of oscillation of a hyperelastic
sphere, which are known analytically in the limit of small (linear), slow
(Newtonian) oscillations. The algorithm displays second order convergence. This
numerical framework can be used to obtain the orbital motion and internal
dynamics of a hyperelastic body of any shape, for any spacetime metric, and for
varying hyperelastic energy models. | Nishita Jadoo, J. David Brown, Charles R. Evans | 2023-08-15T18:01:26Z | http://arxiv.org/abs/2308.07963v2 | # Extended body dynamics in general relativity: Hyperelastic models
###### Abstract
We present a numerical framework for modeling extended hyperelastic bodies based on a Lagrangian formulation of general relativistic elasticity theory. We use finite element methods to discretize the body, then use the semi-discrete action to derive ordinary differential equations of motion for the discrete nodes. The nodes are evolved in time using fourth-order Runge-Kutta. We validate our code against the normal modes of oscillation of a hyperelastic sphere, which are known analytically in the limit of small (linear), slow (Newtonian) oscillations. The algorithm displays second order convergence. This numerical framework can be used to obtain the orbital motion and internal dynamics of a hyperelastic body of any shape, for any spacetime metric, and for varying hyperelastic energy models.
## I Introduction
The problem of motion in general relativity has a long history. Einstein was interested in whether the laws of motion of material points can be derived from the vacuum field equations. With Grommer [1], he showed that if a point particle is treated as a singularity in spacetime, it follows a geodesic. This gives the motion of point particles but not extended bodies. The first person to describe an extended body in general relativity was Mathisson [2]. Mathisson defines a multipole expansion of the body using the body's stress-energy-momentum (SEM) tensor, with the single pole (monopole) defining the mass and the dipole and quadrupole defining the "rotation moment". Subsequently, many others have worked on this problem following a similar method, including Papapetrou [3], who gives the equations of motion of spinning particles to dipole order. See also Refs. [4], [5] and [6]. These works differ in the way the multipole moments are defined. In a series of papers [7; 8; 9], Dixon and collaborators provide a more thorough definition of the multipole moments. The equations describing the motion of pole-dipole particles are commonly known as the Mathisson-Papapetrou-Dixon (MPD) equations. In general, the analysis leaves the equation of motion of the quadrupole moment unspecified.
In the pole-dipole case, additional equations are needed to define the center of mass, called spin supplementary conditions. Several different spin supplementary conditions have been proposed which lead to different worldlines for the representative point in the body. In the pole-dipole approximation, these worldlines lie within the minimal world tube [10]. Reference [10] gives a list of known spin supplementary conditions and discusses what they imply for conserved quantities for the pole-dipole particle. Reference [11] gives an in-depth discussion of different spin supplementary conditions.
In this paper, we examine the dynamics of hyperelastic1 bodies as models for extended body motion in general relativity. Elastic bodies are closer to physical reality than, for example, rigid bodies. There is difficulty in defining a rigid body in curved spacetime: If a rigid body is defined as one that has no deformation, it would be unphysical because it would require a speed of sound that is greater than the speed of light. Furthermore, stresses generated in the body can be important and contribute to the SEM tensor. In general, it is perhaps simpler to treat extended bodies as elastic (or fluid). If one wants to model stiff bodies, then the material properties of the elastic body can be chosen so that the speed of sound is close to the speed of light.
Footnote 1: A hyperelastic material is an elastic material whose stress tensor can be derived from a potential energy function of the strain tensor. For a hyperelastic material, there is no energy dissipation or heat conduction. We sometimes refer to such materials simply as “elastic.”
The motion and deformation of extended bodies in general relativity is an important topic. The quadrupole deformation of neutron stars in binary inspirals can be potentially detected from the observed gravitational waves [12; 13]. These observations should provide crucial insight into the nuclear equation of state of these objects. The deformation, spin, and internal structure of the small body in extreme mass ratio inspirals may have an effect on the gravitational waves emitted [14]. The planned space interferometer LISA may be able to detect these effects. In particular, the spin of the small body is expected to have a next-to-leading order (i.e., first post-adiabatic order) influence on the phase of these gravitational waves [15]. Except for specific cases where the tidal field is static [16] or steady, the deformation of the body cannot be modeled accurately by simply setting it proportional to the tidal field. For example, the small body could be spinning too rapidly to come to equilibrium in response to the tidal forces or might be immersed in the time-changing tidal field in an eccentric or inclined orbit. Thus the treatment of the dynamics of the extended
body must expand beyond MPD to include the dynamics of the quadrupole and higher moments, moments which are known [17] to affect the motion.
Relativistic hydrodynamics is a very successful theory and is widely used to model fluids in strong gravity and at high Lorentz factors. A major difference between relativistic hydrodynamics and general relativistic elasticity is that shear stresses are absent in perfect fluid hydrodynamics. However, it is known that neutron star crusts are solid [18]. Moreover, some (ultra-massive) white dwarfs are expected to have frozen cores with up to 99% of their mass in crystallized form [19]. As a natural alternative to fluids, elastic bodies allow for shear stresses.
Numerical works on general relativistic elasticity are few in number. One such work is found in Ref. [20]. The authors propose an Eulerian formulation of general relativistic elasticity that can be used for numerical modeling and can capture shocks. They test their framework on Riemann problems in Minkowski spacetime. The authors of Refs. [21; 22] used general relativistic elasticity to study spherically symmetric elastic stars. They proposed that elasticity might be an important factor for modeling exotic compact objects. Other works include a set of papers [23; 24; 25; 26] that propose a coherent framework for accurately modeling the solid crust within neutron stars.
In this paper, we are interested in accurately modeling an extended hyperelastic body in general relativity. Our goal is to determine how its motion is affected by its finite size and calculate the changes in its internal structure, including deformation and spin, due to interactions with the background curvature. As a first step, for this work we assume that the extended body's SEM tensor does not affect the spacetime curvature. In other words, we ignore self-gravity and gravitational radiation. In a paper that will immediately follow, we will show that despite this restriction the system exhibits interesting radiationless self-force effects beyond pole-dipole order, with transfers of energy and angular momentum between an orbit and the body itself. The present paper details the formalism and the numerical method. The elastic body is handled with a Lagrangian scheme, where the mass is broken up into finite elements. A novel approach to the dynamics is pursued, where the action for the body is spatially discretized. The discrete action in turn leads directly to Euler-Lagrange equations for the finite mass elements as a large set of coupled ordinary differential equations. The method developed here will be used in future applications that consider extended body encounters with massive black holes, which can be exploited to test MPD and higher-order curvature-coupling effects.
The outline of this paper is as follows. We begin in Sec. II by reviewing the general relativistic theory of hyperelasticity as formulated in Ref. [27]. We explain our numerical method in Sec. III. In Sec. IV we rederive and review the normal modes of oscillation for a hyperelastic sphere in the linearized, nonrelativistic limits. We test our code in Sec. V by comparing the numerical and analytical displacements and velocities corresponding to a combination of selected normal modes.
Throughout this paper, we use the sign conventions of Misner, Thorne and Wheeler [28].
## II General relativistic theory of elasticity
In this section, we give a brief review of hyperelasticity theory in general relativity using a Lagrangian formulation as developed in Ref. [27]. We focus on the action and the stress-energy-momentum tensor.
The earliest work on generalizing elasticity theory to special relativity is by Herglotz [29]. Subsequently, DeWitt [30] extended Herglotz' theory to the general relativistic domain to describe a "stiff elastic medium". He used this structure to aid in the formulation of a quantum theory of gravity. Later works on general relativistic elasticity theory include Carter and Quintana [31], Kijowski and Magli [32], Beig and Schmidt [33; 34], Gundlach, Hawke and Erickson [20] and Beig [35].
Some of these works favor an Eulerian formulation while the others use a Lagrangian approach. In the Eulerian approach, the fundamental variables are fields on spacetime. In the Lagrangian approach, the fundamental variables are time-dependent fields on "matter space", the space of material particles that make up the elastic body. The two approaches are mathematically equivalent. The advantage of the Lagrangian formulation for numerical modeling is that it is easier to implement natural boundary conditions [36] where the surface is free to move. In the Eulerian formulation the surface is not simply defined and requires interpolation. Also, since the Lagrangian field equations are formulated on matter space rather than physical space, the number of (discrete) equations to be solved is much smaller in the Lagrangian approach.
### World tube, radar metric and Lagrangian strain
Let the four-dimensional spacetime manifold be denoted by \(\mathcal{M}\), with spacetime coordinates \(x^{\mu}\) and metric \(g_{\mu\nu}\). The matter space, \(\mathcal{S}\), is the space of material points with coordinates \(\zeta^{i}\) for \(i=1,2,3\). (Note that Latin indices beginning with \(i,j,k,\ldots\) should not be confused with the indices of the spatial subset of spacetime coordinates.)
Let \(\lambda\) be a real parameter. The functions \(X^{\mu}(\lambda,\zeta)\) are maps from \(\mathbb{R}\times\mathcal{S}\) to \(\mathcal{M}\), see Fig. 1. As \(\lambda\) is continuously varied, \(X^{\mu}(\lambda,\zeta)\) traces the timelike worldline of the material point \(\zeta^{i}\). The collection of all worldlines corresponding to the material points of the body is called the world tube.
The four-velocity of a material point is
\[U^{\mu}=\dot{X}^{\mu}/\alpha\, \tag{1}\]
where the "dot" denotes \(\partial/\partial\lambda\) and
\[\alpha=\sqrt{-\dot{X}^{\mu}\dot{X}_{\mu}}\, \tag{2}\]
is the material lapse function. The radar metric, \(f_{\mu\nu}\), defined inside the world tube is
\[f_{\mu\nu}=g_{\mu\nu}+U_{\mu}U_{\nu}. \tag{3}\]
The name "radar" comes from Landau and Lifshitz [37] who used light signals to find the spatial distance between two infinitesimally separated events. It is easy to see that \(f_{\mu\nu}U^{\mu}=0\) and that \(f_{\mu}^{\nu}V^{\mu}\) is orthogonal to \(U^{\mu}\) for any vector \(V^{\nu}\). Hence, \(f_{\mu}^{\nu}\) is a "projection tensor" that projects \(V^{\mu}\) into the space orthogonal to \(U^{\mu}\). The radar metric can be mapped back to the matter space,
\[f_{ij}=X_{,i}^{\mu}f_{\mu\nu}X_{,j}^{\nu}\, \tag{4}\]
where \(,j\) denotes \(\partial/\partial\zeta^{j}\). The radar metric \(f_{ij}\) gives distances between infinitesimally separated material points such that the distance is measured in the rest frame of the points in physical spacetime, \(\mathcal{M}\). That is, \(ds^{2}=f_{ij}d\zeta^{i}d\zeta^{j}\) is the square of the proper distance between material points.
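To make these definitions concrete, the following sketch evaluates Eqs. (1)-(4) at a single event with NumPy. The array conventions (a \((4,4)\) metric with signature \((-,+,+,+)\), a \((4,)\) vector \(\dot{X}^{\mu}\), and a \((4,3)\) array of gradients \(X^{\mu}_{,i}\)) are illustrative assumptions of the example, not part of the formalism.

```python
import numpy as np

def radar_metric_pullback(g, Xdot, dX):
    """Radar metric on matter space at one event (a sketch).

    g    : (4,4) spacetime metric g_{mu nu}, signature (-,+,+,+)
    Xdot : (4,)  parameter derivative of the map, dX^mu/dlambda
    dX   : (4,3) matter-space gradients X^mu_{,i}
    """
    alpha = np.sqrt(-Xdot @ g @ Xdot)      # material lapse, Eq. (2)
    U = Xdot / alpha                       # four-velocity, Eq. (1)
    U_low = g @ U                          # U_mu = g_{mu nu} U^nu
    f4 = g + np.outer(U_low, U_low)        # radar metric f_{mu nu}, Eq. (3)
    f_ij = dX.T @ f4 @ dX                  # pullback f_{ij}, Eq. (4)
    return f_ij
```

As a quick consistency check, `f4 @ U` vanishes to round-off, reflecting \(f_{\mu\nu}U^{\nu}=0\).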
The Lagrangian strain tensor can be defined in the same way as in nonrelativistic elasticity:
\[E_{ij}=(f_{ij}-\epsilon_{ij})/2. \tag{5}\]
Here, \(\epsilon_{ij}\) is the "relaxed metric" on matter space. That is, \(\epsilon_{ij}d\zeta^{i}d\zeta^{j}\) is the square of the physical distance between nearby material points when the body is undeformed.
The deformation gradient gives the amount of strain in the material and in the relativistic domain it is defined using the radar metric and the map, \(X_{,i}^{\nu}\),
\[F_{\mu i}=f_{\mu\nu}X_{,i}^{\nu}. \tag{6}\]
Another important tensor is the second Piola-Kirchhoff stress tensor2 defined as the gradient of the energy density \(\rho\) with respect to the Lagrangian strain,
Footnote 2: Various measures of stress in the nonrelativistic domain are described in works on continuum mechanics, such as Bower [38] and Kelly [39]. The first and second Piola-Kirchhoff stress tensors were introduced by Piola [40] and Kirchhoff [41].
\[S^{ij}=\frac{\partial\rho}{\partial E_{ij}}. \tag{7}\]
### Action and stress-energy-momentum tensor
Hyperelastic materials have a stored energy function that can be specified in terms of the Lagrangian strain \(E_{ij}\). The energy density per unit of undeformed volume is denoted by \(\rho(E)\) and is a function of \(E_{ij}\) and \(\epsilon_{ij}\). It can also depend on \(\zeta^{i}\) if the material is not uniform. The dependence on \(\epsilon_{ij}\) and \(\zeta\) has been omitted in the notation to make it more compact. The relativistic action for a hyperelastic body is [42; 30; 27],
\[S[X,g]=-\int_{\lambda_{i}}^{\lambda_{f}}d\lambda\int_{\mathcal{S}}d^{3}\zeta \sqrt{\epsilon}\,\alpha\rho. \tag{8}\]
This action is a generalization of the action for a continuum of particles with "nearest neighbor" interactions mediated by the Lagrangian strain tensor.
The energy density can be written as
\[\rho(E)=\rho_{0}+W(E)\, \tag{9}\]
where \(\rho_{0}\) is the rest mass per unit undeformed volume and \(W\) is the potential energy (or interaction energy) per unit undeformed volume. The interaction energy of the hyperelastic body is obtained by using distances computed in the rest frames of elements of the body.
Let \(x^{0}\equiv t=\text{const}\) correspond to spacelike hypersurfaces and let \(x^{a}\) denote the spatial subset of the spacetime coordinates, where \(a=1,2,3\). The coordinate basis vectors \(\partial/\partial x^{a}\) are spacelike. Because of the gauge invariance of the action, we can freely choose the parameter \(\lambda\) along each worldline. Thus, we can choose the parameterization \(\lambda=x^{0}\equiv t\). Then, \(\dot{X}^{0}=1\) and \(X^{0}_{,i}=0\). With this gauge choice the action can be written as
\[S[X]=-\int_{t^{\prime}}^{t^{\prime\prime}}dt\,\int_{\mathcal{S}}d^{3}\zeta\, \sqrt{\epsilon}\alpha\rho(E)\, \tag{10}\]
which is a functional of \(X^{a}(t,\zeta)\).
In this gauge the radar metric and material lapse are
\[f_{ij} =X^{a}{}_{,i}(g_{ab}+\gamma^{2}V_{a}V_{b})X^{b}{}_{,j} \tag{11a}\] \[\alpha =N\sqrt{1-V^{a}V_{a}} \tag{11b}\]
where the "dot" now denotes \(\partial/\partial t\). Here, \(g_{ab}\) is the spatial metric and
\[V^{a}\equiv(\dot{X}^{a}+N^{a})/N \tag{12}\]
with \(N=\sqrt{-1/g^{tt}}\) denoting the spacetime lapse function and \(N_{a}=g_{ta}\) denoting the shift vector. The spatial vector \(V^{a}\) is the velocity of the material as seen by observers at rest in the \(t=\text{const}\) surfaces. We have also
defined the Lorentz factor \(\gamma\equiv 1/\sqrt{1-V^{a}V_{a}}\). Note that spatial indices are raised and lowered with the spatial metric.
The stress-energy-momentum (SEM) tensor for matter fields is obtained from the functional derivative of the matter action with respect to the metric,
\[T^{\mu\nu}(x)=\frac{2}{\sqrt{-g}}\frac{\delta S_{\rm matter}}{\delta g_{\mu\nu}( x)}. \tag{13}\]
The final form of the SEM tensor is [27]
\[T^{\mu\nu}(X(\lambda,\zeta))=\frac{1}{J}\left[\rho U^{\mu}U^{\nu}+S^{ij}F_{i}^{ \mu}F_{j}^{\nu}\right]\,, \tag{14}\]
where \(J\equiv\sqrt{f}/\sqrt{\epsilon}\). The metric \(\epsilon_{ij}\) gives distances between material points \(\zeta^{i}\) in \(\mathcal{S}\) when the elastic body is relaxed and \(f_{ij}\) gives distances between material points \(\zeta^{i}\) in \(\mathcal{S}\) when the elastic body is deformed. Therefore, the factor \(1/J\) converts energy density per unit undeformed volume to per unit deformed volume. The SEM tensor satisfies local conservation, \(\nabla_{\mu}T^{\mu\nu}=0\).
## III Numerical method
Numerical methods for solving partial differential equations (PDEs) include finite difference (FD), finite volume (FV) and finite element (FE) methods. FE methods are particularly useful in solving elasticity problems. By using triangular or tetrahedral meshes, they allow boundaries of elastic bodies to be represented more closely than the rectangular grids used in FD and FV methods. In FD methods, the PDEs are discretized directly whereas in FV methods, the PDEs are integrated over a volume element. In FE methods, the PDEs are converted to a weak form by multiplying with a test function that satisfies the boundary conditions and then integrating over the domain [43].
We discretize the action of the elastic body directly instead of discretizing the partial differential equations of motion. This allows the free-surface (natural) boundary condition, for which variations at the boundary are nonzero, to be implemented trivially via the variational process. We use FE methods with tetrahedral elements to model elastic bodies of any shape such as spheres or ellipsoids. These models can be used to describe solid astrophysical objects. We discretize the action in space and not in time and obtain ordinary differential equations (ODEs) in mass matrix form. There is a suite of well-tested methods that can be used to solve such coupled ODEs.
We use Matlab's partial differential equation toolbox [44] to generate a linear tetrahedral mesh for three-dimensional bodies. To utilize computing clusters, we use the software package Metis [45] to partition the mesh and parallelize the algorithm. The Message Passing Interface (MPI) is used to communicate neighbor information.
### Matter space discretization
The matter space \(\mathcal{S}\) is divided into non-overlapping elements. Let \(\mathcal{S}_{E}\) for \(E=1,2,\ldots\) denote the elements, that is, \(\mathcal{S}\) is the union of the \(\mathcal{S}_{E}\)'s. Let \(n=1,2,\ldots\) label the nodes throughout the body. Each node in the body has a unique index number. Let \(\mathcal{N}(E)\) denote the set of nodes in element \(E\). An example of \(\mathcal{N}(E)\) is shown in Fig. 2. Then, for \(\zeta^{i}\in\mathcal{S}_{E}\), we have
\[X^{a}(t,\zeta)=\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\ \phi^{E}_{n}(\zeta)\,\quad \zeta^{i}\in\mathcal{S}_{E}\, \tag{15}\]
where the sum is over the nodes contained in the element \(\mathcal{S}_{E}\). Note that the shape functions \(\phi^{E}_{n}(\zeta)\) depend on the node as well as the element.
### Semi-discretized action and equations of motion
The action in the \(\lambda=t\) gauge (Eq. (10)) is discretized using Eq. (15),
\[S[X]=\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E}\int_{\mathcal{ S}_{E}}d^{3}\zeta\ \mathcal{L}\bigg{(}\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\ \phi^{E}_{n}(\zeta),\] \[\sum_{n\in\mathcal{N}(E)}\dot{X}^{a}_{n}(t)\ \phi^{E}_{n}(\zeta),\ \sum_{n\in \mathcal{N}(E)}X^{a}_{n}(t)\ \phi^{E}_{n,i}\bigg{)}\, \tag{16}\]
where the Lagrangian density is defined by \(\mathcal{L}(X,\dot{X},X_{,i})=-\sqrt{\epsilon}\alpha\rho(E)\). The action is a functional of the coordinates of each node, \(X^{a}_{n}(t)\).
Figure 2: Here we are depicting a two-dimensional triangular mesh instead of a tetrahedral mesh for clarity. This figure shows node labels, \(n=\{1,2,\ldots,10\}\), and element labels, \(E=\{1,2,\ldots,11\}\) (boxed). The set of nodes in element \(E=4\) is \(\mathcal{N}(4)=\{4,5,8\}\). The ring of node \(n=5\) is \(\mathcal{R}(5)=\{3,4,6,7,8,9\}\).

We select the element type to be linear tetrahedrons with nodes at the vertices only. A general tetrahedral element \(\mathcal{S}_{E}\) is transformed into a unit trirectangular tetrahedron \(\mathcal{T}\) with coordinates \(\eta^{i}\). Let \(\zeta^{i}_{(\alpha)}\) denote the coordinates of the four nodes, for \(\alpha=0,1,2,3\). The transformation is linear, with \(\zeta^{i}=A^{ij}\eta^{j}+B^{i}\), where \(A^{ij}\) and \(B^{i}\) are constants in each element. These constants are given by
\[B^{i} =\zeta^{i}_{(0)}\, \tag{17a}\] \[A^{i1} =\zeta^{i}_{(1)}-\zeta^{i}_{(0)}\,\] (17b) \[A^{i2} =\zeta^{i}_{(2)}-\zeta^{i}_{(0)}\,\] (17c) \[A^{i3} =\zeta^{i}_{(3)}-\zeta^{i}_{(0)}. \tag{17d}\]
In the new coordinates, the nodes have coordinates \(\eta^{i}_{(0)}=(0,0,0)\), \(\eta^{i}_{(1)}=(1,0,0)\), \(\eta^{i}_{(2)}=(0,1,0)\), and \(\eta^{i}_{(3)}=(0,0,1)\). Figure 3 shows the transformation.
Let \(\alpha(n)\) map the four node numbers of \(\mathcal{S}_{E}\) to the set \(\{0,1,2,3\}\). The shape function defined in terms of the new coordinates \(\eta^{i}\) are
\[\bar{\phi}_{\alpha(n)}(\eta)\equiv\phi^{E}_{n}(\zeta(\eta)). \tag{18}\]
Explicitly, the linear shape functions are given by (see Eqs. (3.19)-(3.12) of Ref. [46])
\[\bar{\phi}_{0}(\eta) =1-\eta^{1}-\eta^{2}-\eta^{3}\, \tag{19a}\] \[\bar{\phi}_{1}(\eta) =\eta^{1}\,\] (19b) \[\bar{\phi}_{2}(\eta) =\eta^{2}\,\] (19c) \[\bar{\phi}_{3}(\eta) =\eta^{3}. \tag{19d}\]
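As a minimal illustration of Eqs. (17)-(19), the sketch below assembles the affine map and the constant gradients \(\phi^{E}_{n,i}\) for a single linear tetrahedron; by the chain rule, \(\phi^{E}_{n,i}=\bar{\phi}_{\alpha(n),j}\,\partial\eta^{j}/\partial\zeta^{i}\). The array layout is an assumption made for the example.

```python
import numpy as np

# Constant eta-gradients of the shape functions, Eqs. (19a)-(19d):
# rows are the nodes alpha = 0..3, columns are eta^1, eta^2, eta^3.
GRAD_ETA = np.array([[-1.0, -1.0, -1.0],
                     [ 1.0,  0.0,  0.0],
                     [ 0.0,  1.0,  0.0],
                     [ 0.0,  0.0,  1.0]])

def element_geometry(zeta):
    """zeta: (4,3) matter-space coordinates of an element's four nodes.
    Returns |J_E|, the constant gradients phi^E_{n,i}, and B^i (a sketch)."""
    B = zeta[0]                      # Eq. (17a)
    A = (zeta[1:] - zeta[0]).T       # columns are Eqs. (17b)-(17d)
    detJ = abs(np.linalg.det(A))     # |J_E| for the quadrature weights
    Ainv = np.linalg.inv(A)          # Ainv[j,i] = d eta^j / d zeta^i
    grad_zeta = GRAD_ETA @ Ainv      # phi^E_{n,i}, constant per element
    return detJ, grad_zeta, B
```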
In the new coordinates, the action is
\[S[X] =\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E}\int_{\mathcal{T}} d^{3}\eta\,|J_{E}|\mathcal{L}\bigg{(}\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\; \bar{\phi}_{\alpha(n)}(\eta),\] \[\sum_{n\in\mathcal{N}(E)}\dot{X}^{a}_{n}(t)\;\bar{\phi}_{\alpha( n)}(\eta),\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\;\phi^{E}_{n,i}\bigg{)}\, \tag{20}\]
where \(|J_{E}|\) is the determinant of the Jacobian of the transformation from \(\zeta^{i}\) to \(\eta^{i}\) for element \(E\). We can pull \(|J_{E}|\) outside the integral since it is independent of \(\eta^{i}\). It should be noted that \(\phi^{E}_{n,i}\equiv\partial\phi^{E}_{n}/\partial\zeta^{i}\) are constants, independent of \(\eta^{i}\).
We now replace the integral over \(\eta^{i}\) in each element with a quadrature rule:
\[S[X] =\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E}\sum_{\sigma}w_{ \sigma}\,|J_{E}|\mathcal{L}\bigg{(}\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\; \bar{\phi}_{\alpha(n)}(\eta_{(\sigma)}),\] \[\sum_{n\in\mathcal{N}(E)}\dot{X}^{a}_{n}(t)\;\bar{\phi}_{\alpha( n)}(\eta_{(\sigma)}),\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\;\phi^{E}_{n,i} \bigg{)}\, \tag{21}\]
for some set of points \(\eta^{i}_{(\sigma)}\) in \(\mathcal{T}\). We choose the points to coincide with the nodes (vertices) of the element, and choose weights \(w_{\sigma}=1/24\) for each node. With this weighting, the integration of linear functions is exact.
Using the results \(\bar{\phi}_{\alpha}(\eta_{(\sigma)})=\delta_{\alpha\sigma}\), the discrete action becomes
\[S[X] =\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E}\sum_{\sigma}w_{ \sigma}\,|J_{E}|\mathcal{L}\bigg{(}\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\; \delta_{\alpha(n)\sigma},\] \[\sum_{n\in\mathcal{N}(E)}\dot{X}^{a}_{n}(t)\;\delta_{\alpha(n) \sigma},\;\sum_{n\in\mathcal{N}(E)}X^{a}_{n}(t)\;\phi^{E}_{n,i}\bigg{)}. \tag{22}\]
For each value of \(\sigma\) in the sum, the only term in the first argument of \(\mathcal{L}\) that is nonzero is the one for which \(\alpha(n)=\sigma\). Likewise for the second argument of \(\mathcal{L}\). Thus, we can write the action as
\[S[X] =\frac{1}{24}\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E}\sum_{ n\in\mathcal{N}(E)}|J_{E}|\mathcal{L}\bigg{(}X^{a}_{n}(t),\;\dot{X}^{a}_{n}(t),\] \[\sum_{m\in\mathcal{N}(E)}X^{a}_{m}(t)\;\phi^{E}_{m,i}\bigg{)}. \tag{23}\]
Let \(\mathcal{R}(n)\) be the "ring" of \(n\). This is the list of elements (\(E\) values) that have \(n\) as one of their nodes. An example of \(\mathcal{R}(n)\) is shown in Fig. 2. We isolate the terms in the action that involve the variable \(X^{a}_{N}\) for some fixed node number \(N\). Let these terms be denoted by \(S_{N}\):
\[S_{N}=\frac{1}{24}\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E \in\mathcal{R}(N)}\sum_{n\in\mathcal{N}(E)}|J_{E}|\mathcal{L}\bigg{(}X^{a}_{n}( t),\;\dot{X}^{a}_{n}(t),\] \[\sum_{m\in\mathcal{N}(E)}X^{a}_{m}(t)\;\phi^{E}_{m,i}\bigg{)}. \tag{24}\]
Figure 3: A general tetrahedral element \(\mathcal{S}_{E}\) in \(\mathcal{S}\) is transformed into a unit trirectangular tetrahedron \(\mathcal{T}\) with a node at the origin and the three other nodes displaced by one unit along the coordinate axes. It does not matter which node is at the origin.

Only elements in the ring of \(N\) depend on the node \(X^{a}_{N}\). In the sum over nodes for each element, there are two cases. One case is when the node number \(n\) equals \(N\), the other is when \(n\) does not equal \(N\). Therefore, we find

\[S_{N}=\frac{1}{24}\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E\in\mathcal{R}(N)}|J_{E}|\bigg{\{}\mathcal{L}\bigg{(}X^{a}_{N}(t),\;\dot{X}^{a}_{N}(t),\sum_{m\in\mathcal{N}(E)}X^{a}_{m}(t)\;\phi^{E}_{m,i}\bigg{)}\\ +\sum_{n\in\mathcal{N}(E),n\neq N}\mathcal{L}\bigg{(}X^{a}_{n}(t),\;\dot{X}^{a}_{n}(t),\sum_{m\in\mathcal{N}(E)}X^{a}_{m}(t)\;\phi^{E}_{m,i}\bigg{)}\bigg{\}}. \tag{25}\]
It should be noted that \(X^{a}_{N}\) occurs in the third argument of \(\mathcal{L}\) in both terms.
We now vary \(S_{N}\) with respect to \(X^{a}_{N}\):
\[\begin{split}&\delta S_{N}=\\ &\frac{1}{24}\int_{t^{\prime}}^{t^{\prime\prime}}dt\sum_{E\in \mathcal{R}(N)}|J_{E}|\bigg{\{}\frac{\partial\mathcal{L}}{\partial X^{a}} \bigg{|}_{N,E}\delta X^{a}_{N}+\frac{\partial\mathcal{L}}{\partial\dot{X}^{a }}\bigg{|}_{N,E}\delta\dot{X}^{a}_{N}+\frac{\partial\mathcal{L}}{\partial X^{ a}_{,i}}\bigg{|}_{N,E}\phi^{E}_{N,i}\delta X^{a}_{N}+\sum_{n\in\mathcal{N}(E),n\neq N }\frac{\partial\mathcal{L}}{\partial X^{a}_{,i}}\bigg{|}_{n,E}\phi^{E}_{N,i} \delta X^{a}_{N}\bigg{\}}\,\end{split} \tag{26}\]
where the symbol \(|_{n,E}\) indicates that the partial derivatives are evaluated at \(X^{a}=X^{a}_{n}\), \(\dot{X}^{a}=\dot{X}^{a}_{n}\), and \(X^{a}_{,i}=\sum_{m\in\mathcal{N}(E)}X^{a}_{m}(t)\phi^{E}_{m,i}\). The last two terms in \(\delta S_{N}\) can be combined into a single sum over all \(n\in\mathcal{N}(E)\). Then the functional derivative (Lagrange's equation) is
\[0=\frac{\delta S}{\delta X^{a}_{N}}=\frac{1}{24}\sum_{E\in \mathcal{R}(N)}|J_{E}|\bigg{\{}\frac{\partial\mathcal{L}}{\partial X^{a}} \bigg{|}_{N,E}-\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{X}^{ a}}\bigg{|}_{N,E}\right)+\sum_{n\in\mathcal{N}(E)}\frac{\partial\mathcal{L}}{ \partial X^{a}_{,i}}\bigg{|}_{n,E}\phi^{E}_{N,i}\bigg{\}}. \tag{27}\]
Next, we expand the total time derivative:
\[\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{X}^{a}}\bigg{|}_{N,E}\right)=\frac{\partial^{2}\mathcal{L}}{\partial\dot{X}^{b}\partial\dot{X}^ {a}}\bigg{|}_{N,E}\ddot{X}^{b}_{N}+\frac{\partial^{2}\mathcal{L}}{\partial X^{ b}\partial\dot{X}^{a}}\bigg{|}_{N,E}\dot{X}^{b}_{N}+\frac{\partial^{2} \mathcal{L}}{\partial X^{b}_{,i}\partial\dot{X}^{a}}\bigg{|}_{N,E}\sum_{m\in \mathcal{N}(E)}\dot{X}^{b}_{m}\phi^{E}_{m,i}. \tag{28}\]
Then the equations of motion are
\[\underbrace{\sum_{E\in\mathcal{R}(n)}|J_{E}|\frac{\partial^{2} \mathcal{L}}{\partial\dot{X}^{b}\partial\dot{X}^{a}}\bigg{|}_{n,E}}_{(M_{ab})_ {n}}\ddot{X}^{b}_{n}=\sum_{E\in\mathcal{R}(n)}|J_{E}|\bigg{\{}\frac{\partial \mathcal{L}}{\partial X^{a}}\bigg{|}_{n,E}+\sum_{m\in\mathcal{N}(E)}\frac{ \partial\mathcal{L}}{\partial X^{a}}\bigg{|}_{m,E}\phi^{E}_{n,i}-\frac{\partial ^{2}\mathcal{L}}{\partial X^{b}\partial\dot{X}^{a}}\bigg{|}_{n,E}\dot{X}^{b}_ {n}\\ -\;\frac{\partial^{2}\mathcal{L}}{\partial X^{b}_{,i}\partial\dot{ X}^{a}}\bigg{|}_{n,E}\sum_{m\in\mathcal{N}(E)}\dot{X}^{b}_{m}\phi^{E}_{m,i} \bigg{\}}\, \tag{29}\]
where \(n\) is replaced by \(m\) and \(N\) is replaced by \(n\).
For each value of \(n\) in Eq. (29), the coefficient of \(\ddot{X}^{b}_{n}\) is a \(3\times 3\) matrix in the indices \(a\) and \(b\). These equations are rewritten as a system of \(6N_{\text{total}}\) first order ODEs for the variables \(X^{a}_{n}\) and \(V^{a}_{n}=\dot{X}^{a}_{n}\), where \(N_{\text{total}}\) is the total number of nodes. The first \(3N_{\text{total}}\) equations are the definitions \(V^{a}_{n}=\dot{X}^{a}_{n}\) with \(a=1,2,3\) and \(n=1,\dots,N_{\text{total}}\). Denoting the coefficient of \(\ddot{X}^{b}_{n}\) in Eq. (29) as \((M_{ab})_{n}\) and the right hand side as \((F_{a})_{n}\), the next \(3N_{\text{total}}\) first order ODEs are written in matrix form as
\[\underbrace{\begin{bmatrix}(M_{11})_{1}&(M_{12})_{1}&(M_{13})_{1}&\dots&0\\ (M_{21})_{1}&(M_{22})_{1}&(M_{23})_{1}&\dots&0\\ (M_{31})_{1}&(M_{32})_{1}&(M_{33})_{1}&\dots&0\\ \vdots&&&\\ 0&\dots&\dots&\dots\\ \end{bmatrix}}_{\text{mass matrix},\,M}\frac{d}{dt}\left(\begin{bmatrix}V^{1}_{1} \\ V^{2}_{1}\\ V^{3}_{1}\\ \vdots\\ \vdots\\ \vdots\\ \end{bmatrix}\right)=\underbrace{\begin{bmatrix}(F_{1})_{1}\\ (F_{2})_{1}\\ (F_{3})_{1}\\ \vdots\\ \vdots\\ \vdots\\ \end{bmatrix}}_{\text{vector},\,F}. \tag{30}\]
The mass matrix \(M\) is pentadiagonal. We use the subroutine DGBSV from the Fortran Linear Algebra Package
(LAPACK) which uses lower-upper (LU) decomposition to solve the linear system of equations (30) and obtain the time derivatives of \(V_{n}^{a}\). We then use the fourth-order Runge-Kutta scheme to evolve \(X_{n}^{a}\) and \(V_{n}^{a}\) at discrete values of \(t\).
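The time integration can be sketched as follows. For clarity the sketch uses a dense solve in place of the banded DGBSV call, and the routines `mass_matrix` and `force`, which would assemble \(M\) and \(F\) from Eq. (29), are assumed inputs rather than part of the actual code.

```python
import numpy as np

def rk4_step(X, V, t, dt, mass_matrix, force):
    """One fourth-order Runge-Kutta step for system (30):
    dX/dt = V,  M(X,V) dV/dt = F(X,V,t).
    mass_matrix(X, V) -> (3N,3N);  force(X, V, t) -> (3N,).
    Dense solve shown for clarity; the pentadiagonal structure of M
    makes a banded LU solve (LAPACK DGBSV) the efficient choice."""
    def rhs(X, V, t):
        Vdot = np.linalg.solve(mass_matrix(X, V), force(X, V, t))
        return V, Vdot

    k1x, k1v = rhs(X, V, t)
    k2x, k2v = rhs(X + 0.5*dt*k1x, V + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = rhs(X + 0.5*dt*k2x, V + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = rhs(X + dt*k3x, V + dt*k3v, t + dt)
    X_new = X + (dt/6.0)*(k1x + 2*k2x + 2*k3x + k4x)
    V_new = V + (dt/6.0)*(k1v + 2*k2v + 2*k3v + k4v)
    return X_new, V_new
```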
From numerical experiments we find that the Courant condition,
\[\Delta t\leq h_{\rm min}/C_{L}\, \tag{31}\]
must be met for stability. Here, \(\Delta t\) is the time step size, \(h_{\rm min}\) is the minimum edge length of the tetrahedral elements, and \(C_{L}\) is the longitudinal sound speed (see Eq. (67) below), which is the maximum sound speed in the material.
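The bound (31) can be evaluated directly from the mesh and the material constants, for instance as in the sketch below (the safety factor is our own addition, not part of Eq. (31)).

```python
import numpy as np

def courant_dt(nodes, elements, lam, mu, rho0, safety=0.9):
    """Time step from Eq. (31): dt <= h_min / C_L (a sketch).
    nodes: (N,3) node coordinates; elements: (E,4) node indices;
    lam, mu: Lame constants; rho0: rest-mass density."""
    C_L = np.sqrt((lam + 2.0*mu)/rho0)            # Eq. (67)
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    h_min = min(np.linalg.norm(nodes[e[i]] - nodes[e[j]])
                for e in elements for i, j in pairs)
    return safety * h_min / C_L
```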
## IV Test models in the nonrelativistic domain
Elasticity theory in the nonrelativistic domain has a long history and many applications. A large body of works make use of linear elasticity for which exact solutions are known in some cases. Here we reproduce the exact solutions for the normal mode oscillations of a free solid elastic sphere. In Sec. V we use our relativistic, nonlinear code to simulate the motion of a solid elastic sphere in flat spacetime, and show that the expected results are obtained in the limit of small, nonrelativistic oscillations.
Nonrelativistic elasticity theory can be obtained from general relativistic elasticity theory (Sec. II) by taking the nonrelativistic limit, as shown in Ref. [27]. The resulting action, deduced from (10), is
\[S[X]=\int_{t_{i}}^{t_{f}}dt\int d^{3}\zeta\,\sqrt{\epsilon}\left[\frac{1}{2} \rho_{0}\dot{X}^{a}\dot{X}_{a}-W(E)-\rho_{0}\Phi\right]\,, \tag{32}\]
where \(\Phi\) is the Newtonian gravitational potential. The index on \(\dot{X}_{a}\) has been lowered with the spatial metric \(g_{ab}\) which in this section is taken to be flat. Here, the energy density is written as
\[\rho(E)=\rho_{0}+W(E)\, \tag{33}\]
where \(\rho_{0}\) is the rest mass density per unit undeformed volume and \(W(E)\) is the potential energy density per unit undeformed volume. In this nonrelativistic limit the radar metric reduces to
\[f_{ij}=X_{,i}^{a}g_{ab}X_{,j}^{b}. \tag{34}\]
The second Piola stress tensor, \(S^{ij}\), is the derivative of \(W(E)\) with respect to the Lagrangian strain, \(E_{ij}=(f_{ij}-\epsilon_{ij})/2\), as in Eq. (7).
### Hyperelastic energy models
Elastic materials are materials for which the stress can be written in terms of the strain at a particular time. Hyperelastic materials are materials for which the work done by stresses during the deformation process depends only on the initial and final configurations. Homogeneous materials are materials for which portions of the elastic material have the same mechanical behaviour. Isotropic materials are materials for which the potential energy function, \(W\), depends on the deformation gradient only through \(f_{ij}\) and \(\epsilon_{ij}\). Some energy models [27] for isotropic hyperelastic materials include the Saint Venant-Kirchhoff, the Mooney-Rivlin [47], the neo-Hookean and the Ogden models [48].
In this paper we use the Saint Venant-Kirchhoff model with potential energy function
\[W(E)=\frac{\lambda}{2}(\epsilon^{ij}E_{ij})^{2}+\mu(\epsilon^{ik}\epsilon^{jl} E_{ij}E_{kl}). \tag{35}\]
Here, \(\epsilon^{ij}\) is the inverse of \(\epsilon_{ij}\) and \(\lambda\) and \(\mu\) are the Lamé constants. (The Lamé constant \(\lambda\) should not be confused with our previous use of \(\lambda\) as a path parameter in Sec. II.) The bulk modulus, \(K=\lambda+2\mu/3\), measures resistance to volume changes. The Saint Venant-Kirchhoff model is not valid for large strains because the model softens under large compression.
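For reference, Eq. (35) and the stress it induces through Eq. (7), \(S^{ij}=\lambda(\epsilon^{kl}E_{kl})\epsilon^{ij}+2\mu\,\epsilon^{ik}\epsilon^{jl}E_{kl}\), translate directly into the following sketch; the array shapes are illustrative assumptions.

```python
import numpy as np

def svk_energy_and_stress(E, eps_inv, lam, mu):
    """Saint Venant-Kirchhoff model (a sketch).
    E       : (3,3) Lagrangian strain E_{ij}
    eps_inv : (3,3) inverse relaxed metric eps^{ij}
    lam, mu : Lame constants."""
    trE = np.tensordot(eps_inv, E)               # eps^{ij} E_{ij}
    EE = eps_inv @ E @ eps_inv                   # eps^{ik} E_{kl} eps^{lj}
    W = 0.5*lam*trE**2 + mu*np.tensordot(EE, E)  # Eq. (35)
    S = lam*trE*eps_inv + 2.0*mu*EE              # S^{ij} = dW/dE_{ij}, Eq. (7)
    return W, S
```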
### Linear elasticity
Linear elasticity is used when the deformation is the result of small displacements from some reference configuration, which we denote by \(X_{R}^{a}(\zeta)\). We also assume that there is no rotation. We then write
\[X^{a}(\zeta,t)=X_{R}^{a}(\zeta)+\xi^{a}(\zeta,t)\, \tag{36}\]
where \(|\xi^{a}(\zeta,t)|\) is small. Then we have
\[\dot{X}^{a}(\zeta,t) =\dot{\xi}^{a}(\zeta,t)\, \tag{37}\] \[X_{,i}^{a}(\zeta,t) =X_{R,i}^{a}(\zeta)+\xi_{,i}^{a}(\zeta,t)\, \tag{38}\]
and we also assume that \(|\xi_{,i}^{a}(\zeta,t)|\) is small. Choosing flat space and Cartesian coordinates, the radar metric (also known as the right Cauchy-Green deformation tensor) and the relaxed matter space metric become
\[f_{ij} =X_{,i}^{a}\delta_{ab}X_{,j}^{b}=(X_{R,i}^{a}+\xi_{,i}^{a}) \delta_{ab}(X_{R,j}^{b}+\xi_{,j}^{b})\, \tag{39}\] \[\epsilon_{ij} =X_{R,i}^{a}\delta_{ab}X_{R,j}^{b}. \tag{40}\]
From the map \(x^{a}=X_{R}^{a}(\zeta)\) we can define the inverse map that takes a point in physical space to the matter space label for the body in its relaxed state:
\[\zeta^{i}=Z_{R}^{i}(x). \tag{41}\]
Differentiation with respect to \(\zeta\) yields the useful relations:
\[X_{R,i}^{a}Z_{R,a}^{j} =\delta_{i}^{j}\, \tag{42}\] \[X_{R,i}^{a}Z_{R,b}^{i} =\delta_{b}^{a}. \tag{43}\]
The following formulas for the matter space metric and its inverse hold:
\[\epsilon^{ij} =Z^{i}_{R,a}\delta^{ab}Z^{j}_{R,b}\, \tag{44}\] \[\delta^{ab} =X^{a}_{R,i}\epsilon^{ij}X^{b}_{R,j}\,\] (45) \[\delta_{ab} =Z^{i}_{R,a}\epsilon_{ij}Z^{j}_{R,b}. \tag{46}\]
We can verify these by computing \(\epsilon^{ij}\epsilon_{jk}=\delta^{i}_{k}\) and \(\delta^{ab}\delta_{bc}=\delta^{a}_{c}\).
We now linearize the Saint Venant-Kirchhoff energy model by expanding \(W(E)\) to second order in \(\xi^{a}_{,i}\). Inserting Eqs. (39) and (40) into the Lagrangian strain tensor \(E_{ij}=(f_{ij}-\epsilon_{ij})/2\), we obtain
\[E_{ij}=\frac{1}{2}[\xi^{a}_{,i}\delta_{ab}X^{b}_{R,j}+X^{a}_{R,i}\delta_{ab} \xi^{b}_{,j}]+\mathcal{O}^{2}(\xi^{a}_{,i}). \tag{47}\]
Using the identities (42)-(44) above, we find
\[\epsilon^{ij}E_{ij}=Z^{i}_{R,a}\xi^{a}_{,i}+\mathcal{O}^{2}(\xi^{a}_{,i}). \tag{48}\]
With a slight abuse of notation, we can define \(\xi^{a}(x)\equiv\xi^{a}(Z_{R}(x))\) so that \(Z^{i}_{R,b}\xi^{a}_{,i}=\xi^{a}_{,b}\). Then the result (48) becomes
\[\epsilon^{ij}E_{ij}=\xi^{a}_{,a}+\mathcal{O}^{2}(\xi^{a}_{,i}). \tag{49}\]
A similar calculation gives
\[\epsilon^{ik}\epsilon^{jl}E_{ij}E_{kl}=\frac{1}{2}[\xi^{d}_{,e}\xi^{e}_{,d}+\xi^{d}_{,e}\xi_{d,}{}^{e}]+\mathcal{O}^{4}(\xi^{a}_{,i}). \tag{50}\]
Then to second order in \(\xi^{a}\) and its derivatives, we obtain
\[W(E)=\frac{\lambda}{2}(\xi^{a}_{,a})^{2}+\frac{\mu}{2}(\xi^{d}_{,e}\xi^{e}_{, d}+\xi^{d}_{,e}\xi_{d,}{}^{e})\, \tag{51}\]
for the Saint Venant-Kirchhoff model.
### Dynamical solution for normal modes of an elastic sphere
Equation (32) gives the action for an elastic body in nonlinear elasticity. We specialize to free oscillations by setting the gravitational potential to zero, \(\Phi=0\). We specialize to the linear Saint Venant-Kirchhoff model by using the results of the previous subsection. These results assume a flat spatial metric with Cartesian coordinates, so that \(g_{ab}=\delta_{ab}\). Then the action becomes
\[S[\xi]=\int_{t_{i}}^{t_{f}}dt\int_{\mathcal{S}}d^{3}\zeta\, \sqrt{\epsilon} \left[\frac{1}{2}\rho_{0}\dot{\xi}^{a}\delta_{ab}\dot{\xi}^{b}- \frac{\lambda}{2}(\xi^{a}_{,a})^{2}\right.\] \[\left.-\frac{\mu}{2}(\xi^{d}_{,e}\xi^{e}_{,d}+\xi^{d}_{,e}\xi_{d,} {}^{e})\right]. \tag{52}\]
We can transform the matter space integral over \(d^{3}\zeta\) to a physical space integral over \(d^{3}x\) using the Jacobian of the transformation \(|\text{det}(X^{a}_{R,i})|=1/\sqrt{\epsilon}\). Thus, we find
\[S[\xi]=\int_{t_{i}}^{t_{f}}dt\int_{\mathcal{R}}d^{3}x\bigg{[} \frac{1}{2}\rho_{0}\dot{\xi}^{a}\delta_{ab}\dot{\xi}^{b}-\frac{\lambda}{2}(\xi ^{a}_{,a})^{2}\] \[-\frac{\mu}{2}(\xi^{d}_{,e}\xi^{e}_{,d}+\xi^{d}_{,e}\xi_{d,}{}^{e })\bigg{]}\, \tag{53}\]
where \(\mathcal{R}\) is the spatial extent of the undeformed body.
The variation of the action is
\[\delta S=\int_{t_{i}}^{t_{f}}dt\int_{\mathcal{R}}d^{3}x\bigg{[}-\rho_{0}\ddot{\xi}^{a}\delta_{ac}+\lambda\xi^{a}_{,a,d}\delta^{d}_{c}+\mu(\xi^{d}_{,c,d}+\xi_{c,}{}^{d}{}_{,d})\bigg{]}\delta\xi^{c}-\int_{t_{i}}^{t_{f}}dt\int_{\partial\mathcal{R}}d^{2}x\bigg{[}\lambda\xi^{a}_{,a}\delta^{d}_{c}+\mu(\xi^{d}_{,c}+\xi_{c,}{}^{d})\bigg{]}\delta\xi^{c}n_{d}\, \tag{54}\]
where \(n_{c}\) is the normal to the boundary. In deriving this result, we have integrated by parts to remove derivatives on \(\delta\xi^{a}\) and used the fact that variations vanish at the initial and final times, \(t_{i}\) and \(t_{f}\). Setting \(\delta S=0\), we find the bulk equations
\[-\rho_{0}\ddot{\xi}_{c}+\lambda\xi^{a}_{,a,c}+\mu(\xi^{d}_{,c,d}+\xi_{c,}{}^{d }_{,d})=0\, \tag{55}\]
and the equations
\[\lambda\xi^{a}_{,a}n_{c}+\mu(\xi^{d}_{,c}+\xi_{c,}{}^{d})n_{d}=0\, \tag{56}\]
that must hold on the boundary of the body. Since the physical space is flat and three-dimensional, we can easily generalize these results to arbitrary spatial coordinates by replacing partial derivatives with covariant derivatives. The bulk equation becomes
\[\rho_{0}\ddot{\xi}^{c}=\lambda\nabla^{c}\nabla_{a}\xi^{a}+\mu(\nabla_{d}\nabla^{c}\xi^{d}+\nabla_{d}\nabla^{d}\xi^{c})\, \tag{57}\]
which simplifies to
\[\ddot{\xi}^{c}=\bigg{(}\frac{\lambda+\mu}{\rho_{0}}\bigg{)}\nabla^{c}\nabla_{a}\xi^{a}+\frac{\mu}{\rho_{0}}\nabla_{d}\nabla^{d}\xi^{c}. \tag{58}\]
The boundary equation in arbitrary coordinates is
\[\lambda\nabla_{a}\xi^{a}n^{c}+\mu(\nabla^{c}\xi^{d}+\nabla^{d}\xi^{c})n_{d}=0. \tag{59}\]
The nonrelativistic normal modes of vibration of a solid elastic sphere were first described in a classic paper by Horace Lamb [49] in 1881. See also the later treatise by Love [50]. A modern presentation is given in Thorne and Blandford [51] (exercise 12.12). These normal modes can be separated into two classes, the spheroidal and torsional modes. In this paper, we focus on the spheroidal modes. The subset of the spheroidal modes with \(\ell=0\) is called the radial modes. Spherical coordinates, \(x^{a}=\{r,\theta,\phi\}\), are used to simplify the problem.
We assume a harmonic time dependence. From [51], the radial displacement field that satisfies the bulk Eq. (58) is
\[\vec{\xi}_{n}(t,r)=A_{n}j^{\prime}_{0}(\omega_{n}r/C_{L})\,\hat{r}\,\cos( \omega_{n}t+\phi_{n})\, \tag{60}\]
where \(A_{n}\) is the amplitude, \(\phi_{n}\) is the phase, and \(\omega_{n}\) is the angular frequency. (At this point, the subscript \(n\) is undefined, but will refer subsequently to the discrete modes once the surface boundary condition is imposed and the resulting eigenvalue problem is solved.) The spherical Bessel functions are denoted by \(j_{\ell}(x)\), with \(j_{\ell}^{\prime}(x)\equiv\partial j_{\ell}(x)/\partial x\). The constant \(C_{L}\equiv\sqrt{(\lambda+2\mu)/\rho_{0}}\) is the longitudinal sound speed.
For \(\ell>0\), the general displacement solution satisfying the bulk Eq. (58) is [51]
\[\vec{\xi}_{n\ell m}(t,r,\theta,\phi)=A_{n\ell m}\vec{\Xi}_{n\ell m}(r,\theta,\phi)\cos(\omega_{n\ell}t+\phi_{n\ell m})\, \tag{61}\]
with amplitude \(A_{n\ell m}\), phase \(\phi_{n\ell m}\) and angular frequency \(\omega_{n\ell}\). (Again values for discrete \(n\) are yet to be determined.) The vector field \(\vec{\Xi}_{n\ell m}\) is given by
\[\vec{\Xi}_{n\ell m}(r,\theta,\phi) =f_{n\ell}(r)Y_{\ell m}\hat{r}\] \[+g_{n\ell}(r)\left[\frac{\partial Y_{\ell m}}{\partial\theta} \hat{\theta}+\frac{1}{\sin\theta}\frac{\partial Y_{\ell m}}{\partial\phi} \hat{\phi}\right]\, \tag{62}\]
where \(Y_{\ell m}\) are the real spherical harmonics defined by
\[Y_{\ell m}=\begin{cases}(-1)^{m}\,\sqrt{2}\sqrt{\frac{2\ell+1}{4\pi}\frac{( \ell-|m|)!}{(\ell+|m|)!}}\;P_{\ell}^{|m|}(\cos\theta)\;\sin(|m|\phi)\,&\text{if $m<0$}\,\\ \sqrt{\frac{2\ell+1}{4\pi}}\;P_{\ell}^{m}(\cos\theta)\,&\text{if $m=0$}\,\\ (-1)^{m}\,\sqrt{2}\sqrt{\frac{2\ell+1}{4\pi}\frac{(\ell-m)!}{(\ell+m)!}}\;P_{ \ell}^{m}(\cos\theta)\;\cos(m\phi)\,&\text{if $m>0$}\,\end{cases} \tag{63}\]
and \(P_{\ell}^{m}\) are the associated Legendre functions. The functions, \(f_{n\ell}(r)\) and \(g_{n\ell}(r)\) are
\[f_{n\ell}(r) =\frac{\alpha_{n\ell}}{k_{Ln\ell}}j_{\ell}^{\prime}(k_{Ln\ell}r)+\frac{\beta_{n\ell}}{k_{Tn\ell}}\ell(\ell+1)\frac{j_{\ell}(k_{Tn\ell}r)}{k_{Tn\ell}r}\, \tag{64}\] \[g_{n\ell}(r) =\frac{\alpha_{n\ell}}{k_{Ln\ell}}\frac{j_{\ell}(k_{Ln\ell}r)}{k_{Ln\ell}r}+\frac{\beta_{n\ell}}{k_{Tn\ell}r}\bigg{[}\frac{j_{\ell}(k_{Tn\ell}r)}{k_{Tn\ell}}+rj_{\ell}^{\prime}(k_{Tn\ell}r)\bigg{]}\, \tag{65}\]
where, again, \(j_{\ell}(x)\) are the spherical Bessel functions and
\[k_{Ln\ell}\equiv\frac{\omega_{n\ell}}{C_{L}}\,\quad k_{Tn\ell}\equiv\frac{ \omega_{n\ell}}{C_{T}}. \tag{66}\]
\(C_{L}\) and \(C_{T}\) are the longitudinal and transverse sound speeds:
\[C_{L}=\sqrt{\frac{\lambda+2\mu}{\rho_{0}}}\,\quad C_{T}=\sqrt{\frac{\mu}{\rho_{0}}}. \tag{67}\]
The constants \(\alpha_{n\ell}\) and \(\beta_{n\ell}\) that appear in the equations for \(f_{n\ell}(r)\) and \(g_{n\ell}(r)\) determine the weights of the longitudinal and transverse parts of the displacement, with their ratio to be determined when the eigenvalue problem is solved.
These solutions to the bulk motion equation are now subjected to the boundary condition (59), which results in the aforementioned eigenvalue problem. The eigenvalue problem has an infinite discrete set of solutions, or modes, each marked by an integer \(n\). For each unique spherical harmonic order, these modes differ in their radial dependence and are successively higher frequency overtones.
Let \(a\) denote the undeformed radius of the sphere, and \(\hat{n}=\hat{r}\) denote the unit normal to the boundary. Inserting the \(\ell=0\) radial solution (60) evaluated at the surface \(r=a\) into the boundary equation (59) results in the following relation:
\[\frac{\tan\left(\omega_{n}a/C_{L}\right)}{\omega_{n}a/C_{L}}=\frac{4}{4-( \omega_{n}a/C_{T})^{2}}. \tag{68}\]
The roots can be obtained numerically for the mode frequencies \(\omega_{n}\). The first root corresponds to the first value of \(n\) and so on. For example, choosing \(C_{L}/C_{T}=\sqrt{3}\) we find the solutions for \(\omega_{n}a/(\pi C_{L})\equiv k_{Ln0}a/\pi\) for \(n=0,1,2,3\) shown in Table 1. Using these solutions, the radial dependence for the \(\ell=0\) modes, given by \(j_{0}^{\prime}(\omega_{n}r/C_{L})\), is plotted in Fig. 4.
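The root finding is elementary; the sketch below brackets one root of Eq. (68) between consecutive poles of \(\tan x\) and polishes it with `scipy.optimize.brentq`, taking \(C_{L}/C_{T}=\sqrt{3}\) as in Table 1. The bracket offsets are ad hoc choices that keep clear of the poles.

```python
import numpy as np
from scipy.optimize import brentq

RATIO2 = 3.0  # (C_L/C_T)^2

def radial_condition(x):
    # x = omega_n a / C_L; Eq. (68) written as a root problem
    return np.tan(x)/x - 4.0/(4.0 - RATIO2*x**2)

# one root per interval between consecutive poles of tan(x)
roots = [brentq(radial_condition, (k + 0.51)*np.pi, (k + 1.49)*np.pi)
         for k in range(4)]
print([r/np.pi for r in roots])
# expected (Table 1, l = 0 row): 0.81597, 1.92853, 2.95387, 3.96577
```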
For \(\ell>0\), inserting the bulk displacement solution (61) evaluated at the surface \(r=a\) into the boundary equation (59) results in two equations,
\[\alpha_{n\ell}\big{[}2j_{\ell}^{\prime\prime}(k_{Ln\ell}a)-\big{(}(k_{Tn\ell}/k_{Ln\ell})^{2}-2\big{)}j_{\ell}(k_{Ln\ell}a)\big{]}\\ +\beta_{n\ell}\big{[}2\ell(\ell+1)f_{1}(k_{Tn\ell}a)\big{]}=0\, \tag{69}\]
\[\alpha_{n\ell}\big{[}2f_{1}(k_{Ln\ell}a)\big{]} \tag{70}\] \[+\beta_{n\ell}\big{[}j_{\ell}^{\prime\prime}(k_{Tn\ell}a)+(\ell (\ell+1)-2)f_{0}(k_{Tn\ell}a)\big{]}=0\,\]
where \(f_{0}(x)\equiv j_{\ell}(x)/x^{2}\) and \(f_{1}(x)\equiv\partial(j_{\ell}(x)/x)/\partial x\). The simultaneous linear equations for \(\alpha_{n\ell}\) and \(\beta_{n\ell}\) have a solution if the determinant is zero,
\[\big{[}2j_{\ell}^{\prime\prime}(k_{Ln\ell}a)-\big{(}(k_{Tn\ell}/k_{Ln\ell})^{2}-2\big{)}j_{\ell}(k_{Ln\ell}a)\big{]}\big{[}j_{\ell}^{\prime\prime}(k_{Tn\ell}a)+(\ell(\ell+1)-2)f_{0}(k_{Tn\ell}a)\big{]}-\big{[}2f_{1}(k_{Ln\ell}a)\big{]}\big{[}2\ell(\ell+1)f_{1}(k_{Tn\ell}a)\big{]}=0. \tag{71}\]
Equation (71) can be expressed in terms of \(k_{Ln\ell}\) and the roots can be obtained numerically. For example, for \(C_{L}/C_{T}=\sqrt{3}\), we find the solutions for \(\ell=1,2,3\) and \(n=0,1,2,3\) shown in Table 1. Inserting these solutions in Eq. (69) gives the ratio of the longitudinal to the transverse parts shown in Table 2. Using these solutions, the dependence of the functions, \(f_{n\ell}(r)\) and \(g_{n\ell}(r)\), on \(r\) for \(\ell=1,2,3\), and \(n=0,1,2,3\) is plotted in Fig. 5.
## V Numerical tests
In this section, we use the analytical solutions for an elastic sphere in Sec. IV to validate the numerical method presented in Sec. III (in the nonrelativistic limit) and find its convergence rate.
The numerical method is fully relativistic and is based on nonlinear elasticity. We set the metric equal to the Minkowski metric and choose the analytical solution to be a sum of \(\ell=2\) and \(\ell=3\) modes with amplitudes \(A_{020}\) and \(A_{031}\) and phase difference \(\phi_{031}-\phi_{020}=\pi/2\):
\[\vec{\xi}_{\rm analytic}(t,r,\theta,\phi) = \vec{\xi}_{020}(t,r,\theta,\phi)+\vec{\xi}_{031}(t,r,\theta,\phi)\] \[= A_{020}\vec{\Xi}_{020}(r,\theta,\phi)\cos(\omega_{02}t+\phi_{020})\] \[+ A_{031}\vec{\Xi}_{031}(r,\theta,\phi)\cos(\omega_{03}t+\phi_{031})\. \tag{72}\]
We select the material properties of the sphere such that \(C_{L}/C_{T}=\sqrt{3}\). The amplitudes \(A_{020}\) and \(A_{031}\) are small compared to \(a\), and the sound speeds are small compared to the speed of light.
We use Matlab's [44] mesh generation algorithm to generate a linear tetrahedral mesh for a sphere of radius \(0.5\,\mathrm{m}\). As the mesh is refined, the total volume of tetrahedral elements converges to \(V_{\rm conv}\) and we find the converged radius using the converged volume, \(a_{\rm conv}=\sqrt[3]{3V_{\rm conv}/4\pi}\approx 0.49881\,\mathrm{m}\). We use \(a_{\rm conv}\) as the undeformed radius of the sphere in computing the analytical solution.
We set \(X^{a}\) for all nodes at the initial time step such that their displacement from their relaxed value is equal to Eq. (72) evaluated at \(t=0\). We also set \(\dot{X}^{a}\) equal to the time derivative of Eq. (72) evaluated at \(t=0\). We numerically evolve the coordinates and velocities in time.
The relativistic terms in the elastic body action are of order \(v^{2}/c^{2}\) and higher, where \(v^{2}=\dot{X}^{a}\dot{X}_{a}\) and \(c\) is the speed of light. The nonlinear elasticity terms in the action are of order \((\xi_{,i}^{a})^{3}\) and higher. After obtaining the numerical solution we ensured that the discrepancy between the numerical and analytical solution is not due to relativistic and nonlinear elasticity effects by computing \(\max(v^{2}/c^{2})\) and \(\max(|X_{,i}^{a}-X_{R,i}^{a}|)\) using the numerical solution. We found that \(\max(v^{2}/c^{2})\approx 10^{-27}\) and \(\max(|X_{,i}^{a}-X_{R,i}^{a}|)\approx 10^{-8}\), which makes \(\max(|X_{,i}^{a}-X_{R,i}^{a}|^{3})\) about 16 orders of magnitude smaller than \(\max(|X_{,i}^{a}-X_{R,i}^{a}|)\).
We use four mesh refinements with \(h_{\rm max}=\{a/4,a/8,a/16,a/32\}\), where \(h_{\rm max}\) is the maximum edge length of the tetrahedral elements. Figure 6 shows the analytical and numerical displacement and velocity of the node \(\zeta^{i}=(0.1522,0.2636,-0.3967)\) as a function of time, for the mesh refinement with \(h_{\rm max}=a/8\). Figure 7 shows the displacement and velocity for the node \(\zeta^{i}=(0.1779,0.4262,0.1913)\) with \(h_{\rm max}=a/16\). (The matter space coordinates are Cartesian with metric \(\epsilon_{ij}=\delta_{ij}\). The coordinate values are reported to four decimal places for brevity.)
Figure 4: Radial dependence of the \(\ell=0\) modes, including the \(n=0\) fundamental and the first three overtones for \(C_{L}/C_{T}=\sqrt{3}\). The displacement is zero at the origin for all \(n\) values. For the radial modes, the mode number \(n\) coincides with the number of nodes (places of zero displacement) along the radial direction. The maximum displacement does not necessarily occur at the surface.
\begin{table}
\begin{tabular}{c c c c c} \(\ell\) & \(k_{L0\ell}a/\pi\) & \(k_{L1\ell}a/\pi\) & \(k_{L2\ell}a/\pi\) & \(k_{L3\ell}a/\pi\) \\ \hline
0 & 0.81596643669775 & 1.92853458475813 & 2.95387153514092 & 3.96577216329668 \\
1 & 0.62934739815975 & 1.24440286338649 & 1.42338683343041 & 1.96556466385947 \\
2 & 0.48514540434785 & 0.89412183542721 & 1.53070871073100 & 1.79736223921180 \\
3 & 0.71972992130588 & 1.18616009042197 & 1.78353164657311 & 2.15894591358743 \\ \end{tabular}
\end{table}
Table 1: Numerical solutions for \(k_{Ln\ell}a/\pi\) for \(C_{L}/C_{T}=\sqrt{3}\) satisfying the boundary conditions for the first four \(\ell\) and \(n\) values of the normal modes of oscillation. The values of \(k_{Ln\ell}a/\pi\) increase with increasing \(n\) number.
We compute the L2-norm of the error in the coordinates using
\[e=\frac{\sqrt{\sum_{n=1}^{N_{\rm total}}(X_{n}^{a,{\rm num}}-X_{n}^{a,{\rm analytic}})(X_{a\,n}^{{\rm num}}-X_{a\,n}^{{\rm analytic}})}}{N_{\rm total}}\, \tag{73}\]
and similarly for the velocities. Figure 8 is the log-log plot of the L2-norm of the errors in the coordinates and velocities at the last time step as functions of \(h_{\rm max}\). The numerical method displays second order convergence.
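For completeness, Eq. (73) translates directly into NumPy (with node positions stored as \((N_{\rm total},3)\) arrays, an assumption of the example):

```python
import numpy as np

def l2_error(X_num, X_analytic):
    """L2-norm of the nodal coordinate error, Eq. (73)."""
    d = X_num - X_analytic          # per-node error vectors
    return np.sqrt(np.sum(d*d)) / len(X_num)
```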
## VI Conclusions
We have presented a second-order-convergent finite element numerical scheme for modeling extended bodies in curved spacetime using elasticity theory in general relativity. Finite elements allow a Lagrangian approach to the elastic body and provide a free surface boundary condition when formulating the numerical method. The equations of motion for the body are obtained as coupled ODEs by taking a novel approach of spatially discretizing the action. The resulting Euler-Lagrange equations are explicitly integrated in time with fourth-order Runge-Kutta, subject to a Courant condition on the time step. The numerical method can be used for bodies of any shape described by any hyperelastic potential energy function, moving through any spacetime.
Reducing to a linearized action for the hyperelastic body in the nonrelativistic limit, we reproduced the classic solutions [49; 51] for radial and nonradial normal mode oscillations of an elastic sphere. These modes were then used to test the numerical code in the linearized, nonrelativistic limit. By ensuring that relativistic and nonlinear contributions are negligible, our numerical results show second-order convergence to the analytical solutions.
\begin{table}
\begin{tabular}{c c c c c} \(\ell\) & \(\alpha_{0\ell}/\beta_{0\ell}\) & \(\alpha_{1\ell}/\beta_{1\ell}\) & \(\alpha_{2\ell}/\beta_{2\ell}\) & \(\alpha_{3\ell}/\beta_{3\ell}\) \\ \hline
1 & -0.39334285456883 & 0.57828661556718 & -0.35961617280979 & 0.07851036440781 \\
2 & -0.68808506569504 & -0.95183672540982 & 0.55915283283423 & -1.16213124001178 \\
3 & -1.56275090908497 & -1.65898880431533 & 0.68019269841200 & -1.71401134238877 \\ \end{tabular}
\end{table}
Table 2: Numerical solutions for the ratio of the longitudinal to the transverse parts, \(\alpha_{n\ell}/\beta_{n\ell}\), for \(C_{L}/C_{T}=\sqrt{3}\) for \(\ell=1,2,3\) and the first four \(n\) values of the normal modes of oscillation. The magnitudes of the ratios \(\alpha_{n\ell}/\beta_{n\ell}\) increase with increasing \(\ell\) number.
In a paper to follow shortly, we will apply our numerical framework to model the motion and internal dynamics of a hyperelastic sphere during tidal encounters with a Schwarzschild black hole along a quasi-parabolic orbit. Beyond that, the method presented in this paper will allow a host of investigations to be carried out on extended body interactions with spacetime curvature, including MPD spin-curvature effects on rapidly-rotating bodies and effects of higher multipole moments. Encounters could be generalized to scattering with Kerr black holes. Furthermore, the technique could be extended to include gravitational perturbations and radiation reaction effects on the finite-sized mass.
###### Acknowledgements.
We acknowledge the computing resources provided by North Carolina State University High Performance Computing Services Core Facility (RRID SCR_022168). We also thank Lisa L. Lowe for her assistance with porting and optimization. C.R.E. was partially supported by NSF Grant No. PHY-2110335 to the University of North Carolina-Chapel Hill.
|
2306.11475 | Delegated Classification | When machine learning is outsourced to a rational agent, conflicts of
interest might arise and severely impact predictive performance. In this work,
we propose a theoretical framework for incentive-aware delegation of machine
learning tasks. We model delegation as a principal-agent game, in which
accurate learning can be incentivized by the principal using performance-based
contracts. Adapting the economic theory of contract design to this setting, we
define budget-optimal contracts and prove they take a simple threshold form
under reasonable assumptions. In the binary-action case, the optimality of such
contracts is shown to be equivalent to the classic Neyman-Pearson lemma,
establishing a formal connection between contract design and statistical
hypothesis testing. Empirically, we demonstrate that budget-optimal contracts
can be constructed using small-scale data, leveraging recent advances in the
study of learning curves and scaling laws. Performance and economic outcomes
are evaluated using synthetic and real-world classification tasks. | Eden Saig, Inbal Talgam-Cohen, Nir Rosenfeld | 2023-06-20T11:59:03Z | http://arxiv.org/abs/2306.11475v2 | # Delegated Classification
###### Abstract
When machine learning is outsourced to a rational agent, conflicts of interest might arise and severely impact predictive performance. In this work, we propose a theoretical framework for incentive-aware delegation of machine learning tasks. We model delegation as a principal-agent game, in which accurate learning can be incentivized by the principal using performance-based contracts. Adapting the economic theory of contract design to this setting, we define _budget-optimal_ contracts and prove they take a simple threshold form under reasonable assumptions. In the binary-action case, the optimality of such contracts is shown to be equivalent to the classic Neyman-Pearson lemma, establishing a formal connection between contract design and statistical hypothesis testing. Empirically, we demonstrate that budget-optimal contracts can be constructed using small-scale data, leveraging recent advances in the study of learning curves and scaling laws. Performance and economic outcomes are evaluated using synthetic and real-world classification tasks.
## 1 Introduction
The acclaimed success of machine learning at effectively solving difficult prediction tasks across diverse problem domains has made it highly appealing for firms, institutions, and individual practitioners. But machine learning has also become increasingly complex, cumbersome, and difficult to operate--and not all those who seek to learn have access to the necessary expertise, infrastructure, and designated resources required for learning effectively. This gap has created a new market for _outsourced machine learning_, in which a client interested in obtaining an accurate predictive model can hire the services of a specialized provider which, for a price, trains the model on their behalf. Consider for example a hospital purchasing a classifier for deciding between hospitalization and outpatient treatment when triaging patients. The provider invests in curating, cleaning and annotating training data, and delivers a trained model in return for payment from the hospital.
Having a budget to expend on outsourced learning [45], we model the client as aiming to obtain the best possible predictive model. At first glance, it is tempting to assume that the optimal strategy is simply to pay the provider the maximal feasible amount--and hope to get a high-end model in return. After all, if the client were to spend the budget directly on learning, investing the maximal available sum would yield the best possible results. But this neglects to account for the _incentives_ of the provider, who is interested in maximizing profit. Since the actions of the provider remain private, it is in his best interest to (secretly) minimize efforts, which in turn can result in his delivering a
suboptimally-trained model. In our example, the provider can cut costs by annotating only a subset of the data, obtaining cheaper low-quality annotations, or neglecting to meticulously remove all outliers.
Outsourced learning is hence susceptible to _moral hazard_, an economic situation which might occur under information asymmetry, and to the detriment of the client. Motivated by this observation, in this paper we initiate the study of _delegated learning_, and aim to explore the economic, algorithmic, and statistical implications that occur when learning is delegated to a specialized provider. Our key novelty is in instantiating delegated learning as a problem of _optimal contract design_[7, 36, 41, 42]. Broadly, contracts are an important monetary device that allows the client to establish a payment scheme which, if properly set, serves to align incentives and guarantee that both parties are well-off. Our main challenge is to design effective contracts specialized to the task of delegated learning on a budget.
Towards this, we begin with a conventional supervised classification setup, and impose economic structure by assuming that acquiring training examples is costly. We then conceptually "split" the conventional self-sufficient learner into two rational entities: a _principal_, who controls the budget and is interested in maximizing predictive accuracy; and an _agent_, who controls learning (in particular the training set) and is interested in maximizing profit. This allows us to model principal-agent relations as a Stackelberg game, in which the principal commits to a _contract_\(t\), determining _a priori_ the amount to be paid for every possible (stochastic) level of obtained accuracy. The agent best-responds to the contract by choosing the profit-maximizing number of samples \(n\), and training the predictive model.
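A schematic of the agent's side of this game is sketched below. The outcome model, \(k\,|\,n\sim\mathrm{Binomial}(m,a(n))\) for an i.i.d. validation set of size \(m\), and the tie-breaking toward the smallest maximizer are simplifying assumptions of the example rather than the paper's exact conventions.

```python
import numpy as np
from scipy.stats import binom

def best_response(t, costs, acc, m):
    """Agent's profit-maximizing action (a sketch).
    t     : (m+1,) contract: payment t(k) for k correct out of m
    costs : (N+1,) sampling cost c_n for actions n = 0..N
    acc   : (N+1,) expected accuracy when training on n samples
    m     : validation-set size
    Assumed outcome model: k | n ~ Binomial(m, acc[n])."""
    ks = np.arange(m + 1)
    expected_pay = np.array([binom.pmf(ks, m, a) @ t for a in acc])
    utility = expected_pay - costs
    n_star = int(np.argmax(utility))   # ties broken toward smaller n here
    return n_star, utility[n_star]
```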
Under this setting, we study the algorithmic problem of designing an optimal contract. As is standard in economic analysis, we begin with the assumption that the principal has full information on the distribution of possible outcomes for each of the agent's possible actions. In our setting, actions correspond to the number of training samples, and outcomes to the empirical classifier accuracy; thus, the main object of interest for contract design in delegated learning settings is the _learning curve_, which describes the (stochastic) performance of learning per sample size. Under certain plausible conditions on the learning curve, namely MLRP and a certain notion of convexity, our main result here is that optimal contracts are _simple_, and in particular, take on the form of simple threshold functions. Simple contracts are appealing because they are straightforward to understand and communicate; in our setting, they are also easy to compute, and we give a closed-form solution for the optimal threshold contract. Our results rely on establishing a novel connection between contracts and the renowned Neyman-Pearson lemma [39], which we view as our main theoretical contribution.
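To illustrate the threshold structure (without reproducing the closed form), a brute-force search for the cheapest threshold contract \(t(k^{\prime})=B\cdot\mathbb{1}[k^{\prime}\geq k]\) implementing a target action \(n^{*}\) might look as follows. Each incentive constraint \(B\,\bar{F}_{n^{*}}(k)-c_{n^{*}}\geq B\,\bar{F}_{n}(k)-c_{n}\), where \(\bar{F}_{n}(k)=\Pr[\text{at least }k\text{ correct}\mid n]\), yields a lower or upper bound on \(B\); the binomial outcome model and tie-breaking in the principal's favor are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import binom

def min_budget_threshold(costs, acc, m, n_star):
    """Cheapest threshold contract implementing action n_star (a sketch).
    Assumed outcome model: k | n ~ Binomial(m, acc[n]); ties are assumed
    broken in the principal's favor. IC against the zero-cost action
    n = 0 doubles as the participation constraint."""
    ks = np.arange(m + 1)
    # S[n, k] = Pr[at least k correct | action n]
    S = np.array([binom.sf(ks - 1, m, a) for a in acc])
    best_B, best_k = np.inf, None
    for k in ks:
        lo, hi = 0.0, np.inf
        for n in range(len(costs)):
            if n == n_star:
                continue
            gap = S[n_star, k] - S[n, k]
            diff = costs[n_star] - costs[n]
            if gap > 0:
                lo = max(lo, diff/gap)    # lower bound on the budget B
            elif gap < 0:
                hi = min(hi, diff/gap)    # upper bound on B
            elif diff > 0:
                lo = np.inf               # action n cannot be separated
        if lo <= hi and lo < best_B:
            best_B, best_k = lo, int(k)
    return best_B, best_k
```

The scan costs \(O(mN)\) constraint checks; the closed-form solution referenced above avoids this search.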
We then switch gears and turn to empirically studying the construction of contracts from partial information. In particular, we consider a setting where the principal can only estimate the learning curve from small available data (e.g., by bootstrapping on small \(n\) and extrapolating). Using the recent LCDB dataset of learning curves [37], we show that threshold contracts generally perform well on estimated curves despite the inherent uncertainty. We also explore the role of different parameters of our setup, consider various tradeoffs in curve-fitting and contract design, and discuss limitations by pointing out certain failure modes to which contracts may be susceptible.
Taken together, our results shed light on why and how simple contracts for delegated learning work to correctly balance between the incentives of both delegator and delegatee in outsourced learning.
Figure 1: Delegated classification interaction sequence. The principal examines initial information, and designs a contract \(t:\{0,\dots,m\}\rightarrow\mathbb{R}_{\geq 0}\). Having observed \(t\), the agent strategically selects a dataset size \(n\) that will maximize his expected utility. He samples a training set \(S\sim D^{n}\), incurs cost \(c_{n}\), then trains the classifier \(h\in\mathcal{H}\) and sends it to the principal. Upon receiving \(h\), the principal evaluates its accuracy on a random validation set \(V\sim D^{m}\), and pays the agent according to the contract \(t\).
### Related work
Previous works have considered delegation of ML-related tasks that differ from our task of training a classifier: labeling of data points in [11], and gathering information in [12]. The delegated task in [3] is computing a high-dimensional function \(f(x_{1},\dots,x_{n})\), where obtaining coordinates is costly; by learning a single coordinate herself the principal can incentivize correct computation of \(f(x)\). The delegated task in [19] is the closest to ours--the agent provides a classifier and the principal verifies its near-optimality within \(\mathcal{H}\). A fundamental difference is that their agent is assumed to be adversarial rather than rational, and so interactive proofs are used instead of economic incentives.
The Neyman-Pearson lemma has recently been connected to economic design by [5] in the context of adverse selection rather than moral hazard. The agent has a hidden type (e.g., whether a new drug is effective), and the optimal menu to offer this agent is designed based on the theory of e-values. When the hidden type has binary support, the method is equivalent to Neyman-Pearson. A "moral" link between the design of contracts (for a non-budgeted principal) and statistical inference (in particular likelihood ratios) was observed already in [20], but no connection was made to the power of hypothesis tests. Other intersections of ML and contracts that do not involve delegation of learning-related tasks include strategic classification [e.g., 30, 31, 2] and online learning of optimal contracts [23, 13, 46]. Contract settings with a binary action and/or outcome space have been studied in [e.g., 16, 4, 17]. Not to be confused with our notion of delegation, there is a growing computational literature on delegation without monetary payments [e.g., 29].
To extrapolate from partial data, our work builds upon recent advancements in the study of learning curves, which characterize the expected generalization of learning as a function of dataset size and other exogenous factors [43]. There is growing empirical evidence that performance of modern neural networks can be predicted using simple scaling laws [e.g., 28, 35, 44, 18, 1, 40, 24], and theoretical results that back these findings in simplified settings [8, 9, 27, 6].
## 2 Problem Setup
The core of our setting is based on a standard supervised classification task. Let \(x\in\mathcal{X}\) be features and \(y\in\mathcal{Y}\) be labels, and assume there is some unknown joint distribution \(D\) over \((x,y)\) pairs. Given a sample set \(S=\{(x_{i},y_{i})\}_{i=1}^{n}\sim D^{n}\), the goal in learning is to use \(S\) to find a classifier \(h\) from a class \(\mathcal{H}\) that maximizes expected accuracy, \(\mathrm{acc}_{D}(h)=\mathbb{P}_{(x,y)\sim D}[h(x)=y]\). Because the underlying distribution \(D\) is unknown, expected performance is estimated by the empirical average on an additional held-out validation set \(V\sim D^{m}\) of size \(m\) as \(\mathrm{acc}_{V}(h)=\frac{1}{m}\sum_{i=1}^{m}\mathds{1}\left[h(x_{i})=y_{i}\right]\), which is consistent and unbiased as an estimator of \(\mathrm{acc}_{D}(h)\). Throughout we will assume that the learning algorithm itself (i.e., which prescribes how to obtain \(h\) from \(S\)) is known and fixed.
Learning on a budget. We will be interested in studying learning when certain resources are limited or costly. Our main focus will be on the setting where the main cost of learning is the number of labeled examples \(n\), but we note that our approach can in principle extend to other forms of learning 'effort'.1 We assume the learner has a monetary budget \(B\) to spend on samples, and is interested in maximizing accuracy under budget constraints. Let \(c_{n}\geq 0\) be the cost of \(n\) samples (assumed to be increasing in \(n\)); the learner then aims to solve:
Footnote 1: For example, [28] argue that not only training-set size, but also compute time and architecture depth, are related to accuracy through scaling laws, which have tight connections to the assumptions we discuss in Sec. 3.
\[n^{*}=\operatorname*{argmax}_{n}\mathbb{E}_{h_{n}}[\mathrm{acc}_{D}(h_{n})] \quad\text{s.t.}\quad c_{n}\leq B \tag{1}\]
where \(h_{n}\) is the classifier learned from a random dataset of size \(|S|=n\). We denote \(\alpha_{n}=\mathrm{acc}_{D}(h_{n})\), and note that \(\alpha_{n}\) is a random variable, as is \(h_{n}\) itself. When the learner is a self-sufficient entity, and when \(\alpha_{n}\) improves monotonically in \(n\), then Eq. (1) admits a trivial solution as the largest affordable \(n\).
Delegation. We model the delegation of learning as a conceptual "split" of the learner into two distinct entities: an _agent_, who controls learning; and a _principal_, who controls the validation process. The principal outsources the learning task to the agent, who in turn uses the training set \(S\) to train the classifier \(h\); once delivered, the principal validates the performance of \(h\) using the validation set \(V\). Whereas the classifier's accuracy benefits the principal alone, the cost of learning (i.e., the cost of acquiring \(S\)) is borne exclusively by the agent. Importantly, the amount of invested effort remains private to the agent; in our example, the principal cannot know how many examples received quality labeling. Because the agent seeks to maximize profit, the principal can use her budget as a source of
monetary payment to incentivize the agent to invest in larger \(|S|=n\). Intuitively, one could expect larger payments to entail larger \(n\), and therefore higher-accuracy \(h\). However, as we will see, this is not always the case, and careful planning is required in order to fully utilize a given budget.
### Delegation as contract design
As the training set remains private to the agent, there is an information gap between the two parties. This creates a conflict of interest for the agent known as _moral hazard_ [7], in which the agent may be tempted to invest sub-par effort, while claiming that efforts were in fact his honest best. In economics, the celebrated solution to moral hazard is _contracts_: pay-per-performance rules that a priori determine future payments for every possible outcome, which we formally describe next [25].
Contract design. A contract setting is defined by a set of actions \(\mathcal{A}=\{a_{1},\ldots,a_{N}\}\) that can be taken by the agent, and a set of possible outcomes \(j\in\{0,\ldots,m\}\). Each action \(a_{i}\) is associated with a cost \(c_{i}\), and w.l.o.g. we assume \(c_{1}\leq\cdots\leq c_{N}\) so that actions correspond to increasing _effort levels_. The agent's choice to perform action \(a\in\mathcal{A}\) yields a random outcome \(j\sim f_{a}\) for the principal, where \(f_{a}\) describes a distribution over the possible outcomes associated with action \(a\). The principal, who does not observe the agent's chosen action, can incentivize the agent through a _contract_, \(t:\{0,\ldots,m\}\to\mathbb{R}_{\geq 0}\), according to which she pays the agent \(t(j)\geq 0\) when the materialized outcome is \(j\). Given contract \(t\), let \(u_{a}(t)\) be the agent's expected _utility_ from taking action \(a\in\mathcal{A}\) at cost \(c_{a}\) (via stochastic outcomes \(j\sim f_{a}\)), and let \(a(t)\) be the agent's _best response_--an action that maximizes his expected utility (following standard tie-breaking assumptions as in [14]). Then:
\[u_{a}(t)=\mathbb{E}_{j\sim f_{a}}[t(j)]-c_{a},\qquad\qquad a(t)\in\operatorname*{argmax}_{a\in\mathcal{A}}u_{a}(t). \tag{2}\]
Every action \(a^{*}\) that is the best response \(a^{*}=a(t)\) to some contract \(t\) is called _implementable_. In economic terms, the principal and agent are playing a _Stackelberg game_, in which the principal commits to a contract \(t\) and the agent best-responds by choosing action \(a(t)\) that maximizes his expected utility \(u_{a}(t)\). The goal of the principal is to design a contract \(t\) which incentivizes the agent to take best-response actions yielding favorable outcomes for the principal.
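To make the agent's side of the game concrete, the following minimal sketch evaluates Eq. (2), assuming the outcome distributions are stacked as a numpy matrix \(F\) of shape \(N\times(m+1)\); the function name and the tie-breaking rule (toward higher effort, in the spirit of the standard assumptions cited above) are our own illustrative choices.

```python
import numpy as np

def best_response(F, costs, t):
    """Agent's best response to a contract t (a sketch of Eq. (2)).

    F:     (N, m+1) matrix of outcome distributions, F[i, j] = f_{a_i}(j).
    costs: (N,) action costs, assumed sorted c_1 <= ... <= c_N.
    t:     (m+1,) contract, t[j] = payment for outcome j.
    """
    utilities = F @ t - costs              # u_a(t) = E_{j ~ f_a}[t(j)] - c_a
    maximizers = np.flatnonzero(np.isclose(utilities, utilities.max()))
    return maximizers[-1]                  # break ties toward higher effort
```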
Contracts for delegated learning. We propose to formulate delegated learning as a problem of optimal contract design, instantiated as follows. First, we relate agent actions \(a\) with the number of samples \(n\), and denote \(\mathcal{A}=\{n_{1},\ldots,n_{N}\}\) as the possible sizes of \(S\) that the learning agent can work with. The cost of acquiring samples naturally maps as \(c_{a}=c_{n}\). Next, we associate outcomes \(j\) with accuracy for the principal by defining \(j\) as the number of validation samples (out of the possible \(m\)) on which \(h\) is correct; note this implies \(\operatorname{acc}_{V}(h)=j/m\), and we will therefore use \(j\) and \(\operatorname{acc}_{V}(h)\) as 'outcomes' interchangeably. Finally, for an action \(n\), we set \(f_{n}\) to be the distribution over possible accuracies obtained when learning with \(n\) samples, namely \(f_{n}(j)=P(\operatorname{acc}_{V}(h_{n})=j/m)\ \forall j\). We will also use the matrix form \(F_{nj}=f_{n}(j)\), where \(F\in[0,1]^{N\times(m+1)}\). Note \(F\) admits two sources of variation: (i) _a-priori_ variation in \(h_{n}\) due to stochasticity in \(S\sim D^{n}\); and (ii) _a-posteriori_ variation in \(j\) for any fixed \(h_{n}\) due to stochasticity in \(V\sim D^{m}\). Note also that given \(h_{n}\), the latter admits a simple binomial form, namely \(j\sim\operatorname{Binomial}(m,\alpha_{n})\); empirically we observe this to be the dominant component.
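As one illustration of how \(F\) can be assembled, the sketch below mixes the binomial a-posteriori component over empirical draws of \(\alpha_{n}\); the function and its inputs are hypothetical, and we assume accuracy samples are available per action.

```python
import numpy as np
from scipy.stats import binom

def outcome_matrix(alpha_samples, m):
    """Estimate F[i, j] = P(acc_V(h_n) = j/m) for each action n_i.

    alpha_samples: list of arrays; alpha_samples[i] holds draws of
        alpha_n = acc_D(h_n) for action n_i (the a-priori variation).
    m: validation set size; given h_n, j ~ Binomial(m, alpha_n)
        (the a-posteriori variation), so rows are mixtures of binomials.
    """
    js = np.arange(m + 1)
    F = np.zeros((len(alpha_samples), m + 1))
    for i, alphas in enumerate(alpha_samples):
        alphas = np.asarray(alphas, dtype=float)
        F[i] = binom.pmf(js[None, :], m, alphas[:, None]).mean(axis=0)
    return F
```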
### Delegation as an optimization problem
Recall that our principal seeks to maximize accuracy under budget constraints (Eq. (1)). Once learning is delegated to an agent, and framed as a contract design problem, the principal's objective becomes:
\[t^{*}=\operatorname*{argmax}_{t:\{0,\ldots,m\}\rightarrow[0,B]}\ \mathbb{E}_{h_{n(t)}}[\operatorname{acc}_{D}(h_{n(t)})] \tag{3}\]
Contract \(t^{*}\) is chosen to incentivize the agent to invest effort \(n(t)\) (via eq. (2)) such that the training of \(h_{n(t)}\) yields high dividends for the principal in terms of expected accuracy. We will refer to \(t^{*}\) as a _budget-optimal contract_, and to the general task of finding \(t^{*}\) as _budget-optimal contract design_.
Information structure. Borrowing from contract theory, in a delegated learning setting actions \(\mathcal{A}\) and costs \(\{c_{n}\}\) are known to both sides, and the outcome distributions \(\{f_{n}\}\) are known to the agent. For the principal, we explore varying levels of knowledge: In Sec. 3, we assume (as in the classic contract design literature) that the principal has full information of \(F\) (i.e., knows the learning curve), and focus on characterizing the optimal contract. In Sec. 4 we relax this assumption, and explore a partial-information setting in which the principal relies instead on an empirically-estimated curve \(\hat{F}\).
## 3 Budget-optimal Contracts
We now turn to present our approach for solving budget-optimal contracts, beginning with challenges.
Why agents cut corners. The conceptual challenge in designing contracts lies in that agents cannot reliably report _what_ they did. For example, consider a principal that, after delegation, received a classifier attaining 0.74 (validation) accuracy. Should she be happy? The crux is that there are two ways this could have happened: (i) the agent invested high effort in learning (large \(n\)), but (by chance) received an uninformative \(S\), and as a result, delivered a low-quality \(h\); and (ii) the agent invested low effort (small \(n\)). Because the agent's actions are private, and because outcomes are stochastic, the principal can never know for certain which is the true underlying cause. In other words, a 'lazy' (or rather rational) agent can hide behind the uncertainty that is inherent in learning outcomes.2
Footnote 2: The agent can also hide in the uncertainty due to \(V\), particularly when \(m\) is small; see experiment in Sec. 4.2.
Contract examples. To overcome this informational gap, the principal can devise a contract to align incentives and encourage the agent to prefer certain actions over others. But not all contracts are equally effective. Fig. 2 illustrates for a budget \(B\) three contract types and their economic implications:
* **Constant contract (\(t(j)=B\))**: The agent is paid \(B\) regardless of the outcome. His best-response in this case is to choose the least-costly action--to the detriment of the principal.
* **Linear contract (\(t(j)=Bj/m\))**: The agent is paid a fraction of \(B\), linear in the resulting accuracy. Linear contracts are a popular and extensively-studied class of contracts (e.g., [26]). Nonetheless, and though seemingly sensible, linear contracts turn out to be sub-optimal for our setting.
* **Threshold contract (\(t(j)=B\mathds{1}[j\geq j_{0}]\) for some \(j_{0}\))**: The agent is paid \(B\) provided the empirical accuracy surpasses a threshold \(j_{0}\). In this example, the threshold contract is optimal (see the sketch after this list).
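For concreteness, the three contract families above can be encoded as payment vectors over outcomes \(j\in\{0,\ldots,m\}\); this is an illustrative encoding of ours, not code from the paper.

```python
import numpy as np

def constant_contract(B, m):
    return np.full(m + 1, float(B))              # t(j) = B

def linear_contract(B, m):
    return B * np.arange(m + 1) / m              # t(j) = B * j / m

def threshold_contract(B, m, j0):
    return B * (np.arange(m + 1) >= j0)          # t(j) = B * 1[j >= j0]
```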
Rather than committing _a priori_ to some parametric form of contract, we seek to find the best budget-optimal contract by rigorously solving Eq. (3). This, however, requires _structure_.
Stochastic learning curves (and where to find them). Our approach relies on the observation that there is a tight connection between the set of distributions \(\{f_{n}\}\) encoded in \(F\), and _learning curves_, which describe the anticipated accuracy of a classifier as a function of the size of its training set. Learning curves typically depict only expected accuracy, but there is also inherent variation in outcomes. We will therefore broadly use the term 'stochastic learning curve' to describe both mean trend _and_ variation in accuracy as a function of \(n\); formally, a stochastic learning curve is defined precisely by \(F\). This connection is useful because learning curves have structure: First, learning curves are typically _monotone_ [28; 8]; when not [37; 43], they can be monotonized [9]. Second, stochastic learning curves are likely to satisfy the _monotone likelihood ratio property_ (MLRP), which states that expected accuracy increases, but variance decreases, with \(n\); this holds in particular under the Binomial distribution [15; B.1]. MLRP has proved useful in contract theory [21], and will be key for us as well.
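A simple way to probe whether an estimated \(F\) satisfies MLRP is to verify that likelihood ratios of consecutive rows are non-decreasing in \(j\); the sketch below is one such check, with an ad-hoc treatment of (near-)zero-probability outcomes that is our own assumption.

```python
import numpy as np

def satisfies_mlrp(F, eps=1e-12, tol=1e-9):
    """Check that f_{i+1}(j) / f_i(j) is non-decreasing in j for all i.

    Ratios with a vanishing denominator are treated as +inf; a drop
    from +inf back to a finite value counts as a violation.
    """
    for i in range(len(F) - 1):
        ratio = np.where(F[i] > eps, F[i + 1] / np.maximum(F[i], eps), np.inf)
        if np.any(np.diff(ratio) < -tol):
            return False
    return True
```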
Figure 2: A delegated classification setting (data from Sec. 4). **(Left)** Each costly action taken by the agent (training set size \(n\)) induces a distribution \(f_{n}\) of possible outcomes (classifier accuracy). The principal seeks to construct a contract \(t\) that incentivizes a profit-maximizing agent to take actions entailing favorable outcomes. Note the \(f_{n}\) exhibit increasing expectation, but decreasing variance, in \(n\). **(Center)** Three contracts for a given budget \(B\), mapping outcomes to payments. **(Top-right)** Agent’s utilities \(u_{n}(t)\) and best responses \(n(t)\) (stars) for each contract \(t\). **(Bottom-right)** Expected accuracies for principal resulting from each contract; here the threshold contract is optimal (see Sec. 3).
### Optimizing (simple) contracts
Our main technique relies on a reduction to what we refer to as _min-budget contracts_. Given an (implementable) target action \(n^{*}\in\mathcal{A}\), a min-budget contract for \(n^{*}\) is a contract \(t\) that incentivizes the agent to employ precisely the action \(n^{*}\), while minimizing the maximum payment \(\left\lVert t\right\rVert_{\infty}=\max_{j\in\{0,\ldots,m\}}\{t(j)\}\) by the principal; i.e., \(t\) implements \(n^{*}\) at minimum budget. Formally:
\[t^{*}=\operatorname*{argmin}_{t}\left\lVert t\right\rVert_{\infty}\quad \text{s.t.}\quad n(t)=n^{*} \tag{4}\]
When the learning curve is monotone, and given a budget \(B\), to find the _budget-optimal_ contract it suffices to identify the maximum implementable \(n\in\mathcal{A}\) for which the _min-budget_ contract has payments bounded by \(B\). The budget-optimal problem thus reduces to multiple min-budget (and implementability) problems. Our approach will therefore be to solve Equation (3) by a series of calls to a solver for Equation (4). Correctness holds due to the following claim (proof in Appendix B.1):
**Proposition 1**.: _There always exists a budget-optimal contract which is min-budget._
Solving Eq. (4) can be done by linear programming (LP)--see Appx. B.2 for the MIN-BUDGET LP. 3 The LP approach is valid but can be costly. One of our contributions is in identifying natural cases where min-budget contracts take on _simple_ forms, which are easier to optimize, and have practical merit. In particular, we will show that plausible structural assumptions on the learning curve give rise to _threshold_ contracts, namely \(t(j)=B\mathds{1}\left[j\geq j_{0}\right]\) for some \(j_{0}\), which are a special case of _all-or-nothing_ contracts having \(t(j)\in\{0,B\}\). Our theoretical results are summarized in Table 1.
Footnote 3: The MIN-BUDGET LP is closely related to the well-known MIN-PAY LP from non-budgeted contract design [e.g., 15], but with a different objective, and hence very different optimal solutions (see Appendix B.3).
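As a rough illustration (not the paper's exact formulation), the MIN-BUDGET problem of Eq. (4) can be written as an LP over the payments \(t(0),\ldots,t(m)\) and the budget \(B\), and Prop. 1 then suggests scanning actions for the largest one affordable under a given budget. Incentive constraints are kept weak here, relying on the tie-breaking assumption; an individual-rationality constraint could be appended if desired.

```python
import numpy as np
from scipy.optimize import linprog

def min_budget_contract(F, costs, target):
    """Sketch of the MIN-BUDGET LP (Eq. (4)): minimize B subject to
    (i) the target action being a best response and (ii) t(j) <= B."""
    N, M = F.shape                               # M = m + 1 outcomes
    obj = np.zeros(M + 1); obj[-1] = 1.0         # variables: (t(0..m), B)
    A_ub, b_ub = [], []
    for i in range(N):                           # incentive constraints
        if i == target:
            continue
        row = np.zeros(M + 1)
        row[:M] = F[i] - F[target]               # u_target(t) >= u_i(t)
        A_ub.append(row); b_ub.append(costs[i] - costs[target])
    for j in range(M):                           # payments bounded by budget
        row = np.zeros(M + 1); row[j] = 1.0; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (M + 1))
    return (res.x[:M], res.x[-1]) if res.success else (None, np.inf)

def budget_optimal_action(F, costs, B):
    """Reduction behind Prop. 1 for a monotone learning curve: the
    largest action whose min-budget contract fits within B."""
    feasible = [n for n in range(len(F))
                if min_budget_contract(F, costs, n)[1] <= B]
    return max(feasible) if feasible else None
```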
### Simple min-budget contracts for binary action space
We begin with a simple binary-action delegated learning setting where the agent can choose one of two actions, \(\mathcal{A}=\{n_{1},n_{2}\}\), e.g., a 'small' vs. 'large' training set. This reduced case will be useful for making a precise connection between contract design and statistical inference, and as a building block to be used when returning to the general-action setting in Sec. 3.3. We focus throughout on the interesting case in which the principal wishes to incentivize \(n_{N}\), which is implementable.4 We thus refer to the min-budget contract for \(n_{N}\) as the _optimal contract_. Our main result in this section is characterizing the optimal contract as an all-or-nothing contract whose budget is determined by the distance between \(f_{1},f_{2}\) (see Proposition 2), and establishing its equivalence (in Theorem 1) to the optimal _hypothesis test_ according to the Neyman-Pearson lemma (Lemma 1).
Footnote 4: For \(N=2\), when \(n_{2}\) is not the target or when it is not implementable, the solution is immediate: always pay \(c_{1}\). For \(N>2\), given a monotone learning curve the focus on implementable \(n_{N}\) is again without loss.
Contracts and tests. Towards establishing our result, we first develop intuition: Given the outcome distributions \(\{f_{1},f_{2}\}\), in order to incentivize \(n_{2}\) the principal would ideally like to design a contract that pays the agent if the observed outcome \(j\in\{0,\ldots,m\}\) originates from \(f_{2}\). The principal can attempt to identify whether the outcome \(j\) is drawn from distribution \(f_{1}\) or \(f_{2}\) through hypothesis testing, where a hypothesis test \(\psi:\{0,\ldots,m\}\rightarrow\{0,1\}\) maps a sample \(j\) to either \(f_{2}\) (indicated by 1) or to the null hypothesis \(f_{1}\) (indicated by 0). For our purpose it is convenient to allow tests to be non-integral, in which case \(\psi:\{0,\ldots,m\}\rightarrow[0,1]\) maps \(j\) to a _probability_ with which it originates from \(f_{2}\). The quality of a hypothesis test is measured by summing its type-1 and type-2
| Actions | Outcomes | No assumptions | MLRP | MLRP + Concave |
| :---: | :---: | :---: | :---: | :---: |
| \(\lvert\mathcal{A}\rvert=2\) | any size | Neyman-Pearson (T1) | Threshold (B.6.1) | Threshold (B.6.1) |
| \(\lvert\mathcal{A}\rvert>2\) | \(m+1=2\) | All-or-nothing (B.5.1) | Threshold (B.6.2) | Threshold (B.6.2) |
| \(\lvert\mathcal{A}\rvert>2\) | \(m+1>2\) | NP-hard (T2) | \(\exists\) non-threshold (B.6.3) | Threshold (T3) |
Table 1: Characterization of simple min-budget contracts under different assumptions. Simple contract forms include all-or-nothing contracts and (under MLRP) their subclass of threshold contracts. The table specifies for each configuration either the simple form that is optimal (in one case through equivalence to the Neyman-Pearson lemma), or that the simple form is non-optimal or intractable.
errors: \(\sum_{j=0}^{m}f_{1}(j)\psi(j)+\sum_{j=0}^{m}f_{2}(j)(1-\psi(j))\). The test that minimizes this sum is known as the _maximum power_ hypothesis test, and has been characterized by Neyman and Pearson [39, 4.3].
The following theorem establishes the connection between the optimal contract and the most powerful hypothesis test. For a fixed \(B\), observe that every contract with budget \(B\) can be mapped to a hypothesis test via the bijection \(\psi(j)=t(j)/B\). Then:
**Theorem 1** (Optimal contract vs. test).: _Consider binary-action delegated learning with distributions \(f_{1},f_{2}\) and costs \(c_{1},c_{2}\). A contract \(t\) with budget \(B\) is optimal if and only if its corresponding hypothesis test \(\psi=t/B\) is maximum power with type-1 and type-2 errors summing to \(1-\frac{c_{2}-c_{1}}{B}\)._
The proof (Appendix B.4.2) is by a non-linear variable transformation to MIN-BUDGET LP.
All or nothing. Theorem 1 immediately implies that the following characterization of the optimal contract for the binary-action case (Proposition 2) is equivalent to the well-known Neyman-Pearson lemma characterizing the most powerful hypothesis test (Lemma 1). To state these two results, let \(\left\|f_{2}-f_{1}\right\|_{\mathrm{TV}}\) denote the _total variation distance_ between \(f_{2}\) and \(f_{1}\), namely \(\frac{1}{2}\sum_{j=0}^{m}|f_{2,j}-f_{1,j}|\).
**Proposition 2** (Optimal contract for binary-action).: _Consider a binary-action delegated learning setting with distributions \(f_{1},f_{2}\) and costs \(c_{1},c_{2}\). The optimal contract is an all-or-nothing contract given by \(B=(c_{2}-c_{1})/\left\|f_{2}-f_{1}\right\|_{\mathrm{TV}}\) and \(t^{*}(j)=B\mathds{1}\left[f_{2}(j)\geq f_{1}(j)\right]\) for all \(j\in\{0,\ldots,m\}\)._
**Lemma 1** (Neyman-Pearson [e.g., 39]).: _Let \(f_{1},f_{2}\) be two discrete probability distributions. Then the most powerful hypothesis test for \(f_{1},f_{2}\) is the likelihood ratio test \(\psi(j)=\mathds{1}\left[f_{2}(j)\geq f_{1}(j)\right]\), which attains the optimal bound \(1-\left\|f_{2}-f_{1}\right\|_{\mathrm{TV}}\) on type-1 and type-2 errors._
The optimal contract pays the agent for outcomes that are more likely to come from \(f_{2}\) than from \(f_{1}\). Moreover, the optimal contract (resp., hypothesis test) has a higher budget (resp., sum of errors) the smaller the distance is between the two distributions \(f_{1},f_{2}\). At the extremes, if their distance is 1 (no overlap among their supports), the required budget for incentivizing \(n_{2}\) is \(c_{2}-c_{1}\) (the sum of errors is zero), whereas if their distance is 0 it becomes infeasible to incentivize \(n_{2}\) (the sum of errors is one). Note that Proposition 2 can also be proved directly using LP duality (Appendix B.4.1).
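Proposition 2 is directly computable; a minimal sketch of the closed form follows, returning both the all-or-nothing contract and its budget (infinite when \(f_{1}=f_{2}\), in which case \(n_{2}\) cannot be incentivized).

```python
import numpy as np

def optimal_binary_contract(f1, f2, c1, c2):
    """Closed-form optimal contract for the binary-action case (Prop. 2)."""
    tv = 0.5 * np.abs(f2 - f1).sum()             # total variation distance
    if tv == 0.0:
        return None, np.inf                      # incentivizing n_2 is infeasible
    B = (c2 - c1) / tv                           # required budget
    t = B * (f2 >= f1).astype(float)             # t(j) = B * 1[f2(j) >= f1(j)]
    return t, B
```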
Thresholds. As the optimal contract is based on the likelihood ratio test, it is natural to add to our binary-action setting the MLRP assumption of increasing likelihood ratio: If \(j^{\prime}>j\) then \(f_{2}(j^{\prime})/f_{1}(j^{\prime})\geq f_{2}(j)/f_{1}(j)\). The intuition behind the MLRP assumption is that the better the evaluation result of the classifier, the more likely it was trained using more data. Under the MLRP assumption, \(n_{2}\) is always implementable,5 and the optimal contract assumes a _threshold_ form. This is by Proposition 2 and the fact that \(\exists j_{0}\) such that \(f_{2}(j)/f_{1}(j)\geq 1\) iff \(j\geq j_{0}\) (see also Appendix B.6.1). Interestingly, this is similar to the relation between the Neyman-Pearson lemma and the Karlin-Rubin theorem, which characterizes the most powerful hypothesis test under monotone likelihood ratio [34].
Footnote 5: As a corollary of a similar result for min-pay contracts [15, Lemma 7].
### Beyond binary action space
For the important special case of binary-outcome space, even with more than two actions, the results of Sec. 3.2 largely hold: the optimal contract is again all-or-nothing, and with MLRP it becomes a threshold contract (proofs in Appx. B.5.1 and Appx. B.6.2). However, for a general number of actions and outcomes, this no longer holds. The optimal contract is not guaranteed to be all-or-nothing, and in fact the problem of determining whether there exists an all-or-nothing contract that is optimal is NP-hard:
**Theorem 2** (Hardness).: _Consider a delegated learning setting with \(N\) actions in which action \(a_{N}\) is implementable. Finding the budget-minimizing all-or-nothing contract for action \(a_{N}\) is NP-hard._
The proof is by reduction from 3SAT and appears in Appendix B.5.2. Also, simple contracts are not guaranteed even under the MLRP assumption. In Appendix B.6.3, we construct an explicit setting satisfying MLRP, for which no optimal contract of threshold form exists. Scrutinizing the counterexample identifies the source of failure that differentiates this case from that of binary action: Consider for each action \(n_{i}\) where \(i<N\) the crossing point \(j_{i}^{*}=j_{f_{i},f_{i-1}}^{*}\) at which action \(n_{i}\) becomes more likely than \(n_{i-1}\). In the counterexample, the survival probability \(\mathbb{S}_{i}(j_{i}^{*})=\mathbb{P}_{f_{i}}[j>j_{i}^{*}]\) is not concave. Interestingly, _requiring_ concavity at this point is sufficient.
**Theorem 3** (Sufficiency for threshold).: _Consider delegated learning with MLRP and a concave survival function at the crossing point \(j^{*}_{i}\). Then the optimal contract is a threshold contract._
We prove this claim by showing that concavity implies that only one constraint is binding in the linear program equivalent to Eq. (4), reducing the problem to the two-action case. By applying Proposition 2, we obtain optimality of threshold contracts in this case as well (proof in Appendix B.6.3).
## 4 Experiments
We now turn to our empirical investigation of delegated learning under full and partial information. We base our experiments on the recently curated Learning Curves Database (LCDB) [37], which includes a large collection of stochastic learning curves for multiple classification datasets and methods. For each dataset and method, the database includes held-out accuracy measurements obtained for increasing sample sizes \(n\in\left\{2^{4},2^{4.5},\ldots,2^{15}\right\}\), with multiple repetitions per \(n\); these provide us with stochastic learning curves. Here we focus primarily on the popular MNIST dataset [33] as our case study, and on MLP and GBDT as representative classifiers, but we refer the reader to Appx. C for further experiments on additional datasets and methods. Code is available at: [https://github.com/edensaig/delegated-classification](https://github.com/edensaig/delegated-classification).
### Full information
We begin with the full information setting to explore in a clean environment how different parameters of the learning setting and environment affect predictive performance and economic outcomes.
Validation set size. Fig. 3 (left) presents typical stochastic learning curves for two methods: Multi-Layered Perceptron (MLP) and Gradient-Boosted Decision Trees (GBDT). We take an arbitrary accuracy point on the curve at \(\mathrm{acc}(n)=0.85\) (dotted line) to examine the effects of validation set size \(m\) on min-budget contracts. Notice that MLP requires larger \(n\) to obtain 0.85; Fig. 3 (center) shows how this translates to a larger required budget \(B^{*}\), which holds for all \(m\). As \(m\) increases, required budgets decrease. Larger validation sets are therefore useful for reducing the required budget. Nonetheless, even for reasonable \(m\), obtained budgets still remain higher than their theoretical lower bounds (dotted lines). In terms of compute, we were able to execute the full LP solver only up to \(m\leq 20\); in contrast, the local solver is easy to run for larger \(m\), and for small \(m\) gives solutions that coincide with the full solver.
Budget regimes. Fig. 3 (left) also indicates two points in which the learning curves cross (dashed lines), at \(\sim\)\(0.74\) and \(\sim\)\(0.94\) accuracy. These correspond to sample sizes \(n\) for which both methods obtain matching accuracies (in expectation). For a self-sufficient learner, the implication is that at each of these points, both methods are equally costly, i.e., both cost \(c_{n}\). Interestingly, and in contrast, delegation can entail different required budgets _despite_ equal accuracies. Fig. 3 (right) shows for each target accuracy the gap in required budgets between both methods, \(\Delta B^{*}=B^{*}_{\text{GBDT}}-B^{*}_{\text{MLP}}\). As can be seen, each method is comparatively more (or less) costly in different accuracy regimes (up to 0.6; between 0.6 and 0.92; and above 0.92). Crucially, the budget gap can be large even when accuracies match (dashed lines). For example, even though both MLP and GBDT require \(n{=}362{\approx}2^{17/2}\) samples to obtain \(\sim\)\(0.74\) accuracy, GBDT is cheaper (\(\Delta B^{*}{=}-10^{2}\)); for \(\sim 0.94\) which requires \(n{=}23170{\approx}2^{29/2}\) from both, GBDT is significantly more expensive (\(\Delta B^{*}{=}10^{5}\)). The reason for this is that optimal budgets are determined by the ability to distinguish between distributions (Sec. 3.2).
Figure 3: Delegating with full information. **(Left)** Typical learning curves for two methods on MNIST. **(Center)** Required budget for target accuracy of \(0.85\) per validation set size \(m\). **(Right)** Different cost regimes, indicating per accuracy region which of the two methods is cheaper to delegate.
### Partial information
We now turn to consider delegation under partial information, in which the principal must rely on an estimated learning curve. We instantiate this idea by assuming that the principal has access to a small 'pilot' dataset of size \(k\), where \(k\) is considered small. Using this set, the principal creates an estimated learning curve \(\hat{F}\) by fitting a curve to accuracies obtained for up to some \(n_{0}\leq k\), and extrapolating to larger \(n>n_{0}\). In particular, we experiment with fitting parametric power-law curves of the form \(\mathbb{E}[\alpha_{n}]=a-bn^{-c}\), which have been shown to provide good fit in various scenarios both empirically and theoretically [43, 28, 8]. Since power-law curves are monotone, composition with binomial distributions increasing in \(p\) provably results in MLRP stochastic curves [15, B.1].
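A minimal version of this curve-fitting step, assuming pilot accuracies `accs` measured at sample sizes `ns`, might look as follows; the initial guess and parameter bounds are illustrative assumptions rather than the paper's exact fitting protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(ns, accs):
    """Fit the parametric curve E[alpha_n] = a - b * n^{-c}."""
    def curve(n, a, b, c):
        return a - b * np.power(n, -c)
    ns, accs = np.asarray(ns, float), np.asarray(accs, float)
    p0 = [accs.max(), 1.0, 0.5]                  # heuristic initialization
    (a, b, c), _ = curve_fit(curve, ns, accs, p0=p0,
                             bounds=([0.0, 0.0, 0.0], [1.0, np.inf, np.inf]))
    return a, b, c
```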
Bias-variance tradeoff. Given \(k\) pilot examples, there are different ways in which the principal can use them to construct an estimated curve. Here we consider a simple tradeoff: setting \(n_{0}\) to be small but with more samples per \(n<n_{0}\) (low variance), or setting \(n_{0}\) to be large but with few samples per \(n<n_{0}\) (low bias). We define \(r\) as the number of samples per \(n\) (so low \(r\) means larger \(n_{0}\)). Then, for a given \(r\), we set \(n_{0}\) such that \(\sum_{n\leq n_{0}}r\cdot n\leq k\) (i.e., such that the total number of used samples does not exceed \(k\)). Fig. 4 (left) shows different curve fits for \(r=1,3,5\), and corresponding \(n_{0}\). Then, Fig. 4 (center-left) shows for a certain fixed budget the accuracy level that can be attained for increasing \(k\), and as a function of \(r\). As can be seen, having sufficient points \(k\) for constructing \(\hat{F}\) is important, but performance grows quickly with \(k\) (note log-scale x-axis). It is also apparent in our example that low bias (via larger \(n_{0}\)) is much more important than low variance for constructing a useful \(\hat{F}\).
Cost-efficiency tradeoff. Because the pilot set provides the principal a basic means for obtaining minimal accuracy, we can ask: given \(k\) examples, and for a fixed budget \(B\), what is the added benefit of delegating learning? For this, we define \(\mu(k)=n(\hat{t})/k\) to be the _sample-size multiplier_, i.e., the multiplicative gain in the effective number of samples due to delegation. Fig. 4 (center-right) shows \(\mu(k)\) for increasing \(k\) and across \(r\). For \(r=1\) (which is superior in terms of performance and outcomes), \(\mu\) begins at \(\sim\)10, increases to \(\sim\)30 at around \(k=190\), and slowly decreases back to \(\sim\)10 towards \(k=1,000\). For \(r>1\), we observe that \(\mu\approx 1\), i.e., there is effectively no gain from delegation, until around \(k=100\), only after which some gain is restored. This highlights the importance of obtaining an accurate estimate \(\hat{F}\) in terms of the economic consequences of delegation.
Over vs. under-estimation. Typically in curve-fitting, over and under-estimation are treated equally, since both types of error can negatively affect goodness of fit and extrapolation quality. However, for delegation, the implications of over vs. under-estimation on contract outcomes are highly asymmetric. Fig. 4 (right) shows for a target incentivized number of samples \(n(t^{*})\) the relation between the (theoretical) _signed_ extrapolation error at \(n(t^{*})\) (i.e., over- or under-fit, measured in accuracy points) and the eventual loss in accuracy obtained through delegation, relative to perfect estimation. Each point in the plot corresponds to one curve-fitting instance, with points shown for varying \(k\), \(n_{0}\), and \(r\), and with multiple independent repetitions. Results show that in the under-fitting regime (i.e., _negative_ extrapolation error), loss in accuracy degrades gracefully with the estimation error. In stark contrast, even minimal over-fitting (_positive_ extrapolation error) causes accuracy to plummet dramatically. This has important implications for the choice of how to fit and extrapolate learning curves, suggesting that contracts can be tolerant to under-estimation, while over-estimation should be avoided at all costs.
Figure 4: Delegating with partial information. **(Left)** Extrapolated learning curves for different \(r\). **(Center-left)** Accuracy obtained via delegation per pilot set size \(k\). **(Center-right)** Multiplicative gain in effective number of samples due to delegation. **(Right)** Implications of over vs. under-estimation.
## 5 Discussion
Motivated by the increasingly-common practice of outsourcing learning tasks, this paper sets out to introduce and study the novel problem of delegated classification. Our findings suggest that conflicts of interest should not be overlooked, and that contracts hold potential as a means for aligning them. Our analysis relies on a set of assumptions, which should be carefully considered by practitioners and empiricists alike; we also believe that there are likely further fruitful connections to explore between contracts and statistical hypothesis testing. As a problem of contract design, and when the learning task is reasonably well-behaved, delegated learning manifests in the form of simple threshold contracts. A natural question for future work is whether simplicity also implies _robustness_--as is often the case [15].
Acknowledgements. The authors would like to thank Ruth Heller, Shafi Goldwasser, Jonathan Shafer, and Ohad Einav for their insightful remarks and valuable suggestions. Nir Rosenfeld is supported by the Israel Science Foundation grant no. 278/22. Eden Saig is supported by the Israel Council for Higher Education PBC scholarship for Ph.D. students in data science. Funded by the European Union (ERC, ALGORCNTRACT, 101077862, PI: Inbal Talgam-Cohen).
|
2302.03586 | Adaptive Aggregation for Safety-Critical Control | Safety has been recognized as the central obstacle to preventing the use of
reinforcement learning (RL) for real-world applications. Different methods have
been developed to deal with safety concerns in RL. However, learning reliable
RL-based solutions usually require a large number of interactions with the
environment. Likewise, how to improve the learning efficiency, specifically,
how to utilize transfer learning for safe reinforcement learning, has not been
well studied. In this work, we propose an adaptive aggregation framework for
safety-critical control. Our method comprises two key techniques: 1) we learn
to transfer the safety knowledge by aggregating the multiple source tasks and a
target task through the attention network; 2) we separate the goal of improving
task performance and reducing constraint violations by utilizing a safeguard.
Experiment results demonstrate that our algorithm can achieve fewer safety
violations while showing better data efficiency compared with several
baselines. | Huiliang Zhang, Di Wu, Benoit Boulet | 2023-02-07T16:53:33Z | http://arxiv.org/abs/2302.03586v1 | # Adaptive Aggregation for Safety-Critical Control
###### Abstract
Safety has been recognized as the central obstacle preventing the use of reinforcement learning (RL) for real-world applications. Different methods have been developed to deal with safety concerns in RL. However, learning reliable RL-based solutions usually requires a large number of interactions with the environment. Likewise, how to improve the learning efficiency, specifically, how to utilize transfer learning for safe reinforcement learning, has not been well studied. In this work, we propose an adaptive aggregation framework for safety-critical control. Our method comprises two key techniques: 1) we learn to transfer the safety knowledge by aggregating the multiple source tasks and a target task through the attention network; 2) we separate the goal of improving task performance and reducing constraint violations by utilizing a safeguard. Experiment results demonstrate that our algorithm can achieve fewer safety violations while showing better data efficiency compared with several baselines.
## 1 Introduction
Reinforcement learning (RL) is a key technique to build autonomous agents which can learn and adapt to the changes of environments. Recent advances in RL have led to rapid progress in domains such as Atari Mnih et al. (2015); Go Silver et al. (2017); manipulation Nagabandi et al. (2020); Sun et al. (2022); locomotion tasks Haarnoja et al. (2018); Li et al. (2021); and business Zhang et al. (2021); Ma et al. (2021). However, deploying RL algorithms to real-world applications faces a hurdle with safety concerns. When venturing into new regions of the state space during unconstrained exploration, the agent may cause unacceptable failures, such as unfavourable impacts to people, property and the agent itself Garcia and Fernandez (2015); Chen et al. (2021); Thomas et al. (2021); Saboo et al. (2021). Moreover, the safety constraints may also limit the agents' ability to explore the entire state and action space to maximize the expected total reward Thananjeyan et al. (2021). Thus, achieving adaptability and maintaining good performance with constraint satisfaction is of importance for the widespread use of RL in the real world.
Most RL research endows agents with the ability to satisfy safety constraints along the lines of control theory-based methods or the constrained policy optimization formulation Chow et al. (2018); Thananjeyan et al. (2021). Remarkably, those safe RL algorithms succeeded with surprisingly little access to prior knowledge about the experienced tasks. Though the ability to learn with minimal prior knowledge is desirable, it can lead to computationally intensive learning and limited exploration. Moreover, the control theory-based method learns a conservative safe region with an accurate dynamic model, which can guarantee no safety violations Chow et al. (2018); Lutjens et al. (2019); Cheng et al. (2019); Brown et al. (2020); Thomas et al. (2021); Paternain et al. (2022). The constrained policy optimization uses an intervention mechanism to evaluate safety, or adds a penalty to the reward function to suppress unsafe policies Alshiekh et al. (2018); Thananjeyan et al. (2021); Cowen-Rivers et al. (2022). These two methods maintain good safety performance after the policy has converged but may fail to work well in new safety-critical settings Zhang et al. (2020); Laroche et al. (2019); Chen et al. (2021); harsh satija et al. (2021).
Transferring safety knowledge gained from tasks solved earlier to solve a new target safety-critical task can help, either in terms of speeding up the learning process or in terms of achieving a better performance Laroche et al. (2019); Zhang et al. (2020); Turchetta et al. (2020); Chen et al. (2021). The existing transfer RL approaches such as Rajendran et al. (2015) (A2T) and Barekatain et al. (2016);
[2019] (MULTIPOLAR) omit the safety requirements, which could lead to costly failure. A2T also fails to deal with partially useful policies, and MULTIPOLAR cannot attend to the changes of input states directly. Plus, safe policy reuse methods assume an optimal safe policy and focus on selecting a suitable source policy for exploration. They are also unable to handle cases when the source policy is only partially useful for learning the target task Zhang et al. [2020], Turchetta et al. [2020] (CARL and CISR). Although some transfer approaches have utilized multiple source policies during the target task learning, they have strong assumptions on the guaranteed relatedness between source and target tasks Turchetta et al. [2020], Chen et al. [2021]. Moreover, we cannot rely on a history of their individual experiences, as they may be unavailable due to a lack of communication between factories or prohibitively large dataset sizes Chen et al. [2021], Laroche et al. [2019], Turchetta et al. [2020], Thananjeyan et al. [2021], and cannot assure the absence of safety violations during training and deployment.
To tackle the aforementioned challenges, we propose to use transfer learning to improve the learning efficiency in safe RL. Specifically, inspired by Rajendran et al. [2015], Barekatain et al. [2019], we propose an adaptive aggregation architecture in safety-critical (AASC) control which reuses knowledge from multiple source solutions. Our key idea is twofold: 1) We transfer safety knowledge by aggregating multiple source tasks and a target task through an attention network and an auxiliary network. By learning aggregation parameters to maximize the expected return at a target environment instance, we can adapt quickly, thus improving the learning efficiency in unseen target tasks without knowing source environmental dynamics or source policy performances. Plus, the agent can decide which source solution to attend to or suppress, or choose the solution from the auxiliary network when the source tasks are irrelevant to solving the target task. 2) We separate the goals of improving task performance and constraint satisfaction by utilizing a safeguard to improve safety performance. This separation allows the learned task policy to purely focus on collecting the most informative experiences and can maintain safety during training and deployment. We also empirically validate AASC by comparing its performance with several standard transfer RL and safe RL algorithms in simulated control tasks. Our experimental results demonstrate the significant improvement in learning efficiency and safety performance with the proposed approach. We also conducted a detailed analysis of factors that affect the performance of AASC, and demonstrate that AASC is an effective and generic framework for safe RL.
## 2 Preliminaries
### Safe reinforcement learning
We consider safe RL in a \(\gamma\)-discounted infinite horizon MDP \(M\). An MDP can be expressed as a tuple \(M=\langle S,A,P,R,\gamma\rangle\), where \(S\), \(A\), \(P\) and \(R\) are the sets of states \(s_{t}\), actions \(a_{t}\), state transition probabilities \(p\) and rewards \(r\); \(\gamma\in[0,1]\) is a discount factor accounting for future rewards. A policy \(\pi\) induces a trajectory distribution. For a state distribution \(d\in\triangle(S)\) and function \(f:S\rightarrow\mathbb{R}\), we define \(f(d)=\mathbb{E}_{s\sim d}[f(s)]\). The initial state distribution of \(\pi\) is \(d_{0}^{\pi}\) and the average state distribution induced by \(\pi\) is \(d^{\pi}=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}d_{t}^{\pi}(s)\). The state-action value function of \(\pi\) is defined as \(Q^{\pi}(s,a)=\mathbb{E}_{\xi\sim\rho^{\pi}|s_{0}=s,a_{0}=a}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})]\) and its state value function as \(V^{\pi}(s)=Q^{\pi}(s,\pi)\). The optimal stationary policy of \(M\) is \(\pi^{*}\) and its respective value functions are \(Q^{*}\) and \(V^{*}\).
The definition of safety is that the probability of the agent entering an unsafe subset of states \(S_{unsafe}\subset S\) is low, which is consistent with Thomas et al. [2021], Thananjeyan et al. [2021], Wagener et al. [2021]. We assume that we know the unsafe subset \(S_{unsafe}\) and the safe subset \(S_{safe}=S\setminus S_{unsafe}\). However, we make no assumption on the knowledge of reward \(r\) and dynamics \(P\), except that the reward \(r\) is zero on \(S_{unsafe}\) and that \(S_{unsafe}\) is absorbing: once the agent enters \(S_{unsafe}\) in a rollout, it cannot travel back to \(S_{safe}\) and stays in \(S_{unsafe}\) for the rest of the rollout. Our goal is to find a policy \(\pi\) that is safe and has a high return in \(M\), and to do so via a safe data collection process, as follows:
\[\pi^{*}=\operatorname*{argmax}_{\pi}\{V^{\pi}(d_{0}):(1-\gamma)\sum_{h=0}^{ \infty}\gamma^{h}\operatorname{Prob}(\xi_{h}\subset S_{safe}|\pi)\geq 1-\delta\} \tag{1}\]
where \(\xi_{h}=(s_{0},a_{0},s_{1},a_{1},\ldots,s_{h-1},a_{h-1})\) denotes an \(h\)-step trajectory segment and \(\delta\in[0,1]\) is the tolerated failure probability. \(\operatorname{Prob}(\xi_{h}\subset S_{safe}|\pi)\) denotes the probability of \(\xi_{h}\) being safe (i.e., not entering the absorbing state from time step \(0\) to \(h-1\)) under the trajectory distribution \(\rho^{\pi}\) of \(\pi\) on \(M\). An initial state drawn from \(d_{0}\) is assumed to be safe with probability 1. The constraint shown
in equation (1) is known as a chance constraint. The definition here corresponds to an exponentially weighted average (based on the discount factor \(\gamma\)) of trajectory safety probabilities of different horizons. The problem in equation (1) can then be formulated as a constrained MDP (CMDP) problem with an extra constraint cost function \(C:S\times A\rightarrow\{0,1\}\) with associated discount factor \(\gamma_{risk}\), which indicates whether a state-action pair is constraint violating. This yields the following new CMDP: \(\widetilde{M}=\langle S,A,P,R,\gamma,C,\gamma_{risk}\rangle\), which consists of \(M\) and a cost-based MDP \(\overline{M}=\langle S,A,P,C,\gamma_{risk}\rangle\). The chance-constrained policy optimization in (1) corresponds to the CMDP formulation from Efroni et al. (2020) and can be written as:
\[\pi^{*}=\operatorname*{argmax}_{\pi}\{V^{\pi}(d_{0}):\overline{V}^{\pi}(d_{0}) \leq\delta\} \tag{2}\]
where \(\overline{V}^{\pi}(s)=\overline{Q}^{\pi}(s,\pi)\) and \(\overline{Q}^{\pi}(s,a)=\mathbb{E}_{\xi\sim\rho^{*}|s_{0}=s,a_{0}=a}[\sum_{t= 0}^{\infty}\gamma_{risk}^{t}c(s_{t},a_{t})]\). Equation (2) aims to find a policy that has a high cumulative reward \(V^{\pi}(d_{0})\) with cumulative cost \(\overline{V}^{\pi}(d_{0})\) below the allowed failure probability \(\delta\). We assume that episodes terminate on violations, equivalent to transitioning to a constraint-satisfying absorbing state with zero reward.
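The constraint value \(\overline{V}^{\pi}(d_{0})\) in equation (2) can be estimated by Monte Carlo over rollouts; below is a minimal sketch, assuming binary per-step costs \(c_{t}\in\{0,1\}\) and episodes that terminate on the first violation, as stated above.

```python
import numpy as np

def discounted_failure_prob(cost_trajectories, gamma_risk):
    """Monte-Carlo estimate of bar{V}^pi(d_0) = E[sum_t gamma_risk^t c_t].

    cost_trajectories: iterable of per-episode cost sequences; with
        termination on violation, each sequence holds at most one 1.
    """
    estimates = []
    for costs in cost_trajectories:
        costs = np.asarray(costs, dtype=float)
        discounts = gamma_risk ** np.arange(len(costs))
        estimates.append(float(np.dot(discounts, costs)))
    return float(np.mean(estimates))
```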
### Transfer reinforcement learning
Transfer RL aims at improving the learning efficiency of an agent by exploiting knowledge from other agents trained on source tasks Barekatain et al. (2019). Source tasks refer to tasks that we have already learnt to perform, and the target task refers to the task that we are interested in learning now. Here the source tasks should be in the same domain as the target task, having the same state and action spaces. Let there be \(I\) source tasks which correspond to \(I\) instances of the same environment which differ only in their state transition dynamics. Namely, we model each environment instance by an indexed MDP: \(M_{i}=\langle S,A,P_{i},R,\gamma\rangle\) where no two state transition distributions \(P_{i},P_{j}\), \(i\neq j\), are identical. We assume that each \(P_{i}\) is unknown when training a target policy, i.e., agents cannot access the exact form of \(P_{i}\) nor a collection of states sampled from \(P_{i}\). For each of the \(I\) environment instances, we are given a source policy solution \(\pi_{src}^{i}:S\rightarrow\triangle(A)\) that only maps states to actions. These solutions could be, for example, policies or state-action values.
## 3 Adaptive aggregation in safety-critical (AASC) control
In this section, we elaborate on the framework of AASC in safe RL settings, including the adaptive aggregation of safety policies and the safeguard evaluation parts. In safety-critical settings, humans tend to query knowledge learned from previously solved similar problems to get a potential solution. They then evaluate the safety of the potential solution and decide whether to execute it in real life. A solution satisfying the safety constraints can be found quickly in this manner. Thus,
Figure 1: (a) Overview of AASC architecture. The adaptive aggregation module learns the exploratory action \(a_{expl}\) and the safeguard module ensures the safety during the learning process and output task safe action \(a_{task}\). (b) Adaptive aggregation of safe policies. We formulate the aggregated policy \(\pi_{expl}\) with the sum of 1) the adaptive aggregation of actions from source policies \(\pi_{src}\) and 2) the auxiliary network \(\pi_{aux}\) for predicting residuals \(a_{aux}\). The dotted arrows represent the path of back propagation.
the AASC consists of two parts: **the adaptive aggregation of safety policies** and **the safeguard evaluation**, as shown in figure 1 (a). The former helps us to efficiently learn the safe policy of a target agent given a collection of source policies, which is inspired by Rajendran et al. (2015); Barekatain et al. (2019); the latter ensures the safety constraints and maximizes the final performance in the safety-critical settings, inspired by Bharadhwaj et al. (2020); Wagener et al. (2021).
The proposed method optimizes policies iteratively as outlined in Algorithm 1. As input, it takes an AASC algorithm \(\mathcal{F}\) with a safe policy aggregation and safeguard module and multi-source task solutions \(\pi_{src}\). The RL algorithm \(\mathcal{F}\) finds a nearly optimal policy for the MDP \(\widetilde{M}\) constructed by the safeguard together with \(M\), which is an approximate solution of equation (2). During training, the agent can interact with the unknown MDP \(M\) to collect data under a training budget, such as the maximum number of environment interactions or allowed unsafe trajectories the agent can generate. In every iteration, the proposed method first queries the safe policy aggregation of \(\mathcal{F}\) to obtain a potential exploratory action \(a_{expl}\) to execute in \(\widetilde{M}\). Then it uses a safeguard policy to modify the exploratory action into a task-safe action \(a_{task}\). The safeguard module is a shielded policy such that the agent runs a backup policy \(\mu:S\rightarrow\triangle(A)\) instead of \(\pi_{expl}\) when \(\pi_{expl}\) proposes unsafe actions. Running \(a_{task}\) in \(M\) is then safe with high probability. Next, it runs \(a_{task}\) in \(M\), collects data into \(\mathcal{D}_{task}\), and transforms it into data \(\mathcal{D}_{Safeguard}\). The transition stored in \(\mathcal{D}_{task}\) is \(<s_{t},a_{task},s_{t+1},r_{t}>\) and \(<s_{t},a_{expl},s_{t+1},r_{t},c_{t}>\) in \(\mathcal{D}_{Safeguard}\). It then feeds \(\mathcal{D}_{task}\) to \(\mathcal{F}\) for policy optimization and uses \(\mathcal{D}_{Safeguard}\) to refine the shield policy. The process above is repeated until the training budget is used up. When this happens, it terminates and returns the best policy \(\hat{\pi}^{*}\) that algorithm \(\mathcal{F}\) can produce.
### Adaptive aggregation of safe policies
The goal of this adaptive aggregation of safe policies is to train a new target agent's policy \(\pi_{task}\) in a sample-efficient fashion. The target agent interacts with the target environment instance, which is not identical to the sources due to their distinct dynamics. For each of the \(I\) source tasks, we are given the source policy \(\pi^{i}_{src}:S\to A\) that only maps states to actions. Each source policy \(\pi^{i}_{src}\) can be either parameterized (e.g., learned by interacting with its environment instance \(M_{i}\)) or non-parameterized (e.g., heuristically designed by humans). Either way, we assume that no prior knowledge about the source policies \(\pi^{i}_{src}\) is available to the target agent, such as their representations or original performances, except that they were acquired from a source environment instance with unknown dynamics. As shown in figure 1 (b), with the adaptive aggregation of safety policies, a target policy is formulated with the adaptive aggregation of actions from the set of source policies, and the auxiliary network mimicking the selected policies and predicting residuals around the aggregated actions. Let \(a^{1}_{src},a^{2}_{src},\ldots,a^{I}_{src}\) be the solutions of these source tasks \(1,\ldots,I\) respectively. \(a_{aux}\) is the solution of an auxiliary network that starts learning from scratch while acting on the target task. Let \(a_{expl}\) be the solution that we learn in the target task. The action space \(a^{i}_{t}\in\mathbb{R}^{D}\) is a \(D\)-dimensional real-valued vector representing \(D\) actions performed jointly in each timestep. For the collection of source policies, we derive the matrix of their actions:
\[A_{t}=[(a^{1}_{src})^{\mathrm{T}},\ldots,(a^{I}_{src})^{\mathrm{T}},(a_{aux })^{\mathrm{T}}]\in\mathbb{R}^{(I+1)\times D} \tag{3}\]
```
0: AASC RL algorithm \(\mathcal{F}\), \(\mathcal{D}_{task}\leftarrow\emptyset\), \(\mathcal{D}_{Safeguard}\leftarrow\emptyset\), multi-source tasks solutions \(\pi_{src}\).
0: Optimized safe policy \(\hat{\pi}^{*}\)
1:\(\mathcal{F}.\mathrm{Initialize}()\)
2:\(s\leftarrow\) env.reset()
3:while training budget available do
4:\(a_{expl}\leftarrow\mathcal{F}.\mathrm{SafePolicyAggregation}(s,\pi_{src})\)
5:\(a_{task}\leftarrow\mathcal{F}.\mathrm{Safeguard}(s,a_{expl})\)
6: Execute \(a_{task}\) and collect data to \(\mathcal{D}_{task}\) and \(\mathcal{D}_{Safeguard}\), \(s\leftarrow s^{\prime}\)
7:\(\hat{\pi}\leftarrow\)\(\mathcal{F}.\mathrm{OptimizePolicy}(\mathcal{D}_{task})\)
8:\(\mathcal{F}.\mathrm{UpdateSafeguardRule}(\mathcal{D}_{Safeguard})\)
9:endwhile
10:\(\hat{\pi}^{*}\leftarrow\mathcal{F}.\mathrm{GetOptimizePolicy}()\)
```
**Algorithm 1** Adaptive Aggregation in Safety-Critical (AASC) Control
The key idea of the aggregation module is to aggregate \(A_{t}\) adaptively in an RL loop, i.e., to maximize the expected return \(V^{\pi}(d_{0})\). An aggregation that contains only the source policies' actions would introduce a strong inductive bias into the training of a target policy. So we learn an auxiliary policy network \(\pi_{aux}:S\rightarrow\triangle(A)\) jointly with the attention network, to predict residuals around the aggregated source task actions. \(\pi_{aux}\) is used to improve the target policy training in two ways. 1) If the aggregated actions from \(\pi_{src}\) are already useful in the target environment instance, \(a_{aux}\) will correct them for a higher expected return. 2) Otherwise, \(\pi_{aux}\) learns the target task while leveraging the aggregated source actions as a prior to have a guided exploration process. Any network could be used for \(\pi_{aux}\) as long as it is parameterized and fully differentiable.
While the source task solutions \(a^{1}_{src},a^{2}_{src},\ldots,a^{I}_{src}\) remain fixed, the auxiliary network's solutions are learnt, and hence \(a_{aux}\) can change over time. An attention network learns the weights \(w_{aux},w^{1}_{src},w^{2}_{src},\ldots,w^{I}_{src}\) given the input state \(s_{t}\). The weights determine the attention each action gets, allowing the agent to selectively accept or reject the different actions depending on the input states. The aggregation policy is formulated as:
\[a_{expl}=W_{t}\odot A_{t} \tag{4}\]
where \(W_{t}=[w^{1}_{src},w^{2}_{src},\ldots,w^{I}_{src},w_{aux}]\in\mathbb{R}^{(I+1)\times D}\) is the weight matrix and \(\odot\) is element-wise multiplication. \(W_{t}\) is neither normalized nor regularized and can scale each action of each policy independently. In this way, we can flexibly emphasize informative source actions while suppressing irrelevant ones. If the \(i\)-th source task solution's action is useful at state \(s\), then the corresponding element of \(w^{i}_{src}\) is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to select informative actions. For parts of the state space of the target task where the source task solutions are irrelevant or even perform badly, the attention network learns to give high weight to the auxiliary network's solution (which can be learnt and improved), thus avoiding bad source actions. The adaptive safe policy aggregation is shown in algorithm 2.
```
0: State \(s\), multi-source tasks solutions \(\pi_{src}\)
0: Adaptive aggregation action \(a_{expl}\)
1:for\(i\in\{1,...,I\}\)do
2: Calculate \(a^{i}_{src}\sim\pi^{i}_{src}(\cdot|s)\)
3:endfor
4: Calculate \(a_{aux}\sim\pi_{aux}(\cdot|s)\)
5: Calculate \(w_{aux},w^{1}_{src},w^{2}_{src},\ldots,w^{I}_{src}\) with the attention network
6: Calculate \(a_{expl}\) according to equation (4).
```
**Algorithm 2** Adaptive Aggregation of Safety Policies
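A sketch of equation (4) in PyTorch. The attention architecture here is illustrative rather than the authors'; moreover, since the reduction of the \((I+1)\) weighted rows to a single action is left implicit above, this sketch assumes a sum over policies, in the style of MULTIPOLAR:

```
# Sketch of the aggregation in equation (4); architecture and row reduction
# are assumptions, not the authors' specification.
import torch
import torch.nn as nn

class AttentionNet(nn.Module):
    def __init__(self, state_dim, n_policies, action_dim):
        super().__init__()
        # One unnormalized, unregularized weight per (policy, action dim).
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_policies * action_dim))
        self.shape = (n_policies, action_dim)

    def forward(self, state):
        return self.net(state).view(self.shape)   # W_t in R^{(I+1) x D}

def aggregate(state, A_t, attention):
    """a_expl from W_t elementwise-times A_t; sum over rows is assumed."""
    W_t = attention(state)
    return (W_t * A_t).sum(dim=0)                 # a_expl in R^D
```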
Depending on the feedback obtained from the environment upon following \(a_{expl}\), the attention network's parameters are updated to improve performance. As mentioned earlier, the source task solutions \(a^{1}_{src},a^{2}_{src},\ldots,a^{I}_{src}\) remain fixed; updating the source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in a weaker transfer, which we observed empirically. Even though the agent follows \(a_{expl}\), we update the parameters of the auxiliary network that produces \(a_{aux}\) as if the action taken by the agent had been based only on \(a_{aux}\). Due to this special way of updating, \(a_{aux}\) also benefits from the valuable experience gained by using \(a_{expl}\), which draws on the solutions of the source tasks as well. This also means that, if there is a source task whose solution \(a^{i}_{src}\) is useful for the target task in some parts of its state space, then \(a_{aux}\) tries to replicate \(a^{i}_{src}\) in those parts of the state space. In practice, the source task solutions, though useful, may need to be modified to suit the target task perfectly. The auxiliary network takes care of the modifications required to adapt the useful source task solutions to the target task, and this special way of training the auxiliary network helps the architecture achieve this faster.
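A sketch of this update under the assumption of a simple policy-gradient loss (the paper itself builds on PPO): the log-probability of the executed aggregated action is taken under \(\pi_{aux}\) alone, so only the auxiliary network receives gradients; names are illustrative:

```
# Sketch: credit pi_aux as if a_expl had been sampled from it alone.
# `pi_aux` is assumed to map a batch of states to a torch.distributions
# object; `advantages` are precomputed; source policies are frozen.
def aux_policy_loss(pi_aux, states, executed_actions, advantages):
    dist = pi_aux(states)
    logp = dist.log_prob(executed_actions).sum(-1)
    return -(logp * advantages.detach()).mean()
```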
### Safeguard evaluation
The goal of safeguard evaluation is to evaluate the safety of the potential actions \(a_{expl}\) in the given states, and to guide the agent back to safety when constraint violations are likely. Most prior work in safe RL integrates constraint satisfaction into the task objective to jointly optimize the two and detect those regions. However, the inherent conflict between task-directed exploration and the constraints can lead to suboptimalities in policy optimization. In the proposed safeguard, we consider an RL formulation subject to constraints on the probability of unsafe future behaviour and design an algorithm that can balance the often conflicting objectives of task-directed exploration and safety, inspired by Thananjeyan et al. (2021); Wagener et al. (2021). The agent evaluates the safety of \(a_{expl}\) in the safeguard module, and instead executes approximate resets to nearby safe states when a constraint violation is probable.
To quantify the risk of entering an unsafe state, the safeguard rule is specified by a tuple \(\mathcal{G}=\langle\overline{Q},\mu,\eta\rangle\), where \(\overline{Q}:S\times A\rightarrow[0,1]\) is a state-action risk value estimator, \(\eta\) is a threshold and \(\mu\) is a backup safeguard action from the \(\pi_{backup}\) policy Thananjeyan et al. (2021). \(\pi_{backup}\) is meant to safeguard the exploration. As shown in algorithm (3), when sampling \(a_{task}\) from the safeguard policy, the agent first queries whether \((s,a_{expl})\in\mathcal{T}^{\pi}_{unsafe}\); if so, it samples \(a_{task}\) according to \(\pi_{backup}\), and otherwise it executes \(a_{task}=a_{expl}\). However, activating the backup policy too often is undesirable, as the agent then only collects data from \(\pi_{backup}\) and there is little exploration. Hence we define the unsafe set \(\mathcal{T}^{\pi}_{\text{unsafe}}\) in the safeguard as:
\[\mathcal{T}^{\pi}_{unsafe} =\{(s,a)\in S_{safe}\times A:\overline{A}(s,a)\geq\eta\} \tag{5}\] \[\mathcal{T}^{\pi}_{safe} =S\times A\setminus\mathcal{T}^{\pi}_{unsafe}\]
and the advantage cost function is:
\[\overline{A}(s,a)=\overline{Q}(s,a)-\overline{Q}(s,\mu) \tag{6}\]
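Operationally, the rule given by equations (5) and (6) amounts to the following; a minimal sketch (not the authors' implementation), assuming callables `q_bar` for \(\overline{Q}\) and `backup` for \(\mu\):

```
# Sketch of the safeguard rule G = <Qbar, mu, eta>: override the exploratory
# action whenever its risk advantage over the backup action reaches eta.
def safeguard(state, a_expl, q_bar, backup, eta):
    a_backup = backup(state)                                   # mu(s)
    advantage = q_bar(state, a_expl) - q_bar(state, a_backup)  # Abar(s, a)
    return a_backup if advantage >= eta else a_expl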
Equations (5) and (6) encode the following assumption: for every \((s,a)\in\mathcal{T}^{\pi}_{unsafe}\) that can be reached from \(d_{0}\) with some policy, there exists some \(a^{\prime}\in A\) such that \(\overline{A}(s,a^{\prime})=\overline{Q}(s,a^{\prime})-\overline{Q}(s,\mu)<\eta\). In other words, for every state-action pair reachable from \(d_{0}\) that will be overridden, there is an alternative action in the agent's action space \(A\) that keeps the agent's policy safe. By running the safeguard evaluation constructed from the advantage function \(\overline{A}\), our method controls safety relative to the backup policy \(\mu\) with respect to \(d_{0}\). If the relative safety at each time step (i.e., the advantage) is close to zero, then the relative safety overall is also close to zero (i.e., \(\overline{V}^{\pi}(d_{0})\leq\delta\)). Note that the safeguard, while satisfying \(\overline{V}^{\pi}(d_{0})\leq\delta\), can in general visit (with low probability) states where \(\overline{V}^{\mu}(s)>0\) (e.g., \(=1\)). In these places, where \(\mu\) is useless for safety, the safeguard rule naturally deactivates and lets the learner explore, which avoids the overly conservative safe regions of Thananjeyan et al. (2021); Bharadhwaj et al. (2020). When the agent takes some action with \((s,a)\in\mathcal{T}^{\pi}_{unsafe}\) in \(\widetilde{M}\), it goes to an absorbing state and receives a negative reward, as shown in equation (7) and figure (2). Thus, the MDP \(\widetilde{M}\) gives larger penalties for taking backed-up state-actions than for going into \(S_{unsafe}\). This design ensures that any nearly-optimal policy of \(\widetilde{M}\) will (when running in \(M\)) have a high reward and a low probability of visiting unsafe state-actions.
Figure 2: Safeguard Evaluation. (a) The agent in AASC starts in the safe states and follows the policy projected from the source and auxiliary policies. Without the safeguard evaluation, the agent may execute a disadvantageous action and incur safety violations (red path). (b) Under the protection of the safeguard, the backup policy is activated and guides the agent back to safety (green path).
\[\widetilde{r}(s,a)=\begin{cases}b,&(s,a)\in\mathcal{T}_{unsafe}^{\pi}\\ 0,&s\text{ is an absorbing state}\\ r,&(otherwise).\end{cases} \tag{7}\]
where \(b\leq 0\) is a non-positive constant.
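For concreteness, a sketch of the surrogate reward of equation (7); the flags for membership in \(\mathcal{T}^{\pi}_{unsafe}\) and for the absorbing state, and the value of \(b\), are illustrative assumptions:

```
# Sketch of the surrogate reward (7); `triggered` indicates
# (s, a) in T^pi_unsafe and `absorbing` the absorbing state.
def shaped_reward(r, triggered, absorbing, b=-1.0):
    if triggered:
        return b           # penalty for needing the safeguard
    if absorbing:
        return 0.0
    return r               # ordinary task reward otherwise
```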
## 4 Experimental evaluation
### Experiments setup
To showcase the effectiveness of the proposed method, we test its performance on two different simulated environments, i.e., Circle and Half-Cheetah (figure 3). To ensure fair comparisons and reproducibility, we followed the guidelines introduced by Francois-Lavet et al. (2018) for conducting and evaluating all of our experiments.
**Baseline methods**: We implement AASC using PPO (Schulman et al. (2017)) as the base RL model. To complete the experiments in a reasonable amount of time, we set the number of source policies to \(I=4\) unless mentioned otherwise. The source policies are randomly sampled from the source policy candidates. See appendix A for all the implementation details. We compare AASC to a standard multi-layer perceptron (**MLP**) trained from scratch, which is typically used in the RL literature Francois-Lavet et al. (2018). As another baseline, we use **MULTIPOLAR**Barekatain et al. (2019), which selects source policies through an adjustable matrix. We also consider the multi-source policy reuse framework **CARL**Zhang et al. (2020). In all the experiments, the source policies are the same for AASC, MULTIPOLAR and CARL to ensure an unbiased evaluation. The CMDP-based approach **CPO**Achiam et al. (2017), which enforces constraints via the optimization objective, is also considered, as is **Recovery RL**Thananjeyan et al. (2021), which uses ideas from offline RL to pretrain the recovery policy and designs a recovery rule directly based on Q-functions.
**Environment: Circle**: The Circle environment (figure 3 (a)) is the point environment from Achiam et al. (2017). The agent is rewarded for running in a wide circle at high speed but is constrained to stay within a safe region smaller than the radius of the target circle. The agent has mass \(m\) and can achieve maximum speed \(v_{max}\). The safe set corresponds to staying within desired positional bounds \(x_{max}\) and \(y_{max}\), shown as the green space in the left of figure 3: \(S_{safe}=\{s\in S:|x|\leq x_{max}\text{ and }|y|\leq y_{max}\}\). The backup policy \(\mu\) applies a decelerating force (with component-wise magnitude up to \(a_{max}\)) until the agent has zero velocity. **Half-Cheetah**: The Half-Cheetah environment (figure 3 (b)) comes from OpenAI Gym and has a reward equal to the agent's forward velocity. One of the agent's links (green circle in the right of figure 3) is constrained to lie in a given height range, outside of which the robot is deemed to have fallen over. In other words, if \(h\) is the height of the link of interest, \(h_{min}\) is the minimum height, and \(h_{max}\) is the maximum height, the safe set is defined as \(S_{safe}=\{s\in S:h_{min}\leq h\leq h_{max}\}\).
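A minimal sketch of the two safe-set membership tests just described; all bounds are environment parameters, and the numeric values used in the experiments are not reproduced here:

```
# Sketch of the safe-set tests for Circle and Half-Cheetah.
def circle_safe(x, y, x_max, y_max):
    return abs(x) <= x_max and abs(y) <= y_max   # positional bounds

def half_cheetah_safe(h, h_min, h_max):
    return h_min <= h <= h_max                   # link height range
```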
**Evaluation metric**: Following the guidelines of Francois-Lavet et al. (2018), to measure the sample efficiency of training policies, i.e., how quickly the training progresses, we use the average episodic reward over training samples. Furthermore, we also report the cumulative constraint violations to show the safety performance of the proposed method. We tune all algorithms to maximize the total return in order to observe the safety performance. Each simulation experiment is replicated across 5 random seeds and we report the mean and standard error.
Figure 3: (a) The Circle environment. The agent can run in the green space. The green circle is the desired path, and the red lines are the constraints on the horizontal position. The vertical constraints are outside of the visualized environment. (b) The Half-cheetah environment. The green circle is centred on the link of interest, and the red dashed lines denote the allowed height range of the link.
### Evaluation results
We study the learning and safety performance of AASC and prior methods in all simulation domains in figure 4. The results suggest that AASC significantly improves learning efficiency and incurs fewer safety violations than prior algorithms across the two environments (Circle and Half-Cheetah), which is consistent with the motivation of our algorithm. The left column of figure 4 clearly shows that, on average, AASC outperformed the baseline policies in terms of sample efficiency and sometimes the final episodic reward. For example, AASC converges faster in the early training stage than MLP and CPO in both environments, which indicates the effectiveness of leveraging multiple source policies in the adaptive aggregation module. Compared with transfer RL methods such as CARL and MULTIPOLAR, AASC is always on par with or better than them in sample efficiency, because the aggregation module in AASC avoids estimating the model error, takes the changing environment states as input, and can flexibly aggregate each action of each source policy.
The right column of figure 4 illustrates that AASC prevents many safety violations. The safeguard in AASC is an unconstrained approach and allows for reliable convergence, as opposed to baselines that rely on elaborate constrained approaches like CPO. AASC also incurs orders of magnitude fewer safety violations than Recovery RL and CARL, since the advantage-based safety evaluation is used and no model estimation error is accumulated, thus avoiding an overly conservative safe region during exploration. More ablation studies can be found in appendix B, where we conduct a detailed analysis of the factors that affect the performance of AASC and demonstrate that AASC is an effective and generic framework for safe RL.
## 5 Related work
### Safe reinforcement learning
Much work in safe RL endows RL agents with the ability to satisfy constraints, along two lines: control theory-based approaches and constrained policy optimization formulations. A recent line of work on safe RL utilizes ideas from control theory and model-based approaches Cheng et al. (2019); Zeng et al. (2021); Luo and Ma (2021); Chow et al. (2018); Thomas et al. (2021); Cowen-Rivers et al. (2022). These works propose sufficient conditions involving certain Lyapunov functions or control barrier functions that can certify the safety of a subset of states or policies.
Figure 4: Results of MLP, MULTIPOLAR, CARL, CPO, Recovery RL and AASC over all the experiments for each environment. Overall AASC dramatically reduces the amount of training time and safety constraint violations while still having large returns at deployment. Plots in a row share the same legend. All error bars are \(\pm 1\) standard deviation over 5 random seeds. Any curve not plotted in the third column corresponds to zero safety violations.
Chow et al. [2018] constructs sets of stabilizing actions using a Lyapunov function and projects the action onto the set. Cheng et al. [2019] uses a barrier function to safeguard exploration and a reinforcement learning algorithm to learn a policy. Luo and Ma [2021] then learns a barrier function to substitute the handcrafted one in Cheng et al. [2019], so that the agent not only finds a high-return policy but also avoids undesirable states as much as possible, even during training. However, the works based on Lyapunov functions require discretizing the state space and thus only work in low-dimensional spaces, and the barrier-function-based methods suffer from sample inefficiency and bias in the learned dynamics model Thomas et al. [2021], Cowen-Rivers et al. [2022].
Another line of work designs actor-critic algorithms under the constrained policy optimization formulation. This line can be divided into two groups: jointly optimizing task performance and safety, and restricting exploration with an auxiliary policy. Geibel and Wysotzki [2005] uses a Lagrangian method to solve the CMDP, and the Lagrangian multiplier is controlled adaptively in Tessler et al. [2018]. Paternain et al. [2022] uses first-order primal-dual optimization to solve a stochastic nonconvex saddle-point problem in the CMDP, but such approaches have no guarantees of policy safety during learning. Dalal et al. [2018] adds an additional layer that corrects the output of the policy locally. However, these approaches trade off safety against optimal performance. Bharadhwaj et al. [2020] learns a conservative safety critic that underestimates how safe the policy is, and uses it for safe exploration and policy optimization. Thananjeyan et al. [2021] makes use of existing offline data and co-trains a conservative recovery policy based on a cost-based value function. In Wachi et al. [2021], the authors utilize sensor data to make feature states available, and use linear function approximation to ensure safety in discrete control environments.
### Transfer reinforcement learning
Transfer RL aims at improving the learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks. Song et al. [2016] transfers the action-value functions of the source tasks to the target task according to a task similarity metric used to compute the task distance. However, they assume a well-estimated model, which is not always available in practice. Later, Laroche and Barlier [2017] reuses the experience instances of a source task to estimate the reward function of the target task. Fernandez and Veloso [2006] uses policy reuse as a probabilistic bias when learning new, similar tasks. Rajendran et al. [2015] proposes the A2T (Attend, Adapt and Transfer) architecture to select and transfer from multiple source tasks by incorporating an attention network that learns the weights of several source policies for combination. Barekatain et al. [2019] uses an adjustable matrix, named MULTIPOLAR, to flexibly utilize the source policies, but omits the influence of the target environment on the selection of source solutions. Currently, there are only a few works considering leveraging learned policies in safety-critical control Zhang et al. [2020], Chen et al. [2021]. Zhang et al. [2020] employs a model-based RL method named CARL to train a probabilistic model that captures uncertainty about transition dynamics and catastrophic states across varied source environments. Chen et al. [2021] proposes a context-aware safe reinforcement learning method as a meta-learning framework to realize safe adaptation in non-stationary environments, which relies on a history of individual experiences.
## 6 Conclusion
In this work, we propose an adaptive aggregation framework for safety-critical control, which aims to improve sample efficiency and safety performance. We first learn to aggregate the safe actions provided by the source policies adaptively so as to maximize the target task performance. Meanwhile, we learn an auxiliary network that predicts residuals around the aggregated safe actions, which ensures the target policy's expressiveness even when some of the source policies perform poorly. Moreover, we separate the constraints from exploration and use an advantage-based safeguard evaluation to ensure safety during the learning process. Separating the task and safeguard policies makes it easier to balance task performance and safety, and allows using off-the-shelf RL algorithms for both. Empirically, our algorithm compares favourably to state-of-the-art safe RL methods in terms of the trade-off between learning and safety performance, and achieves higher sample efficiency. |
2307.06220 | ($\odot$, $\vee$)-derivations on MV-algebras | Let $A$ be an MV-algebra. An $(\odot,\vee)$-derivation on $A$ is a map $d: A\to A$ satisfying: $d(x \odot y) = (d(x) \odot y) \vee(x \odot d(y))$ for all $x, y \in A$. This paper initiates the study of $(\odot,\vee)$-derivations on MV-algebras. Several families of $(\odot,\vee)$-derivations on an MV-algebra are explicitly constructed to give realizations of the underlying lattice of an MV-algebra as lattices of $(\odot,\vee)$-derivations. Furthermore, $(\odot,\vee)$-derivations on a finite MV-chain are enumerated and the underlying lattice is described. | Xueting Zhao, Aiping Gan, Yichuan Yang | 2023-07-12T15:08:30Z | http://arxiv.org/abs/2307.06220v1 | # \((\odot,\vee)\)-derivations on MV-algebras
###### Abstract.
Let \(A\) be an MV-algebra. An \((\odot,\vee)\)-derivation on \(A\) is a map \(d:A\to A\) satisfying: \(d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))\) for all \(x,y\in A\). This paper initiates the study of \((\odot,\vee)\)-derivations on MV-algebras. Several families of \((\odot,\vee)\)-derivations on an MV-algebra are explicitly constructed to give realizations of the underlying lattice of an MV-algebra as lattices of \((\odot,\vee)\)-derivations. Furthermore, \((\odot,\vee)\)-derivations on a finite MV-chain are enumerated and the underlying lattice is described.
**Key words:** MV-algebra, derivation, direct product, complete lattice, Boolean center, ideal, fixed point set
**MSC(2020):** 03G20, 06D35, 06B10, 08B26
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 \((\odot,\vee)\)-derivations on MV-algebras
* 3.1 Basic properties of \((\odot,\vee)\)-derivations on MV-algebras
* 3.2 \((\odot,\vee)\)-derivations on MV-chains
* 3.3 Isotone \((\odot,\vee)\)-derivations on MV-algebras
* 4 Direct product of \((\odot,\vee)\)-derivations
* 5 Lattice structure of \((\odot,\vee)\)-derivations on MV-algebras
* 6 Discussions
* Declaration
* Appendix I. Calculation program by Python in Example 4.1 (2)
* Appendix II. Calculation program by Python in Example 5.1 (2)
## 1. Introduction
The notion of derivation from analysis has been defined for various algebraic structures by extracting the Leibniz rule
\[\frac{d}{dx}(fg)=(\frac{d}{dx}(f))g+f(\frac{d}{dx}(g)).\]
Derivations play an important role in describing the characteristics of prime rings [24], and the multiplicative or additive commutativity of near-rings [3], etc. A derivation on a prime ring \((R,+,\cdot)\) is a map \(d:R\to R\) satisfying, for any \(x,y\in R\):
\[(1)\;d(x+y)=d(x)+d(y),\quad(2)\;d(x\cdot y)=d(x)\cdot y+x\cdot d(y).\]
A derivation on a lattice \((L,\vee,\wedge)\) was defined by Szasz [25] and deeply investigated in [13]; it is a map \(d:L\to L\) satisfying, for all \(x,y\in L\):
\[(i)\;d(x\lor y)=d(x)\lor d(y),\quad(ii)\;d(x\wedge y)=(d(x)\wedge y)\vee(x\wedge d (y)).\]
The notion of derivations satisfying condition \((ii)\) only was investigated by Xin and his coauthors [32, 33] with motivation from information science. In recent years derivations have been defined and studied for BCI-algebras [18], BCC-algebras [1, 26], BE-algebras [19], and basic algebras [21]. Furthermore, derivations on operator algebras were investigated by Bresar et al. [5, 6, 12], which promoted mathematical quantum mechanics and quantum field theory.
An algebraic structure with a derivation is broadly called a differential algebra [20]. In fact, differential algebra has found important applications in arithmetic geometry, logic and computational algebra, especially in the profound work of Wu on the mechanical proof of geometric theorems [30, 31]. There are many instances of differential algebras, such as for fields [27], commutative algebras [28], noncommutative algebras [15], lattices [14], and MV-algebras [16].
The concept of derivations on MV-algebras was introduced by Alshehri [2]: given an MV-algebra \((M,\oplus,^{*},0)\), a derivation on \(M\) is an operator (i.e., a map) \(d:M\to M\) such that \(d(x\odot y)=(d(x)\odot y)\oplus(x\odot d(y))\) for all \(x,y\in M\), where \(x\odot y=(x^{*}\oplus y^{*})^{*}\). Since then, various kinds of derivations on MV-algebras have been deeply investigated. Yazarli [34] introduced the notions of symmetric bi-derivations and generalized derivations on MV-algebras. Then Wang, Davvaz and He [29] studied additive derivations and their adjoint derivations to give a representation of MV-algebras. Recently, \(\tau\)-additive derivations on MV-algebras were studied by Lu and Yang [23]. Following these developments, we define the notion of \((\odot,\vee)\)-derivations on \(A\) satisfying
\[d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))\]
for any \(x,y\in A\), where \(x\lor y=(x\odot y^{*})\oplus y\). Our choice does not impose the extra "union-preserving" condition \(d(x\lor y)=d(x)\lor d(y)\), and this leads to several properties in this paper. Indeed, similarly to [14, Proposition 2.5], a \((\odot,\vee)\)-derivation satisfying the "union-preserving" condition must be isotone.
This paper initiates the study of \((\odot,\vee)\)-derivations on MV-algebras. In Section 2, we recall some necessary properties and examples of MV-algebras. In Section 3, we introduce and study \((\odot,\vee)\)-derivations on MV-algebras. After establishing a necessary and sufficient condition for an operator on an \(n\)-element MV-chain \(L_{n}\) to be an \((\odot,\vee)\)-derivation (Theorem 3.10), we show that the cardinality of the set of all \((\odot,\vee)\)-derivations on \(L_{n}\) is exactly \(\frac{(n-1)(n+2)}{2}\) (Theorem 3.11). In Section 4, the direct product of \((\odot,\vee)\)-derivations is introduced. Letting \(\Omega\) be an index set, \(\{A_{i}\}_{i\in\Omega}\) a family of MV-algebras and \(d_{i}\) an operator on \(A_{i}\) for each \(i\in\Omega\), we prove that the direct product \(\prod_{i\in\Omega}d_{i}\) of the \(d_{i}\)'s is an \((\odot,\vee)\)-derivation (resp. a principal \((\odot,\vee)\)-derivation) on \(\prod_{i\in\Omega}A_{i}\) if and only if \(d_{i}\) is an \((\odot,\vee)\)-derivation (resp. a principal \((\odot,\vee)\)-derivation) on \(A_{i}\) for each \(i\in\Omega\) (Theorem 4.6). In Section 5, we show that the set of \((\odot,\vee)\)-derivations on a finite MV-algebra has a natural lattice structure (Proposition 5.3) and we consider several lattice structures of \((\odot,\vee)\)-derivations that are isomorphic to the underlying lattice \(\mathbf{L}(A)\) of an MV-algebra \(A\) (Propositions 5.10 and 5.12). We also describe the lattice structure of \((\odot,\vee)\)-derivations on finite MV-chains (Theorem 5.6).
**Notations**. Throughout this paper, let \(|A|\) denote the cardinality of a set \(A\) and \(\mathbb{N}_{+}\) denote the set of all positive integers.
## 2. Preliminaries
In this section, we recall some necessary definitions and results about MV-algebras.
**Definition 2.1**.: [11, Definition 1.1.1] An algebra \((A,\oplus,*,0)\) of type \((2,1,0)\) is called an **MV-algebra** if it satisfies the following equations:
(MV1) \(x\oplus(y\oplus z)=(x\oplus y)\oplus z\);
(MV2) \(x\oplus y=y\oplus x\);
(MV3) \(x\oplus 0=x\);
(MV4) \(x^{**}=x\);
(MV5) \(x\oplus 0^{*}=0^{*}\);
(MV6) \((x^{*}\oplus y)^{*}\oplus y=(y^{*}\oplus x)^{*}\oplus x\).
As usual, we shall denote an MV-algebra by its underlying carrier set. Since all axioms of MV-algebras are equations, it follows by Birkhoff's Theorem [7, Theorem 11.9] that the class of all MV-algebras forms a variety. So the notions of isomorphism, subalgebra, congruence and direct product are just the particular cases of the corresponding universal algebraic notions.
**Example 2.1**.: [11] Let \(L=[0,1]\) be the real unit interval. Define
\[x\oplus y=\min\{1,x+y\}\text{ and }x^{*}=1-x\text{ for any }x,y\in L.\]
Then \((L,\oplus,*,0)\) is an MV-algebra.
Let \(Q=[0,1]\cap\mathbb{Q}\) and for each positive integer \(n\geq 2\), let
\[L_{n}=\{0,\frac{1}{n-1},\frac{2}{n-1},\cdots,\frac{n-2}{n-1},1\}.\]
Then \(Q\) and the \(n\)-element subset \(L_{n}\) are subalgebras of \(L\).
**Example 2.2**.: [8] Define the following sets of formal symbols:
\[\mathcal{C}_{0}=\{0,c,2c,3c,\cdots\},\quad\mathcal{C}_{1}=\{1,c^{*},(2c)^{*},( 3c)^{*},\cdots\},\]
where \((kc)^{*}=1-kc\), and \((kc)^{**}=((kc)^{*})^{*}=kc\) for any \(k\in\mathbb{N}_{+}\).
Let \(+\) (respectively, \(-\)) be the ordinary sum (respectively, subtraction) between integers. We define the following binary operation \(\oplus\) on \(\mathcal{C}=\mathcal{C}_{0}\cup\mathcal{C}_{1}\):
* \(nc\oplus mc=(n+m)c\)
* \((nc)^{*}\oplus(mc)^{*}=1\)
* \(nc\oplus(mc)^{*}=(mc)^{*}\oplus nc=\left\{\begin{array}{cc}1&m\leq n\\ ((m-n)c)^{*}&m>n\end{array}\right.\)
Then \((\mathcal{C},\oplus,*,0)\) is an infinite MV-chain, and \(0<c<2c<3c<\cdots<(n-1)c<nc<\cdots<(nc)^{*}<((n-1)c)^{*}<\cdots<(3c)^{*}<(2c)^{* }<c^{*}<1\). MV-chains \(Q\) and \(\mathcal{C}\) are not isomorphic, though they have the same countable cardinality.
On every MV-algebra \(A\), we define the constant \(1\) and the operation \(\odot\) as: \(1=0^{*}\) and \(x\odot y=(x^{*}\oplus y^{*})^{*}\). Then for all \(x,y\in A\), the following well-known properties hold [11, 22]:
* \((A,\odot,*,1)\) is an MV-algebra;
* \(*\) is an isomorphism between \((A,\oplus,*,0)\) and \((A,\odot,*,1)\);
* \(1^{*}=0,1\oplus x=1\);
* \(x\oplus y=(x^{*}\odot y^{*})^{*}\);
* \(x^{*}\oplus x=1\), \(x\odot x^{*}=0\).
Let \(A\) be an MV-algebra. For any \(x,y\in A\), define \(x\leq y\) if and only if \(x^{*}\oplus y=1\). Then \(\leq\) is a partial order on \(A\), called **the natural order** of \(A\)[11]. Furthermore, the natural order determines a structure of bounded distributive lattice \(\mathbf{L}(A)\) on \(A\), with \(0\) and \(1\) are respectively the bottom and the top element, and
\[x\lor y=(x\odot y^{*})\oplus y\text{ and }x\wedge y=x\odot(x^{*}\oplus y).\]
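In the spirit of the Python programs in the appendices, the following sketch implements these operations on \(L_{n}\) with exact rational arithmetic; it is an illustrative aid, not part of the paper's formal development:

```
# Sketch of the operations of Examples 2.1 and the derived lattice
# operations on L_n, using exact rationals.
from fractions import Fraction

def Ln(n):
    return [Fraction(k, n - 1) for k in range(n)]

def oplus(x, y):   # x (+) y = min(1, x + y)
    return min(Fraction(1), x + y)

def star(x):       # x* = 1 - x
    return 1 - x

def odot(x, y):    # x (.) y = (x* (+) y*)*
    return star(oplus(star(x), star(y)))

def join(x, y):    # x v y = (x (.) y*) (+) y
    return oplus(odot(x, star(y)), y)

def meet(x, y):    # x ^ y = x (.) (x* (+) y)
    return odot(x, oplus(star(x), y))

assert odot(Fraction(2, 3), Fraction(2, 3)) == Fraction(1, 3)  # in L_4
```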
A linearly ordered MV-algebra is called an **MV-chain**. It is well-known that every \(n\)-element MV-chain is isomorphic to the MV-chain \(L_{n}\) in Example 2.1.
**Lemma 2.2**.: _[_11_, Lemma 1.1.2]_ _Let \(A\) be an MV-algebra and \(x,y\in A\). Then the following statements are equivalent:_
1. \(x\leq y\)_;_
2. \(x^{*}\oplus y=1\)_;_
3. \(x\odot y^{*}=0\)_;_
4. \(y=x\oplus(y\odot x^{*})\)_;_
5. _there is an element_ \(z\in A\) _such that_ \(x\oplus z=y\)_._
**Lemma 2.3**.: _[_8, 11_]_ _Let \(A\) be an MV-algebra, and \(x,y,z\in A\). Then the following statements hold:_
1. \(x\odot y\leq x\wedge y\leq x\leq x\lor y\leq x\oplus y\)_;_
2. _If_ \(x\oplus y=0\)_, then_ \(x=y=0\)_; If_ \(x\odot y=1\)_, then_ \(x=y=1\)_;_
3. _If_ \(x\leq y\)_, then_ \(x\lor z\leq y\lor z\)_,_ \(x\wedge z\leq y\wedge z\)_;_
4. _If_ \(x\leq y\)_, then_ \(x\oplus z\leq y\oplus z\)_,_ \(x\odot z\leq y\odot z\)_;_
5. \(x\leq y\) _if and only if_ \(y^{*}\leq x^{*}\)_;_
6. \(x\odot(y\wedge z)=(x\odot y)\wedge(x\odot z)\)_;_
7. \(x\odot(y\lor z)=(x\odot y)\vee(x\odot z)\)_;_
8. \(x\odot y\leq z\) _if and only if_ \(x\leq y^{*}\oplus z\)_._
**Lemma 2.4**.: _[_11_, Lemma 1.6.1]_ _Let \(A\) be an MV-chain. For any \(x,y,z\in A\),_
1. \(x\oplus y=x\) _if and only if_ \(x=1\) _or_ \(y=0\)_;_
2. _If_ \(x\odot y=x\odot z>0\)_, then_ \(y=z\)_._
**Example 2.3**.: For any Boolean algebra \((A,\vee,\wedge,-,0,1)\), the structure \((A,\vee,-,0)\) is an MV-algebra, where \(\vee\), - and \(0\) denote, respectively, the join, the complement and the smallest element in \(A\).
Boolean algebras form a subvariety of the variety of MV-algebras. They are precisely the MV-algebras satisfying the additional equation \(x\oplus x=x\). An element \(a\) of \(A\) is called **idempotent** if \(a\oplus a=a\). Denote the set of all idempotent elements of \(A\) by \(\mathbf{B}(A)\), called **Boolean center of \(A\)**. It is known that \(\mathbf{B}(A)\) is a subalgebra of the MV-algebra \(A\), and a subalgebra \(B\) of \(A\) is a Boolean algebra if and only if \(B\subseteq\mathbf{B}(A)\)[11, Corollary 1.5.4]. For convenience, we denote by \(B_{n}\) the \(n\)-element Boolean algebra. It is clear that \(B_{2}\) is exactly the \(2\)-element MV-chain \(L_{2}\).
**Lemma 2.5**.: _[_11_, Theorem 1.5.3]_ _For every element \(x\) in an MV-algebra \(A\), the following conditions are equivalent:_
1. \(x\in\mathbf{B}(A)\)_;_
2. \(x\oplus x=x\)_;_
3. \(x\odot x=x\)_;_
4. \(x^{*}\in\mathbf{B}(A)\)_;_
5. \(x\oplus y=x\lor y\) _for all_ \(y\in A\)_;_
6. \(x\odot y=x\wedge y\) _for all_ \(y\in A\)_._
**Definition 2.6**.: [11] Let \(A\) be an MV-algebra and \(I\) be a subset of \(A\). Then we say that \(I\) is an **ideal** if the following conditions are satisfied:
1. \(0\in I\);
2. \(x,y\in I\) imply \(x\oplus y\in I\);
3. \(x\in I\) and \(y\leq x\) imply \(y\in I\).
**Definition 2.7**.: [11] Let \(A\) be a lattice and \(I\) be a subset of \(A\). Then we say that \(I\) is a **lattice ideal** if the following conditions are satisfied:
1. \(0\in I\);
2. \(x,y\in I\) imply \(x\lor y\in I\);
3. \(x\in I\) and \(y\leq x\) imply \(y\in I\).
That is, a lattice ideal of an MV-algebra \(A\) is an ideal of the underlying lattice \((A,\wedge,\vee)\)[11, Proposition 1.1.5]. It is easily verified that every ideal is a lattice ideal, but the converse is not necessarily true. The next lemma gives the representation of a finite MV-algebra:
**Lemma 2.8**.: _[_11_, Proposition 3.6.5]_ _An MV-algebra \(A\) is finite if and only if \(A\) is isomorphic to a finite product of finite chains, in symbols,_
\[A\cong L_{d_{1}}\times\dots\times L_{d_{n}},\]
_for some integers \(2\leq d_{1}\leq d_{2}\leq\dots\leq d_{n}\). This representation is unique, up to the ordering of factors._
Finally, we recall Chang's famous Subdirect Representation Theorem, which states that if an equation holds in all totally ordered MV-algebras, then it holds in all MV-algebras.
**Lemma 2.9**.: _[_11_, Theorem 1.3.3]_ _Every nontrivial MV-algebra is a subdirect product of MV-chains._
## 3. \((\odot,\vee)\)-derivations on MV-algebras
In this section, we introduce \((\odot,\vee)\)-derivations on MV-algebras and characterize some of their properties, such as isotonicity and idempotency. We also determine the cardinality of the set of \((\odot,\vee)\)-derivations on finite MV-chains.
### Basic properties of \((\odot,\vee)\)-derivations on MV-algebras
**Definition 3.1**.: Let \(A\) be an MV-algebra. A map \(d:A\to A\) is called an \((\odot,\vee)\)**-derivation on \(A\)** if it satisfies the equation:
\[d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))\quad\text{ for all }x,y\in A. \tag{1}\]
It is easy to check that the identity map \(\operatorname{Id}_{A}\) and the zero map \(\mathbf{0}_{A}\) are simple examples of \((\odot,\vee)\)-derivations on an MV-algebra \(A\), where
\[\operatorname{Id}_{A}(x)=x\quad\text{ and }\quad\mathbf{0}_{A}(x)=0\quad \text{ for any }x\in A.\]
Also, for a given \(a\in A\), define the map \(d_{a}:A\to A\) by
\[d_{a}(x):=a\odot x\quad\text{ for all }x\in A.\]
Then \(d_{a}\) is an \((\odot,\vee)\)-derivation, called a **principal \((\odot,\vee)\)-derivation**. Both \(\operatorname{Id}_{A}\) and \(\mathbf{0}_{A}\) are principal \((\odot,\vee)\)-derivations, since \(\operatorname{Id}_{A}=d_{1}\) and \(\mathbf{0}_{A}=d_{0}\).
Denote the set of all \((\odot,\vee)\)-derivations on \(A\) by \(\operatorname{Der}(A)\); and the set of all the principal \((\odot,\vee)\)-derivations on \(A\) by \(\operatorname{PDer}(A)\), that is \(\operatorname{PDer}(A)=\{d_{a}\mid a\in A\}\).
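As a quick computational check, the following sketch (again in the spirit of the Python appendices) tests Eq. (1) by brute force on \(L_{n}\), with the integer \(k\) encoding \(\frac{k}{n-1}\); on a chain the join is simply the maximum:

```
# Brute-force test of Eq. (1) on L_n: k/(n-1) is encoded by the integer k,
# so x (.) y = max(0, x + y - (n-1)) and x v y = max(x, y).
def odot(x, y, n):
    return max(0, x + y - (n - 1))

def is_derivation(d, n):
    return all(d[odot(x, y, n)] == max(odot(d[x], y, n), odot(x, d[y], n))
               for x in range(n) for y in range(n))

def principal(a, n):
    return {x: odot(a, x, n) for x in range(n)}   # d_a(x) = a (.) x

# every principal operator is an (odot, v)-derivation, e.g. on L_5:
assert all(is_derivation(principal(a, 5), 5) for a in range(5))
```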
**Remark 3.2**.:
1. It is clear that Eq.(1) holds when \(x=y=1\), since both sides reduce to \(d(1)=d(1)\odot 1\).
2. Adapting the classical terminology of differential algebras, we also call a derivation a differential operator. More generally, we also call a map \(f:A\to A\) an operator even though there is no linearity involved.
3. Note that in [2, 29], an \((\odot,\oplus)\)-derivation on an MV-algebra \(A\) is defined to be a map satisfying \(d(x\odot y)=(d(x)\odot y)\oplus(x\odot d(y))\) for all \(x,y\in A\). In this paper, we use "\(\vee\)" instead of "\(\oplus\)". Our choice of this notation has its motivation from certain asymmetry of "\(\vee\)" and "\(\odot\)", and already leads to some properties as displayed in Proposition 3.3.
4. It is natural to consider a \((\oplus,\wedge)\)-derivation which is dual to the \((\odot,\vee)\)-derivation on an MV-algebra \(A\): \(d(x\oplus y)=(d(x)\oplus y)\wedge(x\oplus d(y))\) for all \(x,y\in A\). If this condition is taken, then the study should be completely parallel to the study of Eq. (1) due to the symmetry of the operations "\(\vee\)" and "\(\wedge\)", "\(\odot\)" and "\(\oplus\)" in the definition of an MV-algebra. Furthermore, if a map \(d\) is both an \((\odot,\vee)\)-derivation and a \((\oplus,\wedge)\)-derivation, then \(d=\operatorname{Id}_{A}\) (see Proposition 3.4).
**Proposition 3.3**.: _Let \(A\) be an MV-algebra, \(x,y\in A\) and \(d\in\operatorname{Der}(A)\). Then for any positive integer \(n\), the following statements hold:_
1. \(d(0)=0\)_._
2. \(d(x^{n})=x^{n-1}\odot d(x)\)_, where_ \(x^{0}=1\)_,_ \(x^{n}=\overbrace{x\odot x\odot\cdots\odot x}^{n}\)_._
3. \(d(x)\odot x^{*}=x\odot d(x^{*})=0\)_._
4. \(d(x)\leq x\)_._
5. \(d(x)=d(x)\vee(x\odot d(1))\) _and so_ \(x\odot d(1)\leq d(x)\)_._
6. \(d(x^{*})\leq x^{*}\leq(d(x))^{*}\)_._
7. \(d(x)\odot d(y)\leq d(x\odot y)\leq d(x)\lor d(y)\leq d(x)\oplus d(y)\)_._
8. \((d(x))^{n}\leq d(x^{n})\)_._
9. _If_ \(I\) _is a downset of_ \(A\)_, then_ \(d(I)\subseteq I\)_, where_ \(d(I)=\{d(x)|x\in I\}\)_._
10. _If_ \(y\leq x\) _and_ \(d(x)=x\)_, then_ \(d(y)=y\)_._
Proof.: (1) Putting \(x=y=0\) in Eq.(1), we immediately have \(d(0)=d(0\odot 0)=(d(0)\odot 0)\vee(0\odot d(0))=0\).
(2) We prove \(d(x^{n})=x^{n-1}\odot d(x)\) by induction on \(n\). First, it is clear that \(d(x^{1})=d(x)=1\odot d(x)=x^{1-1}\odot d(x)\). For \(n=2\), putting \(x=y\) in Eq.(1), we get \(d(x^{2})=d(x\odot x)=(d(x)\odot x)\vee(x\odot d(x))=x\odot d(x)\).
Now assume that \(d(x^{n})=x^{n-1}\odot d(x)\). By Eq.(1), we have \(d(x^{n+1})=d(x^{n}\odot x)=(d(x^{n})\odot x)\vee(x^{n}\odot d(x))=(x^{n-1} \odot d(x)\odot x)\vee(x^{n}\odot d(x))=x^{n}\odot d(x)\), and so (2) holds.
(3) Since \(x\odot x^{*}=0\), by Item (1) it follows that \(0=d(0)=d(x\odot x^{*})=(d(x)\odot x^{*})\vee(x\odot d(x^{*}))\). So \(d(x)\odot x^{*}=0\) and \(x\odot d(x^{*})=0\).
(4) Since \(d(x)\odot x^{*}=0\) by Item (3), it follows immediately by Lemma 2.2 that \(d(x)\leq x\).
(5) By Eq.(1) we have \(d(x)=d(x\odot 1)=(d(x)\odot 1)\vee(x\odot d(1))=d(x)\vee(x\odot d(1))\). So \(x\odot d(1)\leq d(x)\).
(6) We have \(d(x^{*})\leq x^{*}\) and \(d(x)\leq x\) by Item (4). Thus \(x^{*}\leq(d(x))^{*}\) by Lemma 2.3 (5).
(7) By Item (4) and Lemma 2.3 (4), we have \(d(x)\odot d(y)\leq x\odot d(y)\) and \(d(x)\odot d(y)\leq d(x)\odot y\). So \(d(x)\odot d(y)\leq(d(x)\odot y)\vee(x\odot d(y))=d(x\odot y)\). Furthermore, it follows from Lemma 2.3 (1) that \(d(x)\odot y\leq d(x),x\odot d(y)\leq d(y)\). So \(d(x)\odot y=(d(x)\odot y)\vee(x\odot d(y))\leq d(x)\lor d(y)\). Finally, we get \(d(x)\lor d(y)\leq d(x)\oplus d(y)\) by Lemma 2.3 (1).
(8) By Item (2), we have \(d(x^{n})=x^{n-1}\odot d(x)\). Since \(d(x)\leq x\), it follows by Lemma 2.3 (4) that \((d(x))^{n-1}\leq x^{n-1}\) and then \((d(x))^{n}=(d(x))^{n-1}\odot d(x)\leq x^{n-1}\odot d(x)=d(x^{n})\).
(9) Let \(I\) be a downset of \(A\) and \(y\in d(I)\). Then there exists \(a\in I\) such that \(y=d(a)\). Since \(d(a)\leq a\) by Item (4), we have \(y=d(a)\in I\) by Definition 2.6. Thus \(d(I)\subseteq I\).
(10) If \(y\leq x\) and \(d(x)=x\), then
\[d(y)=d(x\wedge y)=d(x\odot(x^{*}\oplus y))\] \[= (d(x)\odot(x^{*}\oplus y))\vee(x\odot d(x^{*}\oplus y))\] \[= (x\odot(x^{*}\oplus y))\vee(x\odot d(x^{*}\oplus y))\] \[= x\odot(x^{*}\oplus y)\] \[= x\wedge y\] \[= y,\]
and so we get \(d(y)=y\).
It is known that if \(d\) is a derivation on a lattice \(L\), then \(d=\operatorname{Id}_{L}\) iff \(d\) is injective iff \(d\) is surjective [32, Theorem 3.17]. In Proposition 3.4, we will show that if \(d\) is an \((\odot,\vee)\)-derivation on an MV-algebra \(A\), then \(d=\operatorname{Id}_{A}\) iff \(d\) is surjective. However, injectivity of \(d\) does not imply that \(d=\operatorname{Id}_{A}\) (see Remark 3.5).
**Proposition 3.4**.: _Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). Then the following statements are equivalent:_
1. \(d=\operatorname{Id}_{A}\)_;_
2. \(d(1)=1\)_;_
3. \(d(a)=1\) _for some_ \(a\in A\)_;_
4. \(d\) _is surjective;_
5. \(d\) _is a_ \((\oplus,\wedge)\)_-derivation, i.e.,_ \(d\) _satisfies the condition:_ \(d(x\oplus y)=(d(x)\oplus y)\wedge(x\oplus d(y))\) _for all_ \(x,y\in A\)_._
Proof.: It is clear that \((1)\Rightarrow(2)\Rightarrow(3)\), and \((1)\Rightarrow(4)\Rightarrow(3)\) by the property of \(\operatorname{Id}_{A}\).
\((2)\Rightarrow(1)\). Assume that \(d(1)=1\). Then by Proposition 3.3 (10) we have that \(d(x)=x\) for all \(x\in A\). Thus \(d=\operatorname{Id}_{A}\), Item (1) holds.
\((3)\Rightarrow(2)\). Assume that \(d(a)=1\) for some \(a\in A\). By Proposition 3.3 (4), we have \(1=d(a)\leq a\), and so \(a=1\). Thus \(d(1)=1\), Item (2) holds.
\((1)\Rightarrow(5)\). Assume that \(d=\operatorname{Id}_{A}\). Then \(d(x\oplus y)=x\oplus y=(x\oplus y)\wedge(x\oplus y)=(d(x)\oplus y)\wedge(x \oplus d(y))\) for all \(x,y\in A\), and thus Item (5) holds.
\((5)\Rightarrow(2)\). Assume that \(d\) is a \((\oplus,\wedge)\)-derivation. Then \(d(1)=d(1\oplus 1)=(d(1)\oplus 1)\wedge(1\oplus d(1))=1\), and so Item (2) holds.
**Remark 3.5**.: Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). In general, \(d\) being injective does not imply \(d=\operatorname{Id}_{A}\). For example, let \(\mathcal{C}\) be the infinite MV-chain in Example 2.2. Define an operator \(d\) on \(\mathcal{C}\) by
\[d(x):=\begin{cases}x\odot c^{*},&\text{if $x\in\mathcal{C}_{1}$}\\ x,&\text{if $x\in\mathcal{C}_{0}$}\end{cases}\]
**Claim** (1): \(d\in\operatorname{Der}(\mathcal{C})\). Indeed, let \(x,y\in\mathcal{C}\). Consider the following cases:
Case \((i)\): \(x,y\in\mathcal{C}_{1}\). Then \(d(x\odot y)=(x\odot y)\odot c^{*}=(d(x)\odot y)\vee(x\odot d(y))\).
Case \((ii)\): \(x,y\in\mathcal{C}_{0}\). Then \(d(x\odot y)=x\odot y=(d(x)\odot y)\vee(x\odot d(y))\).
Case \((iii)\): \(x\in\mathcal{C}_{1},y\in\mathcal{C}_{0}\), let \(x=(mc)^{*},y=nc\), where \(m,n\in\mathbb{N}_{+}\). Then \(d(x)=(mc)^{*}\odot c^{*}=((m+1)c)^{*}\) and \(d(y)=y\).
If \(m\geq n\), then \(d(x\odot y)=d((mc)^{*}\odot nc)=d(0)=0=(((m+1)c)^{*}\odot nc)\vee((mc)^{*} \odot nc)=(d(x)\odot y)\vee(x\odot d(y))\). If \(m<n\), then
\[d(x)\odot y=((m+1)c)^{*}\odot nc=\begin{cases}0,&\text{if }m+1=n\\ (n-m-1)c,&\text{if }m+1<n\end{cases}\]
It follows that \(d(x)\odot y<(n-m)c=x\odot d(y)\), and thus \(d(x\odot y)=d((mc)^{*}\odot nc)=d((n-m)c)=(n-m)c=(d(x)\odot y)\vee(x\odot d(y))\).
Case \((iv)\): \(x\in\mathcal{C}_{0},y\in\mathcal{C}_{1}\). Similarly, we can obtain that \(d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))\).
Summarizing the above arguments, we get \(d\in\operatorname{Der}(\mathcal{C})\).
**Claim** (2): \(d\) is injective. Indeed, let \(x,y\in\mathcal{C}\) and \(x\neq y\). If \(x,y\in\mathcal{C}_{1}\), say \(x=(mc)^{*},y=(nc)^{*}\), where \(m,n\) are positive integers and \(m\neq n\), then \(d(x)=(mc)^{*}\odot c^{*}=((m+1)c)^{*}\neq((n+1)c)^{*}=(nc)^{*}\odot c^{*}=d(y)\). If \(x,y\in\mathcal{C}_{0}\), then \(d(x)=x\neq y=d(y)\).
If \(x\in\mathcal{C}_{1},y\in\mathcal{C}_{0}\) or \(y\in\mathcal{C}_{1},x\in\mathcal{C}_{0}\), say \(x\in\mathcal{C}_{1},y\in\mathcal{C}_{0}\), then by the definition of \(d\), we have \(d(x)\in\mathcal{C}_{1},d(y)\in\mathcal{C}_{0}\), so \(d(x)\neq d(y)\) since \(\mathcal{C}_{0}\cap\mathcal{C}_{1}=\emptyset\). Thus \(d\) is injective.
However, \(d\neq\operatorname{Id}_{\mathcal{C}}\) since \(d(1)=c^{*}\neq 1\).
Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). From Remark 3.5, we see that \(d(a)\) may not lie in \(\mathbf{B}(A)\) if \(a\in\mathbf{B}(A)\). In what follows, some properties of \((\odot,\vee)\)-derivations related to Boolean center \(\mathbf{B}(A)\) of an MV-algebra \(A\) are given.
**Proposition 3.6**.: _Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). Then for all \(x,y\in\mathbf{B}(A)\), the following statements hold:_
1. \(d(x\wedge y)=(d(x)\wedge y)\vee(x\wedge d(y))\)_._
2. \(d(x)=x\odot d(x)\)_._
Proof.: (1) By Lemma 2.5 (6), we have \(d(x\wedge y)=d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))=(d(x)\wedge y)\vee(x \wedge d(y))\).
(2) Since \(x\odot x=x\), we have \(d(x)=d(x\odot x)=x\odot d(x)\) by Proposition 3.3 (2).
**Corollary 3.7**.: _If an MV-algebra \(A\) is a Boolean algebra, then \(d\) is an \((\odot,\vee)\)-derivation on \(A\) if and only if \(d\) is a derivation on the lattice \((A,\vee,\wedge)\)._
Proof.: It follows immediately by Proposition 3.6 and Lemma 2.5.
Note that \(d(d(a))\) may not equal \(d(a)\) if \(a\in\mathbf{B}(A)\). For example, in Remark 3.5, we have \(1\in\mathbf{B}(\mathcal{C})\) but \(d(d(1))=d(c^{*})=c^{*}\odot c^{*}=(2c)^{*}\neq c^{*}=d(1)\). Proposition 3.8 tells us that \(d(d(a))=d(a)\) if \(d(a)\in\mathbf{B}(A)\).
**Proposition 3.8**.: _Let \(A\) be an MV-algebra, \(d\in\operatorname{Der}(A)\) and \(a\in A\). If \(d(a)\in\mathbf{B}(A)\), then \(d(d(a))=d(a)\)._
Proof.: Assume that \(d\in\operatorname{Der}(A)\), \(a\in A\) with \(d(a)\in\mathbf{B}(A)\). Then \(d(a)=d(a)\odot d(a)\leq d(a\odot a)=a\odot d(a)\leq d(a)\) by Proposition 3.3 (8) and Lemma 2.3 (1). Thus \(d(a)=a\odot d(a)\), and therefore \(d(d(a))=d(a\odot d(a))=(d(a)\odot d(a))\vee(a\odot d(d(a)))=d(a)\vee(a\odot d (d(a)))\) by Eq. (1). Consequently, we get \(d(a)\leq d(d(a))\). Also, we have \(d(d(a))\leq d(a)\) by Proposition 3.3 (4). Hence \(d(d(a))=d(a)\).
### \((\odot,\vee)\)-derivations on MV-chains
In this subsection we will determine the cardinality of \(\operatorname{Der}(A)\) when \(A\) is a finite MV-chain. Let \(n\geq 2\) be a positive integer. Recall that every \(n\)-element MV-chain is isomorphic to the MV-chain \(L_{n}\), where \(L_{n}\) is given in Example 2.1.
**Remark 3.9**.: In \(L_{n}\), \(\frac{n-m-1}{n-1}=(\frac{n-2}{n-1})^{m}\) for each \(m\in\{1,2,\cdots,n-1\}\). That is to say, for any \(x\in L_{n}\backslash\{1\}\), \(x\) can be expressed as a power of \(\frac{n-2}{n-1}\).
**Theorem 3.10**.: _Let \(d\) be an operator on \(L_{n}\) and \(v=\frac{n-2}{n-1}\). Suppose that \(d(v)\leq v\). Then \(d\in\operatorname{Der}(L_{n})\) if and only if \(d\) satisfies the following conditions:_
1. \(d(v^{m})=v^{m-1}\odot d(v)\) _for each_ \(m\in\{1,2,\cdots,n-1\}\)_;_
2. \(v\odot d(1)\leq d(v)\)_._
Proof.: If \(d\in\operatorname{Der}(L_{n})\), then for each \(m\in\{1,2,\cdots,n-1\}\), we have \(d(v^{m})=v^{m-1}\odot d(v)\) by Proposition 3.3 (2), and \(v\odot d(1)\leq d(v\odot 1)=d(v)\) by Proposition 3.3 (5). Thus \(d\) satisfies the conditions (1) and (2).
Conversely, suppose that \(d\) satisfies the conditions (1) and (2). Let \(x,y\in L_{n}\). By Remark 3.2 (1), we can assume that \(x\neq 1\) or \(y\neq 1\) and distinguish the following cases:
If \(x\neq 1\) and \(y\neq 1\), then \(x=v^{k}\) and \(y=v^{l}\) for some \(k,l\in\{1,2,\cdots,n-1\}\). By the condition (1), we get \(d(x\odot y)=d(v^{k}\odot v^{l})=v^{k+l-1}\odot d(v)=((v^{k-1}\odot d(v))\odot v ^{l})\vee(v^{k}\odot(v^{l-1}\odot d(v)))=(d(x)\odot y)\vee(x\odot d(y))\).
If \(x=1\) or \(y=1\) (but not both), say \(x\neq 1\) and \(y=1\), then \(x=v^{k}\) for some \(k\in\{1,2,\cdots,n-1\}\). By the condition (1), we have \(d(x)=d(x\odot 1)=d(v^{k})=v^{k-1}\odot d(v)\). Also, we have \(x\odot d(1)=v^{k-1}\odot v\odot d(1)\leq v^{k-1}\odot d(v)=d(x)\odot 1\) by condition (2). Thus we have derived that \(d(x\odot 1)=d(x)=d(x)\odot 1=(d(x)\odot 1)\vee(x\odot d(1))\).
Therefore, we conclude that \(d\in\operatorname{Der}(L_{n})\).
From Theorem 3.10, we see that if \(d\in\operatorname{Der}(L_{n})\), then for any \(x\in L_{n}\) with \(x<\frac{n-2}{n-1}\), \(d(x)\) is determined by the value \(d(\frac{n-2}{n-1})\). However, if \(L\) is an infinite MV-chain with an anti-atom \(v\) (i.e., \(v\) is the maximum element of \(L\backslash\{1\}\)) and \(d\in\operatorname{Der}(L)\), then for \(x<v\), \(d(x)\) need not be determined by the value \(d(v)\). For example, let \(\mathcal{C}\) be the MV-chain in Example 2.2. Then \(c^{*}\) is the anti-atom of \(\mathcal{C}\). Define operators \(d\) and \(d^{\prime}\) on \(\mathcal{C}\) as follows:
\[d(x):=\begin{cases}x\odot c^{*},&\text{if }x\in\mathcal{C}_{1}\\ x,&\text{if }x\in\mathcal{C}_{0}\end{cases}\quad\text{and}\quad d^{\prime}(x):=x\odot c^{*}\]
Then \(d\in\operatorname{Der}(\mathcal{C})\) by Remark 3.5 and \(d^{\prime}\) is a principal \((\odot,\vee)\)-derivation. Furthermore, \(d(c^{*})=d^{\prime}(c^{*})\) but \(d\neq d^{\prime}\) since \(d(c)=c\neq 0=d^{\prime}(c)\).
**Theorem 3.11**.: _Let \(n\geq 2\) be a positive integer. Then \(|\operatorname{Der}(L_{n})|=\frac{(n-1)(n+2)}{2}\)._
Proof.: Assume that \(d\in\operatorname{Der}(L_{n})\) and denote \(\frac{n-2}{n-1}\) by \(v\). Then \(d(v)\leq v\) by Proposition 3.3 (4), and so \(d(v)=\frac{i}{n-1}\) for some \(i\in\{0,1,2,\cdots,n-2\}\). For any \(x\in L_{n}\) with \(x<v\), \(d(x)\) is determined by the value \(d(v)\) by Theorem 3.10.
Now consider the value \(d(1)\). By the condition (2) of Theorem 3.10, we have
\[v\odot d(1)=\frac{n-2}{n-1}\odot d(1)\leq d(v)=\frac{i}{n-1}. \tag{2}\]
Notice that for all \(k,l\in\{0,1,2,\cdots,n-2\}\), we have \(\frac{k}{n-1}\odot\frac{l}{n-1}=\max\{0,\frac{k+l}{n-1}-1\}\). Eq. (2) implies that \(d(1)\leq\frac{i+1}{n-1}\). So \(d(1)\) has \(i+2\) choices.
Summarizing the above arguments, we get
\[|\operatorname{Der}(L_{n})|=\sum_{i=0}^{n-2}(i+2)=2+3+\cdots+n=\frac{(n-1)(n+2)}{ 2}.\qed\]
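The count of Theorem 3.11 can be confirmed by exhaustive search for small \(n\); a sketch in the spirit of the Python appendices, with the elements of \(L_{n}\) encoded as the integers \(0,\ldots,n-1\) and an operator represented as a tuple \(d\) with \(d[x]\) the image of \(x\):

```
# Exhaustive count of Der(L_n) for small n, confirming Theorem 3.11.
from itertools import product

def odot(x, y, n):
    return max(0, x + y - (n - 1))

def count_derivations(n):
    return sum(
        all(d[odot(x, y, n)] == max(odot(d[x], y, n), odot(x, d[y], n))
            for x in range(n) for y in range(n))
        for d in product(range(n), repeat=n))

for n in range(2, 7):
    assert count_derivations(n) == (n - 1) * (n + 2) // 2
```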
By Theorem 3.11, we obtain \(|\operatorname{Der}(L_{2})|=\frac{(2-1)(2+2)}{2}=2\) and \(|\operatorname{Der}(L_{3})|=\frac{(3-1)(3+2)}{2}=5\). Thus \(\operatorname{Der}(L_{2})=\{\operatorname{Id}_{L_{2}},\mathbf{0}_{L_{2}}\}\). Let \(A\) be an MV-algebra. In what follows, we will show that \(|\operatorname{Der}(A)|=2\) iff \(A\) is isomorphic to \(L_{2}\); and \(|\operatorname{Der}(A)|=5\) iff \(A\) is isomorphic to \(L_{3}\). For this purpose, we first give a family of derivations on \(A\).
**Proposition 3.12**.: _Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). Let \(u\in A\) be given with \(u\leq d(1)\) and define an operator \(d^{u}\) on \(A\) by_
\[d^{u}(x):=\begin{cases}u&\text{if }x=1\\ d(x)&\text{otherwise}\end{cases}\]
_Then \(d^{u}\) is also in \(\operatorname{Der}(A)\)._
Proof.: Let \(x,y\in A\). By Remark 3.2 (1), we can assume that \(x\neq 1\) or \(y\neq 1\).
If \(x\neq 1\) and \(y\neq 1\), then \(d^{u}(x)=d(x)\), \(d^{u}(y)=d(y)\) and \(x\odot y\in A\setminus\{1\}\), which implies that \(d^{u}(x\odot y)=d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))=(d^{u}(x)\odot y) \vee(x\odot d^{u}(y))\).
If \(x=1\) or \(y=1\) (but not both), say \(x\neq 1\) and \(y=1\), then since \(d^{u}(1)=u\leq d(1)\), we have \(x\odot d^{u}(1)\leq x\odot d(1)\leq d(x)\) by Proposition 3.3 (4) and so
\[d^{u}(x\odot y)=d^{u}(x)=d(x)=d(x)\vee(x\odot d^{u}(1))=(d^{u}(x)\odot y) \vee(x\odot d^{u}(y)).\]
Thus we conclude that \(d^{u}\) is in \(\operatorname{Der}(A)\).
**Corollary 3.13**.: _Let \(A\) be an MV-algebra, and \(u\in A\). Define operators \(\chi^{(u)}\) as follows:_
\[\chi^{(u)}(x):=\begin{cases}u,&\text{if }x=1\\ x,&\text{otherwise}.\end{cases}\]
_Then \(\chi^{(u)}\in\operatorname{Der}(A)\)._
Proof.: Since \(\operatorname{Id}_{A}\in\operatorname{Der}(A)\) and \(u\leq 1=\operatorname{Id}_{A}(1)\), we have \(\chi^{(u)}=(\operatorname{Id}_{A})^{u}\in\operatorname{Der}(A)\) by Proposition 3.12.
**Lemma 3.14**.: _Let \(A\) be an MV-algebra. Then the following statements hold:_
1. \(\chi^{(0)}\neq d\) _for any_ \(d\in\operatorname{Der}(A)\) _with_ \(d(1)\neq 0\)_. In particular,_ \(\chi^{(0)}\neq\chi^{(u)}\) _and_ \(\chi^{(0)}\neq d_{u}\) _for any_ \(u\in A\backslash\{0\}\)_._
2. _If_ \(|A|\geq 3\)_, then_ \(\chi^{(u)}\neq d_{v}\) _for any_ \(u,v\in A\backslash\{0,1\}\)_._
Proof.: (1) Since \(\chi^{(0)}(1)=0\), it follows that \(\chi^{(0)}\neq d\) for any \(d\in\operatorname{Der}(A)\) with \(d(1)\neq 0\), which implies that \(\chi^{(0)}\neq\chi^{(u)}\) and \(\chi^{(0)}\neq d_{u}\) for any \(u\in A\backslash\{0\}\), since \(\chi^{(u)}(1)=d_{u}(1)=u\neq 0\).
(2) Assume that \(|A|\geq 3\) and let \(u,v\in A\backslash\{0,1\}\). Then \(u^{*},v^{*}\in A\backslash\{0,1\}\).
If \(u\neq v\), then \(\chi^{(u)}\neq d_{v}\), since \(\chi^{(u)}(1)=u\neq v=d_{v}(1)\). If \(u=v\), then \(\chi^{(u)}\neq d_{u}\), since \(\chi^{(u)}(u^{*})=u^{*}\neq 0=u\odot u^{*}=d_{u}(u^{*})\).
**Corollary 3.15**.: _Let \(A\) be an MV-algebra. Then the following statements hold:_
1. _If_ \(|A|\geq 3\)_, then_ \(|\operatorname{Der}(A)|\geq 5\)_._
2. _If_ \(|A|\geq 4\)_, then_ \(|\operatorname{Der}(A)|\geq 7\)_._
3. _If_ \(|A|\geq 5\)_, then_ \(|\operatorname{Der}(A)|\geq 13\)_._
Proof.: (1) Assume that \(|A|\geq 3\) and let \(u\in A\backslash\{0,1\}\). Then we immediately have \(d_{u}\), \(\chi^{(0)},\chi^{(u)}\in\operatorname{Der}(A)\) by Corollary 3.13. Furthermore, it is easy to see that \(d_{u}\neq\operatorname{Id}_{A}\), \(d_{u}\neq\mathbf{0}_{A}\), \(\chi^{(0)}\neq\operatorname{Id}_{A}\), \(\chi^{(0)}\neq\mathbf{0}_{A}\), \(\chi^{(u)}\neq\operatorname{Id}_{A}\) and \(\chi^{(u)}\neq\mathbf{0}_{A}\). Also, \(\chi^{(0)}\neq d_{u}\), \(\chi^{(0)}\neq\chi^{(u)}\) and \(\chi^{(u)}\neq d_{u}\) by Lemma 3.14. Consequently, we have that \(\operatorname{Id}_{A}\), \(\mathbf{0}_{A}\), \(d_{u}\), \(\chi^{(0)}\) and \(\chi^{(u)}\) are mutually different \((\odot,\vee)\)-derivations on \(A\).
(2) Assume that \(|A|\geq 4\) and let \(u,v\in A\backslash\{0,1\}\) with \(u\neq v\). By Lemma 3.14 (1), we have \(\chi^{(0)}\neq\operatorname{Id}_{A}\), \(\chi^{(0)}\neq\chi^{(u)}\), \(\chi^{(0)}\neq\chi^{(v)}\), \(\chi^{(0)}\neq d_{u}\) and \(\chi^{(0)}\neq d_{v}\). Clearly, \(\chi^{(u)}\neq\chi^{(v)}\) and \(d_{u}\neq d_{v}\). In addition, \(d_{p}\neq\chi^{(q)}\) for any \(p,q\in\{u,v\}\) by Lemma 3.14 (2). Thus we conclude that \(\operatorname{Id}_{A}\), \(\mathbf{0}_{A}\), \(d_{u}\), \(d_{v}\), \(\chi^{(0)},\chi^{(u)}\), \(\chi^{(v)}\) are mutually different \((\odot,\vee)\)-derivations on \(A\).
(3) Assume that \(|A|\geq 5\). Then there exist \(u,v\in A\backslash\{0,1\}\) with \(u<v\) (i.e., \(u\leq v\) and \(u\neq v\)). In fact, if \(x,y\) were incomparable for any \(x,y\in A\backslash\{0,1\}\) with \(x\neq y\), then the distributive lattice \((A,\leq)\) would contain a copy of \(M_{5}\), contradicting [7, Theorem 3.6].
Let \(w\in A\backslash\{0,u,v,1\}\). By Lemma 3.14 (1), we have \(\chi^{(0)}\neq\operatorname{Id}_{A}\), \(\chi^{(0)}\neq\chi^{(u)}\), \(\chi^{(0)}\neq\chi^{(v)}\), \(\chi^{(0)}\neq\chi^{(w)}\), \(\chi^{(0)}\neq d_{u}\), \(\chi^{(0)}\neq d_{v}\) and \(\chi^{(0)}\neq d_{w}\). In addition, \(d_{p}\neq\chi^{(q)}\) for any \(p,q\in\{u,v,w\}\) by Lemma 3.14 (2). Furthermore, \((d_{v})^{0},(d_{v})^{u}\in\operatorname{Der}(A)\) by Proposition 3.12. By Corollary 3.13, we get that \((d_{v})^{r}\neq\chi^{(s)}\) for any \(r\in\{0,u\},s\in\{0,u,v,w\}\). Also, \((d_{v})^{0}\neq\mathbf{0}_{A}\) and \((d_{v})^{u}\neq d_{u}\). Indeed, if \((d_{v})^{0}=\mathbf{0}_{A}\), then \((d_{v})^{0}(u^{*})=v\odot u^{*}=0\), and it follows by Lemma 2.2 that \(v\leq u\), contradicting the fact that \(u<v\). If \((d_{v})^{u}=d_{u}\), then \(v\odot u^{*}=(d_{v})^{u}(u^{*})=d_{u}(u^{*})=u\odot u^{*}=0\), and similarly \(v\leq u\), contradicting the fact that \(u<v\).
Note that \(w\) must be comparable with \(u\) or \(v\). Otherwise, if \(w\) were incomparable with both \(u,v\in A\backslash\{0,1\}\), where \(u\leq v\), then the distributive lattice \((A,\leq)\) would contain a copy of \(N_{5}\), contradicting [7, Theorem 3.6]. There are two cases. If \(u<w\) and \(v\) is incomparable with \(w\), we have \((d_{w})^{0},(d_{w})^{u}\in\operatorname{Der}(A)\) and, similarly, \((d_{w})^{r}\neq\chi^{(s)}\) for any \(r\in\{0,u\},s\in\{0,u,v,w\}\). Also, it can be proved in the same way as before that \((d_{w})^{0}\neq\mathbf{0}_{A}\) and \((d_{w})^{u}\neq d_{u}\). If \(w<v\) and \(u\) is incomparable with \(w\), we have \((d_{w})^{0},(d_{v})^{w}\in\operatorname{Der}(A)\), and they are different from the other \((\odot,\vee)\)-derivations on \(A\).
Finally, it is easy to check that \(\operatorname{Id}_{A}\), \(\mathbf{0}_{A}\), \(d_{u}\), \(d_{v}\), \(d_{w}\), \((d_{v})^{u}\), \((d_{v})^{0}\), \((d_{w})^{u}\) (resp. \((d_{v})^{w}\)), \(\chi^{(0)}\), \(\chi^{(u)}\), \(\chi^{(v)}\), \(\chi^{(w)}\) are mutually different \((\odot,\vee)\)-derivations on \(A\).
**Proposition 3.16**.: _Let \(A\) be an nontrivial MV-algebra. Then the following statements hold:_
1. \(|\operatorname{Der}(A)|=2\) _if and only if_ \(|A|=2\)_._
2. \(|\operatorname{Der}(A)|=5\) _if and only if_ \(|A|=3\)_._
3. \(|\operatorname{Der}(A)|=9\) _if and only if_ \(|A|=4\)_._
Proof.: (1) Assume that \(A\) is a \(2\)-element MV-algebra. Then \(A=\{0,1\}\) is a \(2\)-element MV-chain, and so \(|\operatorname{Der}(A)|=2\) by Theorem 3.11.
Conversely, assume that \(|\operatorname{Der}(A)|=2\). If \(|A|\geq 3\), then \(|\operatorname{Der}(A)|\geq 5\) by Corollary 3.15 (1), a contradiction. Since \(A\) is nontrivial, finally we get \(|A|=2\).
(2) Assume that \(A\) is a \(3\)-element MV-algebra. Then \(A\) is a \(3\)-element MV-chain by Lemma 2.8, and so \(|\operatorname{Der}(A)|=5\) by Theorem 3.11.
Conversely, assume that \(|\operatorname{Der}(A)|=5\). If \(|A|\geq 4\), then \(|\operatorname{Der}(A)|\geq 7\) by Corollary 3.15 (2), a contradiction. Thus \(|A|\leq 3\). But \(A\) is nontrivial and \(|A|=2\) implies that \(|\operatorname{Der}(A)|=2\) by (1). Therefore, \(|A|=3\), and consequently \(A\) is a \(3\)-element MV-chain.
(3) Assume that \(A\) is a \(4\)-element MV-algebra. Then \(A\) is isomorphic to the \(4\)-element MV-chain \(L_{4}\) or the \(4\)-element Boolean algebra \(B_{4}\) by Lemma 2.8. Recall from Corollary 3.7 that when the MV-algebra \(A\) is a Boolean algebra, \(d\) is an \((\odot,\vee)\)-derivation on \(A\) if and only if \(d\) is a derivation on the lattice \((A,\leq)\). It follows by Theorem 3.11 and [14, Theorem 3.21] that \(|\operatorname{Der}(A)|=9\).
Conversely, assume that \(|\operatorname{Der}(A)|=9\). If \(|A|\geq 5\), then \(|\operatorname{Der}(A)|\geq 13\) by Corollary 3.15 (3), a contradiction. Thus \(|A|\leq 4\). But \(A\) is nontrivial and Items (1) and (2) imply that \(|A|\neq 2\) and \(|A|\neq 3\). Therefore, \(|A|=4\).
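The counts above can also be checked by brute force. The following Python sketch (ours, written in the spirit of the enumeration used for Table 1 in Example 4.1 below; it is not the code from Appendix I) realizes the MV-chain \(L_{n}\) as \(\{0,1,\dots,n-1\}\) and counts all operators satisfying the derivation identity \(d(x\odot y)=(d(x)\odot y)\vee(x\odot d(y))\):

```python
# Brute-force count of all (.,v)-derivations on the MV-chain L_n, realized as
# {0, 1, ..., n-1} with x (+) y = min(x+y, n-1) and x* = (n-1) - x, so that
# x (.) y = max(x+y-(n-1), 0) and the join v is max. A minimal sketch.
from itertools import product

def count_derivations(n):
    top = n - 1
    odot = lambda x, y: max(x + y - top, 0)      # Lukasiewicz product
    elems = range(n)
    count = 0
    for d in product(elems, repeat=n):           # every map d: L_n -> L_n
        if all(d[odot(x, y)] == max(odot(d[x], y), odot(x, d[y]))
               for x in elems for y in elems):   # d(x.y) = (dx.y) v (x.dy)
            count += 1
    return count

for n in (2, 3, 4, 5):
    print(n, count_derivations(n), (n + 2) * (n - 1) // 2)
# Should print matching pairs 2, 5, 9, 14, agreeing with Theorem 3.11.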
### Isotone \((\odot,\vee)\)-derivations on MV-algebras
In this subsection, we consider the condition when an \((\odot,\vee)\)-derivation \(d\) is isotone and characterize the properties of the fixed point set of \(d\).
**Definition 3.17**.: Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). \(d\) is called **isotone** if for all \(x,y\in A\), \(x\leq y\) implies that \(d(x)\leq d(y)\).
It is clear that \(\operatorname{Id}_{A}\) and \(\mathbf{0}_{A}\) are isotone. Furthermore, we have:
**Lemma 3.18**.: _Let \(A\) be an MV-algebra and \(a\in A\). Then the principal \((\odot,\vee)\)-derivation \(d_{a}\) is isotone._
Proof.: Let \(x,y\in A\) with \(x\leq y\). Then \(d_{a}(x)=a\odot x\leq a\odot y=d_{a}(y)\) by Lemma 2.3 (4), and thus \(d_{a}\) is isotone.
By [14, Proposition 2.5], we know that a derivation \(d\) on a bounded lattice \(L\) is isotone iff \(d\) is principal. However, there are other isotone \((\odot,\vee)\)-derivations on an MV-algebra \(A\) besides principal \((\odot,\vee)\)-derivations.
**Example 3.1**.: Let \(d=\chi^{(\frac{2}{3})}\in\operatorname{Der}(L_{4})\) (see Corollary 3.13), i.e., \(d(0)=0,d(\frac{1}{3})=\frac{1}{3},d(\frac{2}{3})=\frac{2}{3},d(1)=\frac{2}{3}\). Then \(d\) is isotone, while \(d\) is not principal, since \(d(1)=\frac{2}{3}=\frac{2}{3}\odot 1\) but \(d(\frac{1}{3})=\frac{1}{3}\neq 0=\frac{2}{3}\odot\frac{1}{3}\).
Proposition 3.19 says that if \(d\) is an \((\odot,\vee)\)-derivation on an MV-algebra \(A\) with \(d(1)\in\mathbf{B}(A)\), then \(d\) is isotone iff \(d\) is principal.
**Proposition 3.19**.: _Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\) with \(d(1)\in\mathbf{B}(A)\). Then the following statements are equivalent:_
1. \(d\) _is isotone;_
2. \(d(x)\leq d(1)\) _for any_ \(x\in A\)_;_
3. \(d(x)=d(1)\odot x\) _for any_ \(x\in A\)_;_
4. \(d(x\wedge y)=d(x)\wedge d(y)\) _for all_ \(x,y\in A\)_;_
5. \(d(x\lor y)=d(x)\lor d(y)\) _for all_ \(x,y\in A\)_._
Proof.: \((1)\Rightarrow(2)\) is clear since \(x\leq 1\) holds for any \(x\in A\).
\((2)\Rightarrow(3)\). Assume that \(d(x)\leq d(1)\) for any \(x\in A\). Since \(d(x)\leq x\) by Proposition 3.3 (4), it follows that
\[d(x)\leq d(1)\wedge x=d(1)\odot x\leq d(x)\]
by Lemma 2.5 (6) and Proposition 3.3 (5). Thus \(d(x)=d(1)\odot x\).
\((3)\Rightarrow(4)\). Assume that \(d(x)=d(1)\odot x\) for any \(x\in A\). Then for all \(x,y\in A\), we have \(d(x\wedge y)=d(1)\odot(x\wedge y)=(d(1)\odot x)\wedge(d(1)\odot y)=d(x)\wedge d (y)\) by Lemma 2.3 (6).
\((4)\Rightarrow(1)\). Assume that \(d(x\wedge y)=d(x)\wedge d(y)\) for all \(x,y\in A\). Let \(x\leq y\). Then \(d(x)=d(x\wedge y)=d(x)\wedge d(y)\leq d(y)\), and thus \(d\) is isotone.
\((3)\Rightarrow(5)\). Assume that \((3)\) holds. Then for all \(x,y\in A\), we have \(d(x\lor y)=(d(1)\odot x)\vee(d(1)\odot y)=d(x)\lor d(y)\) by Lemma 2.3 (7).
(5) \(\Rightarrow\) (1). Assume that (5) holds. Then for all \(x,y\in A\) with \(x\leq y\), we have \(d(x)\leq d(x)\lor d(y)=d(x\lor y)=d(y)\), and thus \(d\) is isotone.
**Corollary 3.20**.: _Let \(A\) be an MV-algebra. Denote the set of all isotone \((\odot,\vee)\)-derivations with \(d(1)\in\mathbf{B}(A)\) by \(\mathrm{IDer}(A)\), i.e, \(\mathrm{IDer}(A)=\{d\in\mathrm{Der}(A)\mid d\mbox{ is isotone and }d(1)\in\mathbf{B}(A)\}\). Then there is a bijection between \(\mathrm{IDer}(A)\) and \(\mathbf{B}(A)\)._
Proof.: Define a map \(f:\operatorname{IDer}(A)\to\mathbf{B}(A)\) by \(f(d)=d(1)\) for any \(d\in\operatorname{IDer}(A)\), and define a map \(g:\mathbf{B}(A)\to\operatorname{IDer}(A)\) by \(g(a)=d_{a}\) for any \(a\in\mathbf{B}(A)\). Then by Proposition 3.19, we have \(fg=\operatorname{Id}_{\mathbf{B}(A)}\) and \(gf=\operatorname{Id}_{\operatorname{IDer}(A)}\). Hence \(f\) is a bijection.
In general, the fact that \(d\) is an isotone \((\odot,\vee)\)-derivation on an MV-algebra \(A\) does not imply that \(d(x\oplus y)=d(x)\oplus d(y)\) for all \(x,y\in A\). For example, in the MV-algebra \(L_{3}\), \(\chi^{(\frac{1}{2})}\in\operatorname{Der}(L_{3})\) is isotone, while \(\chi^{(\frac{1}{2})}(\frac{1}{2}\oplus\frac{1}{2})=\chi^{(\frac{1}{2})}(1)=\frac{1}{2}\neq 1=\frac{1}{2}\oplus\frac{1}{2}=\chi^{(\frac{1}{2})}(\frac{1}{2})\oplus\chi^{(\frac{1}{2})}(\frac{1}{2})\). Thus, in the following proposition, the condition \(d(1)\in\mathbf{B}(A)\) built into \(\operatorname{IDer}(A)\) cannot be removed.
**Proposition 3.21**.: _Let \(A\) be an MV-algebra, and \(d\in\mathrm{Der}(A)\). Then the following statements are equivalent:_
1. \(d\in\mathrm{IDer}(A)\)_;_
2. \(d(x\oplus y)=d(x)\oplus d(y)\) _for all_ \(x,y\in A\)_;_
3. \(d(x\odot y)=d(x)\odot d(y)\) _for all_ \(x,y\in A\)_._
Proof.: (1) \(\Rightarrow\) (2). Assume \(d\in\operatorname{IDer}(A)\). By Lemma 2.9, supposing that \(A\) is a subdirect product of a family \(\{A_{i}\}_{i\in I}\) of MV-chains, let \(h:A\to\prod_{i\in I}A_{i}\) be a one-one homomorphism such that, for each \(j\in I\), the composite map \(\pi_{j}\circ h\) is a homomorphism onto \(A_{j}\). Let \(d(1)=a=(a_{i})_{i\in I}\in\mathbf{B}(A)\). Then \(a_{i}\in\mathbf{B}(A_{i})\), and by Lemma 2.4 (1) we have \(a_{i}=0\) or \(a_{i}=1\) for each \(i\in I\). Since \(d\in\operatorname{IDer}(A)\), it follows by Proposition 3.19 that for any \(x=(x_{i})_{i\in I}\in A\), \(d(x)=x\odot a=(x_{i}\odot_{i}a_{i})_{i\in I}\). Therefore, \(d(x\oplus y)=((x_{i}\oplus_{i}y_{i})\odot_{i}a_{i})_{i\in I}=((x_{i}\odot_{i}a_{i})\oplus_{i}(y_{i}\odot_{i}a_{i}))_{i\in I}=d(x)\oplus d(y)\).
(2) \(\Rightarrow\) (1). Assume that \(d(x\oplus y)=d(x)\oplus d(y)\) for all \(x,y\in A\). Setting \(x=y=1\), we immediately get \(d(1)=d(1)\oplus d(1)\), and hence \(d(1)\in\mathbf{B}(A)\). To prove that \(d\) is isotone, let \(x\leq y\). Then by Lemma 2.2 (4) there exists an element \(z\in A\) such that \(y=x\oplus z\). So \(d(x)\leq d(x)\oplus d(z)=d(x\oplus z)=d(y)\), and thus \(d\) is isotone.
(1) \(\Rightarrow\) (3). Assume that (1) holds, by Proposition 3.19 we have \(d(x)=d(1)\odot x\). Since \(d(1)\in\mathbf{B}(A)\), \(d(x\odot y)=d(1)\odot(x\odot y)=(d(1)\odot x)\odot(d(1)\odot y)=d(x)\odot d(y)\) for all \(x,y\in A\).
(3) \(\Rightarrow\) (1). Assume that (3) holds. Then for any \(x\in A\), we have \(d(x)=d(x\odot 1)=d(x)\odot d(1)\leq d(1)\) by Lemma 2.3 (1). Setting \(x=y=1\) in (3), we have \(d(1)=d(1)\odot d(1)\), and hence \(d(1)\in\mathbf{B}(A)\). Thus \(d\) is isotone by Proposition 3.19, and therefore (1) holds.
**Corollary 3.22**.: _Let \(A\) be an MV-algebra and \(d\in\mathrm{IDer}(A)\). Then \(d\) is idempotent, that is, \(d^{2}=d\)._
Proof.: Assume that \(d\in\mathrm{IDer}(A)\). By Proposition 3.19 (3) and Proposition 3.21, we have \(d(d(x))=d(1\odot d(x))=d(1)\odot d(x)=d(1\odot x)=d(x)\) for any \(x\in A\). Thus \(d^{2}=d\).
Generally, the converse of Corollary 3.22 does not hold. For example, let \(d=\chi^{(0)}\in\operatorname{Der}(L_{3})\). Then \(d(1)=0\in\mathbf{B}(L_{3})\) and \(d\) is idempotent. But \(d\) is not isotone, since \(d(\frac{1}{2})=\frac{1}{2}>0=d(1)\).
Using the fixed point sets of isotone derivations, characterizations of some different types of lattices have been described in [33]. Analogously, we next discuss the relation between ideals and fixed point sets of \((\odot,\vee)\)-derivations on MV-algebras.
Let \(A\) be an MV-algebra and \(d\in\operatorname{Der}(A)\). Denote the set of all **fixed points of \(d\)** by \(\operatorname{Fix}_{d}(A)\), i.e.,
\[\operatorname{Fix}_{d}(A)=\{x\in A\mid d(x)=x\}.\]
By Proposition 3.3 (10), \(\operatorname{Fix}_{d}(A)\) is a downset.
**Proposition 3.23**.: _Let \(A\) be an MV-algebra. If \(d\in\operatorname{PDer}(A)\), then \(\operatorname{Fix}_{d}(A)\) is a lattice ideal of \(A\)._
Proof.: Assume that \(d\in\operatorname{PDer}(A)\), and let \(d=d_{a}\), where \(a\in A\). Then \(d(x)=a\odot x\) for any \(x\in A\). To prove that \(\operatorname{Fix}_{d}(A)\) is closed under \(\vee\), let \(x,y\in\operatorname{Fix}_{d}(A)\). Then \(d(x)=x\) and \(d(y)=y\). It follows by Lemma 2.3 (7) that \(d(x\lor y)=a\odot(x\lor y)=(a\odot x)\vee(a\odot y)=d(x)\lor d(y)=x\lor y\), and so \(x\lor y\in\operatorname{Fix}_{d}(A)\). Thus \(\operatorname{Fix}_{d}(A)\) is closed under \(\vee\). This, together with the fact that \(\operatorname{Fix}_{d}(A)\) is a downset, implies that \(\operatorname{Fix}_{d}(A)\) is a lattice ideal of \(A\).
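For a concrete instance (worked out directly from the definitions, not taken from the text above): in the MV-chain \(L_{4}=\{0,\frac{1}{3},\frac{2}{3},1\}\) we have \(\operatorname{Fix}_{d_{2/3}}(L_{4})=\{0\}\), since \(\frac{2}{3}\odot\frac{1}{3}=0\), \(\frac{2}{3}\odot\frac{2}{3}=\frac{1}{3}\) and \(\frac{2}{3}\odot 1=\frac{2}{3}\); by contrast, if \(a\) is an element of a Boolean algebra \(A\), then \(d_{a}(x)=a\wedge x\), so \(\operatorname{Fix}_{d_{a}}(A)=\{x\in A\mid x\leq a\}\) is exactly the principal lattice ideal \(\downarrow a\).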
## 4. Direct product of \((\odot,\vee)\)-derivations
In this section, we will discuss the relation between direct product of \((\odot,\vee)\)-derivations and \((\odot,\vee)\)-derivations on the direct product of MV-algebras.
**Definition 4.1**.: [11] Let \(\Omega\) be an index set. The **direct product**\(\prod_{i\in\Omega}A_{i}\) of a family \(\{A_{i}\}_{i\in\Omega}\) of MV-algebras is the MV-algebra obtained by endowing the set-theoretical cartesian product of the family with the MV-operations defined pointwise. In other words, \(\prod_{i\in\Omega}A_{i}\) is the set of all functions \(f:\Omega\to\bigcup_{i\in\Omega}A_{i}\) such that \(f(i)\in A_{i}\) for all \(i\in\Omega\), with the operations " \(*\) " and " \(\oplus\) " defined by
\[(f^{*})(i)=_{\operatorname{def}}(f(i))^{*}\quad\text{ and }\quad(f\oplus g)(i)=_{ \operatorname{def}}f(i)\oplus g(i)\text{ for all }i\in\Omega.\]
The zero element \(0\) of \(\prod_{i\in\Omega}A_{i}\) is the function \(i\in\Omega\mapsto 0_{i}\in A_{i}\), and the element \(1\) of \(\prod_{i\in\Omega}A_{i}\) is the function \(i\in\Omega\mapsto 1_{i}\in A_{i}\) for all \(i\in\Omega\).
The binary operation " \(\odot\) " and " \(\ominus\) " on \(\prod_{i\in\Omega}A_{i}\) can be induced by " \(\oplus\) " and " \(\ast\) ". Let \(g,h\in\prod_{i\in\Omega}A_{i}\). By Lemma 2.2 we know that \(g\leqslant h\) in \(\prod_{i\in\Omega}A_{i}\) if and only if \(g^{*}\oplus h=1\) if and only if \((g(i))^{*}\oplus h(i)=1_{i}\) in \(A_{i}\) if and only if \(g(i)\leqslant h(i)\) for any \(i\in\Omega\). As usual, we write \((g(i))_{i\in\Omega}\) for \(g\).
**Definition 4.2**.: [11] For each \(i\in\Omega\), define the map \(\pi_{i}:\prod_{i\in\Omega}A_{i}\to A_{i}\) by \(\pi_{i}(g)=g(i)\) for any \(g\in\prod_{i\in\Omega}A_{i}\), and define the map \(\rho_{i}:A_{i}\to\prod_{i\in\Omega}A_{i}\) by
\[(\rho_{i}(a))\,(j)=\begin{cases}a,&\text{ if }j=i\\ 0_{j},&\text{ otherwise}\end{cases}\]
for any \(a\in A_{i}\). \(\pi_{i}\) is called the \(i\)**-th projection**, and \(\rho_{i}\) is called the \(i\)**-th embedding**.
**Definition 4.3**.: For each \(i\in\Omega\), let \(d_{i}\) be an operator on \(A_{i}\). Define an operator \(\prod_{i\in\Omega}d_{i}:\prod_{i\in\Omega}A_{i}\to\prod_{i\in\Omega}A_{i}\) by \((\prod_{i\in\Omega}d_{i})\,(g)=(d_{i}(g(i)))_{i\in\Omega}\) for any \(g\in\prod_{i\in\Omega}A_{i}\), and we call \(\prod_{i\in\Omega}d_{i}\) the **direct product of the \(\{d_{i}\}_{i\in\Omega}\)**.
When \(\Omega=\{1,2,\cdots,n\}\), we denote the direct product of \(\{A_{i}\}_{i\in\Omega}\) and the direct product of \(\{d_{i}\}_{i\in\Omega}\), respectively, by \(A_{1}\times A_{2}\times\cdots\times A_{n}\) and \(d_{1}\times d_{2}\times\cdots\times d_{n}\).
**Lemma 4.4**.: _Let \(\Omega\) be an index set, \(\{A_{i}\}_{i\in\Omega}\) be a family of MV-algebras, and \(d\) be an operator on \(\prod_{i\in\Omega}A_{i}\). Then the following statements hold:_
1. \(d\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\) _implies that_ \(\pi_{i}d\rho_{i}\in\operatorname{Der}(A_{i})\) _for each_ \(i\in\Omega\)_;_
2. \(d\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\) _and_ \(d\) _isotone implies that_ \(\pi_{i}d\rho_{i}\in\operatorname{Der}(A_{i})\) _and is isotone for each_ \(i\in\Omega\)_;_
3. \(d\in\operatorname{PDer}(\prod_{i\in\Omega}A_{i})\) _implies that_ \(\pi_{i}d\rho_{i}\in\operatorname{PDer}(A_{i})\) _for each_ \(i\in\Omega\)_._
Proof.: (1) Assume that \(d\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\). For each \(i\in\Omega\), let \(x,y\in A_{i}\). Then we have
\[(\pi_{i}d\rho_{i})\left(x\odot y\right) =\pi_{i}d\left(\rho_{i}(x\odot y)\right)=\pi_{i}\left(d\left(\rho_{ i}(x)\odot\rho_{i}(y)\right)\right)\] \[=\pi_{i}\left((d\left(\rho_{i}(x)\right)\odot\rho_{i}(y)\right) \vee\left(\rho_{i}(x)\odot d\left(\rho_{i}(y)\right)\right)\right)\] \[=(\pi_{i}d\rho_{i}(x)\odot\pi_{i}\rho_{i}(y))\vee\left(\pi_{i} \rho_{i}(x)\odot\pi_{i}d\rho_{i}(y)\right)\] \[=(\pi_{i}d\rho_{i}(x)\odot y)\vee\left(x\odot\pi_{i}d\rho_{i}(y)\right)\]
and so \(\pi_{i}d\rho_{i}\in\operatorname{Der}(A_{i})\).
(2) Assume that \(d\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\) and \(d\) is isotone. For each \(i\in\Omega\), we know by (1) that \(\pi_{i}d\rho_{i}\in\operatorname{Der}(A_{i})\). Also, since \(\pi_{i}\) and \(\rho_{i}\) are isotone, it follows that \(\pi_{i}d\rho_{i}\) is isotone. Thus (2) holds.
(3) Assume that \(d\in\operatorname{PDer}(\prod_{i\in\Omega}A_{i})\), i.e, \(d=d_{a}\) for some \(a=(a_{i})_{i\in\Omega}\in\prod_{i\in\Omega}A_{i}\). For each \(i\in\Omega\), let \(x\in A_{i}\). Then we have
\[(\pi_{i}d\rho_{i})\left(x\right)=\pi_{i}d\left(\rho_{i}(x)\right)=\pi_{i} \left(\rho_{i}(x)\odot a\right)=\pi_{i}\left(\rho_{i}(x)\right)\odot\pi_{i}(a) =x\odot a_{i},\]
and thus \(\pi_{i}d\rho_{i}\in\operatorname{PDer}(A_{i})\).
Combining the structures of an MV-algebra and an \((\odot,\vee)\)-derivation in the language of universal algebra [7], we give the following definition.
**Definition 4.5**.: A **differential MV-algebra** is an algebra \((A,\oplus,*,d,0)\) of type \((2,1,1,0)\) such that
1. \((A,\oplus,*,0)\) is an MV-algebra, and
2. \(d\) is an \((\odot,\vee)\)-derivation on \(A\).
Let \(\Omega\) be an index set, \(\{A_{i}\}_{i\in\Omega}\) be a family of MV-algebras, and \(d_{i}\in\operatorname{Der}(A_{i})\). Then \((A_{i},\oplus_{i},*_{i},d_{i},0_{i})\) is a differential MV-algebra. From the viewpoint of universal algebra [7, Theorem 11.9], we know that the class of all differential MV-algebras forms a variety. Thus the direct product \((\prod_{i\in\Omega}A_{i},\oplus,*,\prod_{i\in\Omega}d_{i},0)\) is also a differential MV-algebra, and so \(\prod_{i\in\Omega}d_{i}\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\). Hence we obtain that
\[\prod_{i\in\Omega}\operatorname{Der}(A_{i})\subseteq\operatorname{Der}(\prod_ {i\in\Omega}A_{i}) \tag{3}\]
But \(\prod_{i\in\Omega}\operatorname{Der}(A_{i})\neq\operatorname{Der}(\prod_{i\in \Omega}A_{i})\) whenever \(|\Omega|\geq 2\), see Remark 4.8.
**Example 4.1**.:
1. Let \(L_{2}=\{0,1\}\) be the 2-element MV-chain. Then \(\operatorname{Der}(L_{2})=\{\operatorname{Id}_{L_{2}},\mathbf{0}_{L_{2}}\}\) by Theorem 3.11, so \(\operatorname{Der}(L_{2})\times\operatorname{Der}(L_{2})=\{d_{1}=\operatorname{ Id}_{L_{2}}\times\operatorname{Id}_{L_{2}},d_{2}=\operatorname{Id}_{L_{2}} \times\mathbf{0}_{L_{2}},d_{3}=\mathbf{0}_{L_{2}}\times\operatorname{Id}_{L_{2} },d_{4}=\mathbf{0}_{L_{2}}\times\mathbf{0}_{L_{2}}\}\subseteq\operatorname{Der} (L_{2}\times L_{2})\). Notice that in [14], \(L_{2}\times L_{2}\) is denoted by \(M_{4}\), and \(d_{1},d_{2},d_{3},d_{4}\) are denoted by \(\operatorname{Id}_{M_{4}},y_{2},y_{4},\mathbf{0}_{M_{4}}\), respectively. By [14, Theorem 3.21], \(|\operatorname{Der}(L_{2}\times L_{2})|=9\), so \(\operatorname{Der}(L_{2})\times\operatorname{Der}(L_{2})\neq\operatorname{Der} (L_{2}\times L_{2})\).
2. Let \(L_{3}=\{0,\frac{1}{2},1\}\) be the 3-element MV-chain with \(0<\frac{1}{2}<1\). By Theorem 3.11 we have \(\operatorname{Der}(L_{3})=\{\operatorname{Id}_{L_{3}},\mathbf{0}_{L_{3}},d_{\frac{1}{2}},\chi^{(0)},\chi^{(\frac{1}{2})}\}\). Thus \[|\operatorname{Der}(L_{2})\times\operatorname{Der}(L_{3})|=|\operatorname{Der}(L_{2})|\times|\operatorname{Der}(L_{3})|=10.\] Let \(\mathbf{0}=(0,0)\), \(\mathbf{a}=(0,\frac{1}{2})\), \(\mathbf{b}=(0,1)\), \(\mathbf{c}=(1,0)\), \(\mathbf{d}=(1,\frac{1}{2})\) and \(\mathbf{1}=(1,1)\). Then the Hasse diagram of \(L_{2}\times L_{3}\) is given below (see Figure 1). We list all elements of \(\operatorname{Der}(L_{2}\times L_{3})\) in Table 1, computed using Python (full details are given in Appendix I, Listing 1; a self-contained brute-force sketch is also shown below). It can be verified that there are 23 elements (from \(d_{11}\) to \(d_{33}\)) in \(\operatorname{Der}(L_{2}\times L_{3})\) but not in \(\operatorname{Der}(L_{2})\times\operatorname{Der}(L_{3})\).
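The following minimal sketch (ours, not the Appendix I listing) encodes \(L_{2}\times L_{3}\) as pairs of integers and enumerates all operators satisfying the derivation identity; it should report \(33=10+23\) derivations, in line with Table 1:

```python
# Brute-force enumeration of Der(L_2 x L_3); a minimal sketch, not the
# Appendix I code. L_2 and L_3 are encoded as {0,1} and {0,1,2}, with the
# MV-operations taken componentwise.
from itertools import product

TOPS = (1, 2)                                   # top element of each factor
A = [(a, b) for a in range(2) for b in range(3)]

def odot(x, y):                                 # Lukasiewicz product, componentwise
    return tuple(max(u + v - t, 0) for u, v, t in zip(x, y, TOPS))

def join(x, y):                                 # lattice join, componentwise
    return tuple(max(u, v) for u, v in zip(x, y))

count = 0
for values in product(A, repeat=len(A)):        # all 6^6 maps d: A -> A
    d = dict(zip(A, values))
    if all(d[odot(x, y)] == join(odot(d[x], y), odot(x, d[y]))
           for x in A for y in A):              # d(x.y) = (dx.y) v (x.dy)
        count += 1
print(count)                                    # expected: 33 = 10 + 23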
**Theorem 4.6**.: _Let \(\Omega\) be an index set, \(\{A_{i}\}_{i\in\Omega}\) be a family of MV-algebras, and \(d_{i}\) be an operator on \(A_{i}\) for each \(i\in\Omega\). Let \(A=\prod_{i\in\Omega}A_{i}\). Then the following statements hold:_
1. \(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}=d_{i}\)_, and_ \(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)=d_{i}\pi_{i}\) _for each_ \(i\in\Omega\)_._
2. \(\prod_{i\in\Omega}d_{i}\in\mathrm{Der}(A)\) _if and only if_ \(d_{i}\in\mathrm{Der}(A_{i})\) _for each_ \(i\in\Omega\)_._
3. \(\prod_{i\in\Omega}d_{i}\in\mathrm{Der}(A)\) _and_ \(\prod_{i\in\Omega}d_{i}\) _isotone if and only if_ \(d_{i}\in\mathrm{Der}(A_{i})\) _and_ \(d_{i}\) _is isotone for each_ \(i\in\Omega\)_._
4. \(\prod_{i\in\Omega}d_{i}\in\mathrm{PDer}(A)\) _if and only if_ \(d_{i}\in\mathrm{PDer}(A_{i})\) _for each_ \(i\in\Omega\)_._
5. _For any_ \(i\in\Omega\)_, if_ \(d_{i}\left(0_{i}\right)=0_{i}\)_, then_ \(\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}=\rho_{i}d_{i}\)_, that is, the corresponding diagram is commutative (put_ \(d=\prod_{i\in\Omega}d_{i}\)_)._
Proof.: (1) Let \(i\in\Omega\) and \(a\in A_{i}\). It is easy to see that \(\left(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}\right)(a)=d_{i}(a)\), and so \(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}=d_{i}.\) Also, for any \(z\in A\), we have \(\left(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\right)(z)=d_{i}\pi_{i}(z)\), since \(z=\left(\pi_{i}(z)\right)_{i\in\Omega}.\) Thus \(\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)=d_{i}\pi_{i}.\)
(2) Assume that \(d_{i}\in\operatorname{Der}(A_{i})\) for each \(i\in\Omega\). Then \(\prod_{i\in\Omega}d_{i}\in\operatorname{Der}(A)\) by Eq. (3).
Conversely, if \(\prod_{i\in\Omega}d_{i}\in\operatorname{Der}(A)\), then \(d_{i}=\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}\in\operatorname{Der} (A_{i})\) by (1) and Lemma 4.4 (1).
(3) Assume that \(d_{i}\in\operatorname{Der}(A_{i})\) for each \(i\in\Omega\) and \(d_{i}\) is isotone for each \(i\in\Omega\). Then \(\prod_{i\in\Omega}d_{i}\in\operatorname{Der}(A)\) by (2). And it can be verified that \(\prod_{i\in\Omega}d_{i}\) is isotone. In fact, let \(x,y\in A\) and \(x\leq y\), that is, \(x_{i}\leq y_{i}\) for each \(i\in\Omega\), we have \(\left(\prod_{i\in\Omega}d_{i}\right)(x)=\prod_{i\in\Omega}d_{i}(x_{i})\leq\prod _{i\in\Omega}d_{i}(y_{i})=\left(\prod_{i\in\Omega}d_{i}\right)(y)\).
Conversely, if \(\prod_{i\in\Omega}d_{i}\in\operatorname{Der}(A)\) and \(\prod_{i\in\Omega}d_{i}\) isotone, then \(d_{i}=\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}\in\operatorname{Der} (A_{i})\) and \(d_{i}\) is isotone by (1) and Lemma 4.4 (2).
(4) Assume that \(d_{i}\in\operatorname{PDer}(A_{i})\) for each \(i\in\Omega\). Then \(d_{i}\left(x_{i}\right)=x_{i}\odot a_{i}\), where \(a_{i}\in A_{i}\). Let \(\left(x_{i}\right)_{i\in\Omega}\in A\), and so \(\left(\prod_{i\in\Omega}d_{i}\right)\left(\left(x_{i}\right)_{i\in\Omega} \right)=\left(d_{i}\left(x_{i}\right)\right)_{i\in\Omega}=\left(x_{i}\odot a_{ i}\right)_{i\in\Omega}=\left(x_{i}\right)_{i\in\Omega}\odot\left(a_{i} \right)_{i\in\Omega}.\) Thus \(\prod_{i\in\Omega}d_{i}\in\operatorname{PDer}(A)\).
Conversely, if \(\prod_{i\in\Omega}d_{i}\in\operatorname{PDer}(A)\), then \(d_{i}=\pi_{i}\left(\prod_{i\in\Omega}d_{i}\right)\rho_{i}\in\operatorname{PDer }(A_{i})\) by (1) and Lemma 4.4 (3).
(5) Assume that \(d_{i}\left(0_{i}\right)=0_{i}\) for any \(i\in\Omega\), and \(\prod_{i\in\Omega}d_{i}=d\). To prove that \(d\rho_{i}=\rho_{i}d_{i}\), let \(x\in A_{i}\). We have \(d\rho_{i}(x)=\rho_{i}d_{i}(x)\), since
\[\pi_{j}\left(d\rho_{i}(x)\right)=\begin{cases}d_{i}(x),&\text{ if }j=i\\ d_{j}(0_{j}),&\text{ otherwise}\end{cases}\]
and
\[\pi_{j}\left(\rho_{i}d_{i}(x)\right)=\begin{cases}d_{i}(x),&\text{ if }j=i\\ 0_{j},&\text{ otherwise}.\end{cases}\]
Thus \(d\rho_{i}=\rho_{i}d_{i}\).
**Corollary 4.7**.: _Let \(\Omega\) be an index set, \(\left\{A_{i}\right\}_{i\in\Omega}\) be a family of MV-algebras, and \(d\) be an operator on \(\prod_{i\in\Omega}A_{i}\). Put \(A=\prod_{i\in\Omega}A_{i}\). Then the following statements hold:_
1. _If_ \(d\in\operatorname{Der}(A)\)_, then_ \(d\in\prod_{i\in\Omega}\operatorname{Der}(A_{i})\) _if and only if_ \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\)_._
2. _If_ \(d\in\operatorname{PDer}(A)\)_, then_ \(d\in\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\) _if and only if_ \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\)_._
Proof.: (1) Assume that \(d\in\operatorname{Der}(A)\). Then \(\pi_{i}d\rho_{i}\in\operatorname{Der}(A_{i})\) for each \(i\in\Omega\) by Lemma 4.4 (1), which implies that \(d\in\prod_{i\in\Omega}\operatorname{Der}(A_{i})\) if \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\).
Conversely, if \(d\in\prod_{i\in\Omega}\operatorname{Der}(A_{i})\), then \(d=\prod_{i\in\Omega}d_{i}\) for some \(d_{i}\in\operatorname{Der}(A_{i}).\) It follows by Theorem 4.6 (1) that \(\pi_{i}d\rho_{i}=d_{i}\), and so \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\).
(2) Assume that \(d\in\operatorname{PDer}(A)\). Then \(\pi_{i}d\rho_{i}\in\operatorname{PDer}(A_{i})\) for each \(i\in\Omega\) by Lemma 4.4 (3), which implies that \(d\in\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\) if \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\).
Conversely, if \(d\in\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\), then \(d=\prod_{i\in\Omega}d_{i}\) for some \(d_{i}\in\operatorname{PDer}(A_{i}).\) It follows by Theorem 4.6 (1) that \(\pi_{i}d\rho_{i}=d_{i}\), and so \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\).
**Remark 4.8**.: Let \(\Omega\) be an index set with \(\left|\Omega\right|\geq 2\), \(\left\{A_{i}\right\}_{i\in\Omega}\) be a family of MV-algebras. Then \(\prod_{i\in\Omega}\operatorname{Der}(A_{i})\neq\operatorname{Der}(\prod_{i\in \Omega}A_{i})\), since for any \(a\in\prod_{i\in\Omega}A_{i}\backslash\{1\}\), we have \(\chi^{(a)}\in\operatorname{Der}(\prod_{i\in\Omega}A_{i})\) by Corollary 3.13, but \(\chi^{(a)}\notin\prod_{i\in\Omega}\operatorname{Der}(A_{i})\). In fact, for each \(i\in\Omega\), we have \(\pi_{i}\chi^{(a)}\rho_{i}\in\operatorname{Der}(A_{i})\) by Lemma 4.4 and
\[\pi_{i}\chi^{(a)}\rho_{i}(1_{i})=\pi_{i}(\chi^{(a)}(\rho_{i}(1_{i})))=\pi_{i}( \rho_{i}(1_{i}))=1_{i}.\]
It follows that \(\pi_{i}\chi^{(a)}\rho_{i}=\operatorname{Id}_{A_{i}}\) by Proposition 3.4, so \(\chi^{(a)}\neq\operatorname{Id}_{\prod_{i\in\Omega}A_{i}}=\prod_{i\in\Omega} \pi_{i}\chi^{(a)}\rho_{i}\). Thus \(\chi^{(a)}\notin\prod_{i\in\Omega}\operatorname{Der}(A_{i})\) by Corollary 4.7 (1), and hence \(\prod_{i\in\Omega}\operatorname{Der}(A_{i})\neq\operatorname{Der}(\prod_{i\in \Omega}A_{i})\).
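For a concrete instance (worked out directly from Example 4.1 (1)): in \(L_{2}\times L_{2}\), the derivation \(\chi^{((0,0))}\) fixes \((1,0)\) but sends \((1,1)\) to \((0,0)\), while each of the four product derivations \(\operatorname{Id}_{L_{2}}\times\operatorname{Id}_{L_{2}}\), \(\operatorname{Id}_{L_{2}}\times\mathbf{0}_{L_{2}}\), \(\mathbf{0}_{L_{2}}\times\operatorname{Id}_{L_{2}}\), \(\mathbf{0}_{L_{2}}\times\mathbf{0}_{L_{2}}\) acts on each coordinate independently and therefore cannot do both; hence \(\chi^{((0,0))}\notin\operatorname{Der}(L_{2})\times\operatorname{Der}(L_{2})\).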
**Proposition 4.9**.: _Let \(\Omega\) be an index set, \(\{A_{i}\}_{i\in\Omega}\) be a family of MV-algebras. Then \(\operatorname{PDer}(\prod_{i\in\Omega}A_{i})=\prod_{i\in\Omega}\operatorname{ PDer}(A_{i})\)._
Proof.: Firstly, we have \(\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\subseteq\operatorname{PDer}( \prod_{i\in\Omega}A_{i})\) by Theorem 4.6 (4).
To prove that \(\operatorname{PDer}(\prod_{i\in\Omega}A_{i})\subseteq\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\), let \(d\in\operatorname{PDer}(\prod_{i\in\Omega}A_{i})\). Then for any \(x=(x_{i})_{i\in\Omega}\in\prod_{i\in\Omega}A_{i}\), we have \(d(x)=x\odot a\) for some \(a=(a_{i})_{i\in\Omega}\in\prod_{i\in\Omega}A_{i}\), so \((\prod_{i\in\Omega}\pi_{i}d\rho_{i})(x)=(\pi_{i}d\rho_{i}(x_{i}))_{i\in\Omega}=(\pi_{i}(\rho_{i}(x_{i})\odot a))_{i\in\Omega}=(\pi_{i}\rho_{i}(x_{i})\odot\pi_{i}(a))_{i\in\Omega}=(x_{i}\odot a_{i})_{i\in\Omega}=x\odot a=d(x)\). It follows that \(d=\prod_{i\in\Omega}\pi_{i}d\rho_{i}\), and so \(d\in\prod_{i\in\Omega}\operatorname{PDer}(A_{i})\) by Corollary 4.7 (2).
## 5. Lattice structure of \((\odot,\vee)\)-derivations on MV-algebras
Let \((A,\oplus,*,0)\) be an MV-algebra and let \(\operatorname{O}(A)\) be the set of all operators on \(A\). Define a relation \(\preceq\) on \(\operatorname{O}(A)\) by:
\[(\forall\ d,d^{\prime}\in\operatorname{O}(A))\ d\preceq d^{\prime}\text{ if }d(x)\leq d^{\prime}(x)\text{ for any }x\in A.\]
It is easy to verify that \(\preceq\) is a partial order on \(\operatorname{O}(A)\) and \(\mathbf{0}_{A}\preceq d\preceq\mathbf{1}_{A}\) for any \(d\in\operatorname{O}(A)\), where \(\mathbf{1}_{A}\) is defined by \(\mathbf{1}_{A}(x):=1\) for any \(x\in A\). For any \(d\in\operatorname{Der}(A)\), we have \(\mathbf{0}_{A}\preceq d\preceq\operatorname{Id}_{A}\) since \(0\leq d(x)\leq x\) for any \(x\in A\).
We also define the following binary operations on \(\operatorname{O}(A)\). For \(d,d^{\prime}\in\operatorname{O}(A)\), set
\[\left(d\lor d^{\prime}\right)(x):=d(x)\lor d^{\prime}(x),\ \left(d\wedge d^{ \prime}\right)(x):=d(x)\wedge d^{\prime}(x) \tag{4}\]
for any \(x\in A\).
**Lemma 5.1**.: _Let \(A\) be an MV-algebra. Then \((\operatorname{O}(A),\preceq,\mathbf{0}_{A},\mathbf{1}_{A})\) is a bounded lattice for which \(d\lor d^{\prime}\) and \(d\wedge d^{\prime}\) are, respectively, the least upper bound and the greatest lower bound of \(d\) and \(d^{\prime}\)._
Proof.: Recall that every MV-algebra induces a natural bounded lattice structure. Since the class of all lattices is a variety and \(\operatorname{O}(A)\) is the direct product of \(|A|\) copies of \(A\), the lemma follows immediately from the usual notions of universal algebra [7, Definition 7.8].
We next explore the partial order structure of the set of \((\odot,\vee)\)-derivations on MV-algebras.
**Lemma 5.2**.: _Let \(A\) be an MV-algebra. Then \(d\lor d^{\prime}\in\operatorname{Der}(A)\) for all \(d,d^{\prime}\in\operatorname{Der}(A)\)._
Proof.: Let \(d,d^{\prime}\in\operatorname{Der}(A)\) and \(x,y\in A\). Then we have
\[(d\lor d^{\prime})(x\odot y) = d(x\odot y)\lor d^{\prime}(x\odot y)\] \[= ((d(x)\odot y)\vee(x\odot d(y)))\vee((d^{\prime}(x)\odot y)\vee( x\odot d^{\prime}(y)))\] \[= ((d(x)\odot y)\vee(d^{\prime}(x)\odot y))\vee((x\odot d(y))\vee( x\odot d^{\prime}(y)))\] \[= ((d(x)\lor d^{\prime}(x))\odot y)\vee(x\odot(d(y)\lor d^{\prime}( y)))\] \[= ((d\lor d^{\prime})(x)\odot y)\vee(x\odot(d\lor d^{\prime})(y))\]
by Lemma 2.3 (7), and so \(d\lor d^{\prime}\in\operatorname{Der}(A)\).
For \(d,d^{\prime}\in\operatorname{Der}(A)\), note that the operator \(d\wedge d^{\prime}\) is not necessarily in \(\operatorname{Der}(A)\), even if \(A\) is a Boolean algebra; see [14, Example 3.7 and Remark 4.2].
**Proposition 5.3**.: _Let \(A\) be an MV-algebra._
1. _If_ \(d\wedge d^{\prime}\in\operatorname{Der}(A)\) _for all_ \(d,d^{\prime}\in\operatorname{Der}(A)\)_, then_ \((\operatorname{Der}(A),\vee,\wedge,\mathbf{0}_{A},\operatorname{Id}_{A})\) _is a lattice._
2. _If_ \(A\) _is a finite MV-algebra, then_ \((\operatorname{Der}(A),\preceq,\mathbf{0}_{A},\operatorname{Id}_{A})\) _is a lattice._
Proof.: (1) For \(d,d^{\prime}\in\operatorname{Der}(A)\), we know that \(d\lor d^{\prime}\in\operatorname{Der}(A)\) by Lemma 5.2. Assume that \(d\wedge d^{\prime}\in\operatorname{Der}(A)\) for all \(d,d^{\prime}\in\operatorname{Der}(A)\). Then \((\operatorname{Der}(A),\preceq)\) is a sublattice of the lattice \((\operatorname{O}(A),\preceq)\) by Lemma 5.1. Thus we complete the proof.
(2) Assume that \(A\) is a finite MV-algebra. By Lemma 5.2 we have \(d\lor d^{\prime}\in\operatorname{Der}(A)\) for all \(d,d^{\prime}\in\operatorname{Der}(A)\). Since \(\operatorname{Der}(A)\) is finite as a subset of the finite set \(\operatorname{O}(A)\), it follows that \(\bigvee B:=\bigvee_{b\in B}b\) exists for every subset \(B\) of \(\operatorname{Der}(A)\). Noting that \(\bigvee\emptyset=\mathbf{0}_{A}\), we conclude that \((\operatorname{Der}(A),\preceq,\mathbf{0}_{A},\operatorname{Id}_{A})\) is a lattice by [7, Theorem 4.2].
In what follows, we will describe the lattice \(\operatorname{Der}(L_{n})\)\((n\geq 2)\).
**Lemma 5.4**.: _Let \((L,\preceq)\) be a chain with the bottom element \(0\), and let_
\[\mathcal{A}(L)=\{(x,y)\in L\times L\mid y\leq x\}\backslash\{(0,0)\}.\]
_Then \((\mathcal{A}(L),\prec)\) is a sublattice of the lattice \((L\times L,\prec)\), where \(\prec\) is defined by: for any \((x_{1},y_{1}),(x_{2},y_{2})\in L\times L\),_
\[(x_{1},y_{1})\prec(x_{2},y_{2})\text{ if and only if }x_{1}\leq x_{2}\text{ and }y_{1}\leq y_{2}.\]
Proof.: It is well known that \((L\times L,\prec)\) is a lattice and for any \((x_{1},y_{1}),(x_{2},y_{2})\in L\times L\),
\[(x_{1}\lor x_{2},y_{1}\lor y_{2})=(x_{1},y_{1})\vee(x_{2},y_{2}),\quad(x_{1} \wedge x_{2},y_{1}\wedge y_{2})=(x_{1},y_{1})\wedge(x_{2},y_{2}).\]
To prove that \((\mathcal{A}(L),\prec)\) is a sublattice of the lattice \((L\times L,\prec)\), let \((a,b),(c,d)\in\mathcal{A}(L)\). Then \(b\leq a\), \(d\leq c\) and \((a,b)\neq(0,0)\), \((c,d)\neq(0,0)\). It follows that \(b\lor d\leq a\lor c\), \(b\wedge d\leq a\wedge c\), \(a\neq 0\) and \(c\neq 0\), so \(a\lor c\neq 0\) and \(a\wedge c\neq 0\) since \(L\) is a chain. Thus \((a\lor c,b\lor d)\neq(0,0)\), and \((a\wedge c,b\wedge d)\neq(0,0)\). So \((a,b)\vee(c,d)\in\mathcal{A}(L)\) and \((a,b)\wedge(c,d)\in\mathcal{A}(L)\). Consequently, we get that \((\mathcal{A}(L),\prec)\) is a sublattice of the lattice \((L\times L,\prec)\).
**Lemma 5.5**.: _Let \(n\geq 2\) be a positive integer, \(L_{n}\) be the \(n\)-element MV-chain, and let \(\mathcal{A}(L_{n})=\{(x,y)\in L_{n}\times L_{n}\mid y\leq x\}\backslash\{(0,0)\}\). Then the following statements hold:_
1. \((d_{x})^{y}\neq(d_{z})^{w}\) _for any_ \((x,y),(z,w)\in\mathcal{A}(L_{n})\) _with_ \((x,y)\neq(z,w)\)_, where_ \((d_{x})^{y}\) _is defined by_ \[(d_{x})^{y}(z):=\begin{cases}y&\text{if }z=1\\ d_{x}(z)=x\odot z&\text{otherwise}.\end{cases}\] _(See Proposition_ 3.12_)._
2. \(\operatorname{Der}(L_{n})=\{(d_{x})^{y}\mid(x,y)\in\mathcal{A}(L_{n})\}\)_._
3. \((d_{x})^{y}\wedge(d_{z})^{w}=(d_{x\wedge z})^{y\wedge w}\) _and_ \((d_{x})^{y}\vee(d_{z})^{w}=(d_{x\lor z})^{y\lor w}\) _for any_ \((x,y),(z,w)\in\mathcal{A}(L_{n})\)_._
4. \(\operatorname{Der}(L_{n})\) _is a sublattice of_ \((\operatorname{O}(L_{n}),\preceq)\)_._
Proof.: (1) Let \((x,y),(z,w)\in\mathcal{A}(L_{n})\) with \((x,y)\neq(z,w)\). Then \(y\leq x\), \(x\neq 0\), and \(w\leq z\), \(z\neq 0\). So \(x^{*}\neq 1\) and \(z^{*}\neq 1\).
If \(y\neq w\), then \((d_{x})^{y}(1)=y\neq w=(d_{z})^{w}(1)\), and so \((d_{x})^{y}\neq(d_{z})^{w}\).
If \(x\neq z\) and \(y=w\), then we also have \((d_{x})^{y}\neq(d_{z})^{w}\). Indeed, suppose on the contrary that \((d_{x})^{y}=(d_{z})^{w}\). Since \(x^{*}\neq 1\) and \(z^{*}\neq 1\), we have
\[z\odot x^{*}=(d_{z})^{w}(x^{*})=(d_{x})^{y}(x^{*})=x\odot x^{*}=0\]
and \(x\odot z^{*}=(d_{x})^{y}(z^{*})=(d_{z})^{w}(z^{*})=z\odot z^{*}=0\), which implies that \(z\leq x\) and \(x\leq z\) by Lemma 2.2, and so \(x=z\), a contradiction.
(2) Denote the set \(\{(d_{x})^{y}\mid(x,y)\in\mathcal{A}(L_{n})\}\) by \(\mathcal{B}\). For any \((x,y)\in\mathcal{A}(L_{n})\), we have \(y\leq x=d_{x}(1)\) and so \((d_{x})^{y}\in\operatorname{Der}(L_{n})\) by Proposition 3.12. Thus \(\mathcal{B}\subseteq\operatorname{Der}(L_{n})\). Also, by Item (1) we obtain that \(|\mathcal{B}|=|\mathcal{A}(L_{n})|=\frac{n(n+1)}{2}-1=\frac{(n+2)(n-1)}{2}\), so \(|\mathcal{B}|=|\operatorname{Der}(L_{n})|\) by Theorem 3.11. Hence \(\mathcal{B}=\operatorname{Der}(L_{n})\).
(3) Let \((x,y),(z,w)\in\mathcal{A}(L_{n})\). Then \(((d_{x})^{y}\wedge(d_{z})^{w})(1)=(d_{x})^{y}(1)\wedge(d_{z})^{w}(1)=y\wedge w =(d_{x\wedge z})^{y\wedge w}(1)\) and \(((d_{x})^{y}\vee(d_{z})^{w})(1)=(d_{x})^{y}(1)\vee(d_{z})^{w}(1)=y\lor w=(d_{ x\lor z})^{y\lor w}(1)\).
Also, for \(c\in L_{n}\backslash\{1\}\), we have
\[((d_{x})^{y}\wedge(d_{z})^{w})(c)=(d_{x})^{y}(c)\wedge(d_{z})^{w}(c)=(x\odot c )\wedge(z\odot c)=(x\wedge z)\odot c=(d_{x\wedge z})^{y\wedge w}(c)\]
by Lemma 2.3 (6), and
\[((d_{x})^{y}\vee(d_{z})^{w})(c)=(d_{x})^{y}(c)\vee(d_{z})^{w}(c)=(x\odot c) \vee(z\odot c)=(x\lor z)\odot c=(d_{x\lor z})^{y\lor w}(c)\]
by Lemma 2.3 (7). It follows that \((d_{x})^{y}\wedge(d_{z})^{w}=(d_{x\wedge z})^{y\wedge w}\) and \((d_{x})^{y}\vee(d_{z})^{w}=(d_{x\lor z})^{y\lor w}\).
(4) It follows immediately by Items (2), (3) and Lemma 5.4 that \(\operatorname{Der}(L_{n})\) is closed under \(\vee\) and \(\wedge\), so \(\operatorname{Der}(L_{n})\) is a sublattice of \((\operatorname{O}(L_{n}),\preceq)\).
**Theorem 5.6**.: _Let \(n\geq 2\) be a positive integer, \(L_{n}\) be the \(n\)-element MV-chain, and let \(\mathcal{A}(L_{n})=\{(x,y)\in L_{n}\times L_{n}\mid y\leq x\}\backslash\{(0,0)\}\). Then the lattice \(\operatorname{Der}(L_{n})\) is isomorphic to the lattice \(\mathcal{A}(L_{n})\) (see the Hasse diagrams in Example 5.1)._
Proof.: Let \(\mathcal{B}=\{(d_{x})^{y}|(x,y)\in\mathcal{A}(L_{n})\}\). Then \(\mathcal{B}=\operatorname{Der}(L_{n})\) by Lemma 5.5 (2).
Define a map \(f\colon\mathcal{A}(L_{n})\to\mathcal{B}\) by \((x,y)\mapsto(d_{x})^{y}\) for any \((x,y)\in\mathcal{A}(L_{n})\). Then \(f\) is injective by Lemma 5.5 (1). Also, it is clear that \(f\) is surjective by the definition of \(\mathcal{B}\).
To prove that \(f\) is a homomorphism, let \((x,y),(z,w)\in\mathcal{A}(L_{n})\). Then, by Lemma 5.5, we have \(f((x,y)\vee(z,w))=f((x\lor z,y\lor w))=(d_{x\lor z})^{y\lor w}=(d_{x})^{y} \vee(d_{z})^{w}=f((x,y))\lor f((z,w))\) and \(f((x,y)\wedge(z,w))=f((x\wedge z,y\wedge w))=(d_{x\wedge z})^{y\wedge w}=(d_{x} )^{y}\wedge(d_{z})^{w}=f((x,y))\wedge f((z,w))\). Thus \(f\) is a lattice isomorphism.
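As a concrete illustration (worked out directly from the definitions above): for \(n=3\), \(\mathcal{A}(L_{3})=\{(\frac{1}{2},0),(\frac{1}{2},\frac{1}{2}),(1,0),(1,\frac{1}{2}),(1,1)\}\), and \(f\) sends these pairs, in order, to \((d_{1/2})^{0}=\mathbf{0}_{L_{3}}\) (since \(\frac{1}{2}\odot\frac{1}{2}=0\)), \((d_{1/2})^{1/2}=d_{\frac{1}{2}}\), \((d_{1})^{0}=\chi^{(0)}\), \((d_{1})^{1/2}=\chi^{(\frac{1}{2})}\) and \((d_{1})^{1}=\operatorname{Id}_{L_{3}}\), recovering \(\operatorname{Der}(L_{3})=\{\operatorname{Id}_{L_{3}},\mathbf{0}_{L_{3}},d_{\frac{1}{2}},\chi^{(0)},\chi^{(\frac{1}{2})}\}\) from Example 4.1 (2).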
**Example 5.1**.:
1. We draw Hasse diagrams of \(\operatorname{Der}(L_{n})(2\leq n\leq 5)\) in the following:
[Hasse diagrams of \(\operatorname{Der}(L_{2})\), \(\operatorname{Der}(L_{3})\), \(\operatorname{Der}(L_{4})\) and \(\operatorname{Der}(L_{5})\)]
2. The Hasse diagram of \(\text{Der}(L_{2}\times L_{2})\) is given in [14, Example 4.21(iii)], where \(d_{1}\)-\(d_{4}\) are as in Example 4.1 (1) and the others are the same as \(\text{DO}(M_{4})\) in [14]. And we can get the Hasse diagram of \(\text{Der}(L_{2}\times L_{3})\) from Table 1 in Example 4.1 (2) using Python. For details, see Appendix II, Listing 2.
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{3 })&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2}\times L_{2})&\text{Der}(L_{2}\times L_{ 3})&\text{Der}(L_{2}\times L_{3})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
\(\begin{array}{cccc}\text{Der}(L_{2})&\text{Der}(L_{3})&\text{Der}(L_{4})& \text{Der}(L_{5})\\ \end{array}\)
**Lemma 5.7**.: _[_11_, Lemma 6.6.4]_ _Let \(A\) be a complete MV-algebra, \(x\in A\), and let \(\{x_{i}\}_{i\in\Omega}\) be a family of elements of \(A\). Then_
\[x\odot\bigvee_{i\in\Omega}x_{i}=\bigvee_{i\in\Omega}\left(x\odot x_{i}\right). \tag{6}\]
**Theorem 5.8**.: _Let \(A\) be a complete MV-algebra and \(\{d_{i}\}_{i\in\Omega}\) be a family of elements of \(\operatorname{Der}(A)\). Then the following statements hold:_
1. \(\bigvee_{i\in\Omega}d_{i}\in\mathrm{Der}(A)\)_._
2. \((\mathrm{Der}(A),\preceq,\mathbf{0}_{A},\mathrm{Id}_{A})\) _is a complete lattice._
Proof.: (1) For any \(x,y\in A\), we have
\[(\bigvee_{i\in\Omega}d_{i})(x\odot y)=\bigvee_{i\in\Omega}d_{i}(x\odot y)=\bigvee_{i\in\Omega}\big((d_{i}(x)\odot y)\vee(x\odot d_{i}(y))\big)=\Big(\bigvee_{i\in\Omega}(d_{i}(x)\odot y)\Big)\vee\Big(\bigvee_{i\in\Omega}(x\odot d_{i}(y))\Big)\overset{(6)}{=}\Big(\big(\bigvee_{i\in\Omega}d_{i}(x)\big)\odot y\Big)\vee\Big(x\odot\bigvee_{i\in\Omega}d_{i}(y)\Big)=\Big((\bigvee_{i\in\Omega}d_{i})(x)\odot y\Big)\vee\Big(x\odot(\bigvee_{i\in\Omega}d_{i})(y)\Big),\]
and so \(\bigvee_{i\in\Omega}d_{i}\in\mathrm{Der}(A)\).
(2) We shall prove that \(\bigvee_{i\in\Omega}d_{i}\) is the least upper bound of \(\{d_{i}\}_{i\in\Omega}\) in the poset \((\mathrm{Der}(A),\preceq)\). Indeed, firstly, we have \(\bigvee_{i\in\Omega}d_{i}\in\mathrm{Der}(A)\) by Item (1). Secondly, for each \(i\in\Omega\), we have \(d_{i}(x)\leq\bigvee_{i\in\Omega}d_{i}(x)=(\bigvee_{i\in\Omega}d_{i})(x)\) for any \(x\in A\) and so \(d_{i}\preceq\bigvee_{i\in\Omega}d_{i}\). Thus \(\bigvee_{i\in\Omega}d_{i}\) is an upper bound of \(\{d_{i}\}_{i\in\Omega}\). Finally, let \(d^{\prime}\in\mathrm{Der}(A)\) such that \(d_{i}\preceq d^{\prime}\) for each \(i\in\Omega\). Then \(d_{i}(x)\leq d^{\prime}(x)\) for any \(x\in A\), which implies that \((\bigvee_{i\in\Omega}d_{i})(x)=\bigvee_{i\in\Omega}d_{i}(x)\leq d^{\prime}(x)\) and so \(\bigvee_{i\in\Omega}d_{i}\preceq d^{\prime}\). Therefore, we obtain that \(\bigvee_{i\in\Omega}d_{i}\) is the least upper bound of \(\{d_{i}\}_{i\in\Omega}\) in the poset \((\mathrm{Der}(A),\preceq)\). Note that \(\bigvee\emptyset=\mathbf{0}_{A}\) and hence \((\mathrm{Der}(A),\preceq,\mathbf{0}_{A},\mathrm{Id}_{A})\) is a complete lattice by [7, Theorem I.4.2].
Next we consider several lattices of derivations that are isomorphic to the underlying lattice \(\mathbf{L}(A)\) of an MV-algebra \(A\), or to \(\mathbf{B}(A)\).
**Lemma 5.9**.: _Let \(A\) be an MV-algebra. Then the following statements hold:_
1. \(d_{u}\lor d_{v}=d_{u\lor v}\) _and_ \(d_{u}\wedge d_{v}=d_{u\wedge v}\) _for any_ \(u,v\in A\)_._
2. \((\mathrm{PDer}(A),\vee,\wedge,\mathbf{0}_{A},\mathrm{Id}_{A})\) _is a sublattice of_ \((\mathrm{O}(A),\preceq)\)_._
3. \(d\lor d^{\prime},d\wedge d^{\prime}\in\mathrm{IDer}(A)\) _for any_ \(d,d^{\prime}\in\mathrm{IDer}(A)\)_._
4. \((\mathrm{IDer}(A),\vee,\wedge,\mathbf{0}_{A},\mathrm{Id}_{A})\) _is a sublattice of_ \((\mathrm{O}(A),\preceq)\)_._
Proof.: (1) Let \(u,v\in A\). Then, for any \(x\in A\), by Lemma 2.3 (6)(7) we have
\[(d_{u}\lor d_{v})(x)=d_{u}(x)\lor d_{v}(x)=(u\odot x)\vee(v\odot x)=(u\lor v) \odot x=d_{u\lor v}(x),\]
\[(d_{u}\wedge d_{v})(x)=d_{u}(x)\wedge d_{v}(x)=(u\odot x)\wedge(v\odot x)=(u \wedge v)\odot x=d_{u\wedge v}(x).\]
Thus \(d_{u}\lor d_{v}=d_{u\lor v}\) and \(d_{u}\wedge d_{v}=d_{u\wedge v}\).
(2) It follows immediately from Item (1) that \(\mathrm{PDer}(A)\) is closed under \(\vee\) and \(\wedge\). So \((\mathrm{PDer}(A),\vee,\wedge,\mathbf{0}_{A},\mathrm{Id}_{A})\) is a sublattice of \((\mathrm{O}(A),\preceq)\), since \(\mathbf{0}_{A},\mathrm{Id}_{A}\in\mathrm{PDer}(A)\).
(3) Let \(d,d^{\prime}\in\operatorname{IDer}(A)\). Then \(d(1),d^{\prime}(1)\in\mathbf{B}(A)\) and \(d,d^{\prime}\in\operatorname{PDer}(A)\) by Proposition 3.19. Recall that \(\mathbf{B}(A)\) is a subalgebra of \(A\). Since \(d(1),d^{\prime}(1)\in\mathbf{B}(A)\), it follows that \((d\lor d^{\prime})(1)=d(1)\lor d^{\prime}(1)=d(1)\oplus d^{\prime}(1)\in\mathbf{B}(A)\) by Lemma 2.5 (5). Similarly, \((d\wedge d^{\prime})(1)\in\mathbf{B}(A)\). Moreover, we have \(d\lor d^{\prime},d\wedge d^{\prime}\in\operatorname{PDer}(A)\) by Item (1). Thus \(d\lor d^{\prime},d\wedge d^{\prime}\in\operatorname{IDer}(A)\).
(4) It follows immediately from Item (3) that \({\rm IDer}(A)\) is closed under \(\vee\) and \(\wedge\). So \(({\rm IDer}(A),\vee,\wedge,{\bf 0}_{A},{\rm Id}_{A})\) is a sublattice of \(({\rm O}(A),\preceq)\), since \({\bf 0}_{A},{\rm Id}_{A}\in{\rm IDer}(A)\).
**Proposition 5.10**.: _Let \(A\) be an MV-algebra. Then_
1. \(({\rm PDer}(A),\vee,\wedge,{\bf 0}_{A},{\rm Id}_{A})\) _is a lattice isomorphic to_ \({\bf L}(A)\)_; and_
2. \(({\rm IDer}(A),\vee,\wedge,{\bf 0}_{A},{\rm Id}_{A})\) _is a lattice isomorphic to_ \({\bf B}(A)\)_._
Proof.: (1) It follows by Lemma 5.9 (2) that \(({\rm PDer}(A),\vee,\wedge,{\bf 0}_{A},{\rm Id}_{A})\) is a lattice.
Define a map \(g:{\rm PDer}(A)\to{\bf L}(A)\) by \(g(d_{u})=u\) for any \(d_{u}\in{\rm PDer}(A)\). Then \(g\) is a bijection. In fact, if \(g(d_{u})=g(d_{v})\), then \(u=v\), and so \(d_{u}=d_{v}\). Thus \(g\) is injective. Also, for each \(u\in A\), there exists \(d_{u}\in{\rm PDer}(A)\) such that \(g(d_{u})=u\), so \(g\) is surjective. By Lemma 5.9 (1), we have \(g(d_{u}\lor d_{v})=g(d_{u\lor v})=u\lor v=g(d_{u})\lor g(d_{v})\) and \(g(d_{u}\wedge d_{v})=g(d_{u\wedge v})=u\wedge v=g(d_{u})\wedge g(d_{v})\). Thus \(g\) is a lattice isomorphism.
(2) It follows by Lemma 5.9 (4) that \(({\rm IDer}(A),\vee,\wedge,{\bf 0}_{A},{\rm Id}_{A})\) is a lattice.
Define a map \(f:{\rm IDer}(A)\to{\bf B}(A)\) by \(f(d)=d(1)\) for any \(d\in{\rm IDer}(A)\). By Corollary 3.20, \(f\) is a bijection. Also, it is clear that \(f({\bf 0}_{A})={\bf 0}_{A}(1)=0\) and \(f({\rm Id}_{A})={\rm Id}_{A}(1)=1\). By Lemma 5.9 (1), we have \(f(d_{u}\lor d_{v})=f(d_{u\lor v})=u\lor v=f(d_{u})\lor f(d_{v})\) and \(f(d_{u}\wedge d_{v})=f(d_{u\wedge v})=u\wedge v=f(d_{u})\wedge f(d_{v})\). Thus \(f\) is a lattice isomorphism.
Let \(\chi^{(A)}=\{\chi^{(u)}\mid u\in A\}\), where \(\chi^{(u)}\) is defined in Corollary 3.13. We will show that \((\chi^{(A)},\preceq)\) is also a lattice isomorphic to \({\bf L}(A)\).
**Lemma 5.11**.: _Let \(A\) be an MV-algebra and \(u,v\in A\). Then the following statements hold:_
1. \(\chi^{(u)}\vee\chi^{(v)}=\chi^{(u\lor v)}\) _and_ \(\chi^{(u)}\wedge\chi^{(v)}=\chi^{(u\wedge v)}\)_._
2. \(\chi^{(u)}=\chi^{(v)}\) _if and only if_ \(u=v\)_._
Proof.: (1) For any \(x\in A\), we have
\[(\chi^{(u)}\vee\chi^{(v)})(x)=\chi^{(u)}(x)\vee\chi^{(v)}(x)=\begin{cases}u \lor v,&\text{if $x=1$;}\\ x,&\text{otherwise}\end{cases}\]
and
\[(\chi^{(u)}\wedge\chi^{(v)})(x)=\chi^{(u)}(x)\wedge\chi^{(v)}(x)=\begin{cases}u \wedge v,&\text{if $x=1$;}\\ x,&\text{otherwise}\end{cases}\]
Thus \(\chi^{(u)}\vee\chi^{(v)}=\chi^{(u\lor v)}\) and \(\chi^{(u)}\wedge\chi^{(v)}=\chi^{(u\wedge v)}\).
(2) It is clear that \(u=v\) implies \(\chi^{(u)}=\chi^{(v)}\). Conversely, if \(\chi^{(u)}=\chi^{(v)}\), then \(u=\chi^{(u)}(1)=\chi^{(v)}(1)=v\).
**Proposition 5.12**.: _If \(A\) is an MV-algebra, then \((\chi^{(A)},\preceq)\) is a sublattice of \(({\rm O}(A),\preceq)\) and \((\chi^{(A)},\preceq)\) is isomorphic to \({\bf L}(A)\)._
Proof.: Let \(u,v\in A\). Then \(\chi^{(u)}\vee\chi^{(v)}=\chi^{(u\lor v)}\in\chi^{(A)}\) and \(\chi^{(u)}\wedge\chi^{(v)}=\chi^{(u\wedge v)}\in\chi^{(A)}\) by Lemma 5.11. Thus \((\chi^{(A)},\preceq)\) is a sublattice of \(({\rm O}(A),\preceq)\) by Lemma 5.1.
Define a map \(f:{\bf L}(A)\to\chi^{(A)}\) by \(f(u)=\chi^{(u)}\) for any \(u\in{\bf L}(A)\). By Lemma 5.11, \(f\) is an injective homomorphism. Also, it is clear that \(f\) is surjective by the definition of \(\chi^{(A)}\). Hence \(f\) is a lattice isomorphism.
Recall that a **filter**[11] of a lattice \(L\) is a non-empty subset \(F\) of \(L\) such that: \((i)\)\(a,b\in F\) implies \(a\wedge b\in F\) and \((ii)\)\(a\in F\), \(c\in L\) and \(a\leq c\) imply \(c\in F\).
**Proposition 5.13**.: _Let \(A\) be an MV-algebra. If \((\operatorname{Der}(A),\vee,\wedge,\mathbf{0}_{A},\operatorname{Id}_{A})\) is a lattice, then \(\chi^{(A)}\) is a filter of the lattice \(\operatorname{Der}(A)\)._
Proof.: Assume that \((\operatorname{Der}(A),\vee,\wedge,\mathbf{0}_{A},\operatorname{Id}_{A})\) is a lattice. It is clear that \(\chi^{(A)}\) is a non-empty subset of \(\operatorname{Der}(A)\) since \(\chi^{(0)}\in\chi^{(A)}\). Also, by Lemma 5.11, \(\chi^{(A)}\) is closed under \(\wedge\).
Finally, assume that \(d\in\operatorname{Der}(A)\) such that \(\chi^{(u)}\preceq d\) for some \(u\in A\). Then \(A\backslash\{1\}\subseteq\operatorname{Fix}_{d}(A)\). In fact, for any \(x\in A\backslash\{1\}\), we have \(x=\chi^{(u)}(x)\leq d(x)\) and so \(d(x)=x\), since \(d(x)\leq x\) by Proposition 3.3 (4). It follows that \(x\in\operatorname{Fix}_{d}(A)\) and hence \(A\backslash\{1\}\subseteq\operatorname{Fix}_{d}(A)\). Consequently, we have \(d\in\chi^{(A)}\). Therefore, \(\chi^{(A)}\) is a filter of the lattice \(\operatorname{Der}(A)\).
## 6. Discussions
In this paper, we have given a detailed algebraic study of \((\odot,\vee)\)-derivations on MV-algebras. There are many other types of derivations on MV-algebras, which may lead to further research and applications.
We list some questions at the end of this paper.
1. In Proposition 3.16 we have seen the relation between the cardinality \(|A|\) of an MV-algebra and the cardinality \(|\operatorname{Der}(A)|\) of its set of derivations for small orders. The question is whether such a relation can be found for larger \(|\operatorname{Der}(A)|\).
2. For any finite MV-algebra \(A\), we have shown in Proposition 5.3 (2) that \((\operatorname{Der}(A),\preceq,\mathbf{0}_{A},\operatorname{Id}_{A})\) is a lattice. Can we characterize its Hasse diagram?
3. In Lemma 5.5, it has been shown that \((\operatorname{Der}(L_{n}),\preceq)\) is a lattice for any MV-chain \(L_{n}\) \((n\geq 2)\). Naturally, we ask: for any MV-algebra \(A\), is the poset \((\operatorname{Der}(A),\preceq,\mathbf{0}_{A},\operatorname{Id}_{A})\) a lattice?
4. For any two MV-algebras \(A\) and \(A^{\prime}\), if \((\operatorname{Der}(A),\preceq,\mathbf{0}_{A},\operatorname{Id}_{A})\) and \((\operatorname{Der}(A^{\prime}),\preceq,\mathbf{0}_{A^{\prime}},\operatorname {Id}_{A^{\prime}})\) are isomorphic lattices, then are \(A\) and \(A^{\prime}\) isomorphic?
## Declaration
This article does not involve any particular data or human participants; the results obtained have been established from the articles cited in the references. However, we remain ready to provide any information useful for a good understanding of our article.
**(1) Ethical approval**: We declare that we have complied with the ethical standards for publishing articles in this journal.
**(2) Funding details**: The work is partially supported by CNNSF (Grants: 12171022, 62250001).
**(3) Conflict of interest**: The authors have no conflicts of interest to declare that are relevant to the content of this article.
**(4) Informed Consent**: Not applicable.
**(5) Authorship contributions**: All authors contributed to this article.
|
2306.06109 | Learning to Quantize Vulnerability Patterns and Match to Locate
Statement-Level Vulnerabilities | Deep learning (DL) models have become increasingly popular in identifying
software vulnerabilities. Prior studies found that vulnerabilities across
different vulnerable programs may exhibit similar vulnerable scopes, implicitly
forming discernible vulnerability patterns that can be learned by DL models
through supervised training. However, vulnerable scopes still manifest in
various spatial locations and formats within a program, posing challenges for
models to accurately identify vulnerable statements. Despite this challenge,
state-of-the-art vulnerability detection approaches fail to exploit the
vulnerability patterns that arise in vulnerable programs. To take full
advantage of vulnerability patterns and unleash the ability of DL models, we
propose a novel vulnerability-matching approach in this paper, drawing
inspiration from program analysis tools that locate vulnerabilities based on
pre-defined patterns. Specifically, a vulnerability codebook is learned, which
consists of quantized vectors representing various vulnerability patterns.
During inference, the codebook is iterated to match all learned patterns and
predict the presence of potential vulnerabilities within a given program. Our
approach was extensively evaluated on a real-world dataset comprising more than
188,000 C/C++ functions. The evaluation results show that our approach achieves
an F1-score of 94% (6% higher than the previous best) and 82% (19% higher than
the previous best) for function and statement-level vulnerability
identification, respectively. These substantial enhancements highlight the
effectiveness of our approach to identifying vulnerabilities. The training code
and pre-trained models are available at https://github.com/optimatch/optimatch. | Michael Fu, Trung Le, Van Nguyen, Chakkrit Tantithamthavorn, Dinh Phung | 2023-05-26T04:13:31Z | http://arxiv.org/abs/2306.06109v1 | # Learning to Quantize Vulnerability Patterns and Match to Locate Statement-Level Vulnerabilities
###### Abstract
Deep learning (DL) models have become increasingly popular in identifying software vulnerabilities. Prior studies found that vulnerabilities across different vulnerable programs may exhibit similar vulnerable scopes, implicitly forming discernible vulnerability patterns that can be learned by DL models through supervised training. However, vulnerable scopes still manifest in various spatial locations and formats within a program, posing challenges for models to accurately identify vulnerable statements. Despite this challenge, state-of-the-art vulnerability detection approaches fail to exploit the vulnerability patterns that arise in vulnerable programs. To take full advantage of vulnerability patterns and unleash the ability of DL models, we propose a novel vulnerability-matching approach in this paper, drawing inspiration from program analysis tools that locate vulnerabilities based on pre-defined patterns. Specifically, a vulnerability codebook is learned, which consists of quantized vectors representing various vulnerability patterns. During inference, the codebook is iterated to match all learned patterns and predict the presence of potential vulnerabilities within a given program. Our approach was extensively evaluated on a real-world dataset comprising more than 188,000 C/C++ functions. The evaluation results show that our approach achieves an F1-score of 94% (6% higher than the previous best) and 82% (19% higher than the previous best) for function and statement-level vulnerability identification, respectively. These substantial enhancements highlight the effectiveness of our approach to identifying vulnerabilities. The training code and pre-trained models are available at [https://github.com/optimatch/optimatch](https://github.com/optimatch/optimatch).
## 1 Introduction
The number of software vulnerabilities has been escalating rapidly in recent years. In particular, the National Vulnerability Database (NVD) [6] reported 26,448 software vulnerabilities in 2022, soaring 40% from 18,938 in 2019. The extensive use of open-source libraries, in particular, may contribute to this rise in vulnerabilities. For instance, the Apache Struts vulnerabilities [31] indicate that this poses a tangible threat to organizations. The root cause of these vulnerabilities is often insecure coding practices, which make the source code exploitable by attackers who can use it to infiltrate software systems and cause considerable financial and social harm.
To mitigate security threats, security experts leverage static analysis tools that check the code against a set of known patterns of insecure or vulnerable code, such as buffer overflow vulnerabilities and other common security flaws. In contrast, deep learning-based vulnerability detection (VD) identifies vulnerabilities at the file or function levels by implicitly learning vulnerability patterns during training [33; 38; 28; 30]. DL-based VD methods have demonstrated higher accuracy than static analysis tools that only target specific vulnerability types [23; 17; 11].
Additionally, recent advancements have introduced fine-grained VDs that offer statement-level vulnerability predictions, aiming to minimize the manual analysis burden on security analysts. Previous studies have employed graph structure of source code like the code property graph [21], along with graph neural networks to detect vulnerabilities at the statement level [22; 19]. Additionally, transformers have demonstrated their capability to learn semantic features of code using self-attention, which is particularly beneficial for handling long sequences compared to RNN models [17; 13].
_In this paper, we consider a vulnerable scope of a function as the collection of all vulnerable statements in that function._ As illustrated in Figure 1, each function consists of two vulnerable statements that form similar vulnerable scopes. This suggests that even if two functions contain the same CWE-787 out-of-bound write vulnerability (the top-1 dangerous CWE-ID in 2022 [10]), the specific vulnerable statements can be written in different ways and located in different parts of the code. Therefore, identifying vulnerabilities at the statement level is challenging for both machine learning and deep learning models. Despite this difficulty, our analysis reveals that state-of-the-art VD approaches have not successfully leveraged the information contained in vulnerable statements (that could be grouped to form vulnerable scopes) to further improve the capability of machine learning and deep learning vulnerability detection approaches at both the function and statement levels.
To address this issue, we propose a novel DL-based framework that can effectively utilize the information presented in vulnerable scopes. To achieve this, we develop a method for quantizing similar vulnerable scopes that share the same pattern into a vulnerability codebook consisting of common codewords which represent common patterns. This codebook captures a diverse range of vulnerabilities from the training data and facilitates the process of vulnerability matching inspired by the pattern-matching concept utilized in program analysis tools [1; 2; 3; 4]. Our approach is _the first to successfully exploit the benefits of vulnerability matching and codebook-based quantization to enhance DL-based VD_. This allows us to effectively identify vulnerabilities in source code data, ultimately improving the overall capability of DL-based VD.
Our approach involves collecting and quantizing a set of vulnerable scopes from the training set before using optimal transport (OT) [16] to cluster this set into a vulnerability codebook consisting of a set of vulnerability centroids (i.e., codewords). The vulnerable scopes (collected from the training set) that share a similar pattern stay close in representation space, so we cluster them into a centroid that summarizes them. By clustering the set of vulnerable scopes into a smaller set of centroids, we reduce the dimensionality of the feature space and make it easier for the model to perform matching during inference. Additionally, the use of centroids ensures that similar vulnerable scopes are mapped to the same location in the feature space. During training, we minimize the Wasserstein distance [16] between the set of vulnerable scopes and the vulnerability codebook, which allows us to effectively cluster vulnerable scopes and learn the representative centroids in the codebook. During inference, our model matches the input program against all centroids in the learned vulnerability codebook. By examining all the vulnerability patterns in the codebook, the matching process enables a thorough search for potential vulnerabilities. This explicit matching method supports the identification of specific vulnerability patterns and their associated statements, providing a comprehensive approach to identifying vulnerabilities. We name this model OptiMatch, a function and statement-level vulnerability identification approach via optimal transport quantization and vulnerability matching.
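To make the quantization step concrete, the following Python sketch (ours, with illustrative dimensions and hyperparameters; it is not the authors' implementation, which is available at the linked repository) alternates a Sinkhorn-style entropy-regularized OT assignment between scope embeddings and codewords with a barycentric centroid update:

```python
# A minimal sketch (illustrative sizes, not the OptiMatch implementation) of
# learning a vulnerability codebook: scope embeddings are softly assigned to
# codewords with entropy-regularized optimal transport (Sinkhorn iterations),
# then each codeword is updated as the barycenter of its assigned mass.
import numpy as np

rng = np.random.default_rng(0)
N, K, D = 512, 16, 128              # scopes, codewords, embedding dim (assumed)
scopes = rng.normal(size=(N, D))    # stand-in for encoded vulnerable scopes
codebook = scopes[rng.choice(N, K, replace=False)].copy()

def sinkhorn(cost, eps=0.05, iters=100):
    """Entropy-regularized OT plan between uniform marginals."""
    cost = cost / cost.max()                          # scale for stability
    Kmat = np.exp(-cost / eps)
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])   # mass on scopes
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])   # mass on codewords
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (Kmat @ v)
        v = b / (Kmat.T @ u)
    return u[:, None] * Kmat * v[None, :]             # plan, shape (N, K)

for _ in range(10):                         # alternate plan / centroid steps
    cost = ((scopes[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn(cost)                   # proxy for the Wasserstein fit
    weights = plan / plan.sum(axis=0, keepdims=True)
    codebook = weights.T @ scopes           # barycentric codeword update
print(codebook.shape)                       # (16, 128) learned codewords
```

In this sketch the squared Euclidean cost and the uniform marginals are design assumptions; the point is only that minimizing an OT objective jointly assigns scopes to codewords and pulls each codeword toward the scopes it represents.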
In summary, our work presents several contributions: (i) an innovative vulnerability-matching DL framework utilizing optimal transport and vector quantization for function and statement-level vulnerability detection (VD); (ii) a novel statement embedding approach using recurrent neural networks (RNNs); and (iii) a thorough evaluation of our proposed method compared to other DL-based vulnerability prediction techniques on a large benchmark dataset of real-world vulnerabilities.
## 2 Related Work
Researchers have proposed various deep learning-based vulnerability detection (VD) methods such as convolutional neural networks (CNNs) [33], recurrent neural networks (RNNs) [24; 29; 27], graph neural networks (GNNs) [38; 7; 22; 19; 30; 13], and pre-trained transformers [15; 18; 17; 13]. RNN-based methods [33; 24; 23] have been shown to be more accurate than program analysis tools such as Checkmarx [1] and RATS [4] at predicting function-level vulnerabilities. However, RNNs face difficulty in capturing long-term dependencies in long sequences, as the model's sequential nature may result in the loss of earlier sequence information. Furthermore, function-level predictions lack the required granularity to accurately identify the root causes of vulnerabilities. Thus, researchers have proposed transformer-based methods that predict statement-level vulnerabilities and capture
long-term dependencies [13; 17] while ICVH [28] leverages bidirectional RNNs with information theory to detect statement-level vulnerabilities. On the other hand, Zhou _et al._[38] embed the abstract syntax tree (AST), control flow graph (CFG), and data flow graph (DFG) for a code function and learn the graph representations for function-level predictions. Nguyen _et al._[30] proposed constructing a code graph as a flat sequence for function-level predictions. Hin _et al._[19] constructed program dependency graphs (PDGs) for functions and predicted statement-level vulnerabilities.
In contrast to the above methods, we propose a deep learning-based vulnerability matching method inspired by the principles of program analysis security tools. Specifically, we gather a group of vulnerability patterns from the training set and develop a vulnerability codebook using optimal transport [16] and vector quantization [35]. Our goal is to detect the statements that cause vulnerabilities by matching functions against the representative patterns learned in our codebook.
## 3 Approach
Deep learning (DL) models have proven their ability to capture vulnerabilities more accurately than program analysis tools by using implicit vulnerability patterns learned from the training data set [11; 17]. However, in real-world source code data sets, common vulnerable scopes may be written in different styles (e.g., variable naming conventions) and appear at different spatial locations in different vulnerable sections (i.e., functions or programs) [28]. Existing DL-based VD approaches often fail to consider the common vulnerable scopes (which could be clustered into patterns) that exist in vulnerable functions or programs during both training and inference, instead relying on implicit learning through supervised learning. To address this limitation, we propose a novel DL framework that integrates vulnerable scopes into centroids via a vulnerability codebook. The example in Figure 1 demonstrates that the two vulnerable functions share a similar vulnerable scope, consisting of two vulnerable statements, but express it with different variable names and at different spatial locations. To overcome this issue, we group vulnerable scopes with the same pattern and quantize them into a codebook containing representative vulnerability centroids, each of which can represent a set of similar scopes. This codebook is then used to facilitate vulnerability matching during the inference phase, effectively addressing the lack of consideration for vulnerable scopes in existing DL-based VD approaches.
In general, our approach consists of two phases. The warm-up phase illustrated in Figure 2 aims to gradually adjust the model parameters, with the goal of improving the representation of embeddings for input programs and vulnerable scopes. The main training phase is illustrated in Figure 3. The yellow section on the left shows how we construct and learn our vulnerability codebook from vulnerable scopes in our training data using optimal transport. The grey section on the right shows how to utilize the codebook during training, which matches functions with the learned vulnerability centroids in the codebook, allowing us to identify and highlight the statements that caused the vulnerabilities. Below, we first formulate our problem by defining common notations followed by how we map textual source code to vector space and warm up the embeddings. We then introduce the motivation and method on why and how to learn a DL framework to achieve vulnerability matching.
Figure 1: In the left function, _writeToBuffer_, if the sum of _offset_ and \(i\) exceeds or equals 20, it results in writing data beyond the buffer array’s end. This overwrites memory beyond the array, posing a potential program crash. Similarly, the _copyToMemory_ function on the right uses the _start_ index to determine the starting point for copying data in _memoryBlock_. However, if the sum of _start_ and \(i\) surpasses or equals the size of _memoryBlock_, it leads to overwriting memory beyond the array, causing an out-of-bounds write vulnerability. Despite sharing the same vulnerability type and similar vulnerable scopes, the vulnerable statements in each function are different in their written form, variable names, and positions.
### Problem Statement
Let us consider a dataset of \(N\) functions in the form of source code. The data set includes both vulnerable and benign functions, where the function-level and statement-level ground truths have been labeled by security experts. We denote a function as a set of code statements, \(X_{i}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\), where \(n\) is the maximum number of statements we consider in a function. Let a sample of data be \(\big{\{}(X_{i},y_{i},\mathbf{z}_{i}):X_{i}\in\mathcal{X},y_{i}\in\mathcal{Y}, \mathbf{z}_{i}\in\mathcal{Z},i\in\{1,2,...,N\}\big{\}}\), where \(\mathcal{X}\) denotes a set of code functions, \(\mathcal{Y}=\{0,1\}\) with 1 representing a vulnerable function and 0 otherwise, and \(\mathcal{Z}=\{0,1\}^{n}\) denotes a set of binary vectors in which 1 represents a vulnerable code statement and 0 otherwise. Our objective is to identify vulnerabilities on both _function and statement levels_. We formulate the identification of vulnerable functions as a binary classification problem and the identification of vulnerable statements as a multi-label classification problem. Given a function \(X_{i}\), we first input it to a statement embedding layer (\(SEMB\)) to obtain statement embeddings, namely \(S_{i}\) and \(P_{i}\), as specified in Equation 2 (refer to Section 3.2 for the embedding details). \(S_{i}\in\mathbb{R}^{n\times d}\) contains the d-dimensional statement embedding vectors for \(X_{i}\). Prior studies have found that in a vulnerable function, there are code statements associated with the vulnerabilities (i.e., vulnerable scopes) [28]. Let us denote \(X_{i}^{val}\) as the set of all vulnerable statements in a vulnerable function. To explicitly capture vulnerable scopes, we extract \(X_{i}^{val}\) from the vulnerable function and encode those statements using d-dimensional statement embeddings as \(P_{i}\in\mathbb{R}^{q\times d}\). \(q\) is the maximum number of vulnerable statements we consider in a vulnerable function; we set \(q=12\) by applying truncation and padding because 95% of vulnerable functions in our data have fewer than 12 vulnerable statements. Note that for each benign function without any vulnerable statements, we leverage a special learnable embedding denoted as \(P_{benign}\in\mathbb{R}^{q\times d}\) to represent \(P_{i}\). In addition, we apply an RNN layer (\(RNN_{val}\)) to summarize \(P_{i}\) into a flat vector denoted as \(\mathbf{v}_{\mathbf{i}}\in\mathbb{R}^{d}\), which facilitates the learning of our vulnerability codebook introduced in Section 3.4.2. Let us denote a stack of transformer encoders as \(\mathcal{F}\); we concatenate \(S_{i}\) and \(\mathbf{v}_{\mathbf{i}}\) and feed them into the transformer encoders as \(\mathcal{F}(S_{i},\mathbf{v}_{\mathbf{i}})\). We then make function and statement-level predictions based on the output of \(\mathcal{F}\). The mapping from \(X_{i}\) to \(y_{i}\) and \(\mathbf{z}_{i}\) is learned by minimizing the cross-entropy loss function, denoted by \(\mathcal{L}(\cdot)\), as follows:
\[\min\frac{1}{N}\sum_{i=1}^{N}\Bigl{[}\mathcal{L}_{function}\Bigl{(}\mathcal{F}(S_ {i},\mathbf{v}_{\mathbf{i}}),y_{i}|X_{i}\Bigr{)}+\mathcal{L}_{statement}\Bigl{(} \mathcal{F}(S_{i},\mathbf{v}_{\mathbf{i}}),\mathbf{z}_{i}|X_{i}\Bigr{)}\Bigr{]} \tag{1}\]
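In practice, this joint objective reduces to the sum of two binary cross-entropy terms. A minimal PyTorch sketch is given below; the function and tensor names are our own illustrative choices, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def joint_loss(func_logits, stmt_logits, y, z):
    """Joint function- and statement-level objective of Equation 1.

    func_logits: (B,)   raw function-level scores
    stmt_logits: (B, n) raw scores for the n statements of each function
    y:           (B,)   binary function labels
    z:           (B, n) binary statement labels (multi-label targets)
    """
    l_function = F.binary_cross_entropy_with_logits(func_logits, y.float())
    l_statement = F.binary_cross_entropy_with_logits(stmt_logits, z.float())
    return l_function + l_statement
```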
### Statement Embedding Using RNN
Figure 2 depicts the forward passes involved in our warm-up step to adjust the embeddings for statements and vulnerable scopes. We now introduce our motivations and method to embed statements
Figure 2: The overview of the warm-up phase in our approach. We tokenize each statement in a vulnerable function (i.e., \(X_{i}\)) followed by an embedding layer to map each token into a vector. We use \(RNN_{statement}\) to summarize the token embeddings and get the statement embedding (\(S_{i}\), \(P_{i}\)). For benign functions, \(P_{i}\) is replaced by a special learnable embedding, \(P_{benign}\). Additionally, we use \(RNN_{val}\) to summarize vulnerable statement embeddings \(P_{i}\) to a vector \(\mathbf{v}_{\mathbf{i}}\) that represents the vulnerable scope. We concatenate \(S_{i}\) and \(\mathbf{v}_{\mathbf{i}}\) as the input to transformer encoders to consider vulnerable scopes that arise in the function and align with our vulnerability matching process introduced in Section 3.5. We select the statement embeddings output from the last encoder, i.e., \(H^{12}[1:n]\). Each statement embedding vector is mapped to a probability as statement-level predictions, the function-level prediction is obtained by summarising \(H^{12}[1:n]\) to a vector using an \(RNN_{function}\) and mapping it to a probability.
and vulnerable scopes. Large language models (LLMs) pre-trained on source code have been shown to be effective at predicting vulnerabilities [15; 18; 19; 17]. However, those LLMs leverage token embeddings that only preserve 512 tokens (tokenized by the byte pair encoding (BPE) algorithm [34]) for each input function, while extra tokens need to be truncated. This can lead to information loss for long functions with more than 512 tokens. To address this limitation, we propose the statement embedding layer \(SEMB\) to encode a function (e.g., \(X_{i}\)) as a set of statement embeddings:
\[S_{i}=SEMB(X_{i}),\ P_{i}=SEMB(X_{i}^{val}),\ \text{where}\ X_{i},X_{i}^{val} \in\mathcal{X} \tag{2}\]
Given \(X_{i}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\), we use BPE to tokenize \(\mathbf{x}_{j}\) into a list of tokens, \([t_{1},...,t_{r}]\), where \(r\) is the number of tokens we consider in a code statement. We then obtain a token embedding for each \(t_{j}\) using an embedding layer \(E\in\mathbb{R}^{v\times d}\) where \(v\) is the vocab size of the tokenizer. This results in a token embedding matrix \(\bar{S}_{i}\in\mathbb{R}^{n\times r\times d}\) for all statements in \(X_{i}\). Similarly, we obtain token embeddings of vulnerable statements \(X_{i}^{val}\) as \(\bar{P_{i}}\in\mathbb{R}^{q\times r\times d}\) to represent a vulnerable scope. We apply truncation and padding to make \(q\) a constant for each vulnerable function.
With \(n=155\) and \(r=20\) (see Section 4.2), we can process 3,100 tokens per function, which is six times more than the 512 tokens. Our statement embedding method provides a more complete representation of code functions compared to the token embedding method. Specifically, our method can fully represent 99% of the functions in our dataset that have less than 2,700 tokens, while the token embedding method can only fully represent around 85% of the functions that have less than 500 tokens. Table 1 shows that our statement embedding method results in a 33% and 32% enhancement in the performance of CodeBERT and CodeGPT models for statement-level predictions.
Previous studies such as Sentence-BERT [32] leverage max or mean pooling to aggregate token embeddings. Max pooling can lead to information loss since it retains only the maximum token embedding for each statement, discarding all other token embeddings in the sequence. While mean pooling considers all token embeddings, it treats them equally regardless of their importance or relevance to the statement they belong to, so prominent token features may be disregarded. In contrast, we propose to learn an RNN [9] with \(r\) (the maximum number of tokens in each statement) time steps to aggregate the token embeddings and obtain statement embeddings as below:
\[S_{i}[j]=RNN_{statement}(\bar{S}_{i}[j,:,:]),\forall_{j}\in\{1,\dots,n\} \tag{3}\]

\[P_{i}[j]=RNN_{statement}(\bar{P}_{i}[j,:,:]),\forall_{j}\in\{1,\dots,q\} \tag{4}\]
To acquire the \(j^{th}\) statement embedding for \(S_{i}\) and \(P_{i}\), we summarize the token embeddings of length \(r\) using \(RNN_{statement}\). Following the convention of Python lists, we represent the \(j^{th}\) statement embeddings as \(S_{i}[j]\). While mean or max pooling operations are not learnable, the \(RNN_{statement}\) layer allows us to learn to pool token embeddings in each statement into a statement embedding vector while preserving prominent token features and mitigating the potential information loss. Finally, we use \(RNN_{val}\) to summarize our vulnerable scope \(P_{i}\) into a flat vector \(\mathbf{v_{i}}\) (see Section 3.4.1 for more details).
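As a rough sketch of Equations 2-4, the statement embedding layer can be written as an embedding lookup followed by an RNN over the \(r\) token positions of each statement. We assume a GRU cell purely for concreteness (the paper specifies only a generic RNN), and all module names here are ours:

```python
import torch
import torch.nn as nn

class StatementEmbedding(nn.Module):
    """SEMB of Eq. 2: token lookup + RNN pooling over r tokens (Eqs. 3-4).
    The GRU cell is an assumption; the paper only specifies an RNN."""

    def __init__(self, vocab_size: int, d: int):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d)            # embedding layer E
        self.rnn_statement = nn.GRU(d, d, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        B, n, r = token_ids.shape                         # (batch, statements, tokens)
        bar_s = self.tok(token_ids).view(B * n, r, -1)    # token embeddings, one row per statement
        _, h = self.rnn_statement(bar_s)                  # final hidden state summarizes each statement
        return h.squeeze(0).view(B, n, -1)                # statement embeddings S_i

# e.g. StatementEmbedding(vocab_size=50265, d=512)(torch.randint(0, 50265, (2, 155, 20)))
# returns a (2, 155, 512) tensor of statement embeddings.
```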
### Training of Warm-Up Phase
To consider the statement embeddings and the vulnerable scope of \(X_{i}\), we concatenate \(S_{i}\) and \(\mathbf{v_{i}}\) to obtain the input to the transformer encoders as \(H^{0}=S_{i}\oplus\mathbf{v_{i}}\). We select the statement embeddings output from the final encoder, i.e., \(H^{12}[1:n]\), where the \(\mathbf{v_{i}}\) embedding is omitted. We provide details of the transformer self-attention operation in Appendix A.1. We use \(RNN_{function}\) with \(n\) time steps to summarize the statement embeddings into a vector and map it to the function-level prediction \(\hat{y_{i}}\in[0,1]\) as follows:
\[\hat{y_{i}}=\sigma\Big{(}drop\big{(}tanh(drop(RNN_{function}(H^{12}_{1:n}))W^{G})\big{)} W^{U}\Big{)} \tag{5}\]
where \(W^{G}\in\mathbb{R}^{d\times d}\) and \(W^{U}\in\mathbb{R}^{d\times 1}\) are model parameters, \(drop\) is a dropout layer, and \(\sigma\) is a sigmoid function. We map statement embeddings to a statement-level prediction \(\hat{z_{i}}=[\hat{z_{i}}^{1},\dots,\hat{z_{i}}^{n}]\in[0,1]^{n}\) via:
\[\hat{z_{i}}=\sigma\Big{(}drop\big{(}tanh(drop(H^{12}_{1:n})W^{I})\big{)}W^{J }\Big{)} \tag{6}\]
where \(W^{I}\in\mathbb{R}^{d\times d}\) and \(W^{J}\in\mathbb{R}^{d\times 1}\) are model parameters, and \(\sigma\) is a sigmoid function.
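A hedged PyTorch sketch of the two prediction heads in Equations 5-6 follows. We assume a GRU for \(RNN_{function}\) and bias-free linear layers standing in for the weight matrices \(W^{G}\), \(W^{U}\), \(W^{I}\), and \(W^{J}\); these are modeling choices of ours, not specified implementation details:

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    """Sketch of Eqs. 5-6. Assumptions: a GRU plays RNN_function and
    bias-free linear layers play W^G, W^U, W^I, W^J."""

    def __init__(self, d: int, p_drop: float = 0.1):
        super().__init__()
        self.rnn_function = nn.GRU(d, d, batch_first=True)
        self.WG = nn.Linear(d, d, bias=False)
        self.WU = nn.Linear(d, 1, bias=False)
        self.WI = nn.Linear(d, d, bias=False)
        self.WJ = nn.Linear(d, 1, bias=False)
        self.drop = nn.Dropout(p_drop)

    def forward(self, H: torch.Tensor):                   # H^12[1:n]: (B, n, d)
        _, h = self.rnn_function(H)                       # summarize the n statements
        h = h.squeeze(0)                                  # (B, d)
        y_hat = torch.sigmoid(self.WU(self.drop(torch.tanh(self.WG(self.drop(h))))))  # Eq. 5
        z_hat = torch.sigmoid(self.WJ(self.drop(torch.tanh(self.WI(self.drop(H))))))  # Eq. 6
        return y_hat.squeeze(-1), z_hat.squeeze(-1)       # (B,), (B, n)
```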
### Vulnerability Codebook and Subsequent Main Training Phase
Our model parameters are now warmed up to embed statements and vulnerable scopes. Our objective is to achieve vulnerability matching using trainable vulnerability centroids. In the following, we outline our motivations and approach for creating, training, and employing our vulnerability codebook during the primary training phase.
#### 3.4.1 Collect Vulnerable Scopes from Vulnerable Functions
To exploit and capture common vulnerable scopes in source code, we aim to learn a _vulnerability codebook_ containing representative centroids that group vulnerable scopes sharing the same pattern. Unlike the patterns in program analysis tools, our vulnerability centroids are represented as vectors to conform with DL models; their representations are adjustable during training, enabling the model to recognize typical vulnerability patterns that may occur at various spatial locations within a vulnerable function.
Given training data consisting of \(a\) vulnerable functions, we extract vulnerable statements to form a vulnerable scope for each function, as presented in the leftmost part of Figure 3. To simplify the process of building our vulnerability codebook introduced in Section 3.4.2, we take two steps. First, we use \(RNN_{val}\) to summarize our vulnerable scopes into flat vectors. Then, we reduce the dimensionality of these vectors. This enables us to easily group them into vulnerability centroids and construct our vulnerability codebook. We have denoted our vulnerable scope as \(\mathbf{v}_{\mathbf{i}}\) in Section 3.1. \(\mathbf{v}_{\mathbf{i}}\) is obtained by applying \(RNN_{statement}\) and \(RNN_{val}\) to get the vulnerable statement embeddings and condense them into a flat vector. To reduce the dimensionality of \(\mathbf{v}_{\mathbf{i}}\), we linearly project the d-dimensional vector to h dimensions and normalize it as \(\mathbf{v}_{\mathbf{i}}=LN(\mathbf{v}_{\mathbf{i}}\cdot W^{F})\), where \(W^{F}\in\mathbb{R}^{d\times h}\) are model parameters and \(LN\) is layer normalization. We then accumulate each \(\mathbf{v}_{\mathbf{i}}\) extracted from the vulnerable functions to form a _vulnerability collection_ denoted as \(V\in\mathbb{R}^{a\times h}\), where \(a\) is the total number of vulnerable functions in our training data.
#### 3.4.2 Learn to Transport Vulnerable Scopes to Vulnerability Centroids in Codebook
However, \(V\) may contain repeated or similar vulnerable scopes. Additionally, the large size of \(V\) would require substantial computing resources during inference, since we would need to match each function against all \(a\) scopes (in our training data, \(a=6,361\)). To address these issues, we propose to learn a vulnerability codebook denoted as \(C=[\mathbf{c}_{1},...,\mathbf{c}_{k}]\), where \(\mathbf{c}_{i}\in\mathbb{R}^{h}\) is a vulnerability centroid. Intuitively, this codebook integrates similar vulnerable scopes and forms common vulnerability patterns. In particular, we reduce the 6,361 \(\mathbf{v}\) vectors in our vulnerability collection to 150 vulnerability centroids in our codebook.
Figure 3: The overview of the main training phase in our approach. We introduce how to learn our vulnerability codebook on the left. We first collect a set of vulnerable statement embeddings from our training data. We then use \(RNN_{val}\) to pool a set of statement embeddings from each vulnerable function, forming a vulnerable scope represented by a vector \(\mathbf{v}_{\mathbf{i}}\). The set of these scopes forms our vulnerability collection \(V=\{\mathbf{v}_{1},\dots,\mathbf{v}_{a}\}\). Next, we learn vulnerability centroids \(\mathbf{c}_{j}\) using the Wasserstein distance metric to create a more compact vulnerability codebook \(C=\{\mathbf{c}_{1},\dots,\mathbf{c}_{k}\}\), where each centroid represents a group of vulnerable scopes. During training, we minimize the Wasserstein distance between each \(\mathbf{v}_{\mathbf{i}}\) and its corresponding vulnerability centroid \(\mathbf{c}_{\mathbf{v}_{i}}^{*}\). We illustrate this main training phase on the right side which is the same as our warm-up phase except that we concatenate \(S_{i}\) and \(\mathbf{c}_{\mathbf{v}_{i}}^{*}\) to obtain \(H^{0}\) as detailed in Section 3.4.3. To overcome the non-differentiability of the \(argmax\) operation in the networks, we copy the gradients from \(\mathbf{v}\) to \(\mathbf{c}_{\mathbf{v}_{i}}^{*}\) to learn the statement embedding and pattern summarization RNNs for vulnerability patterns.
To ensure that vulnerability centroids can represent a group of similar vulnerable scopes, we leverage the optimal transport theory to transfer vulnerability patterns to their corresponding vulnerability centroid. We minimize the Wasserstein distance [36] using the Sinkhorn approximation [12] between our vulnerability collection and codebook. Consequently, the vulnerable scopes and their respective vulnerability centroids will converge towards each other. Ultimately, our codebook will comprise vulnerability centroids acting as representative patterns that symbolize different sets of vulnerability scopes. This allows us to aggregate similar vulnerability patterns based on Euclidean distance. We summarize the process as follows:
\[\min_{C}\ W_{d}(P_{V},P_{C}),\ \ \text{where}\ \ P_{V}=\frac{1}{a}\sum_{i=1}^{a} \delta_{\mathbf{v}_{i}}\ \ \text{and}\ \ P_{C}=\frac{1}{k}\sum_{j=1}^{k}\delta_{\mathbf{c}_{j}} \tag{7}\]
where \(W_{d}\) is the Wasserstein distance [36] and \(\delta\) represents the Dirac delta distribution. According to the clustering view of optimal transport [26; 20], when minimizing \(\min_{C}\ W_{d}(P_{V},P_{C})\), the codewords in \(C\) become the centroids of the clusters formed by \(V\). This clustering approach ensures that similar vulnerable scopes potentially sharing the same vulnerability pattern are grouped together, leading to a quantized vulnerability codebook that is more concise and effective. We randomly initialize the embedding space of our vulnerability codebook as \(C=[\mathbf{c}_{1},...,\mathbf{c}_{k}]\) with \(k\) clusters.
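The minimization in Equation 7 can be approximated with plain Sinkhorn iterations. The sketch below is a simplified, dense implementation with illustrative hyperparameters (`eps`, `n_iter`); it is not the authors' code, and a production version would likely work in the log domain for numerical stability:

```python
import torch

def sinkhorn_distance(V, C, eps: float = 0.05, n_iter: int = 50):
    """Entropy-regularized (Sinkhorn) approximation of W_d(P_V, P_C) in Eq. (7).
    V: (a, h) vulnerable-scope vectors; C: (k, h) learnable codebook."""
    cost = torch.cdist(V, C, p=2) ** 2                 # (a, k) squared distances
    mu = torch.full((V.size(0),), 1.0 / V.size(0))     # uniform weights on scopes
    nu = torch.full((C.size(0),), 1.0 / C.size(0))     # uniform weights on centroids
    K = torch.exp(-cost / eps)                         # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iter):                            # Sinkhorn fixed-point updates
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    plan = u[:, None] * K * v[None, :]                 # approximate transport plan
    return (plan * cost).sum()

# e.g. loss = sinkhorn_distance(V, codebook); loss.backward() pulls the
# centroids toward the clusters of vulnerable scopes.
```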
#### 3.4.3 Main Training Phase
The right part of Figure 3 highlighted in grey summarizes our main training phase. We load the model parameters warmed up in our previous phase. By employing the same statement embedding methodology introduced in Section 3.2, we obtain the statement embeddings \(S_{i}\) and a summarized vulnerable scope vector \(\mathbf{v_{i}}\) for the input function \(X_{i}\).
Instead of concatenating \(S_{i}\) with \(\mathbf{v_{i}}\), we employ a cluster selection process to map the vulnerable scope \(\mathbf{v_{i}}\) to its most similar vulnerability centroid (denoted as \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}\in\mathbb{R}^{1\times h}\)) selected from our codebook. By doing so, the model inherently develops an understanding of the vulnerability centroids stored in our vulnerability codebook, which are closely linked to vulnerable functions. We utilize the cross-attention (see Appendix A.2) between the vulnerable scope and the codebook and determine the vulnerability centroid for \(\mathbf{v_{i}}\) as \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}=argmax_{C}CrossAtt(\mathbf{v_{i}},C)\). The \(argmax\) function selects the index of the vulnerability centroid with the highest attention score, which corresponds to the closest vector to \(\mathbf{v_{i}}\) in terms of similarity. We linearly project \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}\) from the factorized h-dimension to the d-dimension to align with the dimension of our statement embeddings. Different from our warm-up phase where we concatenate \(S_{i}\) with \(\mathbf{v_{i}}\), we now concatenate \(S_{i}\) with \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}\) (the most similar centroid to the vulnerable scope \(\mathbf{v_{i}}\)). Thus, the input to the encoders becomes \(H^{0}=S_{i}\oplus\mathbf{c}_{\mathbf{v_{i}}}^{\star}\). The subsequent forward passes are the same as our warm-up phase described in Section 3.3.
Note that no real gradient is defined for \(\mathbf{v_{i}}\) once we map it to \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}\) via an \(argmax\) operation, which makes the networks non-continuous and non-differentiable. To make the networks that embed and summarize vulnerable statements trainable via backpropagation, we follow the idea in VQ-VAE [35], which was shown to be effective for vector quantization. We approximate the gradient in the manner of the straight-through estimator [5] and copy gradients from the summarized vulnerable scope \(\mathbf{v_{i}}\) to the selected vulnerability centroid \(\mathbf{c}_{\mathbf{v}_{i}}^{\star}\) (a one-line sketch of this trick follows). Below, we introduce how to leverage our learned codebook for vulnerability matching during inference.
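As referenced above, the gradient-copying trick can be written in one line in PyTorch. In this sketch the dot-product similarity score is our assumption standing in for the cross-attention of Appendix A.2, and all names are illustrative:

```python
import torch

def select_centroid(v: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each summarized vulnerable scope v (B, h) to its closest centroid
    c*_{v_i} from the codebook (k, h), with VQ-VAE-style gradient copying."""
    scores = v @ codebook.t()                 # (B, k) similarity logits
    idx = scores.argmax(dim=-1)               # hard, non-differentiable selection
    c_star = codebook[idx]                    # (B, h) selected centroids
    # Straight-through estimator: the forward pass uses c*, while gradients
    # flow back into v (and hence into the embedding/summarization RNNs).
    return v + (c_star - v).detach()
```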
### Vulnerability Identification Through Explicit Vulnerability Patterns Matching
Our approach utilizes vulnerability patterns that are often ignored by existing methods. By matching against vulnerability centroids during inference, our approach enables us to fully harness the capabilities of DL models for vulnerability identification. We first obtain d-dimensional statement embeddings \(S_{i}\) from an input function \(X_{i}\) as described in Section 3.2. For each vulnerability centroid \(\mathbf{c_{j}}\) in our codebook, we linearly project \(\mathbf{c_{j}}\) from the h-dimensional to the d-dimensional space and concatenate it with \(S_{i}\) as \(H_{j}^{0}=S_{i}\oplus\mathbf{c_{j}}\). We then pass \(H_{j}^{0}\) through the transformer encoders (\(\mathcal{F}\)) to obtain function-level and statement-level vulnerability predictions, summarized as \(P_{ij}^{func},P_{ij}^{stmt}=\mathcal{F}(S_{i},\mathbf{c_{j}})\ \ \forall_{j}\in\{1,\dots,k\}\), where \(P_{ij}^{func}\in[0,1]\) and \(P_{ij}^{stmt}\in[0,1]^{n}\). Thus, we get \(k\) (the number of centroids in our codebook) function and statement-level predictions. We use max pooling to pick the most prominent vulnerability-matching result as \(\bar{P}_{i}^{func}=\max_{j}P_{ij}^{func}\) and predict whether \(X_{i}\) is a vulnerable function
using a probability threshold of 0.5. If \(X_{i}\) is predicted to be benign, we directly output a zero vector as the statement-level prediction. Otherwise, we employ mean pooling to combine the predictions from all vulnerability centroids in our codebook as \(\hat{P_{i}}^{\mathit{stmt}}=\frac{1}{k}\sum_{j=1}^{k}P_{ij}^{\mathit{stmt}}\) and predict whether each statement is vulnerable using a probability threshold of 0.5.
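Putting the inference procedure together, a schematic loop over the codebook might read as follows, where `encoder` and `proj` are hypothetical stand-ins for \(\mathcal{F}\) (returning a function probability and per-statement probabilities) and the h-to-d projection; shapes follow the text:

```python
import torch

@torch.no_grad()
def match_vulnerabilities(S, codebook, encoder, proj, threshold: float = 0.5):
    """Inference-time matching of one function against every centroid.
    S: (n, d) statement embeddings; codebook: (k, h). A sketch only."""
    func_probs, stmt_probs = [], []
    for c in codebook:                                    # try every pattern c_j
        H0 = torch.cat([S, proj(c).unsqueeze(0)], dim=0)  # H^0_j = S_i (+) c_j
        p_func, p_stmt = encoder(H0)                      # F(S_i, c_j)
        func_probs.append(p_func)
        stmt_probs.append(p_stmt)
    func_probs = torch.stack(func_probs)                  # (k,)
    stmt_probs = torch.stack(stmt_probs)                  # (k, n)
    if func_probs.max() < threshold:                      # max-pool over centroids
        return 0, torch.zeros(S.size(0))                  # benign: zero statement vector
    return 1, (stmt_probs.mean(dim=0) > threshold).float()  # mean-pool statements
```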
## 4 Experiments
### Experimental Dataset and Baseline Methods
To identify vulnerabilities at the function and statement levels, we select the Big-Vul data set created by Fan _et al._[14], as it is one of the largest vulnerability data sets with statement-level vulnerability labels and has been used to assess statement-level vulnerability detection methods [19; 17]. The data set was collected from 348 GitHub projects and consists of 188k C/C++ functions with 3,754 code vulnerabilities spanning 91 vulnerability types. The data distribution in our experiments resembles real-world scenarios, where the proportion of vulnerable to benign functions is 1:20. Our training data set comprises 6,361 vulnerable scopes before we group them into patterns in our codebook.
We compare our approach with (i) LLMs for code (i.e., CodeBERT [15] and GraphCodeBERT [18]), (ii) Transformer-based VD (i.e., LineVul [17] and VELVET [13]), (iii) GNN-based VD (i.e., LineVD [19], ReGVD [30], and Devign [38]), (iv) RNN-based ICVH [28], and (v) CNN-based TextCNN [8]. More details of the baselines are provided in Appendix A.3.
### Parameter Settings and Model Training
We split the data into 80% for training, 10% for validation, and 10% for testing. For both our approach and the baselines, we consider \(n=155\) statements in each function and \(r=20\) tokens in each statement, as the descriptive statistics of the whole data set suggest that 95% of source code functions have fewer than 155 statements and 95% of statements have fewer than 20 tokens. To initialize our transformer encoders, we make use of the pre-trained model provided by Wang _et al._[37]. This model has undergone pre-training through various denoising objectives associated with programming languages. Details of the hyperparameter settings for our method in both phases are provided in Appendix A.4. In both training phases, we train our model for a fixed number of epochs and select the checkpoint that demonstrates the highest F1 score for statement-level prediction on the validation set. The experiments were conducted on a Linux machine with an AMD Ryzen 9 5950X processor, 64 GB of RAM, and an NVIDIA RTX 3090 GPU. The potential limitations imposed by our experimental setup are discussed in Appendix A.5.
### Main Results
We conduct our experiments several times and report the average numbers. The experimental data set and baseline methods are detailed in Section 4.1. We report accuracy (Acc), precision (Pre), recall (Re), and F1-score (F1) for the function-level and statement-level vulnerability prediction tasks for a comprehensive evaluation of each approach. This enables us to assess the models' performance on both positive and negative classes, regardless of the class imbalance between vulnerable and benign functions. Note that the statement-level metrics are computed at the statement level, rather than the function level, to determine whether each statement is correctly predicted. The experimental results are shown in Table 1. Our approach yields an improvement in function-level F1-score of 6% to 65% and an improvement in statement-level F1-score of 19% to 71%. These results highlight the effectiveness of our approach in accurately predicting vulnerabilities, both at the function and statement levels, outperforming all other state-of-the-art methods. Furthermore, our RNN statement embedding method significantly enhances the performance of CodeBERT (30% \(\rightarrow\) 63%) and CodeGPT (12% \(\rightarrow\) 44%) in statement-level vulnerability prediction. This finding validates our intuition that the statement embeddings learned by our method capture contextual information and locate statements associated with vulnerabilities more accurately than token embeddings.
### Ablation Study
To assess the effectiveness of the proposed components in our OptiMatch approach, we conduct an ablation study. Specifically, we compare our RNN statement embedding method with mean or max pooling methods. Furthermore, we examine the impact of our vulnerability codebook and matching by comparing our approach with a variant that employs the same model architecture and pre-trained weights, but without using the vulnerability codebook and matching. Finally, we demonstrate the impact of the number of vulnerability centroids (i.e., \(k\)) on the performance of our approach.
The experimental results are shown in Table 2. The utilization of mean or max pooling to summarize token embeddings into statement embeddings results in a slight decrease of 1.75% and 0.45% in function-level F1-score and of 4.6% and 4.12% in statement-level F1-score, respectively, as compared to using an RNN. The results confirm the effectiveness of our RNN statement embedding method, indicating that it is more effective at summarizing token embeddings by retaining token features at each time step. The performance significantly deteriorates by 33.58% and 45.2% for function and statement-level predictions when the vulnerability codebook and matching components are removed. This underscores the importance of these components in achieving high performance. The results suggest that the vulnerability codebook plays a crucial role in our approach: it is responsible for retaining and leveraging the vulnerability pattern information present in vulnerable functions, which is then utilized to identify vulnerable statements effectively during the vulnerability-matching inference. The lower section of Table 2 illustrates the impact of the number of vulnerability centroids on our approach. The results demonstrate that our approach attains favorable statement-level F1-scores for \(k\in\{100,150,200\}\), and we set \(k=150\) as it produces the optimal statement-level F1-score. Notably, \(k\) is a crucial factor: a small value of \(k\) (e.g., 50) may result in unsatisfactory performance due to too many vulnerability patterns being grouped together, yielding an inadequate representation of each pattern. Conversely, a large value of \(k\) (e.g., 400) leads to a substantial embedding space for our codebook, making it challenging to update during the backward pass.
## 5 Conclusion
This paper presents a novel vulnerability-matching method for function and statement-level vulnerability detection (VD). Our approach capitalizes on the vulnerability patterns present in vulnerable programs, which are typically overlooked in deep learning-based VD. To be specific, we collect vulnerability patterns from the training data and learn a more compact vulnerability codebook from the pattern collection using optimal transport (OT) and vector quantization. During inference, the codebook is utilized to match all learned patterns and detect potential vulnerabilities within a given program. Our comprehensive evaluation, conducted on over 188,000 real-world C/C++ functions, demonstrates that our method surpasses other competitive baseline techniques, while our ablation study confirms the soundness of our approach.
\begin{table}
\begin{tabular}{c|c|c c c c|c c c c}
\hline
\multirow{2}{*}{**Method**} & \multirow{2}{*}{**Embedding**} & \multicolumn{4}{c|}{**Function Level**} & \multicolumn{4}{c}{**Statement Level**} \\
 & & **Acc** & **Pre** & **Re** & **F1** & **Acc** & **Pre** & **Re** & **F1** \\
\hline
OptiMatch (ours) & Statement & **99.45** & **97.66** & **89.83** & **93.58** & **99.65** & 86.8 & **77.96** & **82.14** \\
\hline
CodeBERT + our embedding & Statement & 98.91 & 92.15 & 82.89 & 87.28 & 99.19 & 59.39 & 67.84 & 63.33 \\
CodeBERT & Token & 98.75 & 93.9 & 77.27 & 84.78 & 96.89 & 19.29 & 63.54 & 29.6 \\
\hline
CodeGPT + our embedding & Statement & 98.95 & 91.25 & 84.81 & 87.91 & 98.23 & 32.54 & 67.34 & 43.88 \\
CodeGPT & Token & 95.69 & 56.18 & 19.02 & 28.42 & 98.48 & 14.4 & 9.7 & 11.6 \\
\hline
GraphCodeBERT & Token & 95.51 & 50.11 & 27.03 & 35.12 & 96.94 & 10.56 & 26.34 & 15.08 \\
\hline
LineVul & Token & 98.61 & 89.25 & 78.47 & 83.51 & - & - & - & - \\
VELVET & Statement & 98.88 & 93.37 & 80.86 & 86.67 & 98.5 & 38.19 & 73.5 & 50.26 \\
\hline
LineVD & Statement & - & - & - & - & 95.19 & 27.1 & 53.3 & 36 \\
ReGVD & Token & 97.12 & 77.92 & 50.24 & 61.09 & - & - & - & - \\
Devign & Token & 96.9 & 72.29 & 50.24 & 59.28 & - & - & - & - \\
\hline
ICVH & Statement & 96.56 & 77.44 & 33.25 & 46.53 & 97.77 & 21.31 & 43.17 & 28.53 \\
TextCNN & Statement & 95.95 & 62.31 & 25.12 & 35.81 & 98.15 & 21.03 & 28.91 & 24.34 \\
\hline
\end{tabular}
\end{table}
Table 1: (Main Results) We compare our OptiMatch approach against other baseline methods and present results in percentages.
\begin{table}
\begin{tabular}{c|c c c c|c c c c}
\hline
\multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**Function Level**} & \multicolumn{4}{c}{**Statement Level**} \\
 & **Acc** & **Pre** & **Re** & **F1** & **Acc** & **Pre** & **Re** & **F1** \\
\hline
OptiMatch (ours) & **99.45** & 97.66 & 89.83 & 93.58 & **99.65** & 86.8 & 77.96 & **82.14** \\
\hline
w/o RNN embedding (mean pooling applied) & 99.31 & 98.49 & 86 & 91.83 & 99.59 & **90.4** & 67.89 & 77.54 \\
w/o RNN embedding (max pooling applied) & 99.4 & 96.53 & 89.95 & 93.13 & 99.56 & 79.7 & 76.4 & 78.02 \\
w/o vulnerability codebook \& matching & 94.81 & 45.91 & 86.6 & 60 & 98.19 & 28.77 & 51.57 & 36.94 \\
\hline
OptiMatch w/ 50 vulnerability centroids & 85.9 & 23.95 & **98.21** & 38.51 & 95.5 & 16.92 & **86.13** & 28.28 \\
OptiMatch w/ 100 vulnerability centroids & 99.38 & 98.13 & 87.92 & 92.74 & 99.64 & 88.14 & 74.98 & 81.03 \\
OptiMatch w/ 150 vulnerability centroids & **99.45** & 97.66 & 89.83 & 93.58 & **99.65** & 86.8 & 77.96 & **82.14** \\
OptiMatch w/ 200 vulnerability centroids & **99.45** & 96.69 & 90.91 & **93.71** & 99.63 & 83.44 & 80.02 & 81.69 \\
OptiMatch w/ 400 vulnerability centroids & 98.28 & **99.05** & 62.32 & 76.51 & 99.54 & 81.91 & 70.47 & 75.76 \\
\hline
\end{tabular}
\end{table}
Table 2: (Ablation Results) We compare our proposed method to other variants to investigate the impact of the individual components. The metrics are reported as percentages. |
2304.06770 | Near-Core Acoustic Glitches are Not Oscillatory: Consequences for
Asteroseismic Probes of Convective Boundary Mixing | Asteroseismology has been used extensively in recent years to study the
interior structure and physical processes of main sequence stars. We consider
prospects for using pressure modes (p-modes) near the frequency of maximum
oscillation power to probe the structure of the near-core layers of main
sequence stars with convective cores by constructing stellar model tracks.
Within our mass range of interest, the inner turning point of p modes as
determined by the JWKB approximation evolves in two distinct phases during the
main sequence, implying a sudden loss of near-core sensitivity during the
discontinuous transition between the two phases. However, we also employ
non-JWKB asymptotic analysis to derive a contrasting set of expressions for the
effects that these structural properties will have on the mode frequencies,
which do not encode any such transition. We show analytically that a
sufficiently near-core perturbation to the stellar structure results in
non-oscillatory, degree-dependent perturbations to the star's oscillation mode
frequencies, contrasting with the case of an outer glitch. We also demonstrate
numerically that these near-core acoustic glitches exhibit strong angular
degree dependence, even at low degree, agreeing with the non-JWKB analysis,
rather than the degree-independent oscillations which emerge from JWKB
analyses. These properties have important implications for using p-modes to
study near-core mixing processes for intermediate-mass stars on the main
sequence, as well as for the interpretation of near-center acoustic glitches in
other astrophysical configurations, such as red giants. | Christopher J. Lindsay, J. M. Joel Ong, Sarbani Basu | 2023-04-13T18:29:13Z | http://arxiv.org/abs/2304.06770v1 | # Near-Core Acoustic Glitches are Not Oscillatory:
###### Abstract
Asteroseismology has been used extensively in recent years to study the interior structure and physical processes of main sequence stars. We consider prospects for using pressure modes (p-modes) near the frequency of maximum oscillation power to probe the structure of the near-core layers of main sequence stars with convective cores by constructing stellar model tracks. Within our mass range of interest, the inner turning point of p modes as determined by the JWKB approximation evolves in two distinct phases during the main sequence, implying a sudden loss of near-core sensitivity during the discontinuous transition between the two phases. However, we also employ non-JWKB asymptotic analysis to derive a contrasting set of expressions for the effects that these structural properties will have on the mode frequencies, which do not encode any such transition. We show analytically that a sufficiently near-core perturbation to the stellar structure results in non-oscillatory, degree-dependent perturbations to the star's oscillation mode frequencies, contrasting with the case of an outer glitch. We also demonstrate numerically that these near-core acoustic glitches exhibit strong angular degree dependence, even at low degree, agreeing with the non-JWKB analysis, rather than the degree-independent oscillations which emerge from JWKB analyses. These properties have important implications for using p-modes to study near-core mixing processes for intermediate-mass stars on the main sequence, as well as for the interpretation of near-center acoustic glitches in other astrophysical configurations, such as red giants.
asteroseismology - stars: solar-type - stars: oscillations - stars: interiors
## 1 Introduction
The long temporal baselines of the _Kepler_(Borucki et al., 2010) and TESS (Ricker et al., 2015) missions make the interiors of thousands of stars amenable to examination through the asteroseismology of individual mode frequencies in their photometric power spectra. Stars with convective envelopes, such as our Sun, oscillate in multiple modes excited by convective motions (Goldreich & Keeley, 1977, 1978). The frequencies of these oscillation modes can trace stellar structure in the deep interior, thereby encoding information about the star's evolutionary state (Chaplin & Miglio, 2013; Garcia & Ballot, 2019, and references therein). The internal modes of solar-type oscillators are generally classified as either p-modes -- where the restoring force is pressure -- or g-modes -- where the restoring force is gravity. Asymptotic analysis of wave propagation in stars under the Jeffreys-Wentzel-Kramers-Brillouin (JWKB) approximation (see Gough, 2007) indicates that all non-radial modes are limited in their sensitivity to different regions of the star, depending on their character. In Sun-like stars, p-modes and g-modes occur in different regions: p-modes propagate in the outer convective envelope, and g-modes in the core. The loci of these different classes of modes are governed by two characteristic frequencies: the Lamb frequency
\[S_{\ell}^{2}=\frac{\ell(\ell+1)c_{s}^{2}}{r^{2}}, \tag{1}\]
and the Brunt-Vaisala (or buoyancy) frequency
\[N^{2}=-g\left(\frac{1}{\Gamma_{1}P}\frac{dP}{dr}-\frac{1}{\rho}\frac{d\rho}{ dr}\right); \tag{2}\]
where \(\ell\) is the angular degree of the mode, \(c_{s}\) the sound speed, \(P\) the pressure, \(\rho\) the density, and \(r\) is the radial coordinate. Waves which are higher in frequency than both these frequencies are p-modes (shaded in orange in Fig. 2),
while those that are lower in frequency than both are g-modes (shaded in blue). For stars on the main sequence, these p- and g-mode cavities are well separated both in frequency and spatially, so any normal modes with observable amplitudes (that is, with frequencies near that of maximum oscillation power, \(\nu_{\rm max}\)) are purely acoustic (p-modes). The depth to which p-modes sample the stellar structure is set by the Lamb frequency of the corresponding \(\ell\), which, due to its dependence on the sound speed, will also depend on the mean-molecular-weight gradient \(\nabla_{\mu}\). Depending on the properties of near-core mixing, as well as how evolved the star is along the main sequence, the observed p-modes may therefore not penetrate deeply enough to reach the convective cores of main sequence stars, thereby, in principle, limiting the applicability of these p-modes for diagnosing the nature of such near-core mixing, under the WKB approximation.
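For concreteness, both characteristic frequencies can be evaluated on a one-dimensional model profile with a few lines of NumPy. This is only a sketch: the variable names are ours, profiles are assumed to be ordered from center to surface with \(r[0]>0\), and all quantities must be supplied in consistent (e.g. cgs) units:

```python
import numpy as np

def lamb_frequency(r, cs, ell):
    """Lamb frequency S_ell of Eq. (1), as an angular frequency (rad/s)."""
    return np.sqrt(ell * (ell + 1)) * cs / r

def brunt_vaisala_squared(r, P, rho, g, Gamma1):
    """N^2 of Eq. (2), with radial derivatives taken by finite differences
    on the (possibly non-uniform) model mesh."""
    dP_dr = np.gradient(P, r)
    drho_dr = np.gradient(rho, r)
    return -g * (dP_dr / (Gamma1 * P) - drho_dr / rho)
```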
The interior locations of convective boundaries and abundant element ionization zones are frequently studied through asteroseismic "glitch" analysis. Steep variations in the first adiabatic index, \(\Gamma_{1}\), or in the sound speed, are known to introduce an oscillatory component (\(\delta\nu\)) to the frequencies of low angular degree stellar oscillation modes (e.g. Gough & Thompson, 1988; Vorontsov, 1988; Gough, 1990; Basu et al., 1994). A number of investigations have explored the theoretical seismic consequences arising from sharp variations in the internal stellar structure at the convective envelope boundary (e.g., Monteiro et al., 2000) or in the region of helium ionization (Monteiro et al., 1998; Houdek & Gough, 2007). These glitch signals are present across a range of stellar evolutionary states, and have been used to investigate the properties of convection and ionization zones in both red giant stars (Miglio et al., 2010; Corsaro et al., 2015; Vrard et al., 2015; Dreau et al., 2020) and Sun-like main-sequence stars (Mazumdar et al., 2012, 2014). Since these convective envelope and helium ionization zone features are localized far outside the convective core, we will refer to these glitches in this work as "outer glitches".
The sharp variations in stellar structure present at the boundaries of convective cores are also expected to leave a signature on the oscillation frequencies of a star. However, any such signature from a convective core will have different properties than those generated by an outer glitch, owing to its localization close to the stellar center rather than the surface. Roxburgh & Vorontsov (2001) investigated the expected seismic signatures resulting from a glitch in the neighborhood of a convective core, provided that the structural variation was located well within the mode propagation cavity, far from the turning point of the oscillations. Similar analyses were performed by Provost et al. (1993) in the case of structural variations at the boundary of Jupiter's core and by Audard et al. (1995) in the case of intermediate mass (1.7 - 2.0 \(M_{\odot}\)) stars with g-mode pulsations. In the case of a lower mass (1.2-1.5 \(M_{\odot}\)) main-sequence star though, the aforementioned works do not apply, as the convective core is small, with its boundary located very close to the inner turning point of the oscillation modes.
Mazumdar et al. (2006) studied the seismic effects of small convective cores in stellar models and proposed a combination of small frequency separations with the goal of determining the presence of convective overshooting. A similar investigation was carried out by Cunha & Metcalfe (2007), who found that the seismic signatures of small convective cores are non-oscillatory and frequency-dependent. They suggest a combination of frequency separation ratios that may have diagnostic potential for studying convective cores in real stars with high quality asteroseismic data. However, as with Mazumdar et al. (2006), their proposed diagnostic combined information from modes of different degrees. As such, they were unable to investigate the angular degree dependence of the seismic signal (instead assuming a priori that it would only affect the radial modes). Brandao et al. (2010) further investigated these diagnostics to look for age-dependence. Cunha & Brandao (2011) built on the work of Cunha & Metcalfe (2007) and further investigated the seismic signatures of small convective cores. In particular, the work modelled the structural variation at the edge of convective cores in a more physically-motivated fashion to study the evolution of their seismic diagnostic as a star advances in age.
In this work, we investigate the near-core locations available for study through low angular degree mode glitch signature analysis, both within (section 2) and outside (section 3) the WKB approximation, using evolutionary tracks of stellar models. We discuss our results and compare our work to previous studies of the seismic signatures of convective cores in section 4.
## 2 WKB Analysis with Stellar Models
To illustrate the evolution of the well-mixed core and p-mode penetration depths, we construct stellar model tracks with masses between 1.2 and 1.5 \(M_{\odot}\) using MESA version r12778 (Paxton et al., 2011, 2013, 2015, 2018, 2019). We construct models using an Eddington-gray atmospheric boundary condition and the mixing-length prescription of Cox & Giuli (1968). Elemental diffusion following the formulation of Thoul et al. (1994) was included with mass-dependent scaling (see Viani et al., 2018). We show results for 3 model tracks with \(M\) = 1.2, 1.4, and 1.5 \(M_{\odot}\), calculated with solar-calibrated initial values of helium abundance (\(Y_{0}\) = 0.273), metallicity (relative to Grevesse & Sauval, 1998), and mixing length (\(\alpha_{\rm mlt}\) = 1.81719). MESA's implementation of overmixing (cf. SS2 of Lindsay et al., 2022) from the convective core was also used, with \(f_{ov}\) = 0.05.
Within the WKB approximation, non-radial p-modes are assumed to only be sensitive to stellar structure within a mode cavity bounded on the inside by the WKB inner turning point, where the mode angular frequencies are equal to the Lamb frequency. For non-radial modes of the same frequency, this inner turning point's radius value increases with \(\ell\), and is thus deepest for dipole modes. Accordingly, we show in Fig. 1 the evolution of both the outer boundary of the well-mixed core, \(R_{c}\), as well as of the inner turning points of dipole p-modes at \(\nu_{\rm max}\), \(R_{\ell=1}\), over the course of evolution along these tracks (parameterized by the central hydrogen fraction \(X_{H}\)). We define \(R_{c}\) as the location where the chemical gradient \(\nabla_{\mu}\) changes by more than 0.1 between adjacent mesh points, while \(R_{\ell=1}\) is the innermost point where \(S_{\ell=1}=2\pi\nu_{\rm max}\). Locations are indicated with respect to the relative mass coordinate \(m(r)/M\).
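Building on the characteristic-frequency sketch above, the two tracked locations can be extracted from a model profile roughly as follows (again a sketch with our own array names; a real pipeline would read these columns from a MESA profile file):

```python
import numpy as np

def inner_turning_point(r, cs, nu_max, ell=1):
    """First radius, moving outward, at which S_ell falls to 2*pi*nu_max:
    the JWKB inner turning point R_{ell=1} of a mode near nu_max (Hz)."""
    S_ell = np.sqrt(ell * (ell + 1)) * cs / r
    propagating = S_ell <= 2.0 * np.pi * nu_max
    return r[np.argmax(propagating)]          # first True from the center

def core_boundary(r, grad_mu, jump=0.1):
    """Outer edge R_c of the well-mixed core: the first mesh point where
    grad_mu changes by more than `jump` between adjacent points."""
    hits = np.where(np.abs(np.diff(grad_mu)) > jump)[0]
    return r[hits[0]] if hits.size else np.nan
```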
From Figure 1, we see that for the 1.4 and 1.5 \(M_{\odot}\) evolutionary tracks (solid and dot dashed lines), \(R_{\ell=1}\) begins increasing steadily with evolution along the main sequence. This steady rise is interrupted by a sharp discontinuity just after reaching \(X_{H}=0.3\) for the 1.4 \(M_{\odot}\) track and just before reaching \(X_{H}=0.3\) for the 1.5 \(M_{\odot}\) track. After this jump in \(R_{\ell=1}\), the dipole p-mode inner turning point at \(\nu_{\rm max}\) lies outside the near-core layers of these stars, rendering them insensitive to this region. Conversely, this discontinuous jump in \(R_{\ell=1}\) does not occur in the 1.2 \(M_{\odot}\) track since, given our specific combination of model input parameters, the position of the p-mode inner turning point starts (at zero age main sequence) at approximately the same location as the well-mixed core outer boundary (\(R_{\ell=1}\approx R_{c}\)). Therefore, for this example 1.2 \(M_{\odot}\) track, non-radial modes may not be used (under the JWKB approximation) to probe the near-core layers no matter how far along the main sequence the star has evolved.
These discontinuities in the evolution of the p-mode inner turning point (\(R_{\ell=1}\)) emerge from kinks in the Lamb frequency profile, caused by the change in mean molecular weight gradient at the boundary of the star's convective core. To examine the underlying mechanism for this, we show propagation diagrams from the 1.4 \(M_{\odot}\) track, before and after this discontinuous jump in \(R_{\ell=1}\), in Figure 2. The spikes in buoyancy frequency (\(N\), solid black lines), which correspond to local enhancements of \(\nabla_{\mu}\), coincide with localized kinks in Lamb frequency (\(S_{\ell=1}\), dotted lines). As the star evolves, these kinks move inwards, and the Lamb frequency at their location increases relative to \(\nu_{\rm max}\) (horizontal dot dashed line). When these kinks coincide with \(\nu_{\rm max}\), this results in a temporally discontinuous increase in \(R_{\ell=1}\). Since the pulsation wavefunction is assumed to decay exponentially in the WKB-evanescent region, this corresponds to a discontinuous reduction in the probing power of dipole modes to these near-core features on either side of this evolutionary boundary.
Unlike the non-radial modes, the radial (\(\ell=0\)) modes are known to penetrate more deeply into the stellar interior. These modes admit description by an equation of Schrodinger form (i.e. the "normal form" of Gough 2007, for JWKB analysis), with respect to the acoustic radial co
Figure 1: Evolution of the well-mixed core outer boundary (blue) and the inner turning point (orange) of \(\ell=1\) modes near \(\nu_{\rm max}\), in mass coordinates, for 1.2 \(M_{\odot}\) (dotted), 1.4 \(M_{\odot}\) (solid), and 1.5 \(M_{\odot}\) (dot dashed) model tracks. Evolution goes from left to right as the central hydrogen fraction decreases along the main sequence.
Figure 2: **Upper Left Panel**: Propagation diagram for a 1.4 \(M_{\odot}\) model with \(X_{H}=0.4\) (left-most vertical gray dotted line in Fig. 1) showing, in units of \(\nu_{\rm max}\), the buoyancy frequency (\(N\)), the Lamb frequency (\(S_{\ell=1}\)), and the acoustic potential, \(V\), which is a characteristic frequency describing the propagation of a star’s radial modes. The orange region of the propagation diagrams represent the regions where p-modes can propagate, while the blue regions represent the g-mode propagation regions. In the outer layers of the star, the minimum frequency at which p-modes can propagate is governed by the critical frequency (\(\nu_{\rm crit}\) mirrors \(V\) near the model’s surface) in the outer layers. **Upper Right Panel**: Same as the left panel but for the 1.4 \(M_{\odot}\) model later in evolution (right gray dotted line in Fig. 1, \(X_{H}=0.05\)) **Lower Panels**: Propagation diagrams for the same two models as in the upper panels, but the x-axis shows the relative mass coordinate in log-scale in order to show the near-core features in more detail. The acoustic potential, \(V\), shows a sharp, localized peak at the position of the near-core glitch.
ordinate \(t(r)=\int_{0}^{r}\mathrm{d}r/c_{s}\), where the acoustic potential function \(V\) (**shown in Figure 2**) is set by the stellar structure and determines the behavior of their wavefunctions near the center (see e.g. 10; 11; 12; 13 for a thorough discussion of the radial-mode acoustic potential). Localized enhancements in this potential function are known to yield oscillatory signatures (e.g. 10), known colloquially as "glitches". Accordingly, we show this acoustic potential function in both propagation diagrams of Fig. 2 (scaled by \(\nu_{\mathrm{max}}\), using the gray dot-dashed lines). Sharply localized peak-like features in \(V\) can be seen to emerge, corresponding to the locations in the star where chemical abundances vary rapidly with depth (near \(m/M=0.09\) and \(m/M=0.07\) for the \(X_{H}=0.4\) and \(X_{H}=0.05\) propagation diagrams of Figure 2, respectively). As such, these features must also have a direct effect on the radial-mode frequencies.
## 3 Beyond the WKB Approximation
Thus far, our discussion has taken place within the context of the commonly-used WKB approximation (10; 11). This is qualitatively suitable where the acoustic glitches are situated far enough away from the turning points of the mode cavity that the behavior of the wavefunctions there may be treated as approximately sinusoidal. However, at the turning points, the solutions are, instead, more accurately approximated by Airy functions, which relate to these sinusoidal solutions through the asymptotic expansion of the Airy functions at large argument, by way of Jeffreys connection formulae (i.e. the "J" of JWKB). In turn, the use of Airy functions near the turning points is only justifiable when boundary conditions of the pulsation problem as a whole can be neglected. The resulting oscillatory variations induced into the mode frequencies from such analysis (e.g. 10) have, historically, been assumed to emerge even in existing theoretical studies of p-mode convective-core signatures (e.g. 10; 11). However, the near-core structural discontinuities in the mass range under consideration here do not possess these properties. Since these features, as well as the turning points themselves, are localized close to the core, the inner boundary condition may no longer be neglected. Since the glitches may not be inside the WKB-oscillatory region as is typically assumed, the wavefunctions likewise may not be well approximated as sinusoidal there. As such, we must pursue an alternative derivation of the frequency perturbations induced by these glitch features accounting for these properties, which may correspondingly yield qualitatively different behavior from the standard sinusoidal phenomenology.
### Analytic Development
From local asymptotic theory, it is known that the scaled p-mode radial Lagrangian displacement wavefunctions, \(\psi=\xi_{r}\sqrt{\rho r^{2}c_{s}}\) near the center of the star may be approximated by linear combinations of Riccati-Bessel functions of degree \(\ell\), with argument \(\omega t\). These linear combinations are in turn well-described by using only the Bessel function of the first kind, with further position-dependent phases added to the argument. We refer the reader to 10; 11 for more details about this construction, and to 10; 12 for more detailed discussion of the use of such phase functions in the context of p-modes. Here we use \(s_{\ell}(x)=xj_{\ell}(x)\) to refer to the Riccati-Bessel functions of degree \(\ell\) of the first kind, rather than the customary \(S_{\ell}\), to avoid confusion with the Lamb frequency.
We first consider sharp variations in the Brunt-Vaisala frequency, relative to a smooth background: \(N^{2}=N_{\mathrm{smooth}}^{2}+\delta N^{2}\), and wavefunctions that are unit-normalised under the usual inner product. These sharp variations exist in our stellar models (see the \(N\) profiles in Fig. 2) and are collocated with enhancements to the acoustic potential \(V\), which encapsulates all the relevant information for radial modes. By inspection of the wave equations (e.g. 12, and also 10), \(\delta N^{2}\) induces deviations in the mode frequencies, ceteris paribus, as
\[\delta(\omega^{2})\sim\int\xi_{r}^{2}\cdot\delta N^{2}\ \mathrm{d}m \tag{3}\]
compared to if only \(N_{\mathrm{smooth}}\) were present, to leading order in perturbation theory (as also used in e.g. 10; 11). We first recount how the usual expression of these glitches, relating \(\delta\)-function features in the Brunt-Vaisala frequency, \(\delta N^{2}\sim\delta(r-r_{0})\), to sinusoidal perturbations to the mode frequencies, may be recovered from this description. Near the outer boundary, \(t=T\), the glitch signature of such a \(\delta\)-function feature may be computed with an approximate expression for the outer phase function of the form
\[\begin{split}\delta(\omega^{2})\sim\omega\delta\omega& \sim\int\mathrm{d}m\;\xi_{r}^{2}\;\delta(r-r_{0})\sim\int\mathrm{d }t\;\psi^{2}(t)\;\delta(t-t_{0})\\ &\sim s_{\ell}^{2}\left(\omega t_{0}-\alpha_{\ell}(\omega,T-t_{0} )+\pi\left(n_{p}+\frac{\ell}{2}\right)-\omega T\right),\end{split} \tag{4}\]
where \(r_{0}\) and \(t_{0}\) are the physical and acoustic radii of the localized feature, \(n_{p}\) is the radial order of the mode, and \(\alpha_{\ell}\) is the phase function induced by the outer boundary condition. Far away from the center, the star may be well-approximated as being plane-parallel-stratified, and so the outer phase functions \(\alpha_{\ell}\) do not materially depend on \(\ell\) at low degree (e.g. 10). The usual expression for acoustic glitches -- i.e. \(\delta\omega\sim\sin\left[2\omega(T-t_{0})+\phi\right]\) for all \(\ell\), up to some frequency-dependent amplitude function -- is then recovered upon introducing the asymptotic expansion of Riccati-Bessel functions as sinusoids at large argument: \(s_{\ell}(x)\sim\sin\left(x-\pi\frac{\ell}{2}\right)+\mathcal{O}(1/x)\).
However, as we have described above, this usual derivation does not apply to these core acoustic glitches. Rather, since the glitches we consider are localized near the center of the star, we must instead make use of the converse expansion of Riccati-Bessel functions as power laws at small argument (see Arfken & Weber, 2005, and Appendix A):
\[s_{\ell}(x)\sim\frac{x^{\ell+1}}{(2\ell+1)!!},\ \text{where}\ |x|\ll\sqrt{\ell+ \frac{3}{2}}, \tag{5}\]
with the double exclamation marks denoting the semifactorial. Accordingly, the frequency perturbation induced by such near-core features takes the form
\[\delta\omega\sim\left(\frac{[\omega t_{0}-\delta_{\ell}(\omega,t_{0})]^{\ell+1 }}{(2\ell+1)!!}\right)^{2}, \tag{6}\]
where \(\delta_{\ell}(\omega,t)\) is an inner phase function induced by the inner boundary condition, satisfying \(\delta_{\ell}(\omega,t)\to 0\) as \(t\to 0\) under regular boundary conditions at the center (cf. Roxburgh, 2010, 2016; we note that, by using Riccati-Bessel functions here rather than sinusoids as in those works, we absorb the phase lag of \(\pi\ell/2\) shown there -- cf. Appendix A). This quantity can be seen to depend on \(\ell\). Qualitatively, this implies that any frequency perturbation induced by a near-center feature must (1) diverge gradually with increasing frequency (as opposed to being sinusoidal, like outer glitches), and (2) possess an amplitude which decreases rapidly with increasing \(\ell\) (as opposed to the \(\ell\)-independent behavior of outer glitches). In particular, since the semifactorial suppression with increasing \(\ell\) is so steep, this effectively produces an offset of the radial-mode frequencies relative to all other \(\ell\).

Figure 3: **Top Panels**: Plot of the second differences of the oscillation mode frequencies as a function of mode frequency for the same two 1.4 \(M_{\odot}\) models as in Fig. 2. The frequencies shown range from 800 \(\mu\)Hz up to the acoustic cutoff frequency of the model (\(\nu_{\mathrm{ac}}=\frac{1}{2\pi}\frac{c_{s}g\rho}{P}\)), where the sound speed (\(c_{s}\)), density (\(\rho\)), and pressure (\(P\)) are taken at the model surface (outermost grid point). The second differences are taken for each set of \(\ell=0,1,2\), or 3 modes with respect to the radial order \(n_{p}\). The gray line shows a spline fit through the \(\ell=1,2\), and 3 modes. **Bottom Panels**: Corresponding residuals (second differences minus the spline fit) as a function of frequency.
The variations to the Brunt-Vaisala frequency profile do not by themselves account for all structural variation at the boundary. For example, there are also variations to the sound speed profile at the convective core boundary (the seismic properties of which have been studied by, e.g., Mazumdar et al. 2006; Cunha & Metcalfe 2007; Cunha & Brandao 2011), which could dominate the glitch signature for radial modes. In this work, we restrict our analysis to a deviation in the Brunt-Vaisala frequency profile alone, as we are interested in the qualitative properties of the near-core glitch signatures, namely their apparent non-oscillatory nature and strong dependence on angular degree, \(\ell\). As demonstrated in Figure 2, the sharp Brunt-Vaisala frequency features are collocated with enhancements to the acoustic potential, \(V\), which also carries information about sound speed discontinuities.
An analysis similar to the one performed for the Brunt-Vaisala frequency, applied instead to the sound speed or other structural properties, will yield the same strongly degree-dependent behavior in the frequency perturbations. More precise statements concerning the exact frequency dependence and amplitudes of the mode frequency differences would require an analysis similar to Cunha & Metcalfe (2007), incorporating structurally self-consistent perturbations to the relevant acoustic potentials. Perturbations to different quantities may yield power-law indices in frequency that differ from the one attributed to the Brunt-Vaisala frequency (if derivatives or integrals of the wavefunctions enter into the analogous kernel expressions to Eq. (3)), while those caused by different features will have the arguments of their power laws evaluated at different acoustic depths. Thus, the overall frequency perturbation that we would expect from these near-core features will take the form of a sum of various power-law terms. However, we note that the argument and overall amplitude of any one of these power-law terms are in effect entirely degenerate. Thus, an unprivileged observer, given a combination of power-law components resulting from near-core perturbations to the stellar structure, will find it mathematically impossible to distinguish the inner glitch depth from its amplitude.
### Empirical Diagnostics
In the absence of a more quantitative description of the frequency dependence of the near-core glitch signatures, we can illustrate the qualitative properties of these near-core glitch signatures by computing the mode frequencies for each stellar model along our 1.2 \(M_{\odot}\), 1.3 \(M_{\odot}\), 1.4 \(M_{\odot}\), and 1.5 \(M_{\odot}\) tracks using the stellar oscillation code GYRE (version 6.0, Townsend & Teitler, 2013). We calculate the radial (\(\ell=0\)) as well as non-radial (\(\ell=1,2,\) and 3) mode frequencies in a wide frequency range, from a lower bound of \(\Delta\nu\) up to \(2\nu_{\rm max}\). We use scaling relations to approximate the global asteroseismic parameters of our stellar models based on the models' mass, radius, and temperature (see Kjeldsen & Bedding, 1995), setting \(\nu_{\rm max}=\nu_{\rm max,\odot}M/(R^{2}\sqrt{T_{\rm eff}})\) and \(\Delta\nu=\Delta\nu_{\rm\odot}\sqrt{M/R^{3}}\) where \(M\), \(R\), and \(T_{\rm eff}\) are in solar units.
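For concreteness, this scaling-relation step can be written as a short Python sketch. The solar reference values below are assumptions of this illustration (common literature choices), not values quoted from the text, and \(M\), \(R\), and \(T_{\rm eff}\) are taken in solar units as above.

```python
import numpy as np

# Solar reference values are assumptions of this sketch, not from the text.
NU_MAX_SUN = 3090.0  # muHz
DNU_SUN = 135.1      # muHz

def global_seismic_parameters(M, R, Teff):
    """Kjeldsen & Bedding (1995)-style scaling relations; inputs in solar units."""
    nu_max = NU_MAX_SUN * M / (R**2 * np.sqrt(Teff))
    delta_nu = DNU_SUN * np.sqrt(M / R**3)
    return nu_max, delta_nu

# Example: frequency-scan bounds used for GYRE, from Delta-nu up to 2 nu_max.
nu_max, delta_nu = global_seismic_parameters(M=1.4, R=1.5, Teff=6600.0 / 5772.0)
scan_lo, scan_hi = delta_nu, 2.0 * nu_max
```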
In order to enhance the visibility of these glitch signatures (oscillatory or otherwise), we take the second differences of the mode frequencies with respect to the modes' radial order \(n_{p}\) (\(\delta^{2}\nu_{n,\ell}\), see Gough, 1990; Basu et al., 1994, 2004; Mazumdar, 2005; Verma et al., 2014) given by,
\[\delta^{2}\nu_{n,\ell}=\nu_{n-1,\ell}-2\nu_{n,\ell}+\nu_{n+1,\ell}. \tag{7}\]
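A minimal numerical sketch of this detrending procedure (second differences per \(\ell\), a cubic spline through the non-radial values, and residuals) might look as follows; the `freqs_by_ell` mapping is a hypothetical container for per-degree frequency arrays sorted by radial order.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def second_differences(nu):
    """Eq. (7): nu[n-1] - 2 nu[n] + nu[n+1] for one ell, with nu sorted by
    consecutive radial order n_p."""
    nu = np.asarray(nu, dtype=float)
    return nu[:-2] - 2.0 * nu[1:-1] + nu[2:]

def outer_glitch_residuals(freqs_by_ell, fit_ells=(1, 2, 3)):
    """Fit a cubic smoothing spline through the second differences of the
    non-radial modes and return residuals for every ell."""
    x, y = [], []
    for ell in fit_ells:
        nu = np.asarray(freqs_by_ell[ell], dtype=float)
        x.extend(nu[1:-1])              # central frequency of each triplet
        y.extend(second_differences(nu))
    order = np.argsort(x)               # spline needs increasing abscissae
    spline = UnivariateSpline(np.asarray(x)[order], np.asarray(y)[order], k=3)
    return {ell: second_differences(freqs_by_ell[ell])
                 - spline(np.asarray(freqs_by_ell[ell])[1:-1])
            for ell in freqs_by_ell}
```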
For illustration, we show these second differences in the top panels of Fig. 3 for the same two 1.4 \(M_{\odot}\) models as in Fig. 2 (before and after the discontinuous jump in the position of the p-mode inner turning point). As discussed previously, these can be seen to be dominated, for all \(\ell\) shown, by the oscillatory variability of the outer ionization-zone/convective-boundary glitches, which should affect all \(\ell\) equally, at least at these low degrees. We thus fit this overall oscillatory signal in the second differences using a cubic spline, incorporating only second differences of the \(\ell=1,2,\) and 3 modes, as shown in the top panels of Fig. 3 as gray lines. We show the residuals to this fit in the bottom panels of Fig. 3. In the residuals, the \(\ell=0\) modes are clearly systematically offset from the non-radial mode frequencies. Thus, our analytic prediction that the amplitude of the glitch signature decreases with increasing angular degree (Eq. 6) is borne out numerically. We note that this systematic dependence of the residuals on \(\ell\) stays consistent between different choices for how the outer glitches are detrended (e.g. fitting a high-order polynomial instead of a spline, or also including \(\ell=0\) modes in the fit). Moreover, no obvious qualitative difference between the two stellar models can be seen, despite the two models being in different JWKB regimes (as described earlier).
To investigate the amplitude of the near-core glitch over the course of the main sequence evolution of our models, we computed the average second-frequency-difference residuals for each \(\ell\) after subtracting a high-order polynomial fitted against \(\delta^{2}\nu\) for \(\ell=0,1,2,3\). For our 1.4 \(M_{\odot}\) model track, we plot this average residual as a function of center Hydrogen fraction for each value of \(\ell\) in the left panel of Fig. 4. Overall, the amplitude of the \(\ell=0\) residuals is much larger than for the non-radial degrees, agreeing with the analytic prediction that the amplitude of the frequency differences caused by the near-core glitch will decrease with increasing \(\ell\) (Eq. (6)). The right panel of Fig. 4 shows the evolution of the average \(\ell=0\) residuals for our 1.3, 1.4, and 1.5 \(M_{\odot}\) model tracks. These appear very similar for the 1.4 and 1.5 \(M_{\odot}\) tracks, and remain approximately constant over the course of their main sequence evolution.
Our procedure for isolating the near-core glitch's effect on the second-difference residuals results in radial-mode residuals that contain, but do not necessarily cleanly isolate, the near-core glitch signal. For example, for much of the main sequence, the average \(\ell=0\), \(\delta^{2}\nu\) residual amplitudes for the 1.3 \(M_{\odot}\) track are overall smaller when compared with the residual amplitudes for the 1.4 and 1.5 \(M_{\odot}\) tracks, in keeping with the smaller size of the 1.3 \(M_{\odot}\) models' convective cores. While the 1.3 \(M_{\odot}\) track residuals may be seen to vary much more significantly after passing \(X_{H}\approx 0.4\), this is not a feature of the near-core glitch, but rather a property of the outer glitches contaminating the second-difference residual signal. In particular, the convective envelope boundary of the 1.3 \(M_{\odot}\) models is much deeper (in relative acoustic depth) compared with those of the 1.4 and 1.5 \(M_{\odot}\) models. At around \(X_{H}\approx 0.4\), the acoustic depth of the 1.3 \(M_{\odot}\) model's convective envelope boundary increases past \(\tau=T/2\) (where \(T\) is the acoustic radius of the star). Interior to this, the glitch modulations affect not only the degree-independent outer phase function \(\alpha_{\ell}\), but also the degree-dependent inner phase functions \(\delta_{\ell}\) (cf. Fig. 5 of Roxburgh, 2010); thus, the outer glitch may itself no longer be simply described as a function of frequency alone. As such, in this regime, the radial-mode residuals from such a fit will also contain contributions originating from the outer glitch, and no longer serve to describe the near-core glitch well. Thus, we cannot guarantee that this method uniquely isolates the near-core glitch signal.
Cunha & Metcalfe (2007) have previously proposed an alternative means of eliminating the outer phase function by way of scaled separation ratios:
\[\frac{D_{0,2}}{\Delta\nu_{n-1,1}}-\frac{D_{1,3}}{\Delta\nu_{n,0}}, \tag{8}\]
where
\[D_{\ell,\ell+2}\equiv\frac{\nu_{n,\ell}-\nu_{n-1,\ell+2}}{4\ell+6} \tag{9}\]
and
\[\Delta\nu_{n,\ell}\equiv\nu_{n+1,\ell}-\nu_{n,\ell}. \tag{10}\]
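In code, these quantities are straightforward to evaluate. The sketch below assumes each per-degree frequency array is indexed so that `nu[n]` is the frequency of radial order \(n\); that indexing convention is ours, not the paper's.

```python
import numpy as np

def D(nu_a, nu_b, ell, n):
    """Small separation D_{ell,ell+2} of Eq. (9): nu_a holds frequencies of
    degree ell, nu_b those of degree ell+2, each indexed by radial order."""
    return (nu_a[n] - nu_b[n - 1]) / (4 * ell + 6)

def cm07_diagnostic(nu0, nu1, nu2, nu3, n):
    """Scaled separation-ratio difference of Eq. (8) at radial order n."""
    dnu1 = nu1[n] - nu1[n - 1]   # Delta nu_{n-1,1}, Eq. (10)
    dnu0 = nu0[n + 1] - nu0[n]   # Delta nu_{n,0}
    return D(nu0, nu2, 0, n) / dnu1 - D(nu1, nu3, 1, n) / dnu0
```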
The un-scaled separation ratios (\(r_{\ell,\ell+2}=D_{\ell,\ell+2}/\Delta\nu_{n,\ell}\)), considered as a function of frequency, were shown by Roxburgh (2005) to be well-approximated by differences of the interior phase functions, \(\delta_{\ell+2}(\nu)-\delta_{\ell}(\nu)\). The difference of the scaled ratios in Eq. 8, which is the diagnostic of Cunha & Metcalfe (2007), is thus equivalent to taking a linear combination of the inner phase functions \(\delta_{0},\delta_{1},\delta_{2},\delta_{3}\), evaluated at some notional inner matching point, which is usually left underspecified. By contrast, in separating the acoustic depth from the inner phase function, we evaluate the inner phase function at the acoustic depth of the inner glitch. A further, subtle difference is that the inner phase functions \(\delta_{\ell}\) appearing in our expressions are those of the "smooth" structure: they are therefore completely uninformative regarding the inner glitches, with all information about them contained instead in the \(\omega t_{0}\) term, and the implicit constant of proportionality. By contrast, the \(\delta_{\ell}\) of the procedure of Cunha & Metcalfe (2007) are associated with the actual mode frequencies, including the near-core glitches. These differences render the two diagnostics not immediately quantitatively commensurate with each other, and deriving an explicit relationship between their diagnostic and ours lies beyond the scope of this work; at best, we will be able to perform only a qualitative comparison.
Plotting the diagnostic of Eq. 8 as a function of frequency (left panel of Fig. 5) for three 1.4 \(M_{\odot}\) stellar models of different ages shows that this diagnostic, calculated for our model frequencies, exhibits properties similar to those shown in Cunha & Metcalfe (2007), despite differences in the modelling physics and global stellar properties. Comparing the residuals of our outer-glitch subtraction procedure (from Fig. 3) for the same three models (shown in the right panel of Fig. 5) with the diagnostic curves in the left panel shows that the curvature is reversed between the two methods of displaying the inner, near-core glitch signal. However, both methods show that subtracting the outer glitches is necessary to reveal the small-amplitude seismic signatures of the convective cores. In addition, both panels of Fig. 5 show that both methods of isolating the near-core glitch signal in the radial modes reveal an age-dependent signal: the shape of the glitch signature changes as the star evolves along the main sequence.
Practically speaking, an operational difference between the diagnostic in Eq. 8 and our method of subtracting a spline fit from the non-radial modes detailed in section 3 is that octupole (\(\ell=3\)) modes are not explicitly required by our construction. In principle, our proposed methodology accommodates data sets containing either fewer or more degrees than \(\ell\in\{0,1,2,3\}\). However, retrieving the asteroseismic signal of the convective cores would still be difficult unless many non-radial mode frequencies are available.
## 4 Discussion and Conclusion
Our analysis shows two distinct WKB regimes for the solar-like oscillators in question. Throughout the first regime, before the discontinuous increase in \(R_{\ell=1}\), the realm that non-radial p-mode oscillations probe includes the near-core regions of the star, which is of particular importance for studies of convective boundary mixing processes like convective overshoot. During the second regime, after the sharp increase in \(R_{\ell=1}\), the near-core layers around the stellar core are no longer accessible to non-radial p-mode oscillations with frequencies near \(\nu_{\rm max}\). This suggests that the usefulness of these modes for studying core processes in main sequence stars depends on the exact evolutionary history of the particular object, owing to the sharp boundary between the two regimes. Acoustic glitch fitting is often used to determine the locations of particular stellar layers of interest, such as the boundaries of convection zones and the locations of ionization zones. In the first regime we discuss, where the p-mode outer turning point is exterior to the boundary of the well-mixed core, acoustic glitch fitting could be used to study the different layers of the star down past the boundary of the well-mixed core. In this case, glitch signatures from the boundary of the convective core will impart perturbations to the (near-\(\nu_{\rm max}\)) frequencies of both the radial and non-radial oscillation modes of the star. On the other hand, assuming the WKB approximation holds, after the discontinuous jump in \(R_{\ell=1}\), the near-core glitch signature from the core convection zone boundary should be inaccessible to acoustic glitch analysis.

Figure 4: **Left Panel:** The evolution of the 1.4 \(M_{\odot}\) model track's average second-frequency-difference residuals left over after subtracting a high-order polynomial fit from the \(\ell=0\), 1, 2, and 3 values of \(\delta^{2}\nu\). The evolution is shown as a function of the central Hydrogen fraction, \(X_{H}\). **Right Panel:** The evolution of the \(\ell=0\) values of average \(\delta^{2}\nu\) residuals as a function of \(X_{H}\) for our 1.3, 1.4, and 1.5 \(M_{\odot}\) tracks. Each curve is smoothed with a boxcar kernel of size 5 points.

Figure 5: **Left Panel**: Frequency-difference diagnostic defined by Eq. 8 as a function of frequency for three 1.4 \(M_{\odot}\) models at different ages. The model represented with the red dotted curve is the same as the model shown in the right panels of Figure 3. The model modes considered in this figure range in radial order from \(n=8\) to \(n=28\). **Right Panel**: The second difference (\(\delta^{2}\nu\)) residuals obtained after subtracting the outer glitch signature from just the radial-mode values of \(\delta^{2}\nu\), as a function of frequency. The curves are shown for the same three models shown in the left panel.
However, since the well-mixed convective core boundary lies sufficiently close to the center of the star that the inner boundary condition can no longer be neglected, and \(R_{c}\) may not be inside the WKB-oscillatory region in any case, the glitch pattern from the core boundary instead imprints an \(\ell\)-dependent signal onto the frequencies of the stellar oscillation modes. We demonstrate this behavior in Fig. 3, where, after fitting out the dominant acoustic glitch signature from the \(\ell=0\), 1, 2, and 3 modes, an additional glitch signature is visible in the radial-mode residuals in both cases, which in neither case appears oscillatory. The results of this procedure can be seen to qualitatively resemble those obtained from other prior proposed diagnostics, such as that of Cunha and Metcalfe (2007) (Figure 5).
In contrast to the oscillatory and degree-independent nature of acoustic glitches in the outer parts of the star, we would therefore expect observationally to ultimately obtain, from isolating the inner glitch signal, some combination of non-oscillatory components, each exhibiting some power-law nature and strong angular degree dependence in their frequency perturbations. Crucially for the outer glitches, it is precisely their sinusoidal form which permits different components, localized at and attributed to different physical features, to be separately identified and characterized through their modulation frequency and amplitude (Houdek and Gough, 2006; Monteiro et al., 1998; Mazumdar and Antia, 2001). By contrast, since the amplitude and argument of a power law are mathematically indistinguishable, it is impossible to distinguish the amplitude of the inner glitch from its argument (i.e. location, via acoustic depth), let alone disentangle and assign interpretations to such linear combinations of them as we should expect to obtain in practice, without the imposition of further constraints from stellar modelling. Thus, our key qualitative result in this work -- that inner glitches hold to power-law parameterizations -- also indicates that any quantitative signal, however isolated observationally, will not be amenable to as easy interpretation as those derived from the outer glitches. Unfortunately, this means that any insights into the nature of near-core convective boundary mixing must necessarily derive from explicit reference to numerical models of stellar structure, unlike the model-independent diagnostic quantities returned from the outer glitches.
We note that the aforementioned near-core glitch properties we discuss in this work are only applicable to stars which are massive enough to host convective cores, but with low enough masses to have significant convective envelopes which drive p-mode oscillations. Therefore, future searches for seismic signatures of main-sequence small convective cores will need to limit their consideration to stars within a narrow mass range with good asteroseismic data.
Stellar structures like those discussed in this paper, characterized by steep, localized variations in structure near their cores, are not just present in intermediate mass main sequence stars. As low and intermediate mass stars run out of hydrogen in their cores and begin to evolve across the subgiant branch and up the red giant branch, their convective envelopes expand while their cores contract (cf. Hekker and Christensen-Dalsgaard, 2017). At this stage of evolution, the interior boundary of the convective envelope reaches far into the core of the giant star, depending on the amount of envelope overshooting (see §4, Figure 4 of Lindsay et al., 2022). The steep variation in sound speed at the interior boundary of the envelope convection zone will induce a glitch component in the mode frequencies of the giant star and, since the location of the glitch would be near the core in this case, an \(\ell\)-dependent signature similar to that described in section 3 will likewise be present in the notional p-modes of these giants. Since the observable modes in red giant stars are in practice modes of mixed g- and p-like character, disentangling the deep envelope convection zone glitch signature from the overall mixed-mode pattern of observed red giant oscillation modes would be challenging, but rewarding. Such constraints on the location of the envelope convective boundary would define the correct amount of envelope overshooting that should be incorporated into evolved-star stellar models. These would be complementary to other constraints from "buoyancy" glitches, derived from the g-mode cavity (e.g. Cunha et al., 2019; Vrard et al., 2022). At the same time, the considerations we outline here may be required to interpret such buoyancy glitches: should they be localized near the g-mode turning points (as we describe in Lindsay et al., 2022), the relevant wavefunctions should be described with Airy functions of the first kind, as also used in Cunha and Metcalfe (2007), which would yield different behavior from the sinusoids considered in Cunha et al. (2019).
CJL acknowledges support from a Gruber Science Fellowship. JO acknowledges support from NASA through the NASA Hubble Fellowship grant HST-HF2-51517.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. SB acknowledges NSF grant AST-2205026. We also thank Dan Hey, Marc Hon, and the anonymous referee for their useful and constructive discussion, as well as the Yale Center for Research Computing for guidance and use of the research computing infrastructure. MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019), GYRE (Townsend and Teitler, 2013), SciPy (Virtanen et al., 2020), Pandas (Reback et al., 2021)
The MESA and GYRE inlists we used to generate our models and frequencies, as well as the resulting MESA models and frequencies, are archived on Zenodo and can be downloaded at [https://doi.org/10.5281/zenodo.7705648](https://doi.org/10.5281/zenodo.7705648).
## Appendix A Spherical Bessel Functions
The usual expression for the mode frequency shifts resulting from an acoustic glitch has the form \(\delta\omega\sim\sin[2\omega(T-t_{0})+\phi]\). However, in the case where the acoustic glitch is located sufficiently close to the real (coordinate) singularity at the center of the star, the usual expression for acoustic glitches does not apply. In section 3, we made use of the converse expansion of Riccati-Bessel functions as power laws at small argument. Here we expand on this discussion of spherical Bessel functions, drawing extensively from §11.7 of Arfken and Weber (2005).
Upon separation of variables of the wave equation, the radial wavefunction \(R\) satisfies an ordinary differential equation that is well-approximated near the center of the star by an expression of the form
\[x^{2}\frac{d^{2}R}{dx^{2}}+2x\frac{dR}{dx}\sim-[x^{2}-\ell(\ell+1)]R,\] (A1)
where \(\ell(\ell+1)\) is the separation constant from the angular components (\(\ell\) is a non-negative integer) and the dimensionless argument \(x\sim k_{r}r\) enters from the coordinate transformation required to put the original Helmholtz equation into this form. If one makes the substitution \(R(x)=\frac{Z(x)}{\sqrt{x}}\), the radial equation becomes
\[x^{2}\frac{d^{2}Z}{dx^{2}}+x\frac{dZ}{dx}+\left\{x^{2}-\left(\ell+\frac{1}{2} \right)^{2}\right\}Z=0,\] (A2)
which is Bessel's equation with \(Z\) being a Bessel function of order \(\ell+\frac{1}{2}\). Defining
\[j_{\ell}(x)=\frac{s_{\ell}(x)}{x}=\sqrt{\frac{\pi}{2x}}J_{\ell+1/2}(x)\] (A3)
and expressing \(J_{\ell+1/2}\) as a series (cf. §11.1 of Arfken and Weber, 2005):
\[J_{\ell+1/2}(x)=\sum_{s=0}^{\infty}\frac{(-1)^{s}}{s!\left(s+\ell+\frac{1}{2}\right)!}\left(\frac{x}{2}\right)^{2s+\ell+\frac{1}{2}},\] (A4)
we apply the Legendre duplication formula, \(z!\left(z+\frac{1}{2}\right)!=2^{-2z-1}\sqrt{\pi}(2z+1)!\), to each term. Thus, we have a series form for \(j_{\ell}(x)\),
\[\begin{split} j_{\ell}(x)&=\sqrt{\frac{\pi}{2x}}\sum_{s=0}^{\infty}\frac{(-1)^{s}2^{2s+2\ell+1}(s+\ell)!}{\sqrt{\pi}\,(2s+2\ell+1)!\,s!}\left(\frac{x}{2}\right)^{2s+\ell+\frac{1}{2}}\\ &=2^{\ell}x^{\ell}\sum_{s=0}^{\infty}\frac{(-1)^{s}(s+\ell)!}{s!\,(2s+2\ell+1)!}x^{2s}.\end{split}\] (A5)
Now in the limit of small argument, where \(x\ll 2\sqrt{\frac{(2\ell+2)(2\ell+3)}{(\ell+1)}}\), we have
\[j_{\ell}(x)\approx\frac{2^{\ell}\ell!}{(2\ell+1)!}x^{\ell}=\frac{x^{\ell}}{(2 \ell+1)!!}.\] (A6)
The expressions in section 3 are then recovered with argument \(x=\omega t_{0}-\delta_{\ell}(\omega,t_{0})\).
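This small-argument limit is easy to verify numerically against SciPy's spherical Bessel functions; the following check is purely illustrative.

```python
import numpy as np
from scipy.special import spherical_jn, factorial2

# Check Eq. (A6): j_ell(x) ~ x**ell / (2 ell + 1)!! at small x, and hence
# s_ell(x) = x j_ell(x) ~ x**(ell + 1) / (2 ell + 1)!!, as used in Eq. (5).
x = 0.3
for ell in range(4):
    exact = spherical_jn(ell, x)
    approx = x**ell / factorial2(2 * ell + 1)
    print(ell, exact, approx, abs(exact - approx) / abs(exact))
```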
Our use of the Riccati-Bessel functions here requires also that our inner boundary condition for \(\delta\) differs from that of Roxburgh (2010, 2016), who use a sinusoidal approximation, such that the phase function required to approximate a wavefunction \(\psi\) with a sinusoid as \(A\sin\left(\omega t+\delta\right)\) can be found as \(\delta\sim\arctan\left(\psi\middle/\frac{\mathrm{d}\psi}{\mathrm{d}(\omega t)}\right)-\omega t\). For illustration, we show this in Fig. 6 for Riccati-Bessel functions \(s_{\ell}\) of various degree. This is known to yield an offset of \(\ell\pi/2\) in the argument as \(t\to 0\), which is exactly equal to the inner boundary condition of Roxburgh (2010, 2016). Thus, \(\delta_{\ell}\to 0\) as \(t\to 0\) for all \(\ell\) in our description, for consistency with these works further into the stellar interior.
## Appendix B Deviations to the mode frequencies
The displacement eigenfunctions \(\mathbf{\xi}\) of normal modes with angular frequency \(\omega\) satisfy the constraint
\[-\rho\omega^{2}\mathbf{\xi}=-\nabla P^{\prime}+\mathbf{\mathrm{g}}\rho^{\prime}+\rho \nabla\Phi^{\prime},\] (B7)
where \(P^{\prime},\rho^{\prime}\), and \(\Phi^{\prime}\) are the accompanying eigenfunctions in the pressure, density, and gravitational potential perturbations for that mode. These other eigenfunctions may be eliminated with the use of other physical constraint equations to yield an operator eigenvalue equation, customarily written in the manifestly Hermitian form
\[\begin{split}-\rho\omega^{2}\mathbf{\xi}&=\nabla \left(\mathbf{\xi}\cdot\nabla P+c_{s}^{2}\rho\nabla\cdot\mathbf{\xi}\right)-\mathbf{ \mathrm{g}}\nabla\cdot\left(\rho\mathbf{\xi}\right)-\rho G\nabla\left(\int\mathrm{ d}^{3}x^{\prime}\frac{\nabla\cdot\left(\rho\mathbf{\xi}\right)}{\left|x-x^{\prime} \right|}\right)\\ &\equiv\rho\,\hat{\mathcal{L}}\mathbf{\xi}.\end{split}\] (B8)
Figure 6: The inner phase function \(\delta\) required to approximate a Riccati-Bessel function using a sinusoid.
Small (and necessarily Hermitian) perturbations to the wave operator, of the form \(\hat{\mathcal{L}}\mapsto\hat{\mathcal{L}}+\lambda\hat{\mathcal{V}}\), then yield perturbations to the mode frequencies as
\[\delta(-\omega_{i}^{2})\sim\lambda V_{ii}+\lambda^{2}\sum_{j\neq i}\frac{|V_{ij}| ^{2}}{\omega_{0,i}^{2}-\omega_{0,j}^{2}}+\mathcal{O}(\lambda^{3}),\] (B9)
where \(V_{ij}=\int\mathrm{d}^{3}r\;\rho\boldsymbol{\xi}_{i}^{*}\cdot\hat{\mathcal{V}} \boldsymbol{\xi}_{j}\) are the matrix elements of the perturbing operator, and the Lagrangian displacement functions are assumed to be unit-normalized. For instance, \(\hat{\mathcal{V}}\) may be considered to be the difference between the wave operators of two different stellar structures with identical global properties, such that the matrix elements may be expressed as integrals with respect to localized perturbations in the physical quantities of the stellar structure (e.g. the inversion kernels of Kosovichev 1999). In discussions of acoustic glitches, however, one instead supposes that the wave operator \(\hat{\mathcal{L}}\) may be notionally decomposed as \(\hat{\mathcal{L}}_{\mathrm{smooth}}+\hat{\mathcal{V}}_{\mathrm{sharp}}\). The first term is, in the abstract, the wave operator associated with a "smooth" stellar structure, such that (by assumption) its eigenfunctions are well-described by asymptotic approximations such as the JWKB construction, while the second term is associated with localized, sharp, variations in the stellar structure. Since such a decomposition is at best notional, we are free to consider expressions for \(\hat{\mathcal{V}}_{\mathrm{sharp}}\) which might otherwise correspond to unphysical structural perturbations in the traditional sense. Moreover, by Eq. (B9), we may restrict our attention to the matrix elements of various operators, rather than the operators themselves. In particular, we note that a subset of the terms in Eq. (B8), which we shall use to define an operator \(\hat{\mathcal{H}}\), have matrix elements
\[H_{12}=\left\langle\boldsymbol{\xi}_{1},\frac{1}{\rho}\left(\nabla\left( \boldsymbol{\xi}_{2}\cdot\nabla P\right)-\mathbf{g}\nabla\cdot\left(\rho \boldsymbol{\xi}_{2}\right)\right)\right\rangle=-\int\mathrm{d}^{3}x\;\left[ \boldsymbol{\xi}_{1}\cdot\mathbf{g}\nabla\cdot\left(\rho\boldsymbol{\xi}_{2} \right)+\boldsymbol{\xi}_{2}\cdot\mathbf{g}\nabla\cdot\left(\rho\boldsymbol{ \xi}_{1}\right)-\left(\boldsymbol{\xi}_{2}\cdot\mathbf{g}\right)\left( \boldsymbol{\xi}_{1}\cdot\nabla\rho\right)\right]\] (B10)
that are Hermitian, by the divergence theorem (and since both \(\mathbf{g}\) and \(\nabla\rho\) point strictly radially). Focusing on the first two terms in particular, the constraints of adiabaticity \(\rho^{\prime}=\frac{P^{\prime}}{c_{s}^{2}}+\rho(\mathbf{e}_{r}\cdot\boldsymbol {\xi})\frac{N^{2}}{g}\), and of continuity \(\rho^{\prime}=-\nabla\cdot\left(\rho\boldsymbol{\xi}\right)\), allow us to rewrite this expression as
\[\left\langle\boldsymbol{\xi}_{1},\hat{\mathcal{H}}\boldsymbol{\xi}_{2}\right\rangle =\int\rho\;\mathrm{d}^{3}r\Big{(}-2N^{2}(\mathbf{e}_{r}\cdot \boldsymbol{\xi}_{1})(\mathbf{e}_{r}\cdot\boldsymbol{\xi}_{2})\Big{)}+\text{ (other Hermitian terms)}.\] (B11)
Accordingly, if we were to consider a notional, unphysical decomposition of \(\hat{\mathcal{L}}\) as above, in which for \(\hat{\mathcal{L}}_{\mathrm{smooth}}\) only this Brunt-Vaisala frequency term were to be modified as \(N^{2}\mapsto N_{\mathrm{smooth}}^{2}+\delta N^{2}\), the corresponding perturbation induced into the mode frequencies, ceteris paribus, would then go as
\[\delta(\omega^{2})\sim\left\langle\boldsymbol{\xi},\delta\hat{\mathcal{H}} \boldsymbol{\xi}\right\rangle\sim\int\xi_{r}^{2}\cdot\delta N^{2}\;\mathrm{d}m.\] (B12)
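As a toy illustration of this kernel integral, one can evaluate Eq. (B12) for a sharply localized \(\delta N^{2}\) bump against a stand-in eigenfunction; every profile below is invented for illustration and is not a stellar model.

```python
import numpy as np

m = np.linspace(0.0, 1.0, 4001)                      # mass coordinate m/M
dm = m[1] - m[0]
xi_r = np.sin(40.0 * np.pi * m) * np.exp(-3.0 * m)   # stand-in radial eigenfunction

# Narrow Gaussian standing in for a delta-function-like near-core feature.
m0, width, amp = 0.08, 1.0e-3, 5.0
delta_N2 = amp * np.exp(-0.5 * ((m - m0) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

delta_omega2 = np.sum(xi_r**2 * delta_N2) * dm       # Eq. (B12), Riemann sum
# As width -> 0, this tends to amp * xi_r(m0)**2:
print(delta_omega2, amp * xi_r[np.argmin(np.abs(m - m0))] ** 2)
```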
More principled decompositions necessarily have a more complicated form. For instance, one might prefer to consider frequency differences arising from more traditional perturbations to the equilibrium \(\rho\), \(\Gamma_{1}\), \(P\), etc. in a physically and structurally self-consistent fashion, for which the frequency differences arising from perturbations to specific quantities \(q\) are associated with integral kernels of the form
\[V_{ij}=\int(\delta q/q)\;\;K_{q}[\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{j}] \;\mathrm{d}m.\] (B13)
By inspection of Eq. (B8) these structure kernels, \(K\), must necessarily be bilinear (and therefore quadratic, on the diagonal) in the wavefunctions of the modes corresponding to each matrix index, or potentially their (anti)derivatives with respect to radial position, as \(\hat{\mathcal{V}}\) is permitted to be a general integro-differential operator. Since the asymptotic radial dependence of the (appropriately scaled) wavefunctions is as a power law as \(r\to 0\), as we describe above, the overall signature of the near-core feature would then be a linear combination of components, each satisfying a power-law description of the kind we have provided. Our qualitative results are thus not substantially changed were a different quantity to be primarily responsible for producing the acoustic glitch (although each component may have an incremented or decremented power-law index, or different constant of proportionality).
|
2305.06459 | SlicerTMS: Real-Time Visualization of Transcranial Magnetic Stimulation
for Mental Health Treatment | We present a real-time visualization system for Transcranial Magnetic
Stimulation (TMS), a non-invasive neuromodulation technique for treating
various brain disorders and mental health diseases. Our solution targets the
current challenges of slow and labor-intensive practices in treatment planning.
Integrating Deep Learning (DL), our system rapidly predicts electric field
(E-field) distributions in 0.2 seconds for precise and effective brain
stimulation. The core advancement lies in our tool's real-time neuronavigation
visualization capabilities, which support clinicians in making more informed
decisions quickly and effectively. We assess our system's performance through
three studies: First, a real-world use case scenario in a clinical setting,
providing concrete feedback on applicability and usability in a practical
environment. Second, a comparative analysis with another TMS tool focusing on
computational efficiency across various hardware platforms. Lastly, we
conducted an expert user study to measure usability and influence in optimizing
TMS treatment planning. The system is openly available for community use and
further development on GitHub: \url{https://github.com/lorifranke/SlicerTMS}. | Loraine Franke, Tae Young Park, Jie Luo, Yogesh Rathi, Steve Pieper, Lipeng Ning, Daniel Haehn | 2023-05-10T21:04:26Z | http://arxiv.org/abs/2305.06459v4 | SlicerTMS: Interactive Real-time Visualization of Transcranial Magnetic Stimulation using Augmented Reality and Deep Learning
###### Abstract
Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation approach that effectively treats various brain disorders. One of the critical factors in the success of TMS treatment is accurate coil placement, which can be challenging, especially when targeting specific brain areas for individual patients. Calculating the optimal coil placement and the resulting electric field on the brain surface can be expensive and time-consuming. We introduce SlicerTMS, a simulation method that allows the real-time visualization of the TMS electromagnetic field within the medical imaging platform 3D Slicer. Our software leverages a 3D deep neural network, supports cloud-based inference, and includes augmented reality visualization using WebXR. We evaluate the performance of SlicerTMS with multiple hardware configurations and compare it against the existing TMS visualization application SimNIBS. All our code, data, and experiments are openly available: [https://github.com/lorifranke/SlicerTMS](https://github.com/lorifranke/SlicerTMS)
Keywords:Neuronavigation, Transcranial Magnetic Stimulation, Visualization, AI, Electric Field, Virtual Reality
## 1 Introduction
Transcranial magnetic stimulation (_TMS_) [4] is a powerful non-invasive brain stimulation technique (_NIBS_). TMS is used to treat disorders such as major depressive disorder and migraines, and is applied in clinical research on Parkinson's and Alzheimer's disease [13, 6, 2]. TMS works by placing a field generator, a _TMS coil_, close to a patient's scalp to induce a current pulse. The pulse produces a magnetic and electric field (_E-field_) through electromagnetic induction. The E-field stimulates particular brain regions by exciting or inhibiting neurons trans-synaptically. This technique can also produce brain mappings to investigate brain
2306.00578 | Does Black-box Attribute Inference Attacks on Graph Neural Networks
Constitute Privacy Risk? | Graph neural networks (GNNs) have shown promising results on real-life
datasets and applications, including healthcare, finance, and education.
However, recent studies have shown that GNNs are highly vulnerable to attacks
such as membership inference attack and link reconstruction attack.
Surprisingly, attribute inference attacks has received little attention. In
this paper, we initiate the first investigation into attribute inference attack
where an attacker aims to infer the sensitive user attributes based on her
public or non-sensitive attributes. We ask the question whether black-box
attribute inference attack constitutes a significant privacy risk for
graph-structured data and their corresponding GNN model. We take a systematic
approach to launch the attacks by varying the adversarial knowledge and
assumptions. Our findings reveal that when an attacker has black-box access to
the target model, GNNs generally do not reveal significantly more information
compared to missing value estimation techniques. Code is available. | Iyiola E. Olatunji, Anmar Hizber, Oliver Sihlovec, Megha Khosla | 2023-06-01T11:49:43Z | http://arxiv.org/abs/2306.00578v1 | # Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?
###### Abstract
Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education. However, recent studies have shown that GNNs are highly vulnerable to attacks such as membership inference attacks and link reconstruction attacks. Surprisingly, attribute inference attacks have received little attention. In this paper, we initiate the first investigation into attribute inference attacks, where an attacker aims to infer a user's sensitive attributes based on her public or non-sensitive attributes. We ask whether black-box attribute inference attacks constitute a significant privacy risk for graph-structured data and their corresponding GNN model. We take a systematic approach to launch the attacks by varying the adversarial knowledge and assumptions. Our findings reveal that when an attacker has black-box access to the target model, GNNs generally do not reveal significantly more information compared to missing value estimation techniques. Code is available.
Keywords:Attribute Inference Attack Privacy Risk Graph Neural Network.
## 1 Introduction
Many kinds of real-world data can be modeled as graphs. For instance, graphs are widely used to represent interactions between biological entities [16, 3, 15], to model interactions between users in social networks [14, 9], and to design recommendation systems [5, 23]. Graph neural networks (GNNs) are a type of machine learning model specifically designed to handle graph-structured data. They have demonstrated effectiveness in diverse graph-based learning tasks, including node classification, link prediction, and community detection. GNNs leverage recursive aggregation of node information from neighboring nodes to generate informative graph representations [9]. However, despite their usefulness, GNNs can pose privacy threats to the data they are trained on. Multiple studies have
shown that GNNs are more vulnerable to privacy attacks than traditional machine learning methods. These attacks include membership inference attacks [13, 4], link stealing attacks [7, 25, 14], backdoor attacks [26], and adversarial attacks [22, 30]. One main reason for the high vulnerability of GNNs to attacks is their use of graph topology during training, which can lead to the leakage of sensitive information [13, 12]. However, attribute inference attacks (AIA) have been under-explored for GNNs. In AIA, the attacker's goal is to infer the sensitive attribute value of a node via access to the target model. This poses a severe privacy risk. For instance, if a health insurance company knows the disease status of a potential client, they may discriminate against them and increase their insurance premium. We take the first step of systematically investigating the privacy risks posed by AIA on GNNs under practical black-box access assumptions.
As machine learning as a service (MLaaS) becomes more prevalent and GNNs are used in privacy-sensitive domains, it is essential to consider the privacy implications of black-box attribute inference attacks on GNNs. In this scenario, a user sends data to the trained model via an API and receives predictions. The user does not have access to the internal workings of the model. Motivated by this, we ask the question: _what are the privacy implications of black-box attribute inference attacks on GNNs?_ To investigate this issue, we construct several attacks in a practical scenario where an attacker has black-box access to the trained model.
We develop two attribute inference attack (AIA) methods, namely the _attribute inference attack via repeated query of the target model_ (Fp-ma) and the _feature propagation-only attribute inference attack_ (Fp). In the Fp attack, we reconstruct the missing sensitive attributes by updating each attribute with the attribute values of neighboring nodes via a feature propagation algorithm [17]. The Fp-ma attack, on the other hand, employs the _feature propagation_ algorithm iteratively for each candidate node (nodes with sensitive attributes). It queries the target model with the estimated attribute and obtains a model confidence, which is then compared to a threshold to determine whether the inferred attribute is the true attribute. Additionally, we propose a _shadow-based attribute inference attack_ (Sa) that assumes an attacker has access to a shadow dataset and a shadow model, similar to the target model.
The contributions of this paper can be summarized as follows: (i) we develop two black-box attribute inference attack on GNNs and a relaxed shadow attack. (ii) while most AIA focus on inferring single binary attributes, our attacks go beyond these limitations. Our approach enables the inference of both single or multiple binary attributes, as well as continuous attribute values. (iii) through experimentation and extensive discussion, we show that having black-box access to a trained model may not result in a greater privacy risk than using missing value estimation techniques in a practical scenario.
## 2 Related Works
Several recent studies have demonstrated the vulnerabilities of GNNs to adversarial attacks [2, 20, 22, 26, 30]. These attacks encompass various techniques aimed
at deceiving the GNN model, such as altering node features or structural information. Additionally, researchers have explored attacks which aim to steal links from the trained model [7, 28] and extracting private graph information through feature explanations [14]. Property inference attacks have also been launched on GNNs [29], where an adversary can infer significant graph statistics, reconstruct the structure of a target graph, or identify sub-graphs within a larger graph. Another type of attack, membership inference, distinguishes between inputs the target model encountered during training and those it did not [13, 4]. Model inversion attacks aim to infer edges of a graph using auxiliary knowledge about graph attributes or labels [27]. The vulnerabilities of GNNs extend beyond these attack techniques, with model extraction attacks [21] and stealing attacks [19] posing additional risks.
Collectively, these studies provide valuable insights into the various vulnerabilities that GNNs are prone to. To the best of our knowledge, no attribute inference attack has been proposed to infer sensitive attributes (node features) from queries to a target model (black-box access). In addition, previous AIAs proposed on tabular data are not applicable to graphs [6, 11, 24, 8]. This is because, first, these attacks assume that the data points are independent and, second, they only consider binary attributes. Neither holds for graphs, where nodes can be linked to other nodes by graph structure, node attributes can be highly correlated, and attributes may take continuous values. Moreover, all other attacks can only infer one sensitive attribute at a time. Here, we propose multi-attribute inference attacks for GNNs. However, it should be noted that [4] performed a preliminary white-box AIA; their method assumes that the attacker has access to node embeddings and sensitive attributes from an auxiliary graph (shadow data). The attacker trains a supervised attack model on the shadow data and shadow model to map embeddings to sensitive attributes. The learnt mapping is then used to infer the attributes from the publicly released target embeddings. It is important to mention that their approach focuses on the white-box setting, where the attacker has access to the internal workings (node embeddings) of the target model, and also requires the strong assumptions of a shadow dataset and shadow model from the same distribution.
## 3 Overview
**Attribute Inference Attack (AIA).** The task in AIA is to infer sensitive attributes (or features) of a number of nodes. These attributes could hold any possible kind of value. In this paper, we consider datasets having two categories of attribute values, binary and continuous. We note that AIA in its traditional form has only been used for inferring binary attribute values. We take the first step of evaluating on continuous attribute values. We define the task of AIA as follows:
Definition 1 (Aia).: _Let some GNN model \(\Phi\) be trained on a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with node feature matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\), and let \(\mathbf{X}^{*}\in\mathbb{R}^{|\mathcal{V}|\times d}\) be the partial feature matrix such that the sensitive attributes of a subset of nodes \(\mathcal{V}^{\prime}\subset\mathcal{V}\) are masked/hidden. Given access to \(\mathbf{X}^{*}\) and black-box access to the GNN model \(\Phi\), the goal of the attacker is to reconstruct the hidden entries of \(\mathbf{X}^{*}\)._
#### 3.0.2 Black-box vs White-box access.
Here, we distinguish between two types of model access: black-box and white-box. In a black-box access scenario, an adversary can query the model and receive outputs for the query nodes, but does not have knowledge of the model's internal workings. On the other hand, in a white-box scenario, the attacker possesses knowledge of the target model's internal mechanisms, including its weights and embeddings. However, acquiring such white-box access can be challenging in practice due to intellectual property concerns. Hereafter, we focus on the practical black-box access scenario which is commonly encountered in MLaaS.
### Threat Model
#### 3.0.1 Adversary's background knowledge.
In AIA, the adversary has access to the trained (target) model and knows the non-sensitive attribute values and the data distribution (an assumption we later relax). We make no assumption about the adversary's access to the graph structure but rather experimentally vary such access. Also, the attacker might be interested in inferring multiple attributes. Concisely, we characterize the adversary's background knowledge along two dimensions:
_Available nodes._ This quantifies the number of nodes with sensitive attributes that are available to the attacker. Note that these are different from the sensitive attributes she wants to infer. These correspond to Setting-1 and Setting-2 in our experiments. In **Setting-1**, the attacker only knows the non-sensitive attributes, while in **Setting-2**, she knows 50% of the sensitive values and all non-sensitive attributes.
_Graph structure._ Whether the attacker has access to the graph structure used in training the target model or not. In the case where she has no access to the graph structure, she artificially creates a KNN graph from the candidate nodes.
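For concreteness, such an artificial KNN graph could be built from the candidate nodes' known attributes as in the sketch below; the use of scikit-learn here is our choice for illustration, not necessarily the paper's.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_knn_edges(X_known, k=5):
    """Build a symmetric KNN graph over candidate nodes from their known
    (non-sensitive) attributes, for an attacker without structure access."""
    A = kneighbors_graph(X_known, n_neighbors=k, mode="connectivity")
    A = A.maximum(A.T)              # keep an edge if either endpoint chose it
    rows, cols = A.nonzero()
    return np.vstack([rows, cols])  # (2, num_edges) edge index

edge_index = build_knn_edges(np.random.rand(100, 16), k=5)
```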
## 4 Proposed Attacks
Towards this, we develop two different attacks. The first attack which we call _attribute inference attack via repeated query of the target model_ (Fp-ma) involves querying the target model multiple times with attributes produced by a feature propagation algorithm. The attacker iteratively performs feature propagation by querying the target model with the estimated attributes and evaluates the result using a confidence score. We develop a variant of the attack, Ri-ma, that initializes the sensitive attribute based on a random initialization mechanism and also iteratively queries the target model.
The second attack, which we call the _feature propagation-only attribute inference attack_ (Fp), relies on a single execution of a feature propagation algorithm. The result obtained from this feature propagation is subsequently taken as the final outcome of the attack. As a baseline, we replace the missing attributes with random values; we refer to this as the Ri attack. Lastly, we also propose a _shadow-based attribute inference attack_, denoted Sa, which assumes that an attacker has access to a shadow dataset and a corresponding shadow model similar to the target model. We note that this attack has a major limitation in that it may be difficult to obtain such a shadow dataset and model in practice, but we include it as a relaxed version of our black-box AIA attacks.
Note that all attack methods except Sa do not utilize any prior information about the labels.
### Data Partitioning
For all attacks except Sa, the dataset is partitioned into train/test sets, which are utilized to train and evaluate the target model. The candidate set is chosen from the training set and consists of nodes with missing sensitive attributes that the attacker wants to infer.
In the case of the Sa attack, the dataset is initially divided into two parts, namely the target and shadow datasets. The train and test sets are then derived from each dataset to train and evaluate the respective target and shadow models. The candidate set for evaluating the attacks is selected from the training set of the target dataset. All attacks are evaluated using the candidate set \(\mathbf{X}^{*}\).
### Attack Modules
In the following, the major components of the attacks are introduced. Specifically, feature propagation and confidence score. Feature propagation algorithm is used by the attacks for estimating the sensitive attributes, while confidence score acts as a threshold for measuring the correctness of the inferred attributes.
**Feature Propagation** The feature propagation algorithm is a method for reconstructing missing features by propagating the known features over the graph. As shown in Algorithm 1, the algorithm takes as input a set of nodes \(\mathbf{X}^{*}\) which have missing attributes, the adjacency matrix (either given or constructed via KNN), and the number of iterations, which is the stopping criterion for the algorithm.
Feature propagation starts by computing a normalized Laplacian of the graph \(\mathbf{L}\) (line 1), then initializes only the missing attributes in \(\mathbf{X}^{*}\) with random values. This randomly initialized \(\mathbf{X}^{*}\) is denoted as \(\mathbf{X}^{*}_{\mathbf{R}}\) (line 3). \(\mathbf{X}^{*}_{\mathbf{R}}\) is propagated over the graph by multiplying it with the computed graph Laplacian \(\mathbf{L}\) (line 4). This step is repeated for multiple iterations until convergence [17]. Since convergence might be difficult to attain on a real-world dataset, we fix the number of iterations experimentally. If the attribute values are binary, the updated values of \(\mathbf{X}^{*}_{\mathbf{R}}\) are rounded such that any values above 0.5 are assigned the value 1 and 0 otherwise (line 7). For a dataset with continuous values, this step is omitted. Then, the values of the known attributes in \(\mathbf{X}^{*}\) (attributes that were not considered missing at the start of feature propagation) will be reassigned to
nodes in \(\mathbf{X}_{\mathbf{R}}^{*}\) (line 9). Since feature propagation is an algorithm that reconstructs missing values by diffusion from the known neighbors, the non-missing attributes of the neighbors are always reset to their true values after each iteration.
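A sketch consistent with this description is given below. We apply the binary rounding once after the final iteration, and the propagation operator is the symmetrically normalized adjacency that the text refers to as the normalized Laplacian \(\mathbf{L}\); both simplifications are assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp

def feature_propagation(X, edge_index, missing_mask, num_iters=40, binary=True):
    """Reconstruct missing attributes by diffusion over the graph.
    X: (n, d) feature matrix; edge_index: (2, num_edges) int array;
    missing_mask: boolean (n, d) array marking hidden sensitive entries."""
    n = X.shape[0]
    A = sp.coo_matrix((np.ones(edge_index.shape[1]),
                       (edge_index[0], edge_index[1])), shape=(n, n)).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
    # Symmetrically normalized propagation operator (the "L" of Algorithm 1).
    L = sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)

    X_r = X.copy().astype(float)
    X_r[missing_mask] = np.random.rand(int(missing_mask.sum()))  # random init
    for _ in range(num_iters):                 # fixed iteration budget
        X_r = L @ X_r                          # propagate over the graph
        X_r[~missing_mask] = X[~missing_mask]  # reset known attributes
    if binary:                                 # rounding for binary attributes
        X_r[missing_mask] = (X_r[missing_mask] > 0.5).astype(float)
    return X_r
```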
**Confidence Score** In our attacks, since the attacker has no information about the ground truth label, she utilizes the confidence of the model based on its prediction. The underlying idea is that a model that is confident in its prediction will assign a higher probability to the corresponding class label. Thus, the attacker leverages this confidence as a measure of the confidence score. One approach is to consider the value of the class with the highest probability as the confidence score. However, a problem arises when the target model generates an output where either all classes have similar probabilities or there are multiple classes with comparable probabilities.
To address this issue, we propose a solution by applying a "tax" on the highest class probability. First, we compute the average class probability of the remaining classes, and then determine the difference between this average and the highest probability in \(\mathbf{y}\). This difference serves as the confidence score. Intuitively, if the class probabilities in \(\mathbf{y}\) are similar, the final score will be low, indicating a lower level of confidence. Conversely, if there is a substantial difference between the highest class probability and the others, the taxation is reduced, resulting in a higher final confidence score. It is important to note that the output vector \(\mathbf{y}\) is normalized, ensuring that the maximum confidence score is 1 and the minimum is 0. Additionally, the confidence score is computed on a per-node basis.
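A minimal implementation of this taxed confidence score might read as follows; the function name is ours.

```python
import numpy as np

def confidence_score(y):
    """Taxed confidence score for one node: highest class probability minus
    the mean probability of the remaining classes (y is normalized)."""
    y = np.asarray(y, dtype=float)
    rest = np.delete(y, y.argmax())
    return y.max() - rest.mean() if rest.size else y.max()

print(confidence_score([0.26, 0.25, 0.25, 0.24]))  # near-uniform -> ~0.013
print(confidence_score([0.91, 0.03, 0.03, 0.03]))  # peaked       -> ~0.88
```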
### Attribute Inference Attack via Repeated Query of the Target Model (FP-MA)
Our first attack, referred to as Fp-ma, involves multiple queries to the target model in order to infer the sensitive attribute. It relies on the feature propagation
algorithm (Algorithm 1) to initialize the sensitive attribute. The attack process is depicted in Figure 1. Furthermore, we introduce another variation of the attack, known as Ri-ma, which initializes the sensitive attribute through a random initialization mechanism while accessing the target model. The overall procedure of the attack is outlined in Algorithm 2.
```
input : Incomplete node feature matrix with missing sensitive attributes \(\mathbf{X}^{*}\), Edges \(\mathcal{E}\), confidence threshold \(\mathbf{cs_{threshold}}\)
output: \(\mathbf{X}^{*}_{\mathbf{R}}\) with inferred attributes, mean confidence score \(\mathbf{cs_{mean}}\)
1  \(\mathbf{X}^{*}_{\mathbf{R}}\leftarrow\mathbf{X}^{*}\)
2  while \(\mathbf{X}^{*}_{\mathbf{R}}\) has missing values do
3      \(\mathbf{X}^{*}_{\mathbf{R}}\leftarrow\) InitAlgorithm(\(\mathbf{X}^{*}_{\mathbf{R}}\), \(\mathcal{E}\))   \\ Feature propagation or random initialization
4      \(\mathbf{Y}\leftarrow\) TargetGCN(\(\mathbf{X}^{*}_{\mathbf{R}}\), \(\mathcal{E}\))   \\ Query target model
5      CS \(\leftarrow\) CalculateConfidenceScores(\(\mathbf{Y}\))
6      if CS has a value greater than \(\mathbf{cs_{threshold}}\) then
7          \(i\leftarrow\) IndexOfMax(CS)   \\ Index of maximum confidence score (\(\mathbf{cs}\))
8          Fix \(\mathbf{X}^{*}_{\mathbf{R}}[i]\)   \\ Fix node with maximum \(\mathbf{cs}\)
9          Reset \(\mathbf{cs_{threshold}}\)
10     else
11         \(\mathbf{cs_{threshold}}\leftarrow\) LowerBy5Percent(\(\mathbf{cs_{threshold}}\))
12     end if
13 end while
14 \(\mathbf{Y}\leftarrow\) TargetGCN(\(\mathbf{X}^{*}_{\mathbf{R}}\), \(\mathcal{E}\))
15 CS \(\leftarrow\) CalculateConfidenceScores(\(\mathbf{Y}\))
16 \(\mathbf{cs_{mean}}\leftarrow\) Mean(CS)
17 return \(\mathbf{X}^{*}_{\mathbf{R}}\), \(\mathbf{cs_{mean}}\)
```
**Algorithm 2** Attack algorithm. InitAlgorithm refers to either the feature propagation (FP-MA) or the random initialization (RI-MA)
First, the attacker has \(\mathbf{n}\) candidate nodes with \(\mathbf{m}\) missing sensitive attributes, the corresponding edges of these nodes (or she computes the edges via KNN if she has no access to them), and a confidence score threshold \(\mathbf{cs_{threshold}}\). These nodes are in the matrix \(\mathbf{X}^{*}\). In this first phase, the attack procedure runs the initialization algorithm (feature propagation (Section 4.2) or random initialization) to obtain an estimated value for the missing attribute (line 3). The attacker then queries the target model with the attributes obtained from the initialization algorithm \(\mathbf{X}^{*}_{\mathbf{R}}\) and the edges \(\mathcal{E}\) (line 4). As shown in lines 5-7, the attacker computes the confidence scores as described above and then chooses the node which produces the highest confidence score, provided it passes the confidence score threshold. Any node whose confidence score is higher than the threshold is "fixed" (line 8). That is, the estimated values for the missing attributes of those nodes do not change in the next iteration of the attack.
To incentivize the attack to infer nodes with high confidence scores, a threshold for the confidence score \(\mathbf{cs_{threshold}}\) is selected by the attacker at the start. A node with inferred attributes whose confidence score is lower than the threshold will not be fixed (line 11). The threshold ensures that the algorithm runs multiple iterations to maximize the confidence score and, in turn, predict the right attribute value. But a problem arises when the threshold is too high and no node can attain such a confidence score. To tackle this problem, we lower the threshold by 5% after each iteration in which no node is fixed (line 11). The lowered threshold is set back to the original value when a node is finally fixed (line 9). We reset the threshold to ensure that the attacker is re-incentivized when the feature propagation algorithm produces new randomized values for the rest of the nodes. This iterative process is repeated until no nodes are left with missing values.
When the attacker has inferred values for all nodes with missing sensitive attributes, she queries the target model with these nodes and edges, computes the confidence scores, and then takes the mean of the confidence scores of all inferred nodes (lines 14-16). The mean confidence score is returned for experimental purposes, to compare the behavior of the confidence score to other attack methods. The attacker finally returns the candidate nodes with their inferred attributes \(\mathbf{X}_{\mathbf{R}}^{*}\) and the mean of the confidence scores \(\mathbf{cs_{mean}}\) (line 17).
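A compact, runnable rendering of this loop is sketched below. It reuses `confidence_scores` from above and assumes callables `init_algorithm` (feature propagation or random initialization over the still-missing entries) and `query_target` (the black-box model); these interfaces and the per-entry mask are illustrative glue, while the fix/reset logic and the 5% decay follow the text.

```python
import numpy as np

def fp_ma_attack(x, edges, missing_mask, init_algorithm, query_target, cs_threshold=0.9):
    """Iteratively fix the most confident node, mirroring Algorithm 2."""
    x_r, mask = x.copy(), missing_mask.copy()
    threshold = cs_threshold
    while mask.any():
        x_r = init_algorithm(x_r, edges, mask)   # re-estimate still-missing entries (line 3)
        y = query_target(x_r, edges)             # query the target model (line 4)
        cs = confidence_scores(y)
        cs[~mask.any(axis=1)] = -np.inf          # only nodes with missing entries compete
        i = int(np.argmax(cs))
        if cs[i] > threshold:
            mask[i, :] = False                   # fix node i: keep its current estimate (line 8)
            threshold = cs_threshold             # reset the lowered threshold (line 9)
        else:
            threshold *= 0.95                    # lower the threshold by 5% (line 11)
    cs_mean = confidence_scores(query_target(x_r, edges)).mean()  # lines 14-16
    return x_r, cs_mean                          # line 17
```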
### Feature Propagation-only Attribute Inference Attack (FP)
The feature propagation-only attribute inference attack (FP) executes the feature propagation algorithm only once to infer the sensitive attribute.
One advantage of FP is that it is simpler and faster at runtime than the other methods, because the FP attack does not utilize the information obtained by querying the model during the attack process. The attack procedure is as follows. First, FP takes the missing attributes in \(\mathbf{X}^{*}\) and their edges \(\mathcal{E}\) as input. The
Figure 1: The black-box attack FP-ma. For the Ri-ma attack, we replace the feature propagation module with a random initializer.
attacker then runs the feature propagation algorithm as described in Section 4.2. The output of the feature propagation algorithm is taken as the inferred attribute values. Similar to Fp-ma, the target model is queried with the final inferred nodes, not to fine-tune the inferred estimates but only to compute the mean of the confidence scores as a measure of comparison with other methods. Finally, Fp returns the candidate nodes with their inferred attributes \(\mathbf{X}_{\mathbf{R}}^{*}\) and the mean confidence \(\mathbf{cs_{mean}}\).
### Shadow-based attribute inference attack (SA)
We adapt (with several modifications) the white-box attack proposed by [4] to the black-box setting. In this attack, the model's output (posteriors) is used to determine whether the attribute has been correctly inferred. The purpose of the attack is to study the behavior of the model on nodes whose sensitive attribute values have already been seen. To achieve this, a pseudo-model called a shadow model is trained, which the attacker can use to observe the model's behavior on her shadow dataset. The attacker chooses a candidate set from the train set of her shadow model, which includes nodes with complete attributes and some with missing sensitive attribute values. During the query, she assigns random values to the missing sensitive attributes, observes the posterior, and compares it with the posterior of the node-set with the true attributes. After obtaining the labeled posteriors from the candidate set of the shadow dataset, she proceeds to train the attack model, which is a 3-layer MLP. In this step, the attack model is trained using the labeled posteriors, where the posteriors with the original sensitive attribute value are labeled as 1, while those with the assigned random attribute value are labeled as 0. To infer the attribute of the candidate set (nodes of interest), she queries the target model, obtains posteriors, and inputs them into her trained attack model to infer attribute values. We note that this attack is only applicable to binary sensitive attribute values.
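The attack-model training step can be sketched as follows, using scikit-learn for brevity. The 3-layer architecture matches the text, but the layer widths, training settings, and function names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_attack_model(posteriors_true, posteriors_random):
    """Train the attack MLP on labeled posteriors: 1 = true attribute, 0 = random attribute."""
    X = np.vstack([posteriors_true, posteriors_random])
    y = np.r_[np.ones(len(posteriors_true)), np.zeros(len(posteriors_random))]
    attack_model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500)
    return attack_model.fit(X, y)
```

At inference time, the attacker queries the target model with a candidate attribute value and feeds the returned posterior to `attack_model.predict`; an output of 1 suggests the guessed binary attribute value is correct.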
## 5 Datasets and Implementation details
We utilize three private datasets, namely the credit defaulter graph, Facebook, and LastFM social network datasets, along with two benchmark datasets, Cora and PubMed. The details for all the datasets are provided in Table 1 and in Appendix 0.B. The target GNN model \(\Phi\) is a 2-layer graph convolution network (GCN). All our experiments were conducted for 10 different instantiations. We report the mean values across the runs in the main paper. The full results with standard deviations are on GitHub.
#### 5.0.1 Attack Evaluation
For all experiments, we choose 100 candidate nodes (\(\mathbf{X}^{*}\)) at random from the training set with the objective of inferring their sensitive attribute values. Note that these candidate nodes are fixed across all experiments unless otherwise stated. To assess the success of the attack, we compare the
inferred attributes with the values of the original node-set, which refers to the nodes that were not modified. We employ two metrics: the percentage of correctly inferred attributes for binary values, and the mean-squared error for continuous values. Details of the evaluation methods are in Appendix 0.C.
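As a sketch, the two metrics reduce to the following, where `missing_mask` marks the entries whose values were inferred (the names are illustrative):

```python
import numpy as np

def binary_inference_rate(x_true, x_inferred, missing_mask):
    """Fraction of inferred binary attribute values that match the original node-set."""
    return (x_true[missing_mask] == x_inferred[missing_mask]).mean()

def continuous_inference_error(x_true, x_inferred, missing_mask):
    """Mean-squared error of inferred continuous attribute values."""
    return np.mean((x_true[missing_mask] - x_inferred[missing_mask]) ** 2)
```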
## 6 Results
**Inference of sensitive binary attributes.** In the case when the missing sensitive attributes are binary, we observe a similar trend among all datasets: the attack performance is worse when the attacker has access to a black-box model compared to when they have no access, as shown in Figure 2. In Setting-1, Fp and Ri, which do not require access to the target model, exhibit slightly better performance (an improvement of at most 4%) compared to the black-box attack models Fp-ma and Ri-ma, which rely on such access. It is important to note that in Setting-1, the attacker only has information on all non-sensitive nodes and no additional knowledge about some of the sensitive attributes, unlike
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & Credit & Cora & Pubmed & Facebook & LastFM & Texas \\ \hline \# attributes & 13 & 1,433 & 500 & 1,406 & 128 & 10 \\ \(|\mathcal{V}|\) & 30,000 & 2,708 & 19,717 & 4,167 & 7,624 & 925,128 \\ \(|\mathcal{E}|\) & 1,436,858 & 5,278 & 44,324 & 178,124 & 55,612 & – \\ \(deg\) & 95.79 & 3.90 & 4.50 & 42.7 & 7.3 & – \\ \# classes & 2 & 7 & 3 & 2 & 18 & 100 \\ Train Acc. & 0.78 & 0.95 & 0.88 & 0.98 & 0.80 & 0.53 \\ Test Acc. & 0.77 & 0.78 & 0.86 & 0.98 & 0.85 & 0.45 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the datasets used in all experiments
Figure 2: Performance of the binary attribute inference attack at different settings. Setting-1 is represented by a solid line with triangle marker, while Setting-2 is represented by a dashed line with a circle marker. We fix the size of the training dataset to 1000 for all datasets.
in Setting-2. When Fp and Ri outperform Fp-ma and Ri-ma respectively, it suggests that even with access to the trained model, no further sensitive information can be leaked. One reason for this is that the available information from non-sensitive nodes already captures the majority of patterns and correlations in the dataset. Therefore, the incremental advantage gained from accessing the model is minimal, resulting in comparable or slightly inferior performance of the black-box attack methods. In Setting-2, the performance of Ri is similar to Setting-1, where Ri performs slightly better than Ri-ma in inferring the sensitive attribute. However, the opposite phenomenon is observed for Fp. Specifically, Fp-ma achieves a higher inferred attribute rate. For instance on the Cora dataset, this difference amounts to an improvement of 21%.
On all datasets except Facebook, we observe that when the attacker has access to the graph structure, it does not provide additional advantages to the attack models (Fp-ma and Ri-ma). Computing the graph via the K-nearest-neighbor strategy to query the target model leads to better inference than access to the true graph structure. One reason for this is that the candidate nodes are disjoint from the training nodes, and the connections among the candidate nodes are relatively sparse. Therefore, when the original sparse neighborhood is used, the feature representation of the query node may be less informative than with a neighborhood constructed from feature closeness.
For the Sa attack, the attack performance is a random guess (\(\leq 50\%\)) on all datasets. Therefore, in addition to the difficulty of obtaining a shadow dataset, such attacks are also not successful given black-box access to the model. We omit the results due to space limitations.
**Inference of sensitive continuous attributes.** As shown in Figure 3, on the LastFM dataset, Fp-ma performs the worst, while Ri-ma achieves the best results. The performance of Fp varies depending on whether the full graph structure is available or the KNN approach is used. Throughout the experiments, using the full graph structure consistently yields better results for Fp-ma. The Ri and Ri-ma methods, on the other hand, are not sensitive to the graph structure. The Ri-ma method consistently outperforms the other methods in Setting-1.
In Setting-2, the performance pattern follows a similar trend to Setting-1, but with improved results. This can be attributed to the attacker having knowledge about some sensitive attributes. For example, at a training dataset size of 100, the error rate drops by 60% for Fp, 45% for Fp-ma, 51% for Ri, and 50% for Ri-ma. This highlights the impact of having knowledge of nodes with sensitive attributes. An attacker with such information can launch a stronger attack compared to an attacker with no knowledge, as the error rates significantly decrease when partial knowledge is available.
Similarly, on the PubMed dataset (Figure 3), we observe that an attacker in Setting-2 can infer the sensitive attributes better compared to an attacker in Setting-1, with a significant decrease in error rates of up to 52%. Among the inference methods used, Fp-ma consistently achieves the best performance (lowest error) in inferring the sensitive attribute across all settings in the PubMed dataset. The reason behind this superiority can be attributed to the utilization
of feature propagation and the availability of the graph structure. By leveraging feature propagation algorithms, Fp-ma can effectively propagate information and exploit the relationships between nodes to make more accurate inferences.
In both settings, having access to the graph structure consistently yields the highest inference capability (lowest error rates) for all methods except Ri and Ri-ma. Unlike in the case of inferring binary attributes, this observation is not surprising. Continuous attributes typically exhibit a smooth and gradual variation, whereas binary inference focuses on identifying distinct decision boundaries within the data. Hence, the connectivity patterns of the nodes in the graph structure play a crucial role in propagating information and inferring missing values, and the inference methods leverage these patterns to make accurate estimations.
**Summary.** For inferring binary attributes, the underlying algorithms used by the Fp and Ri methods allow them to leverage the dataset's inherent characteristics effectively. On the other hand, Fp-ma and Ri-ma, which rely on access to the target model, may exhibit slightly lower performance in the absence of specific knowledge about the sensitive attributes, as in Setting-1. Additionally, successful attacks can be carried out without full access to the graph structure.
For inferring continuous attribute, the results emphasize the importance of utilizing the graph structure, and additional information in improving the accuracy of inferring the sensitive attributes.
**Effect of the Training Data Size on Inferring Sensitive Attributes** Here, we investigate the influence of the data size used to train the target model on the inference of the sensitive attribute. On the Cora and Facebook datasets (Figure 4), we observe that it is easier to infer more attributes when less data is used in training the target model. One reason for this is that when there is less training data available, the target model may not have learned all the relevant features, and the attack can leverage this to infer the sensitive attribute more easily. However, as the number of available training data increases, the
Figure 3: Performance of inferring continuous attributes on the PubMed and LastFM dataset. Setting-1 is represented by a solid line with triangle marker, while Setting-2 is represented by a dashed line with a circle marker. We fix the size of the training dataset to 1000.
target model becomes better at learning the underlying patterns in the data, which in turn makes it more difficult for an attacker to infer sensitive attributes from the model's output. Additionally, the increased training data may also reduce overfitting and improve generalization performance, which can make the model less vulnerable to attacks. We also observe that the attack achieves greater success in Setting-2 compared to Setting-1. This outcome is expected, as it demonstrates the attack's ability to leverage the information from the known 50% sensitive attribute values to enhance the accuracy of inferring the remaining sensitive attribute values.
On the Credit dataset in Figure 4, the effect of the target's training data size on inferring the sensitive attribute is minimal. For instance, the performance of Fp-ma is similar across different variations of training size. Furthermore, the additional knowledge of certain sensitive attributes (Setting-2) does not have any noticeable effect. On the PubMed and LastFM datasets (Figure 5), Ri-ma consistently achieves lower error rates as the training data size increases. This indicates that Ri-ma benefits from larger training datasets in terms of capturing more diverse patterns, leading to improved performance and lower error rates. For Fp-ma, however, we observe a convex-shaped effect as the training data size increases. Although this is counterintuitive, we believe that as the training size increases, the model may encounter more outliers or noise that hinder its performance, resulting in higher error rates; as the training size increases further, the model starts to learn the underlying patterns better, leading to improved performance and lower error rates.
**Inferring Multiple Attributes (m\(>1\))** In the multiple attributes inference experiment, the attacker is interested in inferring more than one attribute. For example, on the Credit dataset, the attacker may want to infer both the age and education level of the victims. In this experiment, we set \(m=2\). As shown in Figure 6, the results of the multiple attribute inference closely follow the trends observed in the case of single attribute inference. However, one notable difference is that on the Credit, Cora, and Facebook datasets, the performance of inferring multiple attributes is lower as compared to single attribute inference (solid lines).
Figure 4: Performance of varying target model’s training size on black-box attacks (Ri-ma and Fp-ma) on binary AIA.
This is expected because of the increased complexity of the inference task. The presence of multiple attributes introduces additional dependencies and interactions among the attributes, making it more challenging to accurately infer all attributes simultaneously. Moreover, some attributes may have conflicting patterns or dependencies, making it difficult for the inference algorithm to reconcile the competing information.
For inferring sensitive continuous values, the opposite is observed, especially for Fp-ma (Figure 7). Specifically, inferring multiple attributes achieves lower error rates than inferring a single attribute, with up to a 99% decrease in error on both the PubMed and LastFM datasets (dashed lines). This interesting phenomenon can be attributed to the unique characteristics of the feature propagation algorithm used in Fp-ma. The feature propagation algorithm utilizes the relationships and dependencies among the attributes to propagate information and refine the inferred values. When inferring multiple attributes simultaneously, the propagated information from one attribute can provide valuable insights and constraints for the inference of other related attributes. Specifically, if certain attributes have missing or noisy data, the presence of other attributes with similar patterns may compensate for the errors and improve the robustness of the inference process.
**Additional Experiment** We perform additional experiment by varying the distribution assumption in Appendix 0.A. The result on the Texas dataset demonstrates that having access to skewed distributions (where the candidate and the training nodes are from different distributions) leaks more information than having access to the same distribution when the training dataset size is small.
## 7 Conclusion
In this paper, we develop several black-box attribute inference attacks to quantify the privacy leakage of GNNs. Our findings are as follows:
(i) For a stronger attacker with additional access to some sensitive attribute (Setting-2), the performance of black-box attacks can improve by up to 21%
Figure 5: Performance of varying target model’s training size on black-box attacks (Ri-ma and Fp-ma) on continuous AIA
compared to an attacker without such access (Setting-1).
(ii) The graph structure plays a significant role in inferring sensitive continuous values, leading to a remarkable reduction in error rate of up to 99%. However, when it comes to inferring sensitive binary values, except for the Facebook dataset, the graph structure has no noticeable impact.
(iii) Despite a stronger attacker (Setting-2) and access to the graph structure, our black-box attribute inference attacks generally do not leak any additional information compared to missing value estimation algorithms, regardless of whether the sensitive values are binary or continuous.
#### Acknowledgment.
This work is, in part, funded by the Lower Saxony Ministry of Science and Culture under grant no. ZN3491 within the Lower Saxony "Vorab" of the Volkswagen Foundation and supported by the Center for Digital Innovations (ZDIN), and the Federal Ministry of Education and Research (BMBF), Germany, under the project LeibnizKILabor (grant no. 01DD20003).
|
2306.17004 | Learning thermodynamically constrained equations of state with
uncertainty | Numerical simulations of high energy-density experiments require equation of
state (EOS) models that relate a material's thermodynamic state variables --
specifically pressure, volume/density, energy, and temperature. EOS models are
typically constructed using a semi-empirical parametric methodology, which
assumes a physics-informed functional form with many tunable parameters
calibrated using experimental/simulation data. Since there are inherent
uncertainties in the calibration data (parametric uncertainty) and the assumed
functional EOS form (model uncertainty), it is essential to perform uncertainty
quantification (UQ) to improve confidence in the EOS predictions. Model
uncertainty is challenging for UQ studies since it requires exploring the space
of all possible physically consistent functional forms. Thus, it is often
neglected in favor of parametric uncertainty, which is easier to quantify
without violating thermodynamic laws. This work presents a data-driven machine
learning approach to constructing EOS models that naturally captures model
uncertainty while satisfying the necessary thermodynamic consistency and
stability constraints. We propose a novel framework based on physics-informed
Gaussian process regression (GPR) that automatically captures total uncertainty
in the EOS and can be jointly trained on both simulation and experimental data
sources. A GPR model for the shock Hugoniot is derived and its uncertainties
are quantified using the proposed framework. We apply the proposed model to
learn the EOS for the diamond solid state of carbon, using both density
functional theory data and experimental shock Hugoniot data to train the model
and show that the prediction uncertainty reduces by considering the
thermodynamic constraints. | Himanshu Sharma, Jim A. Gaffney, Dimitrios Tsapetis, Michael D. Shields | 2023-06-29T15:02:16Z | http://arxiv.org/abs/2306.17004v2 | # Learning thermodynamically constrained equations of state with uncertainty
###### Abstract
Numerical simulations of high energy-density experiments require equation of state (EOS) models that relate a material's thermodynamic state variables - specifically pressure, volume/density, energy, and temperature. EOS models are typically constructed using a semi-empirical parametric methodology, which assumes a physics-informed functional form with many tunable parameters calibrated using experimental/simulation data. Since there are inherent uncertainties in the calibration data (parametric uncertainty) and the assumed functional EOS form (model uncertainty), it is essential to perform uncertainty quantification (UQ) to improve confidence in the EOS predictions. Model uncertainty is challenging for UQ studies since it requires exploring the space of all possible physically consistent functional forms. Thus, it is often neglected in favor of parametric uncertainty, which is easier to quantify without violating thermodynamic laws. This work presents a data-driven machine learning approach to constructing EOS models that naturally captures model uncertainty while satisfying the necessary thermodynamic consistency and stability constraints. We propose a novel framework based on physics-informed Gaussian process regression (GPR) that automatically captures total uncertainty in the EOS and can be jointly trained on both simulation and experimental data sources. A GPR model for the shock Hugoniot is derived and its uncertainties are quantified using the proposed framework. We apply the proposed model to learn the EOS for the diamond solid state of carbon, using both density functional theory data and experimental shock Hugoniot data to train the model and show that the prediction uncertainty reduces by considering the thermodynamic constraints.
Introduction
Hydrodynamics simulations, which are widely used to predict and understand the evolution of experiments in high energy density physics, inertial confinement fusion, laboratory astrophysics, and geophysics, are underpinned by equation of state (EOS) models which are needed to relate the thermodynamic state variables of the materials of interest [1]. The accuracy and precision of the EOS, and the development of methods to quantify uncertainty in EOS models, is therefore a crucial concern. This is a challenging task that will require novel methods to complete.
EOS models are typically constructed using semi-empirical functions where the functional form is motivated by the physics, and the parameters are calibrated using a complex combination of experimental and first-principles simulation data from a variety of sources. Once calibrated, the EOS model can be used to interpolate and extrapolate over the wide range of input states needed in hydrodynamic simulations. The semi-empirical approach is subject to two sources of uncertainty; uncertainty in the values of the parameters in the EOS model (parameter uncertainty) and uncertainty in the form of the EOS model itself (model uncertainty). Both sources must be quantified simultaneously to give a complete picture of the total EOS uncertainty. While parametric uncertainty in the EOS has been addressed in several recent works, model uncertainty remains a significant challenge. In this work, we describe a new machine learning based approach to UQ which accounts for _all_ sources of uncertainty and provides an analytical framework for combining heterogeneous data sources into a single, uncertainty-aware EOS model.
Our new approach uses Gaussian process (GP) regression [2] to construct a data-driven EOS model that automatically satisfies all thermodynamic constraints. The resulting model provides pointwise predictions in the thermodynamic state space that include both model and data uncertainty. Incorporating thermodynamic constraints ensures that predictions satisfy the underlying physics across the entire domain, thereby avoiding pathologies that lead to the failure of downstream tasks like hydrodynamics modeling. Next, we derive a GP model for the shock Hugoniot directly from the uncertain EOS. This allows us to derive a novel unified approach enabling the model to be trained from first-principles simulation data, various experimental data sources, or both. We apply the proposed method to EOS modeling for the diamond phase of carbon, for which first-principles simulation data were first used to train the model. Then, experimental Hugoniot data are integrated into the unified GP EOS to create a jointly-trained uncertain EOS. The proposed model provides a powerful yet flexible non-parametric EOS that can be learned directly from heterogeneous data, obeys the important thermodynamic principles, and quantifies uncertainties that stem from noisy and sparse data from disparate sources.
## II EOS models with uncertainty: motivation & methods
### Relevant Prior Work
In the standard setting, EOS parameter calibration depends on individual modelers who leverage domain knowledge and expertise to align EOS predictions with given experimental and simulation data. Recently, it has become common to pose the calibration process as an optimization problem to solve for the best parameters that give the least prediction error compared to available data [3, 4, 5, 6]. This approach can naturally be extended to capture parametric uncertainty by considering uncertainties in the calibration data sets. Ali _et al._[7] proposed a method that considers small perturbations in experimental data to calibrate model parameters using an optimization routine and propagates the experimental uncertainties through the EOS models using Monte Carlo simulation. Brown and Hund [8] apply Bayesian model calibration to estimate parameters using dynamic material properties experiments under extreme conditions. Lindquist and Jadrich [9] proposed a Bayesian framework to perform UQ of a multi-phase EOS model for carbon by accounting for calibration data uncertainty, yielding an ensemble of model parameter sets. Walters _et al._[10] use a Bayesian statistical approach to quantify parametric uncertainty by coupling hydrocode simulations and velocimetry measurements of a series of plate impact experiments on an aluminum alloy.
Model uncertainty, on the other hand, is more challenging to quantify since it requires exploring the infinite-dimensional space of possible functional forms that are thermodynamically constrained. Nonetheless, some work has been done to explore model uncertainty [11, 12]. For example, Kamga _et al._[11] have performed UQ in a single model by exploring discrepancies in legacy experimental data. Gaffney, Yang, and Ali [12] used GP regression to capture model uncertainty in the EOS of B\({}_{4}\)C, accounting for the thermodynamic consistency constraint by explicitly modeling the free energy. They showed that the constraint reduces model uncertainty in the EOS by limiting the space of functions that can be fit to first-principles simulations. However, their model ignores the important thermodynamic stability constraints, which ensure that the specific heat and isothermal compressibility remain positive, and whose violation will quickly cause hydrodynamic simulations to fail.
### Uncertainty in Parametric EOS
An EOS model is a semi-empirical equation that relates a set of state variables in a material such as temperature \(T\), mass density \(\rho\) (or volume \(V\)), pressure \(P\), internal energy \(E\), entropy \(S\), etc. The standard process of building EOS models is to leverage expert knowledge of the material state under different conditions and assume a functional form that obeys the laws of thermodynamics. These often involve many parameters that must be carefully calibrated using a combination of experimental results and first-principles simulations. A generic EOS model may be written as,
\[\mathbf{F}_{\alpha}(\mathbf{\Theta})=0 \tag{1}\]
where \(\mathbf{F}\) is a vector function relating a set of state variables, \(\alpha\) are the set of parameters unique to the assumed EOS model and \(\mathbf{\Theta}\) are the set of state variables (e.g. \(\mathbf{\Theta}=\{P,V,T,E\}\)) that the EOS relates. In hydrodynamic simulations, for example, it is common to express the EOS in terms of the volume \(V\) and temperature \(T\) in the following form:
\[\{P,E\}=\mathbf{F}_{\alpha}(T,V) \tag{2}\]
Once the parameters \(\alpha\) are learned, the EOS model can be utilized to predict the desired material state in the thermodynamics phase space.
To learn these parameters with uncertainty, we can solve the requisite inverse problem in a Bayesian setting. Here, we can determine the distribution of the parameters \(\alpha\) conditioned on the observed data \(\mathbf{d}\) as:
\[p(\alpha|\mathbf{d})=\frac{p(\mathbf{d}|\alpha)p(\alpha)}{p(\mathbf{d})} \tag{3}\]
where \(p(\mathbf{d}|\alpha)\) is the likelihood function, \(p(\alpha)\) is the prior distribution reflecting our existing knowledge of the parameters, and \(p(\mathbf{d})\) is the evidence that serves as a normalization and does not need to be computed in the application for parameter estimation. In a general setting, this Bayesian inference problem is solved indirectly by drawing samples from \(p(\alpha|\mathbf{d})\) using various Markov Chain Monte Carlo (MCMC) methods. As we'll see, this Bayesian inference process can be difficult when data are limited and/or the parameter vector \(\alpha\) is very high-dimensional.
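For illustration, a minimal random-walk Metropolis sampler targeting \(p(\alpha\mid\mathbf{d})\) can be written as below. This is a generic sketch of the MCMC step (the step size, chain length, and names are assumptions), not the sampler used for the results in this paper.

```python
import numpy as np

def metropolis(log_post, alpha0, n_steps=50_000, step=0.05):
    """Sample from p(alpha | d) given log_post(alpha) = log likelihood + log prior."""
    alpha = np.asarray(alpha0, dtype=float)
    lp = log_post(alpha)
    samples = []
    for _ in range(n_steps):
        proposal = alpha + step * np.random.randn(alpha.size)  # random-walk proposal
        lp_prop = log_post(proposal)
        if np.log(np.random.rand()) < lp_prop - lp:            # Metropolis accept/reject
            alpha, lp = proposal, lp_prop
        samples.append(alpha.copy())
    return np.array(samples)
```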
UQ for existing parametric EOS models is further limited by the prescribed form of these models. Although often derived from physical principles, these models are nonetheless built upon
assumptions, simplifications, and approximations of known physics, while neglecting physics that are poorly understood. Consequently, these models may be very accurate in certain regimes (e.g. of \(T\) and \(P\)) and inadequate in others. The resulting uncertainty in these predictions is referred to as model-form uncertainty and cannot be accounted for in existing parametric models. In certain cases, competing parametric models can be compared and selected using Bayesian model selection. However, this requires the computation of the evidence term (denominator in Eq. (3)), which poses significant practical challenges. Model-form uncertainty, combined with parametric uncertainty, results in a range of outputs for a fixed input state, thus yielding an ensemble of valid EOS models. Hence, it is also necessary to quantify these model-form uncertainties in a rigorous UQ framework to enhance our confidence in EOS predictions. Significant research efforts have been made in quantifying parametric uncertainty; however, model-form UQ for equations of state has not received adequate attention, and only a few publications are available in the literature [11; 12]. The biggest hurdle in quantifying model-form uncertainty is enumerating all the potential mappings that could form physics-consistent EOSs. In this work, we develop a framework using Gaussian process regression that automatically explores the range of physics-consistent EOS models to capture both sources of uncertainty. The proposed method is non-parametric and data-driven, yet satisfies both thermodynamic consistency and stability constraints.
### Parametric EOS with Uncertainty: An Illustration
An example of a parametric equation of state is the Mie-Gruneisen-Debye model [13] for single phase diamond. This model, which has been widely used for modeling materials (e.g. carbon and neon) in high temperature and high pressure environments has the following form:
\[P(V,T)=P_{V}(V,T=0)+P_{TH\;\mathrm{Debye}}(V,T) \tag{4}\]
where \(P_{V}(V,T=0)\) is the zero temperature Vinet EOS [14] given by
\[P_{V}=3K_{0}x^{-2}(1-x)\exp[(1.5K_{0}^{\prime}-1.5)(1-x)] \tag{5}\]
where \(x=(V/V_{0})^{1/3}\) and
\[P_{TH\;\mathrm{Debye}}(V,T)=\frac{9RT\gamma_{D}}{V}\left(\frac{T}{\theta_{D}}\right)^{3}\int_{0}^{\theta_{D}/T}\frac{z^{3}}{e^{z}-1}dz. \tag{6}\]
is the Debye thermal pressure. In total, the model has six parameters. The Vinet model has parameters \(V_{0}\) the atomic volume, \(K_{0}\) the bulk modulus, and \(K_{0}^{\prime}\) the pressure derivative of \(K_{0}\) - all
at a reference state of ambient pressure and zero temperature. The Debye thermal pressure has an additional three parameters. The volume dependent characteristic Debye temperature \(\theta_{D}\) is given by:
\[\theta_{D}=\theta_{0}x^{-1.5}\exp[\gamma_{1}(1-x^{3q})/q] \tag{7}\]
and the Debye-Gruneisen parameter is given by
\[\gamma_{D}=-\frac{d\ln\theta_{D}}{d\ln V}=\gamma_{1}x^{3q}+1/2. \tag{8}\]
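For concreteness, a direct transcription of Eqs. (4)-(8) into Python is given below. Parameter values and unit conventions must be supplied by the user; this is an uncalibrated sketch of the forward model, not the calibrated EOS.

```python
import numpy as np
from scipy.integrate import quad

R = 8.314462618  # molar gas constant, J/(mol K)

def mgd_pressure(V, T, V0, K0, K0p, theta0, gamma1, q):
    """Mie-Gruneisen-Debye pressure P(V, T), Eqs. (4)-(8)."""
    x = (V / V0) ** (1.0 / 3.0)
    # Zero-temperature Vinet pressure, Eq. (5)
    P_vinet = 3.0 * K0 * x**-2 * (1.0 - x) * np.exp(1.5 * (K0p - 1.0) * (1.0 - x))
    # Debye temperature and Debye-Gruneisen parameter, Eqs. (7)-(8)
    theta_D = theta0 * x**-1.5 * np.exp(gamma1 * (1.0 - x**(3 * q)) / q)
    gamma_D = gamma1 * x**(3 * q) + 0.5
    # Debye thermal pressure, Eq. (6)
    integral, _ = quad(lambda z: z**3 / np.expm1(z), 0.0, theta_D / T)
    P_thermal = (9.0 * R * T * gamma_D / V) * (T / theta_D) ** 3 * integral
    return P_vinet + P_thermal
```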
Fitting the model therefore requires a complicated calibration process to infer the following vector of six parameters \(\alpha=\{V_{0},K_{0},K_{0}^{\prime},\theta_{0},\gamma_{1},q\}\) from data at various pressures and temperatures[15; 16]. This calibration has been performed in the literature[15] and we have further conducted a preliminary Bayesian parameter estimation using the same data with the resulting parameter distributions (obtained using Markov Chain Monte Carlo, MCMC, methods for Bayesian inference) shown in Figure 1.
Figure 1: Joint probability distribution function from Bayesian calibration of the Mie-Grüneisen-Debye EOS model for diamond phase carbon. Inset table: Deterministic parameters from Dewaele et al.[15]
However, naive application of Bayesian methods does not necessarily provide satisfactory parameter uncertainties for these models. For example, Figure 1 shows that some parameters, such as \(V_{0}\), exhibit reasonable convergence toward those identified in the literature (inlay table in Figure 1), while others show wide uncertainty and do not converge toward parameters from the literature. Although this parameter estimation exercise is only preliminary, and could almost certainly be improved with informed priors from expert modelers and perhaps improved MCMC methods, it exhibits the difficulty of Bayesian model calibration - even when models have a modest number of parameters. However, many physics-based and empirical EOS models have a much larger number of parameters. For example, the Carbon EOS in the Radiative Emissivity and Opacity of Dense Plasmas (REODP) code [17] has 17 parameters for the diamond phase of carbon (plus 17 each for the BC8, SC, SH phases and 30 parameters for the liquid phase), which makes even deterministic calibration a massive undertaking [18]. Calibrating this set of parameters using Bayesian inference is practically impossible without huge data sets and highly specialized expertise.
### Thermodynamic constraints on EOS models
The EOS expresses the thermodynamic response of a material and so is subject to the laws of thermodynamics. These laws impose two types of constraints, often known as thermodynamic consistency and thermodynamic stability. The consistency constraint arises from the fact that changes in the various thermodynamic variables \(\mathbf{\Theta}\) are all related to changes in a single quantity, the thermodynamic potential. For an EOS of the form in Eq. (2), this potential is the Helmholtz Free Energy given by:
\[F=E-TS \tag{9}\]
where \(E\) is internal energy, \(T\) is temperature, and \(S\) is the entropy. According to the first and second laws of thermodynamics, changes in the state variables induce a change in the free energy \(dF=-SdT-PdV\) allowing us to express the pressure and energy by:
\[\begin{split} P&=-\left.\frac{\partial F}{\partial V}\right|_{T},\\ E&=F-T\left.\frac{\partial F}{\partial T}\right|_{V}\end{split} \tag{10}\]
Taking the derivatives \(\frac{\partial P}{\partial T}\) and \(\frac{\partial E}{\partial V}\) gives the thermodynamic consistency constraint
\[P=T\frac{\partial P}{\partial T}-\frac{\partial E}{\partial V}. \tag{11}\]
Deviations from thermodynamic consistency represent an erroneous source or sink of heat or work in hydrocode simulations, and therefore any valid (useful) EOS model must satisfy this equality constraint. Thus, the space of functions that satisfy Eq. (11) forms the maximal set of possible EOS functions for any system and therefore provides an upper bound on EOS uncertainty [12]. Note that we have chosen the Helmholtz free energy to suit the data typically used to train EOS models (which have \(T\) and \(V\) as independent variables); the above discussion can be applied to any other choice of thermodynamic potential depending on the application.
The second EOS constraints, known as thermodynamic stability, are derived from the second law of thermodynamics that requires that the Helmholtz free energy is a convex function. As a result, the isothermal compressibility (\(\kappa_{T}\)) and specific heat (\(c_{V}\)) are positive quantities, and the thermodynamic stability constraints are given as
\[\left(\frac{\partial^{2}F}{\partial V^{2}}\right)_{T} =-\left(\frac{\partial P}{\partial V}\right)_{T}=\frac{1}{V\kappa _{T}}\geq 0\Longleftrightarrow\kappa_{T}>0\Longleftrightarrow\left(\frac{ \partial P}{\partial V}\right)_{T}\leqslant 0, \tag{12}\]
\[\left(\frac{\partial^{2}S}{\partial E^{2}}\right)_{V} =-\frac{1}{T^{2}}\left(\frac{\partial T}{\partial E}\right)_{V}=- \frac{1}{T^{2}c_{V}}\leqslant 0\Longleftrightarrow c_{V}>0\Longleftrightarrow \left(\frac{\partial E}{\partial T}\right)_{V}\geqslant 0 \tag{13}\]
Deviations from thermodynamic stability can be catastrophic in hydrocode simulations, leading to numerical instability, and so the above convexity conditions provide another important constraint on the functional space of valid EOSs.
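Both types of constraint are easy to monitor numerically. The following finite-difference sketch checks Eq. (11) and the signs in Eqs. (12)-(13) for any callable pair \(P(V,T)\), \(E(V,T)\); the step size is an assumption and should be scaled to the units in use.

```python
def check_thermo(P, E, V, T, h=1e-6):
    """Return the consistency residual of Eq. (11) and a boolean stability flag."""
    dP_dT = (P(V, T + h) - P(V, T - h)) / (2 * h)
    dP_dV = (P(V + h, T) - P(V - h, T)) / (2 * h)
    dE_dV = (E(V + h, T) - E(V - h, T)) / (2 * h)
    dE_dT = (E(V, T + h) - E(V, T - h)) / (2 * h)
    consistency_residual = P(V, T) - (T * dP_dT - dE_dV)  # zero if Eq. (11) holds
    stable = (dP_dV <= 0.0) and (dE_dT >= 0.0)            # Eqs. (12)-(13)
    return consistency_residual, stable
```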
### Gaussian process regression
The EOS model developed in this work is constructed using physically constrained Gaussian process regression (GPR). GPR is a non-parametric supervised machine learning method that is widely used to construct surrogate models for expensive physics-based models [19; 20] since it can approximate complex non-linear functions with an inbuilt probabilistic estimate of prediction uncertainty. It is also easily interpretable, such that the Gaussian probability measure defined at each prediction point makes it straightforward to understand prediction uncertainty and establish a degree of confidence in the model. Furthermore, the model hyper-parameters establish the length-scale of the process and can be easily interpreted in terms of correlations among point predictions. Their flexibility allows them to cover a wide range of functional forms in a single model. These features of GPR make it an ideal choice for quantifying model uncertainty.
Formally, a GP is a stochastic process that is a collection of an infinite number of random variables indexed by time or space, such that any finite collection of these random variables \(\left\{y=f(\mathbf{x}(\theta))\mid\mathbf{x}(\theta)\in\mathbb{R}^{d},\theta\in \Omega\right\}\) forms a multivariate Gaussian distribution. Hence, a GP can be completely defined by a joint Gaussian probability distribution over a set of functions [2]. A single function \(f\) drawn from the set of admissible functions is known as a realization of the GP and, in our case, represents one possible model for the EOS. Our task in GPR is to identify the appropriate joint Gaussian probability distribution that best represents a set of available data.
Consider that, for a given set of \(N\) observations of the input \(\mathbf{x}\), i.e. \(\mathbf{X}=\left\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots\mathbf{x}^{(N)}\right\},\quad\mathbf{x}^{(i)}\in\mathbb{R}^{d}\), we have the respective output vector \(\mathbf{y}=\left[y^{(1)},y^{(2)},\ldots y^{(N)}\right]^{\top},\ y^{(i)}\in\mathbb{R}\). We aim to use a GP to approximate the underlying function \(Y(\cdot,\cdot):\mathbb{R}^{d}\times\Omega\rightarrow\mathbb{R}\). In a Bayesian framework, we start by assuming a GP prior for \(Y(\mathbf{x})\) as,
\[Y(\mathbf{x})\sim GP\left[\mu(\mathbf{x}),K\left(\mathbf{x},\mathbf{x}^{ \prime}\right)\right] \tag{14}\]
where \(\mu(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}\) and \(K(\cdot,\cdot):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) are the mean function and covariance function, respectively, defined as,
\[\mu(\mathbf{x})=\mathbb{E}[Y(\mathbf{x})] \tag{15}\] \[K\left(\mathbf{x},\mathbf{x}^{\prime}\right)=\mathbb{E}[Y( \mathbf{x})-\mu(\mathbf{x})]\left[Y\left(\mathbf{x}^{\prime}\right)-\mu\left( \mathbf{x}^{\prime}\right)\right]\]
The covariance function, selected as a positive definite kernel, defines the degree of linear dependence between the output values computed at input points \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). Typically, the closer two points are in the input space (by some measure, e.g. Euclidean distance), the more strongly correlated they are in the output space. A variety of kernel functions are available in the literature [2]. Throughout this work, we will use the square exponential covariance kernel with noise, given by
\[K\left(\mathbf{x},\mathbf{x}^{\prime}\right)=\sigma^{2}\exp\left(\frac{-\left\| \mathbf{x}-\mathbf{x}^{\prime}\right\|_{2}^{2}}{2l^{2}}\right)+\sigma_{n}^{2} \delta_{\mathbf{x},\mathbf{x}^{\prime}} \tag{16}\]
where \(l\), \(\sigma^{2}\), \(\sigma_{n}^{2}\), and \(\delta_{x,x^{\prime}}\) are length-scale (correlation length, input scale), signal variance (output scale), Gaussian noise variance, and Kronecker delta function, respectively. Generally, \(\theta=(\sigma,l,\sigma_{n})\) denotes the set of hyper-parameters that are estimated from the training data.
Next, define the matrix \(\mathbf{K}=\left[K\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)}\right)\right]_{ij}\), the mean vector \(\boldsymbol{\mu}=\left[\mu\left(\mathbf{x}^{(1)}\right),\mu\left(\mathbf{x}^ {(2)}\right),\ldots,\mu\left(\mathbf{x}^{(N)}\right)\right]^{\text{T}}\), and the kernel entry \(k\left(\mathbf{x}^{\prime}\right)=\left[K\left(\mathbf{x}^{(i)},\mathbf{x}^{ \prime}\right)\right]_{i}-\sigma_{n}^{2}\delta_{\mathbf{x}^{(i)},\mathbf{x}^{ \prime}}\).
The posterior predictive distribution of the output \(y^{*}\) for a new test input \(\mathbf{x}^{*}\) conditioned on the training data set \(\left(\mathbf{X},\mathbf{y}\right)\) is given by,
\[Y\left(\mathbf{x}^{*}\right)\mid\mathbf{y},\mathbf{X}\sim\mathcal{N}\left[m \left(\mathbf{x}^{*}\right),s^{2}\left(\mathbf{x}^{*}\right)\right], \tag{17}\]
where
\[m\left(\mathbf{x}^{*}\right) =\mu\left(\mathbf{x}^{*}\right)+k\left(\mathbf{x}^{*}\right)^{ \mathrm{T}}\mathbf{K}^{-1}(\mathbf{y}-\mathbf{\mu}), \tag{18}\] \[s^{2}\left(\mathbf{x}^{*}\right) =K\left(\mathbf{x}^{*},\mathbf{x}^{*}\right)-k\left(\mathbf{x}^{* }\right)^{\mathrm{T}}\mathbf{K}^{-1}k\left(\mathbf{x}^{*}\right). \tag{19}\]
The mean \(m\left(\mathbf{x}^{*}\right)\) and variance \(s^{2}\left(\mathbf{x}^{*}\right)\) are determined by estimating the hyper-parameters \(\mathbf{\theta}\). One popular approach to determine the optimal \(\mathbf{\theta}\) is to minimize the negative marginal log-likelihood [2], given by
\[-\log\left[p(\mathbf{y}\mid\mathbf{X},\mathbf{\theta})\right]=\frac{1}{2}\left[(\mathbf{y}- \mathbf{\mu})^{\mathrm{T}}\mathbf{K}^{-1}(\mathbf{y}-\mathbf{\mu})+\log\left|\mathbf{K} \right|+N\ \log(2\pi)\right] \tag{20}\]
This is performed by using a numerical optimizer. For more details on standard GPR and its implementation, we refer the reader to the textbook by Williams and Rasmussen [2].
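A minimal, self-contained version of this workflow (zero prior mean, the kernel of Eq. (16), hyper-parameters fit by minimizing Eq. (20)) is sketched below; a production implementation would add jitter and optimizer restarts for robustness.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import cho_factor, cho_solve

def sq_exp(X1, X2, sigma, l):
    """Squared-exponential kernel of Eq. (16), without the noise term."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / l**2)

def neg_log_marginal_likelihood(log_theta, X, y):
    sigma, l, sigma_n = np.exp(log_theta)  # optimize in log space for positivity
    K = sq_exp(X, X, sigma, l) + sigma_n**2 * np.eye(len(X))
    c, low = cho_factor(K)
    alpha = cho_solve((c, low), y)
    # Eq. (20): 0.5 y^T K^{-1} y + 0.5 log|K| + 0.5 N log(2 pi)
    return 0.5 * y @ alpha + np.log(np.diag(c)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

def fit_predict(X, y, X_star):
    res = minimize(neg_log_marginal_likelihood, np.zeros(3), args=(X, y))
    sigma, l, sigma_n = np.exp(res.x)
    K = sq_exp(X, X, sigma, l) + sigma_n**2 * np.eye(len(X))
    k_star = sq_exp(X, X_star, sigma, l)
    c, low = cho_factor(K)
    mean = k_star.T @ cho_solve((c, low), y)                                      # Eq. (18)
    var = sigma**2 - np.einsum("ij,ij->j", k_star, cho_solve((c, low), k_star))   # Eq. (19)
    return mean, var
```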
The standard GPR output is unconstrained, making it impractical for physics-based models such as the EOS. Recently, several techniques have been developed to incorporate physical constraints on the GPR output [21]. In our work, we employ two approaches to incorporate the thermodynamic consistency and stability constraints described in Section II.4. The first approach is based on the work by Jidling _et al._[22], who modify the kernel to incorporate known linear operator constraints. We use this approach to design a specialized kernel that encodes the desired thermodynamic consistency constraint. The second approach was recently proposed by Pensoneault, Yang, and Zhu [23] to incorporate inequality-type constraints by minimizing the negative marginal log-likelihood function (Eq. (20)) while requiring that the probability of violating the constraints is small. We use this approach to impose the thermodynamic stability constraints. The mathematical details of incorporating these approaches in our proposed framework are described next.
## III Mathematical formulation of the constrained GP EOS
In the following sections, we present a novel constrained GPR framework to build an uncertain EOS model constrained by the laws of thermodynamics. We first construct a GP EOS model constrained by thermodynamic consistency and stability constraints presented in Section II.4. We
then use the resulting model to derive a GP model for the shock Hugoniot with uncertainty. Finally, we present a unified GPR that can be jointly trained from first-principles simulations and experimental shock Hugoniot data.
### Thermodynamically constrained GP EOS model
Let \(\mathbf{X}=\left\{\left(V,T\right)^{\left(i\right)}\right\}_{i=1}^{N}\in \mathcal{X}\) be \(N\) input data points from the index set \(\mathcal{X}\) with the corresponding EOS output \(\mathbf{Y}=\left\{\left(P,E\right)^{\left(i\right)}\right\}_{i=1}^{N}\). We assume a GP prior for the Helmholtz free energy as
\[F\sim GP\left[\mu_{F}(\mathbf{X}),k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime} \right)\right] \tag{21}\]
Using Eq. (10), we can define a linear operator as,
\[\mathcal{L}_{\mathbf{X}}=\left(\begin{array}{c}-\frac{\partial}{\partial V} \\ 1-T\frac{\partial}{\partial T}\end{array}\right) \tag{22}\]
Since GPs are closed under linear operations, we can derive the joint GP priors for \(P\) and \(E\) as[22],
\[\left[\begin{array}{c}P\\ E\end{array}\right]\mid\mathbf{X}\sim GP\left[\mathcal{L}_{\mathbf{X}}\mu_{F}( \mathbf{X}),\mathcal{L}_{\mathbf{X}}k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime }\right)\mathcal{L}_{\mathbf{X}^{\prime}}^{T}\right] \tag{23}\]
which can be rewritten as,
\[\left[\begin{array}{c}P\\ E\end{array}\right]\mid\mathbf{X}\sim GP\left(\left[\begin{array}{c}-\frac{\partial\mu_{F}\left(\mathbf{X}\right)}{\partial V}\\ \mu_{F}-T\frac{\partial\mu_{F}\left(\mathbf{X}\right)}{\partial T}\end{array}\right],\left[\begin{array}{cc}\frac{\partial}{\partial V}\frac{\partial}{\partial V^{\prime}}k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime}\right)&-\frac{\partial}{\partial V}\left(1-T^{\prime}\frac{\partial}{\partial T^{\prime}}\right)k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime}\right)\\ -\frac{\partial}{\partial V^{\prime}}\left(1-T\frac{\partial}{\partial T}\right)k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime}\right)&\left(1-T\frac{\partial}{\partial T}\right)\left(1-T^{\prime}\frac{\partial}{\partial T^{\prime}}\right)k_{FF}\left(\mathbf{X},\mathbf{X}^{\prime}\right)\end{array}\right]\right) \tag{24}\]
and ensures that the thermodynamic consistency constraint is guaranteed. For notational simplicity, let us denote Eq. (24) as,
\[\left[\begin{array}{c}P\\ E\end{array}\right]\mid\mathbf{X}\sim GP\left(\left[\begin{array}{c}\mu_{P} \left(\mathbf{X}\right)\\ \mu_{E}\left(\mathbf{X}\right)\end{array}\right],\left[\begin{array}{cc}K_{ PP}\left(\mathbf{X},\mathbf{X}\right)&K_{PE}\left(\mathbf{X},\mathbf{X}\right)\\ K_{EP}\left(\mathbf{X},\mathbf{X}\right)&K_{EE}\left(\mathbf{X},\mathbf{X} \right)\end{array}\right]\right) \tag{25}\]
From this joint GP, the prediction \(P_{*},E_{*}\) at a new point \(\mathbf{X}_{*}\) can be calculated by conditioning as
\[\left[\begin{array}{c}P\\ E\\ P_{*}\\ E_{*}\end{array}\right]\mid\mathbf{X},\mathbf{X}_{*}\sim GP\left(\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}\right)\\ \mu_{E}\left(\mathbf{X}\right)\\ \mu_{P}\left(\mathbf{X}_{*}\right)\\ \mu_{E}\left(\mathbf{X}_{*}\right)\end{array}\right],\left[\begin{array}{cccc}K_{PP}\left(\mathbf{X},\mathbf{X}\right)&K_{PE}\left(\mathbf{X},\mathbf{X}\right)&K_{PP}\left(\mathbf{X},\mathbf{X}_{*}\right)&K_{PE}\left(\mathbf{X},\mathbf{X}_{*}\right)\\ K_{EP}\left(\mathbf{X},\mathbf{X}\right)&K_{EE}\left(\mathbf{X},\mathbf{X}\right)&K_{EP}\left(\mathbf{X},\mathbf{X}_{*}\right)&K_{EE}\left(\mathbf{X},\mathbf{X}_{*}\right)\\ K_{PP}\left(\mathbf{X}_{*},\mathbf{X}\right)&K_{PE}\left(\mathbf{X}_{*},\mathbf{X}\right)&K_{PP}\left(\mathbf{X}_{*},\mathbf{X}_{*}\right)&K_{PE}\left(\mathbf{X}_{*},\mathbf{X}_{*}\right)\\ K_{EP}\left(\mathbf{X}_{*},\mathbf{X}\right)&K_{EE}\left(\mathbf{X}_{*},\mathbf{X}\right)&K_{EP}\left(\mathbf{X}_{*},\mathbf{X}_{*}\right)&K_{EE}\left(\mathbf{X}_{*},\mathbf{X}_{*}\right)\end{array}\right]\right) \tag{26}\]
Again for notational simplicity, let us denote the block covariance matrix in Eq. (26) by
\[\left[\begin{array}{cc}\mathbf{K}_{11}&\mathbf{K}_{12}\\ \mathbf{K}_{21}&\mathbf{K}_{22}\end{array}\right]\]
Further conditioning on the training data, we obtain the following GP model,
\[\left[\begin{array}{c}P_{*}\\ E_{*}\end{array}\right]\mid\mathbf{X},\mathbf{X}_{*},P,E\sim GP\left(\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}_{*}\right)\\ \mu_{E}\left(\mathbf{X}_{*}\right)\end{array}\right]+\mathbf{K}_{21}\mathbf{K}_{11}^{-1}\left(\left[\begin{array}{c}P\\ E\end{array}\right]-\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}\right)\\ \mu_{E}\left(\mathbf{X}\right)\end{array}\right]\right),\ \mathbf{K}_{22}-\mathbf{K}_{21}\mathbf{K}_{11}^{-1}\mathbf{K}_{12}\right) \tag{27}\]
The negative log marginal likelihood of the joint GP is given by,
\[-\log p\left(P,E\mid\mathbf{X},\boldsymbol{\theta}\right)=\frac{ 1}{2}\left(\left[\begin{array}{c}P\\ E\end{array}\right]-\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}\right)\\ \mu_{E}\left(\mathbf{X}\right)\end{array}\right]\right)^{T}\mathbf{K}_{11}^{-1 }\left(\left[\begin{array}{c}P\\ E\end{array}\right]-\left[\begin{array}{c}\mu_{P}\\ \mu_{E}\end{array}\right]\right)+\\ \frac{1}{2}\log|\mathbf{K}_{11}|+\frac{N}{2}\log(2\pi) \tag{28}\]
Next, we enforce the thermodynamic stability constraints (Eqs. (12) and (13)) by limiting the functional space through constrained hyper-parameter optimization [23]. We obtain the hyper-parameters by minimizing the negative marginal log-likelihood function in Eq. (28) while requiring that the probability of violating the thermodynamics stability constraints is small. Formally, for \(0<\eta\ll 1\), we impose the following probabilistic constraints at virtual locations in the input domain \(\mathbf{X}_{v}\) as,
\[P\left[\left(\frac{\partial P\left(\mathbf{X}_{v}\right)}{ \partial V}\mid\mathbf{X}_{v},P,\mathbf{X}\right)>0\right]\leq\eta,\text{ \ \ \ \ for all }\mathbf{X}_{v}\in\mathcal{X} \tag{29}\] \[P\left[\left(\frac{\partial E\left(\mathbf{X}_{v}\right)}{ \partial T}\mid\mathbf{X}_{v},E,\mathbf{X}\right)<0\right]\leq\eta,\text{ \ \ \ \ for all }\mathbf{X}_{v}\in\mathcal{X} \tag{30}\]
Since \(\frac{\partial P\left(\mathbf{X}_{v}\right)}{\partial V}\mid\mathbf{X}_{v},P, \mathbf{X}\) and \(\frac{\partial E\left(\mathbf{X}_{v}\right)}{\partial T}\mid\mathbf{X}_{v},E, \mathbf{X}\) follow a Gaussian distribution, the constraints in Eq. (29) and (30), can be simplified as,
\[\mu_{\frac{\partial P}{\partial V}}-\Phi^{-1}(\eta)\text{ }\sigma_{\frac{ \partial P}{\partial V}}\leq 0 \tag{31}\]
and
\[\mu_{\frac{\partial E}{\partial T}}+\Phi^{-1}(\eta)\text{ }\sigma_{\frac{ \partial E}{\partial T}}\geq 0 \tag{32}\]
where \(\Phi^{-1}\) is the inverse standard normal cumulative distribution function. By minimizing the objective function Eq. (28) subject to constraints Eq. (31) and Eq. (32), we can obtain a set of hyper-parameters, \(\boldsymbol{\theta}\), that ensures the resulting GP EOS model (Eq. (27)) satisfies both the thermodynamic consistency and stability constraints.
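Schematically, the constrained fit can be posed with standard optimization tooling as below; `nll` implements Eq. (28), and the four callables return the posterior mean and standard deviation of the derivative GPs at the virtual points \(\mathbf{X}_{v}\) for a given hyper-parameter vector. All interfaces are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.stats import norm

def fit_constrained(nll, mu_dPdV, sigma_dPdV, mu_dEdT, sigma_dEdT,
                    theta0, X_v, eta=0.01):
    """Minimize the joint NLL of Eq. (28) subject to Eqs. (31)-(32)."""
    z = norm.ppf(eta)  # Phi^{-1}(eta); negative for small eta
    constraints = [
        # Eq. (31): mu_{dP/dV} - Phi^{-1}(eta) sigma_{dP/dV} <= 0 at all virtual points
        NonlinearConstraint(lambda th: mu_dPdV(th, X_v) - z * sigma_dPdV(th, X_v),
                            -np.inf, 0.0),
        # Eq. (32): mu_{dE/dT} + Phi^{-1}(eta) sigma_{dE/dT} >= 0 at all virtual points
        NonlinearConstraint(lambda th: mu_dEdT(th, X_v) + z * sigma_dEdT(th, X_v),
                            0.0, np.inf),
    ]
    return minimize(nll, theta0, method="trust-constr", constraints=constraints)
```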
### Hugoniot derivation from the GP EOS model
In this section, we first derive the Hugoniot function (\(H\)) as a GP from the constrained GP EOS model described in Section III.1. Then, we obtain the probabilistic set of so-called Hugoniot points satisfying \(H(V,T)=0\).
The Hugoniot equation can be expressed as,
\[H(V,T)=E(V,T)-E_{0}+\frac{1}{2}\left(V-V_{0}\right)\left(P(V,T)+P_{0}\right) \tag{33}\]
where \(E_{0}\), \(P_{0}\) and \(V_{0}\) are initial energy, pressure and volume, respectively.
We recognize that \(H=f\left(P,E\right)\), where \(f\) is a linear function of both \(P\) and \(E\). Applying the Taylor series expansion about its mean, we get
\[H=f\left(P,E\right)=f\left(\mu_{P},\mu_{E}\right)+\left.\frac{\partial f\left( P,E\right)}{\partial P}\right|_{\mu_{P},\mu_{E}}\left(P-\mu_{P}\right)+\left. \frac{\partial f\left(P,E\right)}{\partial E}\right|_{\mu_{P},\mu_{E}}\left(E -\mu_{E}\right) \tag{34}\]
Since both \(P\) and \(E\) are GPs, \(H\) (a linear function of \(P\) and \(E\)) is also a GP. Taking the expectation of Eq. (34) yields:
\[\mathbb{E}\left[H\right]=\mathbb{E}\left[f\left(P,E\right)\right]=f\left(\mu _{P},\mu_{E}\right)=\mu_{H} \tag{35}\]
The covariance of the \(H\) GP is obtained by:
\[\begin{split}\mathbb{E}\left[(H-\mathbb{E}[H])(H-\mathbb{E}[H])\right]&=\left(\left.\frac{\partial f\left(P,E\right)}{\partial P}\right|_{\mu_{P},\mu_{E}}\right)^{2}\mathbb{E}\left[\left(P-\mu_{P}\right)\left(P-\mu_{P}\right)\right]+\left(\left.\frac{\partial f\left(P,E\right)}{\partial E}\right|_{\mu_{P},\mu_{E}}\right)^{2}\mathbb{E}\left[\left(E-\mu_{E}\right)\left(E-\mu_{E}\right)\right]\\ &\quad+2\left(\left.\frac{\partial f\left(P,E\right)}{\partial E}\right|_{\mu_{P},\mu_{E}}\right)\left(\left.\frac{\partial f\left(P,E\right)}{\partial P}\right|_{\mu_{P},\mu_{E}}\right)\mathbb{E}\left[\left(E-\mu_{E}\right)\left(P-\mu_{P}\right)\right]\\ &=\frac{1}{4}\left(V-V_{0}\right)^{2}\operatorname{Cov}\left(P,P\right)+\operatorname{Cov}\left(E,E\right)+\left(V-V_{0}\right)\operatorname{Cov}\left(E,P\right)\\ K_{HH}\left(\mathbf{X},\mathbf{X}^{\prime}\right)&=\frac{1}{4}\left(V-V_{0}\right)^{2}K_{PP}\left(\mathbf{X},\mathbf{X}^{\prime}\right)+K_{EE}\left(\mathbf{X},\mathbf{X}^{\prime}\right)+\left(V-V_{0}\right)K_{PE}\left(\mathbf{X},\mathbf{X}^{\prime}\right)\end{split} \tag{36}\]
The cross-covariance of the \(H\) and \(P\) GPs is given by
\[\begin{split}\mathbb{E}\left[(H-\mathbb{E}[H])(P-\mathbb{E}[P])\right]&=\mathbb{E}\left[\left(\left.\frac{\partial f\left(P,E\right)}{\partial P}\right|_{\mu_{P},\mu_{E}}\left(P-\mu_{P}\right)+\left.\frac{\partial f\left(P,E\right)}{\partial E}\right|_{\mu_{P},\mu_{E}}\left(E-\mu_{E}\right)\right)\left(P-\mu_{P}\right)\right]\\ &=\left.\frac{\partial f\left(P,E\right)}{\partial P}\right|_{\mu_{P},\mu_{E}}\mathbb{E}\left[\left(P-\mu_{P}\right)^{2}\right]+\left.\frac{\partial f\left(P,E\right)}{\partial E}\right|_{\mu_{P},\mu_{E}}\mathbb{E}\left[\left(E-\mu_{E}\right)\left(P-\mu_{P}\right)\right]\\ &=\frac{1}{2}\left(V-V_{0}\right)\text{Cov}\left(P,P\right)+\text{Cov}\left(E,P\right)\\ K_{HP}\left(\mathbf{X},\mathbf{X}^{\prime}\right)&=\frac{1}{2}\left(V-V_{0}\right)K_{PP}\left(\mathbf{X},\mathbf{X}^{\prime}\right)+K_{EP}\left(\mathbf{X},\mathbf{X}^{\prime}\right)\end{split}\tag{37}\]
Similarly, we can obtain the cross-covariance between the \(H\) and \(E\) GPs as,
\[K_{HE}\left(\mathbf{X},\mathbf{X}^{\prime}\right)=K_{EE}\left(\mathbf{X},\mathbf{X}^{\prime}\right)+\frac{1}{2}\left(V-V_{0}\right)K_{PE}\left(\mathbf{X},\mathbf{X}^{\prime}\right) \tag{38}\]
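For illustration, Eqs. (36)-(38) can be assembled directly from the pressure/energy kernel blocks. The following is a minimal numpy sketch (our own, not a reference implementation), written in the bilinear form that reduces to the expressions above when both inputs share the same volume; `V` and `Vp` are the volume coordinates of the two input sets and `V0` the initial volume.

```python
import numpy as np

def hugoniot_kernel_blocks(K_PP, K_EE, K_PE, K_EP, V, Vp, V0):
    # dH/dP = (1/2)(V - V0) and dH/dE = 1, both read off Eq. (33)
    a = 0.5 * (V - V0)[:, None]    # coefficient at the first input set
    b = 0.5 * (Vp - V0)[None, :]   # coefficient at the second input set
    K_HH = a * b * K_PP + K_EE + a * K_PE + b * K_EP   # Eq. (36)
    K_HP = a * K_PP + K_EP                             # Eq. (37)
    K_HE = a * K_PE + K_EE                             # Eq. (38)
    return K_HH, K_HP, K_HE
```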
From Eqs. (35) and (36), the \(H\) GP prior is given as
\[H\sim GP\left[\mu_{H}(\mathbf{X}),K_{HH}\left(\mathbf{X},\mathbf{X}^{\prime} \right)\right] \tag{39}\]
The predictive distribution of \(H\) at test points \(\mathbf{X}_{*}\) is given by
\[H_{*}\mid\mathbf{X},\mathbf{X}_{*},H\sim GP(\mu_{H}\left(\mathbf{ X}_{*}\right)+K_{HH}\left(\mathbf{X}_{*},\mathbf{X}\right)\left(K_{HH}\left( \mathbf{X},\mathbf{X}\right)\right)^{-1}\!\!\left(H-\mu_{H}\left(\mathbf{X} \right)\right),\\ K_{HH}\left(\mathbf{X}_{*},\mathbf{X}_{*}\right)-K_{HH}\left( \mathbf{X}_{*},\mathbf{X}\right)\left(K_{HH}\left(\mathbf{X},\mathbf{X}\right) \right)^{-1}\!\!K_{HH}\left(\mathbf{X},\mathbf{X}_{*}\right)), \tag{40}\]
Using Eq. (40), we can define a subset \(\mathcal{X}_{H}\subset\mathcal{X}\) such that \(\forall\ \mathbf{X}_{H}\in\mathcal{X}_{H}\), \(H(\mathbf{X}_{H})=0\) lies within the \(100(1-\alpha)\%\) confidence interval of the GP for \(H\). We achieve this by defining the standardized GP
\[\widetilde{H}(\mathbf{X})=\frac{H(\mathbf{X})-\mu_{H}(\mathbf{X})}{\sigma_{H} (\mathbf{X})} \tag{41}\]
where \(\mathbf{X}_{H}\) satisfies \(P(|\widetilde{H}(\mathbf{X}_{H})|\leq z_{\alpha/2})=1-\alpha\). In other words, the points \(\mathbf{X}_{H}\) satisfy the following condition
\[|\mu_{H}(\mathbf{X}_{H})|\leq z_{\alpha/2}\times\sigma_{H}(\mathbf{X}_{H}). \tag{42}\]
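A minimal sketch of this selection step, assuming `mu_H` and `sigma_H` are the posterior mean and standard deviation of \(H\) from Eq. (40) evaluated at candidate points `X` (an array of \((V,T)\) pairs):

```python
import numpy as np
from scipy.stats import norm

def hugoniot_points(X, mu_H, sigma_H, alpha=0.05):
    z = norm.ppf(1.0 - alpha / 2.0)        # z_{alpha/2}; 1.96 for 95%
    keep = np.abs(mu_H) <= z * sigma_H     # Eq. (42)
    return X[keep]
```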
Given an arbitrary point \(\mathbf{X}_{H}\in\mathcal{X}_{H}\), we can therefore establish the predictive distributions for pressure and internal energy at this point, \(P_{H}\) and \(E_{H}\), using Eq. (27), thus providing an estimate of the uncertain Hugoniot curve satisfying \(H(\mathbf{X}_{H})=0\).
### Unified GP EOS model learned from multiple data sources
In this section, we propose a unified framework to train the proposed constrained GP EOS model using heterogeneous data sources. In particular, we show that the EOS model can be
learned from a combination of first-principles simulation data and experimental shock Hugoniot observations. Let us define the model outputs as \(P\), \(E\), and \(H\), for respective inputs, \(\mathbf{X}_{P}\), \(\mathbf{X}_{E}\), and \(\mathbf{X}_{H}\). The joint GP of \(P,E,H\) is then defined as follows
\[\left[\begin{array}{c}P\\ E\\ H\end{array}\right]\mid\mathbf{X}_{P},\mathbf{X}_{E},\mathbf{X}_{H}\sim GP \left(\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}_{P}\right)\\ \mu_{E}\left(\mathbf{X}_{E}\right)\\ \mu_{H}\left(\mathbf{X}_{H}\right)\end{array}\right],\left[\begin{array}{ ccc}K_{PP}\left(\mathbf{X}_{P},\mathbf{X}_{P}\right)&K_{PE}\left(\mathbf{X}_{P}, \mathbf{X}_{E}\right)&K_{PH}\left(\mathbf{X}_{P},\mathbf{X}_{H}\right)\\ K_{EP}\left(\mathbf{X}_{E},\mathbf{X}_{P}\right)&K_{EE}\left(\mathbf{X}_{E}, \mathbf{X}_{E}\right)&K_{EH}\left(\mathbf{X}_{E},\mathbf{X}_{H}\right)\\ K_{HP}\left(\mathbf{X}_{H},\mathbf{X}_{P}\right)&K_{HE}\left(\mathbf{X}_{H}, \mathbf{X}_{E}\right)&K_{HH}\left(\mathbf{X}_{H},\mathbf{X}_{H}\right)\end{array} \right]\right) \tag{43}\]
We can train the joint GP using any combination of available data and impose the thermodynamic constraints similar to the steps described in Section III.1 to obtain the predictive distribution of the joint GP of \(P\), \(E\), and \(H\) at any test point \(\mathbf{X}_{*}\). Further, we can condition on \(H\) by first partitioning the block covariance of the joint GP in Eq. (43) as follows
\[\left[\begin{array}{c}P\\ E\\ H\end{array}\right]\mid\mathbf{X}_{P},\mathbf{X}_{E},\mathbf{X}_{H}\sim GP \left(\left[\begin{array}{c}\mu_{P}\left(\mathbf{X}_{P}\right)\\ \mu_{E}\left(\mathbf{X}_{E}\right)\\ \mu_{H}\left(\mathbf{X}_{H}\right)\end{array}\right],\left[\begin{array}{cc|c}K_{PP}\left(\mathbf{X}_{P},\mathbf{X}_{P}\right)&K_{PE}\left(\mathbf{X}_{P},\mathbf{X}_{E}\right)&K_{PH}\left(\mathbf{X}_{P},\mathbf{X}_{H}\right)\\ K_{EP}\left(\mathbf{X}_{E},\mathbf{X}_{P}\right)&K_{EE}\left(\mathbf{X}_{E},\mathbf{X}_{E}\right)&K_{EH}\left(\mathbf{X}_{E},\mathbf{X}_{H}\right)\\ \hline K_{HP}\left(\mathbf{X}_{H},\mathbf{X}_{P}\right)&K_{HE}\left(\mathbf{X}_{H},\mathbf{X}_{E}\right)&K_{HH}\left(\mathbf{X}_{H},\mathbf{X}_{H}\right)\end{array}\right]\right). \tag{44}\]
We denote this block covariance by
\[\left[\begin{array}{c}\mathbf{K}_{EP}&\mathbf{K}_{*H}\\ \mathbf{K}_{H*}&\mathbf{K}_{HH}\end{array}\right].\]
Conditioning on \(H\) gives the resulting conditional distribution
\[\left[\begin{array}{c}P\\ E\end{array}\right]\mid\mathbf{X}_{P},\mathbf{X}_{E},\mathbf{X}_{H},H( \mathbf{X}_{H})=0\sim GP\bigg{(}\mathbf{m}(\mathbf{X})=\left[\begin{array}{ c}\mu_{P}\left(\mathbf{X}_{P}\right)\\ \mu_{E}\left(\mathbf{X}_{E}\right)\end{array}\right]+\mathbf{K}_{*H}\left( \mathbf{K}_{HH}\right)^{-1}\left(H(\mathbf{X}_{H})-\mu_{H}\left(\mathbf{X}_{H} \right)\right),\\ \Sigma_{\mathbf{XX}}=\mathbf{K}_{EP}-\mathbf{K}_{*H}\left(\mathbf{K}_{HH} \right)^{-1}\mathbf{K}_{H*}\bigg{)} \tag{45}\]
Next, conditioning on a set of training points \(\mathbf{X}_{H}\) constrained by \(H(\mathbf{X}_{H})=0\) (e.g. from experimental data collected along the Hugoniot curve) yields
\[\left[\begin{array}{c}P\\ E\\ P_{H}\\ E_{H}\end{array}\right]\mid\mathbf{X}_{P},\mathbf{X}_{E},\mathbf{X}_{H},H( \mathbf{X}_{H})=0\sim GP\left(\left[\begin{array}{c}\mathbf{m}\left( \mathbf{X}\right)\\ \mathbf{m}\left(\mathbf{X}_{H}\right)\end{array}\right],\left[\begin{array}{ cc}\Sigma_{\mathbf{XX}}&\Sigma_{\mathbf{XX}_{H}}\\ \Sigma_{\mathbf{X}_{H}\mathbf{X}}&\Sigma_{\mathbf{X}_{H}\mathbf{X}_{H}}\end{array} \right]\right) \tag{46}\]
Eq. (46) yields the predictive distribution of \(P\) and \(E\) at \(\mathbf{X}_{H}\) given by
\[\left[\begin{array}{c}P_{H}\\ E_{H}\end{array}\right]\mid\mathbf{X}_{P},\mathbf{X}_{E},\mathbf{X}_{H},P,E,H( \mathbf{X}_{H})=0\sim GP\bigg{(}\mathbf{m}(\mathbf{X}_{H})+\mathbf{\Sigma}_{ \mathbf{X}_{H}\mathbf{X}}\left(\mathbf{\Sigma}_{\mathbf{X}\mathbf{X}}\right)^{ -1}\Big{(}\left[\begin{array}{c}P\\ E\end{array}\right]-\left[\begin{array}{c}\mu_{P}(\mathbf{X}_{P})\\ \mu_{E}(\mathbf{X}_{E})\end{array}\right]\Big{)},\\ \mathbf{\Sigma}_{\mathbf{X}_{H}\mathbf{X}_{H}}-\mathbf{\Sigma}_{\mathbf{X}_{H} \mathbf{X}}\left(\mathbf{\Sigma}_{\mathbf{X}\mathbf{X}}\right)^{-1}\mathbf{ \Sigma}_{\mathbf{X}\mathbf{X}_{H}}\bigg{)}. \tag{47}\]
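The conditioning step in Eq. (45) is standard GP algebra; the following is a minimal numpy/scipy sketch under the block notation above, where `mu_PE` stacks the prior means of \(P\) and \(E\), and a small jitter is added for numerical stability (an implementation detail, not part of the derivation).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def condition_on_hugoniot(mu_PE, mu_H, K_PE_PE, K_PE_H, K_HH, jitter=1e-8):
    # Factor K_HH once and reuse it for both solves.
    L = cho_factor(K_HH + jitter * np.eye(K_HH.shape[0]))
    # Eq. (45) mean: the observed value is H(X_H) = 0, hence the -mu_H residual.
    m = mu_PE + K_PE_H @ cho_solve(L, -mu_H)
    # Eq. (45) covariance.
    S = K_PE_PE - K_PE_H @ cho_solve(L, K_PE_H.T)
    return m, S
```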
Using these equations, it is now possible to learn the thermodynamically constrained GP EOS model using a combination of experimentally observed points along the shock Hugoniot and first-principles calculations that relate \(P,V,T,E\) as we demonstrate in the next section.
## IV Results and Discussion
In this section, we apply the proposed constrained GPR framework described in Section III to learn the EOS for the diamond phase of Carbon. We use a sample of 20 data points obtained from Density Functional Theory Molecular Dynamics (DFT-MD) simulations from Benedict et al. [18] to train the GP EOS model. We first build an uncertain EOS model that satisfies both the thermodynamic consistency and stability constraints. We then derive the Hugoniot function GP from the EOS model and obtain the resulting shock Hugoniots with uncertainty. Finally, we train the unified GP EOS model using the simulation and a limited number of experimental Hugoniot data from laser-driven shock compression experiments [24].
### Constrained GP EOS model for Diamond
The EOS model is trained by assuming a squared exponential covariance model (Eq. (16)) for the Helmholtz free energy and computing the mean and covariance functions of the joint GP for pressure and energy \((P,E)\) from Eq. (24), with input states of volume and temperature \((V,T)\). This procedure ensures that the thermodynamic consistency constraint is satisfied. We then perform constrained hyper-parameter optimization using the COBYLA optimizer from the SciPy Python package [25] to incorporate the probabilistic thermodynamic stability constraints from Eqs. (31) and (32) with \(\eta=0.025\).
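For concreteness, this optimization step can be sketched as follows; `neg_log_marginal_likelihood` (the objective of Eq. (28)) and `make_VT_grid` are hypothetical placeholders, the constraint functions are those sketched in Section III.1, and only `scipy.optimize.minimize` with `method="COBYLA"` is real API.

```python
import numpy as np
from scipy.optimize import minimize

theta0 = np.log([1.0, 1.0, 1.0])   # e.g. log signal variance and length-scales
grid = make_VT_grid(50, 50)        # hypothetical (V, T) evaluation grid

res = minimize(
    neg_log_marginal_likelihood,   # Eq. (28); hypothetical placeholder
    theta0,
    method="COBYLA",
    constraints=[
        {"type": "ineq", "fun": lambda th: pressure_stability(th, grid)},
        {"type": "ineq", "fun": lambda th: energy_stability(th, grid)},
    ],
)
theta_hat = res.x                  # constrained hyper-parameters
```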
Figures 2(a) and (b) show the marginalized pressure GP (P-GP) and energy GP (E-GP) as a function of state variables \((V,T)\), respectively, trained from 20 DFT-MD simulations where uncertainties are shown with colors denoting standard deviation. We clearly see that the uncertainties
are small near the training data points and larger in regions with no data. The overall uncertainty is small, with the coefficient of variation (COV) under 7% for the P-GP and under 1.3% for E-GP, suggesting high confidence in the predicted state. We similarly trained a GP model without imposing constraints as shown in Figure 2(c) and (d), which show that the unconstrained EOS model has higher uncertainty in general.
These results are supported by plotting the thermodynamic stability conditions for 2500 points from a \(50\times 50\) grid of \((V,T)\) points in Figures 3(a) and (b) for the constrained P-GP and E-GP,
Figure 2: Joint GP EOS model with uncertainty with and without thermodynamic constraints. (a) Constrained pressure marginal GP model. (b) Constrained energy marginal GP model. (c) Unconstrained pressure marginal GP model. (d) Unconstrained energy marginal GP model. Colors denote the standard deviation of the model at each point.
respectively. We see that the thermodynamic stability constraints are satisfied across the domain in the proposed constrained EOS model. Meanwhile, Figures 3(c) and (d) show the stability constraints for the unconstrained P-GP and E-GP EOS models, respectively, where we see that the energy stability constraint (Eq. (32)) is violated at many points (819 points in total) in the domain. This results in a negative specific heat at these points, which would likely cause a hydrodynamics simulation to crash. The pressure stability constraint, on the other hand, is not violated even for the unconstrained GP EOS model.
Figure 3: Plots of the thermodynamic stability conditions at each point on a \(50\times 50\) grid of \((V,T)\). Regions of constraint violation are shaded. (a) Probabilistic stability constraint for constrained Pressure GP EOS model showing that the constraints are not violated. (b) Probabilistic stability constraint for constrained Energy GP EOS model showing that the constraints are not violated. (c) Probabilistic stability constraint for unconstrained Pressure GP EOS model showing that this specific constraint is not violated even when unconstrained. (d) Probabilistic stability constraint for unconstrained Energy GP EOS model showing that the constraint is violated at many points.
However, we note that the unconstrained GP EOS does not satisfy the thermodynamic consistency constraint because its covariance model is not physics-informed.
Figure 3 further supports the observations made in Figure 2. The reduction in uncertainty in the constrained E-GP (Figure 2(b)) compared to unconstrained E-GP (Figure 2(d)) is due to the introduction of the energy stability constraint. Moreover, since the proposed GP has a physics-informed covariance model, which yields valid cross-covariances between the pressure and energy GP, improvement in the prediction of E-GP also improves predictions of P-GP even though the pressure stability constraints are not violated in the unconstrained P-GP. Hence, the resulting physics-informed data-driven EOS model provides an excellent fit to the DFT-MD data and a realistic estimate of total prediction uncertainty.
However, we note that the simulation data used for training do not include estimates of uncertainty. If such data uncertainty were provided, they could be easily incorporated into the proposed framework through the training described in Section III.
### GP Hugoniot Curve for Diamond
The GP Hugoniot function, \(H(V,T)\) is derived from the physics-constrained GP EOS model following the formulation presented in Section III.2 and illustrated in Figure 4. The input states \((V,T)\) that satisfy Eq. (42) for which \(H=0\) lies in the 95% confidence interval of H-GP are also shown in Figure 4(a). These Hugoniot points are plotted separately in Figure 4(b), and a deterministic curve is fit to establish a mapping \(V_{H}\to T_{H}\). For these Hugoniot points \((V_{H},T_{H})\), the predictive distribution of pressure and energy is computed using Eq. (27). The resulting Hugoniot points are shown in Figure 5(a) where uncertainties in pressure are shown with color representing the standard deviation. The corresponding projections of the Hugoniot showing the P-V and P-T curves are shown in Figure 5(b) and (c), respectively. Thus, we have demonstrated a way to derive the Hugoniot with uncertainty directly from the physics-constrained GP EOS.
### Unified GP EOS for Diamond Trained from Simulations and Experiments
In this section, we present a unified physics-informed GP EOS for the diamond phase of carbon that is trained from multiple data sources and provides accurate EOS predictions with uncertainty - as described in Section III.3. The given DFT-MD simulation data (same 20 data points used
above) is augmented by Hugoniot experimental data from dynamic shock wave experiments that relate pressure and volume [24] (3 points). These data are from a pressure regime where diamond has significant strength, meaning that we expect to see a discrepancy between simulation and experiment, which provides an interesting test for our unified modeling approach. Additionally, temperature is not usually measured in these high-pressure experiments (except in those with static compression), which provides another difficulty to overcome with our approach. The GP EOS model trained in the previous sections can be used to estimate temperature for a given volume using the mapping (\(V_{H}\to T_{H}\)) shown in Figure 4(b). We apply a deterministic mapping here, but theoretically we could also construct a GP relating temperature and volume, which would formally provide a probability distribution for temperature. However, such an approach requires a more challenging deep GP framework (since the input state \(T\) is a GP), which is beyond the scope of the present work. With these temperature estimates, we now have training sets \(\{\mathbf{X}_{P},P\}\) (23 data points) and \(\{\mathbf{X}_{E},E\}\) (20 data points), which we use to train the unified GP. The results are shown in Figure 6. As expected, the uncertainty reduces significantly near the experimental observations in Figure 6(a). Comparing Figure 6(a) with Figure 2(a), we observe an increase in uncertainty, which captures the inconsistencies between the simulation and experimental data. Since the P-GP and E-GP are correlated, a slight increase in uncertainty is also observed in Figure 6(b) compared to 2(b). Finally, as shown in Figure 7, the unified GP EOS still satisfies the
Figure 4: (a) GP Hugoniot function, \(H(V,T)\) showing points where \(H=0\) lies within the 95% confidence interval of the GP in black. (b) Hugoniot points where \(H=0\) lies within the 95% confidence interval of the GP shown in the \((V,T)\) plane.
thermodynamic stability constraints as necessary. We have therefore developed a comprehensive unified framework that has been successfully trained using both first-principles simulation data and experimental shock compression experiments.
Figure 5: Plots of the uncertain Hugoniot curve. The width of the curve represents uncertainty in the position of the Hugoniot (i.e. satisfying \(H=0\) with 95% confidence) along the EOS and the corresponding uncertainty in pressure is shown with colors denoting the standard deviation. (a) Complete Hugoniot in \((V,T,P)\) space showing all points along the EOS in which \(H=0\) lies within the 95% confidence intervals of the Hugoniot GP. (b) Uncertain Hugoniot curve as a function of Volume. (c) Uncertain Hugoniot curve as a function of Temperature.
Figure 6: Unified constrained GP EOS model trained on 20 DFT-MD simulation data and 3 shock compression experimental data relating pressure and volume. (a) Pressure marginal GP EOS model (b) Energy marginal GP model
Figure 7: Plots of the thermodynamic stability conditions at each point on a \(50\times 50\) grid of \((V,T)\) for the unified constrained GP EOS trained on simulation and experimental data. Regions of constraint violation are shaded. (a) Probabilistic stability constraint for unified constrained Pressure GP EOS model showing that the constraints are satisfied. (b) Probabilistic stability constraint for unified constrained Energy GP EOS model showing that the constraints are satisfied.
## V Conclusion
In this work, we have developed a novel data-driven framework to construct thermodynamically constrained equation of state (EOS) models with uncertainty. The proposed framework is based on non-parametric constrained Gaussian process regression (GPR), which inherently captures the model and data uncertainties while satisfying the essential thermodynamic stability and consistency constraints. Violation of these constraints results in a non-physical EOS that will cause problems in downstream applications such as hydrodynamics simulations. The key benefit of using GPR to build the EOS model is that it can be trained on relatively small data sets compared to other machine learning methods, such as neural networks, and that it automatically estimates the prediction uncertainty. The resulting EOS model also yields a GP for the shock Hugoniot with uncertainty, which has been derived herein. Further, we proposed a unified framework such that the GP can leverage both simulation and experimental data and provide pointwise EOS predictions with uncertainty. The resulting EOS can therefore be directly incorporated into hydrocode simulations for uncertainty quantification studies.
We have specifically demonstrated the training of this physics-constrained GP EOS for the diamond phase of Carbon from first-principles DFT-MD simulations. We show that the model satisfies the thermodynamic constraints, which results in a reduction in uncertainty at certain points as well. In short, we show that considering thermodynamic constraints improves confidence in EOS predictions and supplements limited data. We then derive the Hugoniot for diamond from the GP EOS and demonstrate that the trained model can be augmented with experimental shock Hugoniot data to improve the EOS - thus demonstrating our unified framework for EOS training.
The proposed framework can be similarly applied to different material phases. However, an extension of the proposed framework to a more generalized multiphase EOS model that captures phase transitions is the subject of future work. Finally, we anticipate that the proposed framework opens the door to a wide range of improvements in EOS modeling and the associated downstream applications. For example, in future studies the prediction uncertainty can be used to inform the choice of points for new simulations/experiments through, e.g., Bayesian optimization, resulting in smaller data set requirements and thus accelerating the development of new EOSs. Moreover, the proposed framework can be integrated with physics-based parametric models such that the physically constrained GP serves as a correction to the parametric model and potentially facilitates multi-fidelity modeling. Finally, these physics-informed GP EOS models will be integrated into
hydrocode simulations of shock experiments to enable uncertainty quantification studies.
## Data Availability Statement
Code developed in this work and presented results will be made publicly available via GitHub upon publication.
## Acknowledgments
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was supported by Defense Threat Reduction Agency, Award HDTRA12020001. LLNL-JRNL-850088
|
2307.00651 | More Synergy, Less Redundancy: Exploiting Joint Mutual Information for
Self-Supervised Learning | Self-supervised learning (SSL) is now a serious competitor for supervised
learning, even though it does not require data annotation. Several baselines
have attempted to make SSL models exploit information about data distribution,
and less dependent on the augmentation effect. However, there is no clear
consensus on whether maximizing or minimizing the mutual information between
representations of augmentation views practically contribute to improvement or
degradation in performance of SSL models. This paper is a fundamental work
where, we investigate role of mutual information in SSL, and reformulate the
problem of SSL in the context of a new perspective on mutual information. To
this end, we consider joint mutual information from the perspective of partial
information decomposition (PID) as a key step in \textbf{reliable multivariate
information measurement}. PID enables us to decompose joint mutual information
into three important components, namely, unique information, redundant
information and synergistic information. Our framework aims for minimizing the
redundant information between views and the desired target representation while
maximizing the synergistic information at the same time. Our experiments lead
to a re-calibration of two redundancy reduction baselines, and a proposal for a
new SSL training protocol. Extensive experimental results on multiple datasets
and two downstream tasks show the effectiveness of this framework. | Salman Mohamadi, Gianfranco Doretto, Donald A. Adjeroh | 2023-07-02T20:02:58Z | http://arxiv.org/abs/2307.00651v1 | # More Synergy, Less Redundancy: Exploiting Joint Mutual Information for Self-Supervised Learning
###### Abstract
Self-supervised learning (SSL) is now a serious competitor for supervised learning, even though it does not require data annotation. Several baselines have attempted to make SSL models exploit information about data distribution, and less dependent on the augmentation effect. However, there is no clear consensus on whether maximizing or minimizing the mutual information between representations of augmentation views practically contribute to improvement or degradation in performance of SSL models. This paper is a fundamental work where, we investigate role of mutual information in SSL, and reformulate the problem of SSL in the context of a new perspective on mutual information. To this end, we consider joint mutual information from the perspective of partial information decomposition (PID) as a key step in **reliable multivariate information measurement**. PID enables us to decompose joint mutual information into three important components, namely, unique information, redundant information and synergistic information. Our framework aims for minimizing the redundant information between views and the desired target representation while maximizing the synergistic information at the same time. Our experiments lead to a re-calibration of two redundancy reduction baselines, and a proposal for a new SSL training protocol. Extensive experimental results on multiple datasets and two downstream tasks show the effectiveness of this framework.
Salman Mohamadi, Gianfranco Doretto, Donald A. Adjeroh Lane Department of Computer Science and Electrical Engineering, West Virginia University, USA
## 1 Introduction
Self-supervised learning (SSL) is among the most successful learning principles that do not require huge labeled datasets [1]. While deep learning has shown tremendous success in many domains and applications, including computer vision [2], biometrics [3], and genomics [4], data-efficiency has been the focus of only a few problem domains, such as deep active learning [5, 6] and SSL [7]. Essentially, SSL frameworks consist of two key elements, namely, a loss function and a pretext task [8]. Basically, the pretext task is a proxy task which is to be solved using a supervisory signal from the unlabeled data, guided by an objective (loss) function [8]. The loss function, on the other hand, generally guides learning the representation of a given sample by comparing two or multiple augmented views of the same sample with each other or with views of other samples. In fact, early baselines, known as contrastive baselines, were developed around the idea of contrasting augmented views of a sample with each other (positive pairs) and also with the views from other samples (negative pairs) [9, 10, 11, 7, 12]. This type of baseline, however, suffers from the problem of potential representation collapse, as well as the need for large negative batches for effective representation. The next generation of baselines emerged as non-contrastive or negative-pair-free baselines [13, 14], essentially eliminating the need to contrast against negative views (negative pairs), with almost no risk of representation collapse. There is also a class of baselines known as clustering baselines, such as [15], primarily based on clustering views of samples in the latent space. The two most recent baselines are based on redundancy reduction in the representation of augmented views of the samples [16, 17]. This class of approaches mainly suggests that whitening the latent/embedding space of a pair of networks trained on augmented views of samples allows for reducing redundant information in the representation of the sample [5]. Later theoretical work on whitening baselines showed that the prime reason for their success is eliminating another type of collapse, dimensional collapse [18, 19].
In this work, we assess how this whitening process unwittingly eliminates synergistic information along with redundant information. This relates to a larger controversy on how mutual information relates to learning the target representation. Hence, in this paper, we start by investigating the long-standing ambiguity about the role of mutual information in SSL. This eventually leads us to reconsider the problem of mutual information between two variables (two views of a sample) by reformulating it as joint mutual information between three variables (two views and the target representation). To elaborate on the controversy, the general idea is to maximize the mutual information between the encoder representations of two augmented views for better representation; however, some work [20, 21] suggested that more mutual information does not necessarily improve the representation. A recent work based on the Info-Min principle suggests that, in fact, less mutual information between augmented views along with more task-associated information would improve the representation under a certain augmentation setting [22]. Another very recent work acknowledges the questionable role of mutual information, and suggests that decomposing the estimation of mutual information by adding an extra term representing the condition on the image with some blocked patches
would reinforce the role of mutual information. However, this work is different from ours, as they decompose the estimation of two-variable mutual information, whereas we focus on three-variable joint mutual information decomposition [23]. In fact, we seek the solution in the theory of partial information decomposition (PID). Eventually, this leads us to decompose the joint mutual information into its integral components, i.e., the unique, redundant, and synergistic components, as first introduced by [24]. In the following, we first state the problem and discuss the decomposition of joint mutual information, then re-define SSL in this new context. We elaborate on the SSL baselines that rely on redundancy reduction, propose a new training protocol for such SSL models, and then empirically evaluate the new protocol.
## 2 Methods
### Problem Statement
From an information theoretic perspective, the general, though controversial, idea is that SSL frameworks tend to maximize the mutual information between the encoder representations \(f(.)\) of two augmented views \(x_{1}\) and \(x_{2}\) of sample data \(x\), upper bounded by \(I(x_{1};x_{2})\), i.e., \(I(f(x_{1});f(x_{2}))\leq I(x_{1};x_{2})\)[25, 26]. This objective comes with challenges, including how to optimally generate \(x_{1}\) and \(x_{2}\)[22] for actionable mutual information, as well as how to reduce redundant information in the representation [17, 16]. To elaborate on the former challenge, Tian et al. [22] suggested a heterodox idea, indicating that the augmentation process for generating views should be modified in a way that enables reducing the mutual information between representations of positive views without affecting task-relevant information, i.e., mutual information is not necessarily task-relevant information. The latter challenge, on the other hand, suggests that whitening the latent/embedding space would reduce redundant information. However, we argue that rather than focusing on the mutual information between the representations of the augmented views, the joint mutual information between the views' representations and the target representation could provide a possible way to resolve this controversy. Hence, we take a totally different approach by formulating the core of SSL in terms of **joint mutual information between views and the target representation**. This leads us to the observation that, even though rigorous redundancy reduction through whitening, such as in [16], drops redundant information, it also risks reducing useful synergistic information. This motivates us to design experiments to assess this claim in Sec. 2.4, and then to offer a training protocol to alleviate this loss of the synergistic element in joint mutual information. Specifically, we find it necessary to revisit the SSL principle from the joint mutual information perspective. Therefore, we assess the two most recent baselines, Barlow-Twins [16] and W-MSE [17], which aim for redundancy reduction. Below we elaborate on joint mutual information (in contrast with mutual information), and then we investigate the two most recent whitening baselines, which are also the baselines most relevant to studying redundancy and synergy.
### Decomposing Joint Mutual Information
For _the first time_, we consider the general SSL problem setting from the viewpoint of PID, which has diverse practical applications, including in neuroscience, game theory, and statistical learning. Hence, we first present the PID introduced in [24] and then reformulate SSL accordingly. We note that PID is not the only approach to multivariate measurement of information. However, it has multiple advantages in our SSL context, including a non-negative decomposition of information as well as separate and simultaneous measurement of redundancy and synergy as distinct quantities [27]. This new interpretation of SSL is primarily intended to address the ambiguity in the role of mutual information in SSL.
The PID is an approach to a non-overlapping decomposition of the joint mutual information between two sets of variables, a set of two or more source variables carrying information about a target, as well as the single target variable. This decomposition has been challenging as the proposed solutions mostly consisted of negative information terms, until a breakthrough work by [24] which introduced a non-negative decomposition in terms of quantifying three components, the unique, redundant, and synergistic information.
In its simplest form, suppose we have two source variables \(S_{1}\) and \(S_{2}\) carrying joint mutual information \(I(T;S_{1},S_{2})\) about a target variable \(T\). Hence, each of the source variables has mutual information with the target variable. Decomposing the joint mutual information into non-negative components models the information interaction, assessing the contribution offered by each source variable and by combinations of sources. According to [24], as shown in Fig. 1, the joint mutual information between sources and target can be decomposed into three elements: unique, redundant, and synergistic information. Unique information is the part provided by each source separately, redundant information is the minimum information provided by each source (aka common mutual information), and synergistic information is the information provided only by the combination of \(S_{1}\) and \(S_{2}\) about \(T\), which neither source alone can provide [23].
\[\begin{array}{l}I(S_{1},S_{2}:T)=\text{Redundancy}(T;S_{1},S_{2})+\\ \text{Synergy}(T;S_{1},S_{2})+\text{Unique}(T;S_{1})+\text{Unique}(T;S_{2}) \end{array} \tag{1}\]
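As a small illustration of Eq. (1) (our own example, not from [24]), consider \(T=S_{1}\oplus S_{2}\) (XOR) with independent fair binary sources: each source alone carries zero information about \(T\), yet together they determine it, so the entire 1 bit of \(I(S_{1},S_{2};T)\) is synergistic. This can be verified numerically:

```python
# Both marginal mutual informations vanish while the joint mutual information
# is 1 bit; since the components in Eq. (1) are non-negative, redundancy and
# unique terms must be zero and everything is synergy.
import numpy as np

def mutual_info(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# p(s1, s2, t) for T = S1 XOR S2 over independent uniform bits
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25

print(mutual_info(p.sum(axis=1)))    # I(S1;T) = 0.0
print(mutual_info(p.sum(axis=0)))    # I(S2;T) = 0.0
print(mutual_info(p.reshape(4, 2)))  # I(S1,S2;T) = 1.0 -> all synergy
```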
Now consider the general setting of SSL, where at least two random augmented views of a sample are generated. The goal
Figure 1: Partial information decomposition in case of three variables.
is to contrast them in order to learn a representation that is maximally informative about the original sample distribution, while minimally informative about the augmentation. This contrast in essence creates an information interaction between the variables, which can be studied under the PID framework. Here, the two augmented views can be seen as source variables \(S_{1}\) and \(S_{2}\), whereas the original sample distribution is the target variable \(T\). In a more general sense, \(T\) could be considered the class distribution representing the invariant representation of the views of a given sample, i.e., the class the data sample belongs to. Since only redundant and synergistic information result from the interaction when contrasting views in SSL frameworks, unique information is not the subject of our study in this work. Unique information would be the subject of non-contrastive supervised learning on labeled data.
### Redundancy Reduction Baselines
Interestingly, the two most recent SSL baselines [17, 16] are redundancy reduction (aka hard/soft whitening) baselines. Both take advantage of whitening (Cholesky whitening) of the latent/embedding space via a cross-correlation matrix computed from augmented views of the same sample. Ermolov et al. [17] proposed a hard whitening method based on a recent version of the Cholesky decomposition [28, 29] for whitening the latent space vectors. At the same time, Zbontar et al. [16] gained more popularity by proposing a simpler process called soft whitening, which essentially forces the cross-correlation matrix of the embedding vectors of two networks toward the identity matrix. The latter approach, known as Barlow-Twins, suggests that this whitening intuitively results in redundancy reduction, embedded in the off-diagonal elements of the cross-correlation matrix. We use both approaches for our investigation, and provide further insight on synergy versus redundancy. However, due to lack of space, we only present the theoretical reformulation of Barlow-Twins under our framework, as it is more popular. The following is the loss function of Barlow-Twins:
\[\mathcal{L}_{BT} \triangleq\sum_{i}(1-C_{ii})^{2}+\lambda\sum_{i}\sum_{j\neq i}(C_ {ij})^{2} \tag{2}\] \[C_{ij} \triangleq\frac{\sum_{m}z_{m,i}^{A}z_{m,j}^{B}}{\sqrt{\sum_{m}(z_ {m,i}^{A})^{2}}\sqrt{\sum_{m}(z_{m,j}^{B})^{2}}} \tag{3}\]
where \(C_{ij}\) are the elements of the cross-correlation matrix \(C\) between the embedding vectors (with elements \(z\)) of the two networks (twins), as presented in Eq. 3, and \(\lambda\) is a weighting factor, originally set to \(5\times 10^{-3}\).
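A minimal PyTorch sketch of Eqs. (2)-(3) (following the equations above rather than any particular reference implementation), where `z_a` and `z_b` are the \((\text{batch},\text{dim})\) embeddings of the two augmented views:

```python
import torch

def barlow_twins_loss(z_a, z_b, lam=5e-3, eps=1e-9):
    # Eq. (3): cross-correlation of the two embedding matrices over the batch
    c = (z_a.T @ z_b) / (
        torch.sqrt((z_a ** 2).sum(dim=0))[:, None]
        * torch.sqrt((z_b ** 2).sum(dim=0))[None, :]
        + eps
    )
    on_diag = ((1.0 - torch.diagonal(c)) ** 2).sum()             # invariance term
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()   # redundancy term
    return on_diag + lam * off_diag                              # Eq. (2)
```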
### Assessing synergy and redundancy
To ground PID in the SSL setting, we find it necessary to design simple experiments around redundancy reduction and synergy in Barlow-Twins (BT). Note that since the augmented views of a sample generated under standard SSL augmentation share a lot of information in common (redundant information, commonly known as mutual information), BT attains desirable performance by implementing rigorous redundancy reduction. However, we argue that if the redundant information were smaller, the performance would drop sharply. To assess this, we apply heavy augmentation to samples (such as [30]) to generate views with significantly less redundant information, and then test BT performance on these. The top-1 accuracy for CIFAR10 and CIFAR100 (under the experimental settings of the next section) drops by 5.69% and 5.13%, respectively. Now, under the same heavy augmentation, we re-calibrate BT by setting \(\lambda=0.1\) and also forcing the off-diagonal elements toward a multivariate Gaussian \(\mathcal{N}(0,1)\) rather than zero, to allow them to better affect the learned representation; we gain accuracy of 0.91% and 0.81% compared with the former case. This implies that the off-diagonal elements not only carry redundant information, but also some other type of information. Otherwise, allowing more redundancy by using multivariate Gaussian off-diagonal elements would have degraded the performance. We argue that the off-diagonal elements do not only represent redundant information, **but also synergistic information**. This is why, when we reduce the redundant information by implementing heavy augmentation, BT's rigorous redundancy reduction constraint on the off-diagonal elements of the cross-correlation matrix degrades the performance by targeting synergistic information. Below, we propose a training protocol that works even better than forcing the off-diagonal elements to a multivariate Gaussian, and present our experimental results on the two baselines BT and W-MSE in Sec. 4 to show the generality of our framework.
## 3 Synergy-based training protocol
We aim to re-calibrate the redundancy reduction in BT [16] and W-MSE [17] toward protecting as much synergistic information as possible during the redundancy reduction process. In its current form, the BT approach does not seem to optimally reduce redundancy without significant loss in the synergistic component. Our approach consists of serial pre-training, with a first phase of dropping redundancy and a second phase of adding synergy. Hence, in this section, we define a new training protocol aiming to extract more synergistic information during the process of redundancy reduction, which will be implemented on both BT and W-MSE. We present this protocol, aimed at more synergy and less redundancy via the use of engineered off-diagonal elements, to show the effectiveness of joint mutual information decomposition in SSL. As the augmented views of a sample under standard augmentation share a lot of mutual information, we find it practically more efficient to update/replace the loss function of BT and W-MSE after initial pre-training with the original loss function, which solely aims at redundancy reduction. This is done under a new training protocol with two phases of pre-training in two different settings: the first phase aims at reducing redundancy, while the second phase aims at adding synergy. Below we only present the new formulation for BT; however, we provide experimental results for both BT and W-MSE.
**A. Gaussian off-diagonal:** After initial pre-training of the original model (here, BT), the network is fixed and training resumes with an updated loss. For BT, we set \(\lambda=0.1\) and replace the second term in Eq. 2 with \(\lambda\sum_{i}\sum_{j\neq i}(C_{ij}-G_{ij})^{2}\), where \(G_{ij}\) are the multivariate Gaussian elements of a square matrix \(G\) of proper size. This allows BT to better account for the off-diagonal elements of the cross-correlation matrix, which convey synergy as well as redundancy.
**B. Reinforced off-diagonal:** After initial pre-training of the original model (here, BT), the network is fixed and the average \(C_{ij}^{Ave}=\frac{1}{n}\sum_{n}C_{ij}\) over all \(n\) samples is computed. Then training resumes with a new \(\lambda=0.1\) and the second term in Eq. 2 updated as \(\lambda\sum_{i}\sum_{j\neq i}(C_{ij}-C_{ij}^{Ave})^{2}\), forcing each off-diagonal element toward its corresponding average.
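A minimal sketch of the two re-calibrated phase-two terms, replacing the second term of Eq. (2); `c` is the cross-correlation matrix of Eq. (3), `c_avg` its average over training samples computed after phase one, and \(\lambda=0.1\) as stated above.

```python
import torch

def gaussian_offdiag_term(c, lam=0.1):
    # Protocol A: pull off-diagonal elements toward N(0, 1) draws, not zero
    g = torch.randn_like(c)
    d = (c - g) ** 2
    return lam * (d.sum() - torch.diagonal(d).sum())

def reinforced_offdiag_term(c, c_avg, lam=0.1):
    # Protocol B: pull each off-diagonal element toward its sample average
    d = (c - c_avg) ** 2
    return lam * (d.sum() - torch.diagonal(d).sum())
```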
## 4 Experiments and Results
### Experiments
**Baselines:** Our modifications of BT and W-MSE [16, 17] result in GSBT and RSBT, as well as GSW-MSE and RSW-MSE, respectively. We perform experiments using our new training protocol under standard and heavy data augmentation. We contrast it with the most recent baselines, including Whitening-MSE (\(d=4\)) [17], a non-contrastive baseline, BYOL [13], and a clustering-based baseline, SwAV [15]. Following [17], the latent spaces of all methods are \(L_{2}\)-normalized.
**Dataset and augmentation:** We use six datasets including ImageNet [31], CIFAR10, CIFAR100 [32], Tiny ImageNet [33], ImageNet-100, and VOC0712. We use two sets of augmentation protocols, standard and heavy. For standard augmentation including random grayscaling, random crop, color jittering, aspect ratio adjustment, and horizontal mirroring, we follow the settings in [7], and for heavy augmentation we follow the settings in [30].
**Network & implementation details:** For CIFAR10/100, following the details of each baseline [7, 13, 14, 15, 17, 16], we use ResNet18, while for ImageNet, Tiny ImageNet, and VOC0712 we use ResNet50 [34] for the encoder, with the same projector head as [16] and the same projector output size in all baselines. For VOC0712, similar to [16], Faster R-CNN [35] is used. Optimization in all experiments is done using the Adam optimizer [36]. Pre-training of RSBT and GSBT, as well as RSW-MSE and GSW-MSE, is performed in two phases: phase one (redundancy reduction) consists of 500 epochs with a batch size of 1024, starting with a learning rate of \(0.15\) for some 20 epochs and dropping to \(0.001\) for the remaining epochs; phase two (synergy addition) consists of another 500 epochs with a learning rate of \(0.001\), using the modified loss functions. The weight decay in both phases and all other experiments is \(10^{-6}\).
### Evaluation and results
Similar to former methods, we perform the standard supervised linear evaluation for the classification task as well as detection. Classification involves fixing the encoder weights after pre-training, replacing the projector with a linear classifier (fully connected layer followed by softmax), training the linear classifier for some 500 epochs on the evaluation data, and then testing it. The classification results for ImageNet, CIFAR10/100, Tiny ImageNet, and ImageNet-100 with different settings of the proposed training protocol are presented in Tables 1, 2, and 3, whereas the detection results with VOC0712 are presented in Table 1. Results for modified BT using our protocol are presented in Tables 1 and 2, whereas the results for modified W-MSE using our protocol are available in Table 3. In both settings of data augmentation, our method outperforms prior approaches. While heavy augmentation degrades the performance of other approaches, it even improves RSBT, GSBT, RSW-MSE, and GSW-MSE, which shows the robustness of our approach.
2305.15343 | Modeling Multiple Irregularly Spaced Financial Time Series | In this paper we propose univariate volatility models for irregularly spaced
financial time series by modifying the regularly spaced stochastic volatility
models. We also extend this approach to propose multivariate stochastic
volatility (MSV) models for multiple irregularly spaced time series by
modifying the MSV model that was used with daily data. We use these proposed
models for modeling intraday logarithmic returns from health sector stocks data
obtained from Trade and Quotes (TAQ) database at Wharton Research Data Services
(WRDS). | Chiranjit Dutta, Nalini Ravishanker, Sumanta Basu | 2023-05-24T16:55:59Z | http://arxiv.org/abs/2305.15343v1 | # Modeling Multiple Irregularly Spaced Financial Time Series
###### Abstract
In this paper we propose univariate volatility models for irregularly spaced financial time series by modifying the regularly spaced stochastic volatility models. We also extend this approach to propose multivariate stochastic volatility (MSV) models for multiple irregularly spaced time series by modifying the MSV model that was used with daily data. We use these proposed models for modeling intraday logarithmic returns from health sector stocks data obtained from Trade and Quotes (TAQ) database at Wharton Research Data Services (WRDS).
## 1 Introduction
In finance, we often encounter time-series data that exhibit irregular spacing in time, meaning that the time intervals between successive data points are not the same. Such data are known as irregularly spaced time series data. We propose univariate and multivariate stochastic volatility models for irregularly spaced time series. Specifically, we build on the gap time modeling idea of Nagaraja et al. (2011) to construct useful time series models that can help better understand volatility patterns in irregularly spaced financial time series. To achieve this, we modify existing stochastic volatility models that were originally designed for regularly spaced data. Additionally, we extend this approach to model multiple irregularly spaced time series by modifying the multivariate stochastic volatility (MSV) model of Chib et al. (2009) that was used with daily data. High-frequency financial data are inherently irregularly spaced since trades can occur at any point in time. Moreover, the microstructure of financial markets, such as the methods of placing and executing orders, can also cause unevenly spaced data. For instance, in electronic markets, trades can be executed rapidly,
and traders may use different strategies and algorithms that result in varying transaction frequencies. The literature on dynamic statistical modeling of such data is sparse. Practitioners interested in understanding the dynamic evolution of stock properties require specialized models, since the ones designed for regularly spaced data are inappropriate. One of the early approaches to model return series sampled at irregularly spaced time intervals set by trade arrivals is the ACD-GARCH model of Ghysels and Jasiak (1998). This is a random coefficient GARCH, or doubly stochastic GARCH, where the stochastic durations between transactions determine the parameter dynamics of the GARCH equation. They proposed a Generalized Method of Moments (GMM) based two-step procedure for estimating the parameters. ACD-GARCH is quite cumbersome to estimate and also difficult to generalize to multiple time series. Meddahi et al. (2006) proposed a GARCH-type model for irregularly spaced data which is an exact discretization of continuous-time stochastic volatility processes observed at irregularly spaced times. Their model combines the advantages of ACD-GARCH (Ghysels and Jasiak, 1998) and ACD (Engle and Russell, 1998). A continuous version of GARCH (COGARCH) is another way to model irregularly spaced time series data (Maller et al., 2008). Recently, Buccheri et al. (2021a) proposed to model intraday log-prices through a multivariate local-level model with score-driven covariance matrices and to treat asynchronicity as a missing value problem.
## 2 Volatility Models for Univariate Irregularly Spaced Time Series
Let \(\{r_{t_{j}}\}\) be a sequence of log-returns of a financial asset, where \(t_{j}\) denotes the time of the \(j^{th}\) transaction, and let \(g_{j}=t_{j}-t_{j-1},\ j>1\), be the known gap times between consecutive returns. Unlike regularly spaced time series (such as daily returns), for irregularly spaced time series (such as transaction-level intraday returns), the gap times \(g_{j},\ j>1\), are not all the same.
### Irregularly spaced stochastic volatility (IR-SV) Model
We define an irregular stochastic volatility (IR-SV) model for \(r_{t_{j}}\) as
\[r_{t_{j}} =\exp{\left(\frac{h_{t_{j}}}{2}\right)}\epsilon_{t_{j}},\quad \epsilon_{t_{j}}\sim N(0,1), \tag{1}\] \[h_{t_{1}} =\mu+\eta_{t_{1}},\quad\eta_{t_{1}}\sim N\Big{(}0,\frac{\sigma_{ \eta}^{2}}{1-\phi^{2}}\Big{)},\] (2) \[h_{t_{j}} =\mu+\phi^{g_{j}}(h_{t_{j-1}}-\mu)+\eta_{t_{j}},\quad\eta_{t_{j} }\sim N\Bigg{(}0,\frac{\sigma_{\eta}^{2}(1-\phi^{2g_{j}})}{1-\phi^{2}}\Bigg{)},\ \text{for}\ j>1, \tag{3}\]
where \(\phi\) is the persistence parameter and \(\mu\) is the location parameter. In (2)-(3), we assume that the log-volatility process \(\{h_{t_{j}}\}\) is a stationary Gaussian autoregressive (AR) process
with \(|\phi|<1\), and \(\phi\neq 0\). We also assume that the gap times \(g_{j},\ j>1\) are bounded away from \(0\).
The formulation of the IR-SV model in (1)-(3) can be viewed as an extension to volatility modeling of the stationary gap time autoregressive (gap-AR) model for an irregularly spaced time series described in Nagaraja et al. (2011).
**Proposition 2.1**.: \(h_{t_{j}}\) _is a weakly stationary process and the unconditional distribution of \(h_{t_{j}}\) is_
\[h_{t_{j}}\sim\emph{N}\Big{(}\mu,\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}.\]
Proof.: Let \(x_{t_{j}}=h_{t_{j}}-\mu,\forall j\in\mathbb{Z}^{+}\). From (1)-(3), \(x_{t_{j}}\) can be recursively written as
\[x_{t_{j}}=\sum_{k=1}^{j-1}\phi^{\sum_{i=k+1}^{j}g_{i}}\eta_{t_{k}}+\eta_{t_{j}} \tag{4}\]
Hence it follows that \(E(x_{t_{j}})=0,\forall j\in\mathbb{Z}^{+}\). For \(l=0,1,\cdots\), the gap time \(l=t_{j+l}-t_{j}=\sum_{i=j+1}^{j+l}g_{i}\) and the covariance of \(x_{t_{j}}\) and \(x_{t_{j+l}}\) is
\[Cov(x_{t_{j}},x_{t_{j+l}}) =\mathrm{E}\left[\Bigg{(}\sum_{k=1}^{j-1}\phi^{\sum_{i=k+1}^{j}g_ {i}}\eta_{t_{k}}+\eta_{t_{j}}\Bigg{)}\Bigg{(}\sum_{k=1}^{j+l-1}\phi^{\sum_{i= k+1}^{j+l}g_{i}}\eta_{t_{k}}+\eta_{t_{j+l}}\Bigg{)}\right]\] \[=\sum_{k=1}^{j-1}\phi^{\sum_{i=k+1}^{j}g_{i}+\sum_{i=k+1}^{j+l}g _{i}}\mathrm{E}\left[\eta_{t_{k}}^{2}\right]+\mathrm{E}\left[\eta_{t_{j}}^{2}\right]\] \[=\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\left[\phi^{2\sum_{i=2}^{j}g_ {i}+l}+\sum_{k=2}^{j-1}\phi^{2\sum_{i=k+1}^{j}g_{i}+l}(1-\phi^{2g_{k}})+\phi^{ l}(1-\phi^{2g_{j}})\right]\] \[=\frac{\sigma_{\eta}^{2}\phi^{l}}{1-\phi^{2}}\left[\phi^{2\sum_{i =2}^{j}g_{i}}-\phi^{2\sum_{i=2}^{j}g_{i}}+\phi^{2\sum_{i=3}^{j}g_{i}}+\cdots+ \phi^{2g_{j}}-\phi^{2g_{j}}+1\right]\] \[=\frac{\sigma_{\eta}^{2}\phi^{l}}{1-\phi^{2}}\]
Thus \(Cov(x_{t_{j}},x_{t_{j+l}})\) is a function of gap time \(l\) only and the series \(x_{t_{j}}\) and hence \(h_{t_{j}}\) is a weakly stationary process. By stationarity of \(h_{t_{j}}\) and (3),
\[\mathrm{Var}(h_{t_{j}})=\phi^{2g_{j}}\mathrm{Var}(h_{t_{j}})+ \mathrm{Var}(\eta_{t_{j}})\] \[\implies\mathrm{Var}(h_{t_{j}})=\frac{\sigma_{\eta}^{2}}{1-\phi^ {2}}\]
Also, \(\mathrm{E}(h_{tj})=\mu\) and hence the unconditional distribution of \(h_{t_{j}}\) is \(\emph{N}\Big{(}\mu,\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}\).
**Proposition 2.2**.: _We discuss properties of the distribution of \(\{r_{t_{j}}\}\)._
1. _The expectation and variance of the squared returns is_ \[E\!\left(r_{t_{j}}^{2}\right)=\exp\Big{(}\mu+\frac{\sigma_{\eta}^{2}}{ 2(1-\phi^{2})}\Big{)}\] (5) \[\text{Var}\!\left(r_{t_{j}}^{2}\right)=\exp\Big{(}2\mu+\frac{\sigma_ {\eta}^{2}}{1-\phi^{2}}\Big{)}\!\left(3\exp\Big{(}\frac{\sigma_{\eta}^{2}}{1- \phi^{2}}\Big{)}-1\right)\] (6)
2. _Let the gap time_ \(l=t_{j+l}-t_{j}=\sum_{i=j+1}^{j+l}g_{i}\)_,_ \(l=0,1,\cdots\)_, the autocovariance function of_ \(r_{t_{j}}^{2}\) _and_ \(r_{t_{j+l}}^{2}\) _is_ \[\text{Cov}\!\left(r_{t_{j}}^{2},r_{t_{j+l}}^{2}\right)=\exp\Big{(}2\mu+\frac{ \sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}\left[\exp\Big{(}\frac{\sigma_{\eta}^{2} \phi^{l}}{1-\phi^{2}}\Big{)}-1\right]\] (7)
3. _The kurtosis of the returns is_ \[\text{K}\!\left(r_{t_{j}}\right)=3\exp\Big{(}\frac{\sigma_{\eta}^{2}}{1-\phi^ {2}}\Big{)}\] (9)
4. _The sequence of squared returns_ \(r_{t_{j}}^{2}\) _is a stationary process._
Proof.: The first two even moments of \(r_{t_{j}}\) are
\[\text{E}\!\left(r_{t_{j}}^{2}\right)=\text{E}\!\left(\exp(h_{t_{j }})\epsilon_{t_{j}}^{2}\right)=\text{E}\!\left(\exp(h_{t_{j}})\right)\!\text{E }\!\left(\epsilon_{t_{j}}^{2}\right)=\exp\Big{(}\mu+\frac{\sigma_{\eta}^{2}}{2 (1-\phi^{2})}\Big{)},\] \[\text{E}\!\left(r_{t_{j}}^{4}\right)=\text{E}\!\left(\exp(2h_{t_{ j}})\epsilon_{t_{j}}^{4}\right)=\text{E}\!\left(\exp(2h_{t_{j}})\text{E}\!\left( \epsilon_{t_{j}}^{4}\right)=3\exp\Big{(}2\mu+\frac{2\sigma_{\eta}^{2}}{1-\phi^ {2}}\Big{)}.\]
Then,
\[\text{Var}\!\left(r_{t_{j}}^{2}\right)= 3\exp\Big{(}2\mu+\frac{2\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}- \exp\Big{(}2\mu+\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}\] \[= \exp\Big{(}2\mu+\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}\Big{(} 3\exp\Big{(}\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}-1\Big{)},\]
and the kurtosis is
\[\text{K}\!\left(r_{t_{j}}\right)=\frac{\text{E}\!\left(r_{t_{j}}^{4}\right)}{ \text{E}\!\left(r_{t_{j}}^{2}\right)^{2}}=3\exp\Big{(}\frac{\sigma_{\eta}^{2} }{1-\phi^{2}}\Big{)}.\]
The kurtosis is always greater than \(3\) as long as \(\sigma_{\eta}^{2}>0\). For \(l=0,1,\cdots\), the autocovariance function of \(r_{t_{j}}^{2}\) is
\[\text{Cov}\!\left(r_{t_{j+l}}^{2},r_{t_{j}}^{2}\right) =\text{E}\!\left(r_{t_{j+l}}^{2}r_{t_{j}}^{2}\right)-\text{E}\! \left(r_{t_{j+l}}^{2}\right)\!\text{E}\!\left(r_{t_{j}}^{2}\right)\] \[=\text{E}\!\left(\exp(h_{t_{j}}+h_{t_{j+l}})\epsilon_{t_{j}}^{2} \epsilon_{t_{j+l}}^{2}\right)-\exp\Big{(}2\mu+\frac{\sigma_{\eta}^{2}}{1-\phi ^{2}}\Big{)}\]
By independence of \(\epsilon_{t_{j}}\)'s and \(h_{t_{j}}\)'s, we have
\[\text{Cov}(r_{t_{j+l}}^{2},r_{t_{j}}^{2})=\text{E}(\exp(h_{t_{j}}+h_{t_{j+l}}))- \exp\Big{(}2\mu+\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\Big{)}. \tag{10}\]
We note that
\[\text{Var}(h_{t_{j}}+h_{t_{j+l}}) =\text{Var}(h_{t_{j}})+\text{Var}(h_{t_{j+l}})+2\text{Cov}(h_{t_{j }},h_{t_{j+l}})\] \[=\frac{2\sigma_{\eta}^{2}(1+\phi^{l})}{1-\phi^{2}}\]
By normality of \(h_{t_{j}}\), it follows that \(h_{t_{j}}+h_{t_{j+l}}\sim N\Big{(}2\mu,\frac{2\sigma_{\eta}^{2}(1+\phi^{l})}{ 1-\phi^{2}}\Big{)}\). Therefore (10) becomes
\[\text{Cov}(r_{t_{j+l}}^{2},r_{t_{j}}^{2})=\exp\Big{(}2\mu+\frac{\sigma_{\eta}^{ 2}}{1-\phi^{2}}\Big{)}\left[\exp\Big{(}\frac{\sigma_{\eta}^{2}\phi^{l}}{1-\phi ^{2}}\Big{)}-1\right]. \tag{11}\]
Therefore, \(r_{t_{j}}^{2}\) is a stationary process.
### Bayesian Estimation of the IR-SV Model
Let \(\mathbf{r}=(r_{t_{1}},\cdots,r_{t_{T}})\) be the set of irregularly spaced observed returns following the IR-SV model in (1)-(3). Let \(\mathbf{\theta}=(\sigma_{\eta}^{2},\phi,\mu)\) be the set of model parameters. For \(j=1,\cdots,T\), we can rewrite (1)-(3) as follows:
\[r_{t_{j}}|h_{t_{j}},\theta \sim N\big{(}0,\exp(h_{t_{j}})\big{)} \tag{12}\] \[h_{t_{1}} \sim N\big{(}\mu,\frac{\sigma_{\eta}^{2}}{1-\phi^{2}}\big{)}\] (13) \[h_{t_{j}}|h_{t_{j-1}} \sim N\Big{(}\mu+\phi^{g_{j}}(h_{t_{j-1}}-\mu),\frac{\sigma_{\eta}^ {2}(1-\phi^{2g_{j}})}{1-\phi^{2}}\Big{)},\quad j>1. \tag{14}\]
We assume the following priors:
\[\frac{\phi+1}{2}\sim B(20,1.5),\] \[\frac{1}{\sigma_{\eta}^{2}}\sim G(2.5,0.025),\] \[\mu\sim N(0,10),\]
where \(B\) denotes the beta distribution, \(G\) denotes the gamma distribution and \(N\) is the normal distribution. Let \(p(\mathbf{\theta})=p(\phi)p(\sigma_{\eta}^{2})p(\mu)\) denote the joint prior of all the parameters, assuming independence. The joint posterior distribution of the hidden volatility states \(\mathbf{h}=(h_{t_{1}},\cdots,h_{t_{T}})\) and the parameters \(\mathbf{\theta}=(\sigma_{\eta}^{2},\phi,\mu)\) is obtained by Bayes' rule as
\[p(\mathbf{h},\mathbf{\theta}|\mathbf{r})\propto p(\mathbf{\theta})\prod_{j=1}^{T}p(r_{t_{j}}|h_ {t_{j}},\mathbf{\theta})p(h_{t_{j}}|h_{t_{j-1}},\mathbf{\theta}), \tag{15}\]
where \(p(r_{t_{j}}|h_{t_{j}},\mathbf{\theta})\) and \(p(h_{t_{j}}|h_{t_{j-1}},\mathbf{\theta})\) follow from (12)-(14). We employ a Metropolis-Hastings adaptive random-walk sampler for each of the parameters, with a univariate normal proposal distribution, within the R package NIMBLE (de Valpine et al., 2017), to generate samples from their respective posterior distributions.
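For illustration, the unnormalized log posterior in Eq. (15) can be written directly from Eqs. (12)-(14) and the priors above. The following is a minimal numpy/scipy sketch (not the NIMBLE implementation), assuming \(\phi>0\) so that fractional powers are real, that the gamma prior is parameterized by shape and rate, and that \(N(0,10)\) denotes variance 10:

```python
import numpy as np
from scipy.stats import norm, beta, gamma

def log_posterior(h, mu, phi, sigma2, r, g):
    # r, h are length-T arrays of returns and log-volatilities; g the scaled
    # gap times (g[0] unused).
    s = np.sqrt(sigma2 / (1.0 - phi ** 2))
    lp = norm.logpdf(h[0], mu, s)                                  # Eq. (13)
    m = mu + phi ** g[1:] * (h[:-1] - mu)                          # Eq. (14)
    v = sigma2 * (1.0 - phi ** (2.0 * g[1:])) / (1.0 - phi ** 2)
    lp += norm.logpdf(h[1:], m, np.sqrt(v)).sum()
    lp += norm.logpdf(r, 0.0, np.exp(h / 2.0)).sum()               # Eq. (12)
    # Priors: (phi+1)/2 ~ Beta(20, 1.5), 1/sigma^2 ~ Gamma(2.5, rate 0.025)
    # (with change-of-variables Jacobian), mu ~ N(0, 10); the constant
    # Jacobian of the phi transform is omitted.
    lp += beta.logpdf((phi + 1.0) / 2.0, 20.0, 1.5)
    lp += gamma.logpdf(1.0 / sigma2, 2.5, scale=1.0 / 0.025) - 2.0 * np.log(sigma2)
    lp += norm.logpdf(mu, 0.0, np.sqrt(10.0))
    return lp
```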
### Simulation Study
We demonstrate the accuracy of the Bayesian estimation of the IR-SV model parameters under a few different simulation setups. We generated 100 sets (replicates) of zero mean log returns data, each of length \(T=5000\). The gap times \(g_{j},1\leq j\leq T\) are generated as follows:
* Generate \(g_{j}\sim\mathcal{P}(\lambda=3)\) with \(g_{j}>0,\ \forall\ j\), where \(\mathcal{P}(\lambda)\) is the Poisson distribution with mean \(\lambda\).
* Scale the \(g_{j}\)'s such that \(0<g_{j}\leq 1,\ \forall j\).
In our empirical analysis we observed mean gap times of around 3 seconds, with 10% of them greater than 5 seconds. Since the persistence parameter satisfies \(|\phi|<1\), we scale the gap times so that \(g_{j}\in(0,1]\), which keeps \(\phi^{g_{j}}\) bounded away from 0 for all \(j\). To study the effect of the persistence parameter \(\phi\) on estimation accuracy, in all three scenarios we generated zero mean log returns from the IR-SV model in (1)-(3) with true parameter values \(\mu=-9\) and \(\sigma=0.8\). The true value of \(\phi\) is 0.2 in scenario 1, 0.6 in scenario 2, and 0.9 in scenario 3.
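A minimal R sketch of one simulation replicate under these settings follows. The handling of zero Poisson draws (bumped to 1) and the max-scaling of the gaps are our assumptions, since the text only requires \(g_{j}>0\) and \(g_{j}\in(0,1]\).

```r
set.seed(1)
n <- 5000
g <- rpois(n, lambda = 3)
g[g == 0] <- 1                         # enforce g_j > 0 (one possible choice)
g <- g / max(g)                        # scale so that 0 < g_j <= 1
mu <- -9; sigma <- 0.8; phi <- 0.6     # scenario 2 values

h <- numeric(n); r <- numeric(n)
h[1] <- rnorm(1, mu, sigma / sqrt(1 - phi^2))
r[1] <- rnorm(1, 0, exp(h[1] / 2))
for (j in 2:n) {
  h[j] <- mu + phi^g[j] * (h[j - 1] - mu) +
    rnorm(1, 0, sigma * sqrt((1 - phi^(2 * g[j])) / (1 - phi^2)))
  r[j] <- rnorm(1, 0, exp(h[j] / 2))
}
```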
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\mu\) & -9.0000 & -8.9997 & -9.1117 & -8.8627 \\ \(\phi\) & 0.2000 & 0.2438 & 0.1565 & 0.3346 \\ \(\sigma\) & 0.8000 & 0.7918 & 0.7366 & 0.8551 \\ \hline \hline \end{tabular}
\end{table}
Table 1: True values and posterior estimates of parameters for Scenario 1 from the IR-SV model.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\mu\) & -9.0000 & -8.9833 & -9.2253 & -8.7326 \\ \(\phi\) & 0.6000 & 0.6253 & 0.5515 & 0.7056 \\ \(\sigma\) & 0.8000 & 0.7832 & 0.7026 & 0.8769 \\ \hline \hline \end{tabular}
\end{table}
Table 2: True values and posterior estimates of parameters for Scenario 2 from the IR-SV model.
We run 520,000 MCMC iterations, discard the first 20,000 as burn-in, and thin every \(1000^{th}\) sample to reduce autocorrelation between MCMC samples. Convergence of the parameters is assessed using trace plots, posterior density plots, and autocorrelation checks (not shown here). In Table 1, Table 2, and Table 3, we report the posterior sample means of the parameters along with their true values and 95% credible intervals, averaged over the 100 data sets (replicates). The true values of the parameters lie inside the 95% credible intervals for all parameters in all three scenarios.
## 3 Multivariate Stochastic Volatility Models for Irregularly Spaced Financial Time Series (IR-MSV)
We propose a multivariate stochastic volatility model to fit irregularly spaced synchronized intraday zero mean log returns for multiple assets by modifying the basic multivariate stochastic volatility (BMSV) model (Chib et al., 2009) and using the gap time idea of Nagaraja et al. (2011) for handling the latent state.
### Refresh Time Sampling
In this section we describe refresh time sampling, which allows synchronization of high frequency prices from multiple stocks. Synchronization is necessary because, in high frequency trading (HFT), assets do not trade on a fixed grid: trades and quotes do not arrive synchronously, which makes multivariate analysis difficult. As in Section 2.5.3, we briefly describe the refresh time sampling procedure below.
Assume there are \(p\) stocks, and that the trading times of the \(i^{th}\) stock are given by \(t_{i\ell}\), \(\ell=1,\cdots,n_{i}\), \(i=1,\cdots,p\). For a given time \(t\), define \(N_{t}^{i}\) as the number of trading times \(t_{i\ell}\leq t\), \(\ell=1,\cdots,n_{i}\), i.e., the number of distinct data points available for the \(i^{th}\) asset up to time \(t\). The first refresh time is defined as \(\tau_{1}=\max\left\{t_{11},\ldots,t_{p1}\right\}\), the first time by which all assets have traded and refreshed their posted prices. Subsequent refresh times are defined recursively: given the \(j^{th}\) refresh time \(\tau_{j}\), the \((j+1)^{th}\) refresh time is
\[\tau_{j+1}=\max\left\{t_{1,N_{\tau_{j}}^{1}+1},\ldots,t_{p,N_{\tau_{j}}^{p}+1}\right\}. \tag{16}\]
Suppose there are \(m\) refresh time points \(\tau_{1},\cdots,\tau_{m}\). Intuitively, \(\tau_{2}\) is the second time by which all assets have traded and their prices have been refreshed. In our empirical analysis we consider three health sector stocks, BMY, CVS and MDT, traded on the NYSE on \(24^{th}\) June,
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\mu\) & -9.0000 & -8.9872 & -9.9697 & -8.0021 \\ \(\phi\) & 0.9000 & 0.8960 & 0.8365 & 0.9458 \\ \(\sigma\) & 0.8000 & 0.7860 & 0.7059 & 0.8818 \\ \hline \hline \end{tabular}
\end{table}
Table 3: True values and posterior estimates of parameters for Scenario 3 from the IR-SV model.
2016. We consider the high frequency prices for these three stocks and use them to illustrate the refresh time sampling procedure underlying the multivariate analysis discussed later. Figure 1 illustrates the refresh time sampling idea; in this example, \(\tau_{1}=09:45:01.00\), \(\tau_{2}=09:45:02.00\), \(\tau_{3}=09:45:03.00\), \(\tau_{4}=09:45:06.00\) and \(\tau_{5}=09:45:07.00\) are the first five refresh times.
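The recursion in (16) translates directly into code. The following R sketch (function and variable names are ours) computes the refresh times from sorted trade-time vectors of \(p\) assets:

```r
# times: a list of p numeric vectors, each the sorted trade times of one asset
refresh_times <- function(times) {
  idx <- rep(1L, length(times))        # next unused observation per asset
  taus <- numeric(0)
  repeat {
    # stop when some asset has no unused observation left
    if (any(mapply(function(t, i) i > length(t), times, idx))) break
    tau <- max(mapply(function(t, i) t[i], times, idx))
    taus <- c(taus, tau)
    # advance each asset to its first observation strictly after tau
    idx <- mapply(function(t, i) {
      while (i <= length(t) && t[i] <= tau) i <- i + 1L
      i
    }, times, idx)
  }
  taus
}
```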
### IR-MSV Model Formulation
Let \(\mathbf{r}_{t_{j}}=(r_{1t_{j}},r_{2t_{j}},\cdots,r_{pt_{j}})^{\prime}\), \(j=1,\cdots,T\) be a \(p\)-dimensional vector of zero mean log returns of \(p\) financial assets observed at irregularly spaced time points \(t_{1},\cdots t_{T}\). Here, \(t_{j}\) denotes the time of the \(j^{th}\) observation, and \(g_{j}=t_{j}-t_{j-1},j>1\) denotes the known gap time between consecutive observations. We introduce irregular multivariate stochastic volatility (IR-MSV) model as
\[\mathbf{r}_{t_{j}}=\mathbf{H}_{t_{j}}^{1/2}\mathbf{\epsilon}_{\mathbf{t_{j}}},\quad\mathbf{ \epsilon}_{\mathbf{t_{j}}}\sim\mathrm{MVN}(\mathbf{0},\mathbf{R}), \tag{17}\]
where,
\[\mathbf{H}_{t_{j}} =\mathrm{diag}\big{(}\exp(h_{1,t_{j}}),\cdots,\exp(h_{p,t_{j}}) \big{)},\quad j=1,\cdots,T, \tag{18}\] \[h_{i,t_{1}} =\mu_{i}+\eta_{i,t_{1}},\quad\eta_{i,t_{1}}\sim\mathrm{N}\bigg{(} 0,\frac{\sigma_{i}^{2}}{1-\phi_{i}^{2}}\bigg{)},\quad i=1,\cdots,p,\] \[h_{i,t_{j}} =\mu_{i}+\phi_{i}^{g_{j}}(h_{i,t_{j-1}}-\mu_{i})+\eta_{i,t_{j}}, \quad\eta_{i,t_{j}}\sim\mathrm{N}\bigg{(}0,\frac{\sigma_{i}^{2}(1-\phi_{i}^{2 g_{j}})}{1-\phi_{i}^{2}}\bigg{)},\quad i=1,\cdots,p;j=1,\cdots,T,\]
where \(\phi_{i}\) is the persistence parameter constrained by \(|\phi_{i}|<1\), \(i=1,\cdots,p\), \(\mu_{i}\) is the location parameter for each asset \(i=1,\cdots,p\) and \(\mathbf{R}\) is the correlation matrix of the observation errors. Following Nagaraja et al. (2011) and our discussion in the previous section, we observe that \(\{h_{i,t_{j}}\}\) is a stationary process for each \(i=1,\cdots,p\). Since the error correlation
Figure 1: Refresh Time Sampling for BMY, CVS and MDT. The vertical dotted blue lines represent the refresh time points on \(24^{th}\) June, 2016
matrix \(\mathbf{R}\) is assumed to remain constant over time, we refer to (17) as a constant correlation IR-MSV model.
**Proposition 3.1**.: _Let \(\mathbf{r}_{t_{j}}^{2}=(r_{1t_{j}}^{2},r_{2t_{j}}^{2},\cdots,r_{pt_{j}}^{2})^{\prime}\), \(j=1,\cdots,T\) be the vector of squared returns. The following properties of the distribution of \(\{\mathbf{r}_{t_{j}}^{2}\}\) hold._
1. _Let_ \(\text{E}(\mathbf{r}_{t_{j}}^{2})=\mathbf{m}=(m_{1},\cdots,m_{p})^{\prime}\) _be the_ \(p\times 1\) _vector of expected values where_ \(m_{i}\) _is_ \[m_{i}=\text{E}\big{(}r_{i,t_{j}}^{2}\big{)}=\exp\Big{(}\mu_{i}+\frac{\sigma_{i} ^{2}}{2(1-\phi_{i}^{2})}\Big{)},\quad i=1,\cdots,p\] (19)
2. _Let_ \(\text{Cov}(\mathbf{r}_{t_{j}}^{2})=\mathbf{\Sigma}\) _be the_ \(p\times p\) _variance-covariance matrix of squared returns vector. The_ \((i,k)^{th}\) _element of_ \(\mathbf{\Sigma}\) _is_ \[\text{Cov}(r_{i,t_{j}}^{2},r_{k,t_{j}}^{2})=\mathbf{\Sigma}_{ik}=\begin{cases} \exp\Big{(}2\mu_{i}+\frac{\sigma_{i}^{2}}{1-\phi_{i}^{2}}\Big{)}\Bigg{(}3\exp \Big{(}\frac{\sigma_{i}^{2}}{1-\phi_{i}^{2}}\Big{)}-1\Bigg{)}&i=k\\ 2\rho_{ik}^{2}\exp\bigg{[}(\mu_{i}+\mu_{k})+\frac{1}{2}\Big{(}\frac{\sigma_{i }^{2}}{1-\phi_{i}^{2}}+\frac{\sigma_{k}^{2}}{1-\phi_{k}^{2}}\Big{)}\bigg{]}&i \neq k\end{cases}\] (21)
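The moment formulas (19) and (21) are straightforward to evaluate; a minimal R sketch (function and argument names are ours, not from the text) is:

```r
# mu, sig2, phi: length-p vectors of mu_i, sigma_i^2, phi_i; R: p x p correlation
msv_sq_moments <- function(mu, sig2, phi, R) {
  v <- sig2 / (1 - phi^2)              # Var(h_{i,t}) per asset
  m <- exp(mu + v / 2)                 # E(r_i^2), eq. (19)
  p <- length(mu)
  S <- matrix(0, p, p)
  for (i in 1:p) for (k in 1:p) {
    S[i, k] <- if (i == k) {
      exp(2 * mu[i] + v[i]) * (3 * exp(v[i]) - 1)
    } else {
      2 * R[i, k]^2 * exp((mu[i] + mu[k]) + (v[i] + v[k]) / 2)
    }
  }
  list(mean = m, cov = S)              # eq. (21)
}
```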
### Bayesian Inference
In this section, we describe the Bayesian analysis for fitting the IR-MSV model in (17) to multiple irregularly spaced financial time series. We present the likelihood, prior and posterior, and discuss computational aspects of the MCMC algorithms. The likelihood function is
\[\mathcal{L}(\mathbf{\Theta}|\mathbf{r}_{t_{1}},\cdots,\mathbf{r}_{t_{T}})=\prod_{j=1}^{T}|\mathbf{\Sigma}_{\mathbf{r},t_{j}}|^{-1/2}\exp\Bigg{(}\frac{-\mathbf{r}_{t_{j}}^{\prime}\mathbf{\Sigma}_{\mathbf{r},t_{j}}^{-1}\mathbf{r}_{t_{j}}}{2}\Bigg{)}, \tag{22}\]
where \(\mathbf{\Sigma}_{\mathbf{r},t_{j}}=\mathbf{H}_{t_{j}}^{1/2}\mathbf{R}\mathbf{H}_{t_{j}}^{1/2}\), \(\mathbf{\sigma}=(\sigma_{1},\cdots,\sigma_{p})\), \(\mathbf{\phi}=(\phi_{1},\cdots,\phi_{p})\), \(\mathbf{\mu}=(\mu_{1},\cdots,\mu_{p})\), and \(\mathbf{\Theta}=(\mathbf{\sigma},\mathbf{\phi},\mathbf{R},\mathbf{\mu})\).
For \(i=1,\cdots,p\), we assume the following priors:
\[\mu_{i}\sim\text{N}(0,10),\] \[\frac{1}{\sigma_{i}^{2}}\sim\text{G}(2.5,0.025),\] \[\mathbf{R}\sim\text{LKJ}(\eta=1.2),\text{ where }\text{LKJ}(\mathbf{R}|\eta)\propto\det(\mathbf{R})^{(\eta-1)},\] \[\phi_{i}\sim\text{N}(0,0.5),\]
where the Lewandowski-Kurowicka-Joe (LKJ) distribution (Lewandowski et al., 2009) is a useful prior for correlation matrices, and G denotes the gamma distribution, so that each \(\sigma_{i}^{2}\) has an inverse gamma prior.
The posterior distribution of \(\mathbf{\Theta}\) is
\[\pi(\mathbf{\Theta}|\mathbf{r}_{t_{1}},\cdots,\mathbf{r}_{t_{T}})\propto \prod_{j=1}^{T}|\mathbf{\Sigma}_{\mathbf{r},t_{j}}|^{-1/2}\exp\left(\frac{-\mathbf{r}_{t_{j}}^{\prime}\mathbf{\Sigma}_{\mathbf{r},t_{j}}^{-1}\mathbf{r}_{t_{j}}}{2}\right) \tag{23}\] \[\times\prod_{j=1}^{T}\pi(\mathbf{h}_{t_{j}}|\mathbf{h}_{t_{j-1}},\mathbf{\sigma},\mathbf{\phi})\times\pi(\mathbf{\sigma})\pi(\mathbf{\phi})\pi(\mathbf{R})\pi(\mathbf{\mu}),\]
where we have assumed independence of the priors and \(\pi(\mathbf{h}_{t_{j}}|\mathbf{h}_{t_{j-1}},\mathbf{\sigma},\mathbf{\phi})\) is the joint distribution of the stochastic volatilities as in (18). We employ a block random walk sampler for the correlation matrix \(\mathbf{R}\); for the other parameters, \((\mathbf{\sigma},\mathbf{\phi},\mathbf{\mu})\), we employ a Metropolis-Hastings adaptive random-walk sampler with a univariate normal proposal distribution. These are the default samplers in the NIMBLE package (de Valpine et al., 2017) in R.
### Simulation Study
In this simulation our goal is to examine the effect of the correlation among the components on the accuracy of estimation. We generated 100 sets of zero mean log returns data from the IR-MSV model represented by equations (17) and (18) with \(p=3\) and \(T=5000\). The gap times \(g_{j},1\leq j\leq T\) are generated using a similar procedure as described in Section 2.3. The \(3\times 3\) correlation matrix has the following representation
\[\mathbf{R}=\begin{pmatrix}1&\rho_{12}&\rho_{13}\\ \rho_{12}&1&\rho_{23}\\ \rho_{13}&\rho_{23}&1\end{pmatrix}\]
We describe three different scenarios. In scenario 1, we consider moderate positive correlations among the components with \(\rho_{12}=0.6\), \(\rho_{13}=0.4\) and \(\rho_{23}=0.2\). In scenario 2, we consider a mix of high and low positive correlation and a negative correlation with \(\rho_{12}=-0.4\), \(\rho_{13}=0.7\) and \(\rho_{23}=0.3\). In scenario 3, we consider all correlations to be equal, high and positive with \(\rho_{12}=\rho_{13}=\rho_{23}=0.7\).
We run 210,000 MCMC iterations, discard the first 10,000 as burn-in, and thin every \(400^{th}\) sample to reduce autocorrelation between MCMC samples. We assessed convergence of the parameters using trace and posterior density plots. In Table 4, Table 5, and Table 6 we report the posterior sample means along with their true values and 95% credible intervals, averaged over the 100 data sets, for scenario 1, scenario 2 and scenario 3 respectively.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\rho_{12}\) & 0.6000 & 0.5987 & 0.5740 & 0.6158 \\ \(\rho_{13}\) & 0.4000 & 0.3987 & 0.3724 & 0.4286 \\ \(\rho_{23}\) & 0.2000 & 0.2008 & 0.1740 & 0.2254 \\ \(\sigma_{1}^{2}\) & 1.0000 & 0.9831 & 0.8639 & 1.1203 \\ \(\sigma_{2}^{2}\) & 0.8000 & 0.7807 & 0.6445 & 0.9119 \\ \(\sigma_{3}^{2}\) & 0.5000 & 0.5658 & 0.4606 & 0.6636 \\ \(\mu_{1}\) & -9.0000 & -8.9964 & -9.3681 & -8.6911 \\ \(\mu_{2}\) & -9.5000 & -9.4927 & -9.6809 & -9.3255 \\ \(\mu_{3}\) & -8.5000 & -8.5018 & -8.5973 & -8.3994 \\ \(\phi_{1}\) & 0.7000 & 0.7021 & 0.6374 & 0.7694 \\ \(\phi_{2}\) & 0.5000 & 0.4945 & 0.4119 & 0.5651 \\ \(\phi_{3}\) & 0.3000 & 0.3009 & 0.2181 & 0.3943 \\ \hline \hline \end{tabular}
\end{table}
Table 4: True values and posterior estimates of parameters for Scenario 1 from the IR-MSV model.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\rho_{12}\) & -0.4000 & -0.3993 & -0.4245 & -0.3749 \\ \(\rho_{13}\) & 0.7000 & 0.7003 & 0.6835 & 0.7142 \\ \(\rho_{23}\) & 0.3000 & 0.3003 & 0.2710 & 0.3268 \\ \(\sigma_{1}^{2}\) & 1.0000 & 0.9778 & 0.8918 & 1.0933 \\ \(\sigma_{2}^{2}\) & 0.8000 & 0.7867 & 0.6824 & 0.8885 \\ \(\sigma_{3}^{2}\) & 0.5000 & 0.5613 & 0.4988 & 0.6305 \\ \(\mu_{1}\) & -9.0000 & -9.0009 & -9.2870 & -8.6325 \\ \(\mu_{2}\) & -9.5000 & -9.5044 & -9.6575 & -9.3313 \\ \(\mu_{3}\) & -8.5000 & -8.4880 & -8.5960 & -8.4005 \\ \(\phi_{1}\) & 0.7000 & 0.6928 & 0.6255 & 0.7494 \\ \(\phi_{2}\) & 0.5000 & 0.4927 & 0.4120 & 0.5586 \\ \(\phi_{3}\) & 0.3000 & 0.2954 & 0.2402 & 0.3541 \\ \hline \hline \end{tabular}
\end{table}
Table 5: True values and posterior estimates of parameters for Scenario 2 from the IR-MSV model.
## 4 Data Analysis: Multiple Intraday Log Returns
We implement the proposed models on intraday log returns for three health stocks traded on the NYSE: Medtronic PLC (MDT), Bristol-Myers Squibb Co (BMY), and CVS Health Corp (CVS). In particular, we compare the forecast performance of the IR-MSV model with the IR-SV models fit to each stock.
### Data Description
We considered the intraday prices for MDT, BMY and CVS traded on June 24, 2016. We aggregated the high frequency prices at the one second level by taking the price at the latest time point within each second, as in Buccheri et al. (2021b). There were 10066, 8797 and 11330 observations (transactions) for MDT, BMY, and CVS respectively. We synchronized the prices of the three stocks using the refresh time sampling approach (Barndorff-Nielsen et al., 2011) to obtain a sample of \(T=6239\) synchronized observations. A brief description of the refresh time sampling technique is given in Section 3.1. Using these irregularly spaced refreshed prices (\(P_{t_{j}}\)) we calculate the log returns as
\[r_{t_{j}}=\log(P_{t_{j}})-\log(P_{t_{j-1}}),\quad 2\leq j\leq T\]
We have plotted the time series of log returns calculated from the refreshed prices of MDT, BMY and CVS on \(24^{th}\) June, 2016 in Figure 2. They seem to be fairly correlated.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Mean & Q(2.5\%) & Q(97.5\%) \\ \hline \hline \(\rho_{12}\) & 0.7000 & 0.6985 & 0.6833 & 0.7147 \\ \(\rho_{13}\) & 0.7000 & 0.6993 & 0.6859 & 0.7100 \\ \(\rho_{23}\) & 0.7000 & 0.7006 & 0.6872 & 0.7139 \\ \(\sigma_{1}^{2}\) & 1.0000 & 0.9865 & 0.9045 & 1.0815 \\ \(\sigma_{2}^{2}\) & 0.8000 & 0.7811 & 0.7305 & 0.8642 \\ \(\sigma_{3}^{2}\) & 0.5000 & 0.4885 & 0.4241 & 0.5620 \\ \(\mu_{1}\) & -9.0000 & -8.9865 & -9.1761 & -8.7485 \\ \(\mu_{2}\) & -9.5000 & -9.4942 & -9.6221 & -9.3748 \\ \(\mu_{3}\) & -8.5000 & -8.4971 & -8.6012 & -8.3849 \\ \(\phi_{1}\) & 0.7000 & 0.6998 & 0.6351 & 0.7572 \\ \(\phi_{2}\) & 0.5000 & 0.4912 & 0.3955 & 0.5725 \\ \(\phi_{3}\) & 0.5000 & 0.4965 & 0.3965 & 0.5785 \\ \hline \hline \end{tabular}
\end{table}
Table 6: True values and posterior estimates of parameters for Scenario 3 from the IR-MSV model.
### Results
We fit the IR-MSV model in equations (17) and (18) to the irregularly spaced multivariate intraday log returns of the three stocks. We use the first 6,195 observations for fitting the IR-MSV model and keep the remaining 44 observations as a hold-out sample. We use exactly the same priors for the parameters as in the simulation setup. We run 300,000 MCMC iterations, discard the first 50,000 as burn-in, and retain every \(500^{th}\) iteration of the last 250,000 iterations. Convergence of the MCMC iterations is assessed using trace and posterior density plots. In Table 7, we report the posterior sample means along with their 95% credible intervals.
Figure 2: Log returns of BMY, CVS and MDT calculated from the refreshed prices on \(24^{th}\) June, 2016
The estimated correlations are moderate, indicating fair dependency among the components, which is expected. The persistence parameter for BMY is much higher than those of CVS and MDT, indicating that the temporal correlation of BMY's volatility is much stronger than that of CVS and MDT.
We have also fitted IR-SV models to the univariate refreshed log return series of each stock, of length \(T=6195\), using the same priors as in the simulation study of the IR-SV models in Section 2.3. We run 550,000 MCMC iterations, discard the first 50,000 as burn-in, and retain every 1000th iteration of the last 500,000 iterations. In Table 8, Table 9 and Table 10 we report the posterior sample means along with their 95% credible intervals. Convergence of the parameters is assessed using trace and posterior density plots. We observe that the persistence parameter \(\phi\) for BMY is higher than that of CVS and MDT, which is consistent with the result obtained from the IR-MSV model fit.
## 5 Summary and Discussion
In this chapter we extend the gap time modeling idea of Nagaraja et al. (2011) to propose univariate volatility models for irregularly spaced financial time series by modifying regularly spaced stochastic volatility models. We also extend this approach to propose multivariate stochastic volatility (MSV) models for multiple irregularly spaced time series, modifying the MSV model of Chib et al. (2009) that was originally used with daily data. Simulation studies demonstrate the estimation accuracy of the proposed models. We applied the proposed models to intraday logarithmic returns of three health sector NYSE stocks and compared the univariate and multivariate forecasting performance over shorter and longer horizons.
An IR-GARCH model can be constructed in a similar way; we illustrate the accuracy of estimation in IR-GARCH(1,1) models using simulations in the Appendix. This idea can also be extended to multivariate GARCH models, which is left for future research.
## 6 Appendix
### Explorations with Irregular GARCH models
One of the early approaches to modeling return series sampled at irregularly spaced time intervals set by trade arrivals is the ACD-GARCH model of Ghysels and Jasiak (1998). This is a random coefficient GARCH, or doubly stochastic GARCH, where the stochastic durations between transactions determine the parameter dynamics of the GARCH equation. They proposed a two-step Generalized Method of Moments (GMM) procedure for estimating the parameters. ACD-GARCH is quite cumbersome to estimate and also difficult to generalize to multiple time series. Meddahi et al. (2006) proposed a GARCH type model for
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & Mean & SD & 2.50\% & 97.50\% \\ \hline \hline \(\mu\) & -8.267 & 0.059 & -8.381 & -8.148 \\ \(\phi\) & 0.381 & 0.077 & 0.231 & 0.528 \\ \(\sigma\) & 0.648 & 0.039 & 0.570 & 0.724 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Table showing posterior estimates and 95% credible intervals from IR-SV model for the refreshed intraday log returns of MDT.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & Mean & SD & 2.50\% & 97.50\% \\ \hline \hline \(\mu\) & -8.571 & 0.053 & -8.672 & -8.468 \\ \(\phi\) & 0.170 & 0.048 & 0.085 & 0.271 \\ \(\sigma\) & 0.829 & 0.034 & 0.763 & 0.898 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Table showing posterior estimates and 95% credible intervals from IR-SV model for the refreshed intraday log returns of CVS.
irregularly spaced data which is an exact discretization of continuous time stochastic volatility processes observed at irregularly spaced times. Their model combines the advantages of ACD-GARCH (Ghysels and Jasiak, 1998) and ACD (Engle and Russell, 1998). A continuous-time version of GARCH (COGARCH) is another way to model irregularly spaced time series data (Maller et al., 2008). Recently, Buccheri et al. (2021a) proposed to model intraday log-prices through a multivariate local-level model with score-driven covariance matrices, treating asynchronicity as a missing value problem.
In this section we present a preliminary investigation extending the gap time autoregressive model of Nagaraja et al. (2011) to develop an irregular GARCH (IR-GARCH) model. We introduce the IR-GARCH model, discuss its properties, and conduct a small simulation study to assess the accuracy of estimation under different scenarios. Further research on this topic could be useful.
Let \(\{r_{t_{j}}\}\) be a sequence of zero mean log-returns of an asset. Here \(t_{j}\) denotes the time of the \(j^{th}\) observation and \(g_{j}=t_{j}-t_{j-1}\), \(j>1\) is the known \(j^{th}\) observed gap time. Suppose \(\{\epsilon_{t_{j}}\}\) is a sequence of real valued independent and identically distributed random variables, having mean 0 and variance 1. We start with an IR-GARCH(1,1) model as shown below.
#### IR-GARCH(1,1)
We say that \(r_{t_{j}}\) follows an IR-GARCH\((1,1)\) model if
\[r_{t_{j}}=\sigma_{t_{j}}\epsilon_{t_{j}} \tag{24}\]
\[\sigma_{t_{1}}^{2}=\omega(1-\alpha_{1}^{g_{1}}-\beta_{1}^{g_{1}}), \tag{25}\]
\[\sigma_{t_{j}}^{2}=\omega(1-\alpha_{1}^{g_{j}}-\beta_{1}^{g_{j}})+\alpha_{1}^{g_{j}}r_{t_{j-1}}^{2}+\beta_{1}^{g_{j}}\sigma_{t_{j-1}}^{2},\text{ for }j>1.\]
Let \(g_{*}=\min_{j}g_{j}\); then the constraints for \(\sigma_{t_{j}}^{2}\) to be positive are \(\omega>0\), \(\alpha_{1}>0\), \(\beta_{1}>0\) and \(\alpha_{1}^{g_{*}}+\beta_{1}^{g_{*}}<1\). The unconditional mean of \(r_{t_{j}}\) is \(\mathrm{E}(r_{t_{j}})=\mathrm{E}(\sigma_{t_{j}})\mathrm{E}(\epsilon_{t_{j}})=0\).
The unconditional variance of \(r_{t_{j}}\) is
\[\mathrm{Var}(r_{t_{1}})=\mathrm{E}(r_{t_{1}}^{2})=\omega(1-\alpha_{1}^{g_{1} }-\beta_{1}^{g_{1}}),\text{ since }E(\epsilon_{t_{1}}^{2})=1 \tag{26}\]
and for \(j>1\),
\[\mathrm{Var}(r_{t_{j}})=\mathrm{E}(r_{t_{j}}^{2})=\mathrm{E}[\mathrm{E}[r_{t_ {j}}^{2}|\mathcal{F}_{t_{j-1}}]]=E[\sigma_{t_{j}}^{2}]=\omega(1-\alpha_{1}^{g_{ j}}-\beta_{1}^{g_{j}})+\alpha_{1}^{g_{j}}\mathrm{E}[r_{t_{j-1}}^{2}]+\beta_{1}^{g_{ j}}\mathrm{E}[\sigma_{t_{j-1}}^{2}]\]
Assuming stationarity of \(\{r_{t_{j}}\}\) with \(\mathrm{E}(r_{t_{j}})=0\) and \(\mathrm{Var}(r_{t_{j}})=\mathrm{Var}(r_{t_{j-1}})=\mathrm{E}(r_{t_{j-1}}^{2})=\mathrm{E}(\sigma_{t_{j-1}}^{2})\), the recursion above gives \(\mathrm{Var}(r_{t_{j}})(1-\alpha_{1}^{g_{j}}-\beta_{1}^{g_{j}})=\omega(1-\alpha_{1}^{g_{j}}-\beta_{1}^{g_{j}})\), so that
\[\mathrm{Var}(r_{t_{j}})=\omega,\text{ provided }\alpha_{1}^{g_{*}}+\beta_{1}^{g_{*}}<1. \tag{27}\]
#### IR-ARCH(1) model
The returns \(r_{t_{j}}\) are said to follow an IR-ARCH(1) model if
\[r_{t_{j}}=\sigma_{t_{j}}\epsilon_{t_{j}}, \tag{28}\] \[\sigma_{t_{1}}^{2}=\omega(1-\alpha_{1}^{g_{1}}),\] \[\sigma_{t_{j}}^{2}=\omega(1-\alpha_{1}^{g_{j}})+\alpha_{1}^{g_{j} }r_{t_{j-1}}^{2},\text{ for }j>1,\]
where \(g_{j}\)'s are the observed gap time between two consecutive observations and \(\{\epsilon_{t_{j}}\}\) is a sequence of real valued independent and identically distributed random variables with mean 0 and variance 1.
**Proposition 6.1**.: _Suppose \(\{r_{t_{j}}\}\) follows an IR-ARCH(1) model defined by (28). Let \(\eta_{t_{1}}=r_{t_{1}}^{2}-\sigma_{t_{1}}^{2}-\omega\alpha_{1}^{g_{1}}\) and \(\eta_{t_{j}}=r_{t_{j}}^{2}-\sigma_{t_{j}}^{2}\), for \(j>1\). Suppose \(E[\eta_{t_{1}}^{2}]=C\) and for \(j>1\), \(E[\eta_{t_{j}}^{2}]=C(1-\alpha_{1}^{2g_{j}})\), where \(C>0\) is a constant. Then \(r_{t_{j}}^{2}\) is a stationary process._
Proof.: We have
\[\eta_{t_{1}} =r_{t_{1}}^{2}-\sigma_{t_{1}}^{2}-\omega\alpha_{1}^{g_{1}}\] \[\Rightarrow\sigma_{t_{1}}^{2} =r_{t_{1}}^{2}-\omega\alpha_{1}^{g_{1}}-\eta_{t_{1}}\]
Substituting in equation (28) we get,
\[r_{t_{1}}^{2}-\omega=\eta_{t_{1}},\]
where \(E(\eta_{t_{1}})=E(r_{t_{1}}^{2})-E(\sigma_{t_{1}}^{2})=0\), which follows from equation (26). Similarly, for \(j>1\),
\[r_{t_{j}}^{2}-\eta_{t_{j}}=\omega(1-\alpha_{1}^{g_{j}})+\alpha_{1 }^{g_{j}}r_{t_{j-1}}^{2}\] \[\Rightarrow r_{t_{j}}^{2}-\omega=\alpha_{1}^{g_{j}}(r_{t_{j-1}}^{2}- \omega)+\eta_{t_{j}}.\]
where \(E(\eta_{t_{j}})=E(r_{t_{j}}^{2}-\sigma_{t_{j}}^{2})=E(E(\sigma_{t_{j}}^{2}\epsilon_{t_{j}}^{2}|\mathcal{F}_{t_{j-1}}))-E(\sigma_{t_{j}}^{2})=E(\sigma_{t_{j}}^{2})-E(\sigma_{t_{j}}^{2})=0\), for \(j>1\). Let \(x_{t_{j}}=r_{t_{j}}^{2}-\omega\), and recall that \(E[\eta_{t_{1}}^{2}]=C\) and, for \(j>1\), \(E[\eta_{t_{j}}^{2}]=C(1-\alpha_{1}^{2g_{j}})\), where \(C>0\) is a constant. By Proposition 5.1 of Nagaraja et al. (2011), it follows that \(x_{t_{j}}\), and hence \(r_{t_{j}}^{2}\), is a stationary process.
#### Estimation
We estimate the parameters by conditional maximum likelihood (Tsay, 2005), assuming a standard normal distribution for the errors. Let \(\mathbf{\theta}=(\omega,\alpha_{1},\beta_{1})^{\prime}\) be the vector of parameters. Assuming \(\epsilon_{t_{j}}\) follows a standard normal distribution, the conditional Gaussian log-likelihood is given by
\[l_{n}(\mathbf{\theta})=-\frac{1}{2}\sum_{j=2}^{n}\Bigg{[}\log(2\pi)+\log(\sigma_{ t_{j}}^{2})+\frac{r_{t_{j}}^{2}}{\sigma_{t_{j}}^{2}}\Bigg{]}, \tag{29}\]
where \(\sigma_{t_{j}}^{2}\) is updated according to (25) for the IR-GARCH model. The estimated parameter vector is
\[\hat{\mathbf{\theta}}=\arg\max_{\mathbf{\theta}}l_{n}(\mathbf{\theta}). \tag{30}\]
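A minimal R sketch of (29)-(30) for the IR-GARCH(1,1) follows. The starting values, parameter bounds and the penalty for infeasible parameters are our assumptions, and the commented Rsolnp::solnp call is indicative only.

```r
# Negative conditional Gaussian log-likelihood of the IR-GARCH(1,1)
irgarch_negll <- function(theta, r, g) {
  w <- theta[1]; a <- theta[2]; b <- theta[3]
  n <- length(r)
  sig2 <- numeric(n)
  sig2[1] <- w * (1 - a^g[1] - b^g[1])
  for (j in 2:n)
    sig2[j] <- w * (1 - a^g[j] - b^g[j]) + a^g[j] * r[j - 1]^2 +
      b^g[j] * sig2[j - 1]
  if (any(sig2 <= 0)) return(1e10)     # penalize infeasible parameters
  0.5 * sum(log(2 * pi) + log(sig2[-1]) + r[-1]^2 / sig2[-1])
}
# fit <- Rsolnp::solnp(pars = c(0.01, 0.5, 0.3), fun = irgarch_negll,
#                      LB = rep(1e-6, 3), UB = c(10, 1, 1), r = r, g = g)
```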
### IR-GARCH(1,1)
We describe two scenarios. For each scenario, we generated 100 sets of zero mean log returns data, each of length \(T=5000\). The gap times \(g_{j},1\leq j\leq T\) are generated using a similar
procedure as described in Section 2.3. Table 11 and Table 12 give the true values and conditional ML estimates of the parameters for Scenario 1 and Scenario 2.
We use the R package Rsolnp to maximize the conditional log-likelihood and obtain the parameter estimates of the IR-GARCH(1,1) model. All the true values of the parameters lie inside the 95% confidence intervals.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Estimate & 2.5\% & 97.5\% \\ \hline \hline \(\omega\) & 0.0100 & 0.0102 & 0.0088 & 0.0115 \\ \(\alpha\) & 0.7000 & 0.7070 & 0.6479 & 0.7662 \\ \(\beta\) & 0.2500 & 0.2427 & 0.1511 & 0.3343 \\ \hline \hline \end{tabular}
\end{table}
Table 11: True values and conditional ML estimates of parameters for Scenario 1 from the IR-GARCH model.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameter & True Value & Estimate & 2.5\% & 97.5\% \\ \hline \hline \(\omega\) & 0.0100 & 0.0106 & 0.0073 & 0.0138 \\ \(\alpha\) & 0.9000 & 0.9053 & 0.8682 & 0.9424 \\ \(\beta\) & 0.0500 & 0.0475 & 0.0130 & 0.0820 \\ \hline \hline \end{tabular}
\end{table}
Table 12: True values and conditional ML estimates of parameters for Scenario 2 from the IR-GARCH model. |
2307.03358 | Niobate-on-Niobate Resonators with Aluminum Electrodes | In this work, we have successfully engineered and examined suspended
laterally vibrating resonators (LVRs) on a lithium niobate thin film on lithium
niobate carrier wafer (LN-on-LN) platform, powered by aluminum interdigital
transducers (IDTs). Unlike the lithium niobate-on-silicon system, the LN-on-LN
platform delivers a stress-neutral lithium niobate thin film exhibiting the
quality of bulk single crystal. The creation of these aluminum-IDTs-driven
LN-on-LN resonators was achieved utilizing cutting-edge vapor-HF release
techniques. Our testing revealed both symmetric (S0) and shear horizontal (SH0)
lateral vibrations in the LVR resonators. The resonators displayed a quality
factor (Q) ranging between 500 and 2600, and coupling coefficient $k_{eff}^2$
up to 13.9%. The figure of merit (FOM) $k_{eff}^2 \times Q$ can reach as high
as 294. The yield of these devices proved to be impressively reliable.
Remarkably, our LN-on-LN devices demonstrated a consistently stable temperature
coefficient of frequency (TCF) and good power handling. Given the low thermal
conductivity of lithium niobate, our LN-on-LN technology presents promising
potential for future applications such as highly sensitive uncooled sensors
using monolithic chip integrated resonator arrays. | Yiyang Feng, Sen Dai, Sunil A. Bhave | 2023-07-07T02:50:44Z | http://arxiv.org/abs/2307.03358v1 | # Niobate-on-Niobate Resonators with Aluminum Electrodes
###### Abstract
In this work, we have successfully engineered and examined suspended laterally vibrating resonators (LVRs) on a lithium niobate thin film on lithium niobate carrier wafer (LN-on-LN) platform, powered by aluminum interdigital transducers (IDTs). Unlike the lithium niobate-on-silicon system, the LN-on-LN platform delivers a stress-neutral lithium niobate thin film exhibiting the quality of bulk single crystal. The creation of these aluminum-IDTs-driven LN-on-LN resonators was achieved utilizing cutting-edge vapor-HF release techniques. Our testing revealed both symmetric (S0) and shear horizontal (SH0) lateral vibrations in the LVR resonators. The resonators displayed a quality factor (Q) ranging between 500 and 2600, and coupling coefficient \(\mathrm{k}_{\mathrm{eff}}^{2}\) up to 13.9%. The figure of merit (FOM) \(\mathrm{k}_{\mathrm{eff}}^{2}\times\mathrm{Q}\) can reach as high as 294. The yield of these devices proved to be impressively reliable. Remarkably, our LN-on-LN devices demonstrated a consistently stable temperature coefficient of frequency (TCF) and good power handling. Given the low thermal conductivity of lithium niobate, our LN-on-LN technology presents promising potential for future applications such as highly sensitive uncooled sensors using monolithic chip integrated resonator arrays.
Lithium niobate, Piezoelectric resonators, Niobate-on-niobate, Aluminum interdigital transducers (IDTs), Vapor HF method, High \(\mathrm{k}_{\mathrm{eff}}^{2}\times\mathrm{Q}\), Laterally vibrating resonators, High Power handling, Linear TCF
## I Introduction
Emerging 5G technology has created enormous opportunities and challenges in the telecommunication industry. To cater to burgeoning demands for ultra-fast data transfer rates and a vast capacity for large-scale machine communications in the 5G wireless sector, there is a pressing need for high-performance, multi-frequency duplexer and multiplexer devices with an elevated coupling coefficient \(\mathrm{k}_{\mathrm{eff}}^{2}\) and quality factor Q. Aluminum Nitride (AlN) Film Bulk Acoustic Resonators (FBAR) and Contour-Mode Resonators (CMR) have been proposed to meet these criteria. AlN FBAR technology has shown notable promise with high quality factors (Q) and a considerable fractional bandwidth of up to 7% [1]. Nevertheless, its capacity for incorporating multiple frequencies on a single chip remains constrained, primarily because the resonant frequency of the FBAR is dictated by the thickness of the thin film. Conversely, AlN CMRs present a different challenge, as their coupling coefficient (\(\mathrm{k}_{\mathrm{eff}}^{2}\)) falls short, registering below 2% [2].
Recent efforts have been dedicated to the lithium niobate (LN) platform. In contrast to AlN devices, the large \(\mathrm{d}_{31}\), \(\mathrm{d}_{15}\) and \(\mathrm{d}_{26}\) piezoelectric coefficients enable very high coupling coefficients and figures of merit (FOM) \(\mathrm{k}_{\mathrm{eff}}^{2}\times\mathrm{Q}\) in lithium niobate devices [3, 4, 5, 6, 7, 8, 9, 10]. Despite its strong potential, lithium niobate technology faces the following challenges. Conventional lithium niobate SAW resonators are built on top of a silicon carrier wafer, but the large lattice constant and thermal expansion coefficient mismatch between LN and Si makes the fabrication of free-standing BAW resonators with anchor suspension thermally unstable [7, 11, 12]. Earlier studies have leveraged the LN-on-LN platform as a strategy to address these limitations [4, 5, 6, 13, 14]. Nevertheless, those design approaches require the use of high-damping metal electrodes due to compatibility concerns during the fabrication process, which in turn compromises the quality factor (Q).
In this work, we present our latest progress in LN-on-LN bulk acoustic wave (BAW) technology with aluminum electrodes. A bulk-quality single crystal LN thin film is integrated through thermally matched bonding and polishing to create a stress-free bond. With a newly developed fabrication process, we successfully create LN LVR devices with low-damping aluminum electrodes. By varying the orientation of the resonator, SH0 and S0 modes can be selectively excited on the same chip, with Q ranging from 800 to 2500 and \(\mathrm{k}_{\mathrm{eff}}^{2}\) up to 13.9%. The highest figure of merit (FOM) \(\mathrm{k}_{\mathrm{eff}}^{2}\times\mathrm{Q}\) of our devices reaches 294, which is among the highest reported. Moreover, our devices also demonstrate a stable temperature coefficient of frequency (TCF) of -100.1 ppm/K for
Fig. 1: 3D model of LN-on-LN Laterally vibrating resonators. Red and blue indicate the IDT fingers connected to the source and ground, respectively.
SH0 modes and -65.3 ppm/K for S0 modes at various power levels within the resonator's linear regime. The capability to operate at different temperatures and power levels reveals their potential for applications such as highly sensitive uncooled bolometers.
## II Design and Modeling
### _Design properties_
The LN-on-LN LVR devices that we have developed feature an anchor-supported suspended piezoelectric thin film. A 0.9 \(\mu\rm m\) thick stress-free X-cut lithium niobate thin film is created on top of a 1 \(\mu\rm m\) oxide buffer layer; the oxide layer also serves as the sacrificial layer used to release the piezoelectric film. The suspended structure is designed with a length of L = 70 \(\mu\rm m\) and a width of W = 44 \(\mu\rm m\). It is integrated with carefully positioned aluminum interdigital transducers (IDTs), aligned along a set of specific in-plane orientations to excite various acoustic modes. For the symmetric configuration, the IDTs consist of M = 3, 5, 7, 9, 11 electrodes, while for the anti-symmetric configuration, the IDTs contain M = 10 electrodes; the two configurations affect the coupling of different mode families. The pitch \(\rm W_{p}\) is chosen to be 4 \(\mu\rm m\) and the metallization ratio is set to 50%. Compared with gold, which is notorious for its high internal friction loss, aluminum offers lower intrinsic loss and a superior acoustic impedance match to lithium niobate. As a result, it has seen extensive use in LN-on-Si devices [2, 15]. With novel fabrication technology, aluminum IDTs are introduced to our LN-on-LN platform.
### _Finite Element Modeling_
We carried out 3D finite element analysis using COMSOL on lithium niobate resonators with varying orientations. For SAW devices, the frequency response has a sinc dependence on the IDT pitch [13]. In our case, where Lamb waves are launched within a laterally vibrating resonator, the frequency response is complicated by the Fabry-Perot (FP) cavity formed by the vertical sidewalls. The mechanical boundary conditions of the FP cavity enforce additional constraints: the \(\rm N^{th}\) order Lamb wave with wavelength \(\lambda\) satisfying \(\rm N\times\lambda/2=W\) will emerge. However, only the waves that best match the IDT configuration exhibit good coupling and high Q. Simulations show that increasing the number of IDT fingers (denoted M) enhances the coupling of the resonator. When the product of M and the IDT pitch (\(\rm W_{p}\)) equals the overall device width (W), optimal coupling is realized; consequently, the wave of order N = M - 1 exhibits the highest coupling efficiency. Notably, with varied orientation of the IDTs, two types of Lamb wave modes, SH0 and S0, are observed in the simulation. When the IDTs are aligned +9 degrees relative to the +z axis of the X-cut LN thin film, where the "+" sign denotes counterclockwise rotation, the
Fig. 3: **(a)** Mode shape of SH0 mode. **(b)** Mode shape of S0 mode.
Fig. 2: **(a)** Optical image of LN-on-LN LVR devices. **(b)** Cross sectional view of released devices along A-A’ dashed line in subfigure (a).
coupling of SH0 modes is optimized. The resultant acoustic waves travel at a +9 degree angle to the +y axis, with SH0 modes of different orders spanning 300 MHz to 500 MHz. As illustrated in Fig. 3, SH0 modes predominantly feature the \(\mathrm{S}_{23}\) stress field component, signifying that the \(\mathrm{d}_{15}\) and \(\mathrm{d}_{26}\) piezoelectric coefficients play substantial roles in the excitation of SH0 modes. In comparison, when the IDTs are aligned at -30 degrees to the +z axis (the "-" sign signifying clockwise rotation), different orders of S0 modes materialize between 700 MHz and 800 MHz with maximized coupling. In the case of the S0 mode, the \(\mathrm{S}_{22}\) component dominates the stress field, attributed to the \(\mathrm{d}_{22}\) piezoelectric coefficient.
## III Fabrication
### _Process Flow_
The preparation of the LN-on-LN sample entails precise alignment and bonding of lithium niobate samples with a 1 \(\mathrm{\mu m}\) oxide buffer layer to alleviate stress. The upper niobate layer is subsequently thinned to 0.9 \(\mathrm{\mu m}\) by mechanical polishing. Following sample preparation, we deposit 100 nm thick aluminum electrodes through an evaporation and lift-off procedure. The ensuing step involves patterning the LN structure. Conventionally, lithium niobate is defined via fluorine-based reactive ion etching (RIE). This tends to result in sloped sidewalls, rough etched surfaces and significant redeposition of LiF [12, 16]. To circumvent these issues, we utilize an ion mill procedure [13, 17] under argon plasma. The argon ions are directed onto our sample at a steep angle of 14\({}^{\circ}\) to etch through the niobate layer at a rate of 56 nm/min. A subsequent 70\({}^{\circ}\) shallow-angle ion mill is performed to clean up the sample and remove any material redeposited by the ion mill process. After completing the ion milling, the sample is immersed in acetone, and a 5-minute sonication is carried out to remove the photoresist.
### _Protection of Aluminum Electrodes_
The pivotal step in our process flow is the protection of aluminum IDTs while releasing the niobate structure. We opt for vapor HF technology over traditional buffered oxide etch (BOE) technology, largely due to the incompatibility between aluminum IDTs and BOE solution. The latter can easily penetrate the mask through the pinholes in the photoresist layers, leading to erosion of the aluminum IDTs. Conversely, vapor HF does not have the capacity to etch aluminum in the absence of water, making it a suitable candidate for processes involving aluminum IDTs. Nevertheless, during our experiments, we observed that moisture could accumulate on
Fig. 4: **(a)** The sample is prepared by bonding a 0.9 \(\mathrm{\mu m}\) stress-free X-cut LN thin film onto a 1 \(\mathrm{\mu m}\) oxide buffer layer on an X-cut LN carrier wafer. **(b)** Aluminum electrodes are patterned through a lift-off process. **(c)** Ion mill etching of the LN thin film is performed to define the device geometry. **(d)** A photoresist layer is patterned and reflowed with promoted adhesion. **(e)** Vapor HF is applied to the sample at elevated temperature to remove the oxide layer without damaging the Al electrodes. **(f)** The sample is transferred to acetone to remove the photoresist; drying at the critical point of liquid CO\({}_{2}\) is then conducted to fully release the structure.
Fig. 5: **(a)** Image indicating undesirable undercut and damage to the Al electrodes in the absence of adequate masking. **(b)** SEM image showing bubbling and peeling of the photoresist without adhesion promotion.
Fig. 6: **(a)** Front-side image of the released structure with symmetric release windows; an unetched pillar remains at the center behind the IDTs. **(b)** Back-side image of the released structure with symmetric release windows; an unetched pillar remains at the center. **(c)** Front-side image of the released structure with asymmetric release windows; no unetched pillar remains at the center behind the IDTs. **(d)** Back-side image of the released structure with asymmetric release windows; no unetched pillar remains at the center.
the sample surface, exacerbating the etching effect of vapor HF when aluminum IDTs were exposed. Therefore, it is imperative to employ effective masking strategies to mitigate moisture accumulation. Prior to spinning the photoresist mask layer, the surface is treated with Surpass 4000 adhesion promoter, which activates the surface and enhances adhesion. The presence of the Surpass 4000 treatment is crucial: it maintains the integrity of the photoresist layer in a vapor HF ambient. Without adhesion promotion, vapor HF tends to produce highly undesirable bubbling or leakage inside the photoresist layer, where HF and moisture accumulate and damage the aluminum IDTs. Surpass 4000 also ensures better mask coverage on the sidewalls. Once the surface is treated, we pattern a 7 \(\mu\)m photoresist layer on the treated surface, followed by a hard bake at 110 \({}^{\circ}\)C. This reflows the photoresist, removing moisture and curing pinholes within the layer.
### _Release of Niobate Structure_
Once the patterning of the photoresist is completed, the sample is transferred to an electrostatic holding chuck within a commercial HF-vapor etcher. In this study, the release windows are intentionally designed with an asymmetric configuration. Previous attempts employing symmetric release windows resulted in partial release of the structures, leaving unetched pillars in the central region. This is attributed to the oxide in the peripheral areas consuming substantial amounts of HF, consequently impacting the dynamics of the HF flow. At the symmetric point, i.e., the center, the HF flow is diminished, leading to a reduced etch rate. To address this challenge, the mask windows are crafted with an asymmetrical layout - one side features a circular shape while the opposite side is designed as an elongated stripe. This asymmetry creates an imbalance in HF pressure, thereby directing the HF flow across the center and ensuring that the etch rate remains consistent throughout the structure. The vapor HF etch is carried out at a temperature 15 \({}^{\circ}\)C above room temperature (in our case, T = 315 K) for 23 minutes, conditions intentionally set to achieve an optimal etch rate and minimal moisture concentration in the photoresist. After the vapor HF etch, the sample is carefully immersed in acetone and IPA to dissolve the photoresist layer. Finally, a critical point drying is performed: the IPA solution is replaced with liquid carbon dioxide, allowing the sample to dry while preserving the suspension of the lithium niobate structure.
Fig. 8: **(a)** BVD circuit model applied to analyze the RF behavior of our resonators. **(b)** Measured results, BVD model analysis and finite element analysis of SH0 modes with +9\({}^{\circ}\) IDT orientation; 11 IDT fingers are equipped. Different orders of SH0 modes are marked in the spectrum. **(c)** Cross sectional view of the stress field for the 8\({}^{\mathrm{th}}\), 10\({}^{\mathrm{th}}\) and 12\({}^{\mathrm{th}}\) order SH0 modes; blue and red indicate the \(\mathrm{S_{23}}\) field with different signs.
Fig. 7: False color SEM of the LN-on-LN laterally vibrating resonators.
## IV Experimental Results And Discussion
### _SH0 mode and S0 mode_
The one-port scattering parameters of the resonators are assessed using a network analyzer (Agilent PNA-L N5230A). Parasitic pad feed-through capacitance is canceled through a de-embedding process [18]. The scanning range is set at 700 MHz, with 20,000 sampling points. A resolution bandwidth of 10 kHz is employed, and the input power is maintained at -12 dBm. All measurements are conducted in atmosphere. Measured S-parameters are transformed into admittance (Y\({}_{11}\)). The Q is computed using the method described in [19], and the effective coupling \(\mathrm{k}_{\text{eff}}^{2}\) is calculated as \(\mathrm{k}_{\text{eff}}^{2}=2\times(\text{f}_{\text{p}}-\text{f}_{\text{s}})/\text{f}_{\text{s}}\), where \(\text{f}_{\text{p}}\) is the parallel resonance frequency and \(\text{f}_{\text{s}}\) the series resonance frequency.
When the IDTs are aligned from +0\({}^{\circ}\) to +40\({}^{\circ}\), shear horizontal (SH0) modes are detected. Different orders of SH0 modes emerge at frequencies between 350 MHz and 600 MHz. With 11-finger IDTs equipped, the \(10^{\text{th}}\) order SH0 modes typically exhibit higher effective coupling \(\mathrm{k}_{\text{eff}}^{2}\) and Q. The edges are set at the zero points of the displacement field, so the FP boundary conditions for the best-coupled SH0 modes are fulfilled. As shown in Fig. 8, the maximum product \(\mathrm{k}_{\text{eff}}^{2}\times\text{Q}\) is observed when the IDTs are aligned at +9\({}^{\circ}\), where the 10\({}^{\text{th}}\) order SH0 mode is found at 418 MHz, with Q = 2117 and \(\mathrm{k}_{\text{eff}}^{2}\) reaching 13.9%. The figure of merit (FOM) \(\mathrm{k}_{\text{eff}}^{2}\times\text{Q}\) is thus 294, one of the highest on the LN-on-LN platform. The response of the 10\({}^{\text{th}}\) order SH0 mode is analyzed using the Butterworth-Van Dyke (BVD) model illustrated in Fig. 8 (a). The extracted parameters C\({}_{0}\), C\({}_{\text{x}}\), L\({}_{\text{x}}\) and R\({}_{\text{x}}\) are listed in Table I. The measured results for the 10\({}^{\text{th}}\) order SH0 mode agree well with the finite element simulation.
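For readers wishing to reproduce the BVD analysis, a minimal numerical sketch is given below (in R, for consistency with the other sketches in this document). The function names are ours, and the parameter values must be taken from the fitted values in Table I.

```r
# Butterworth-Van Dyke admittance: static C0 in parallel with motional Rx-Lx-Cx
bvd_admittance <- function(f, C0, Cx, Lx, Rx) {
  w <- 2 * pi * f
  Zm <- Rx + 1i * w * Lx + 1 / (1i * w * Cx)   # motional branch impedance
  1i * w * C0 + 1 / Zm
}

# Series/parallel resonances and effective coupling from the fitted parameters
bvd_figures <- function(C0, Cx, Lx) {
  fs <- 1 / (2 * pi * sqrt(Lx * Cx))
  fp <- fs * sqrt(1 + Cx / C0)                 # lossless approximation
  c(fs = fs, fp = fp, keff2 = 2 * (fp - fs) / fs)
}
```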
The substantial influence of IDT orientation on the frequency response of the SH0 modes is confirmed experimentally. As the IDT orientation moves away from +9\({}^{\circ}\) to +0\({}^{\circ}\), +20\({}^{\circ}\), and +30\({}^{\circ}\), we observe a decrease in coupling and an increase in resonant frequency, as shown in Fig. 9. These findings align with our simulation results and can be attributed to the anisotropy of the LN piezoelectric thin film. As the IDT alignment deviates significantly from +9\({}^{\circ}\), the SH0 response becomes less pronounced. In contrast, maximum coupling of the S0 mode is achieved when the IDTs are aligned from -30\({}^{\circ}\) to -40\({}^{\circ}\), as demonstrated in Fig. 10. At a -30\({}^{\circ}\) IDT orientation, the most prominent S0 mode emerges at 719 MHz with an effective coupling \(\mathrm{k}_{\text{eff}}^{2}\) of 10.1% and Q = 891. At a -40\({}^{\circ}\) IDT orientation, the most noticeable S0 mode appears at 713 MHz with 8.0% \(\mathrm{k}_{\text{eff}}^{2}\) coupling and Q = 1344. We analyzed the response of the S0 modes using the BVD model and compared the results with COMSOL finite element simulations; the extracted parameters are provided in Table II. It is worth noting that S0 modes are highly sensitive to
Fig. 10: Measured results, BVD model analysis and finite element analysis of S0 modes with -30\({}^{\circ}\) and -40\({}^{\circ}\) IDT orientation. 10 IDT fingers are equipped. Insignificant SH0 modes and higher order S0 modes are marked.
Fig. 9: RF response of resonators with IDTs aligned in different orientation.
the resonators' boundary conditions [13]. Maximum coupling and Q can only be obtained when the edge of the resonator is positioned at the stress field nodes, a condition different from the boundary conditions required by SH0 modes. Fabrication errors can therefore affect the performance of the S0 mode resonator. In our case, although the resonant frequency aligns with the simulation results, the coupling coefficient decreases because the aforementioned FP boundary conditions are not fulfilled for the S0 modes. Even a slight mismatch between the nodes of the S0 mode and the IDT finger configuration can impact the coupling.
### _Electrode Loading_
Despite aluminum's low inherent loss and superior acoustic impedance match with LN, it offers lower conductivity than unreactive metals such as gold. To delve deeper into the impact of electrode loading on Q, we fabricated identical S0/SH0 mode resonator structures, maintaining the same dimensions but varying the number of IDT fingers. Our measurements suggest that S0 modes are more vulnerable to electrode loading. As shown in Fig. 11, when the IDT fingers are aligned at -30\({}^{\circ}\) relative to the z-axis, reducing the number of IDT fingers from 11 to 3 leads to a decrease in the energy coupled from the RF regime to the acoustic regime, reflected by the effective coupling \(\mathrm{k}_{\text{eff}}^{2}\) dropping from 8.6% to 0.6%, while Q rises from 801 to over 2400. This implies significant energy loss in S0 modes due to electrode loading. In contrast, for the SH0 mode resonator, electrode loading has a lesser impact: with a +9\({}^{\circ}\) IDT orientation, as the number of IDT fingers increases from 3 to 11, we observe an increase in \(\mathrm{k}_{\text{eff}}^{2}\) while Q remains consistently above 1500.
### _Temperature Stability_
We examined the temperature coefficient of frequency (TCF) of LN resonators of various orientations in their fundamental mode by incrementally adjusting the temperature from 300 K to 370 K in steps of 5 K. Each measurement was conducted after a 5-minute stabilization period at the target temperature. The results, depicted in Fig. 12, reveal that both SH0 and S0 modes display a highly linear and stable TCF. The TCF for SH0 modes, recorded at -101 ppm/K, is somewhat larger in magnitude than that for S0 modes, which stands at -65.7 ppm/K. These TCF values align with those observed in uncompensated LN SAW devices [20]. Temperature compensation can be achieved by incorporating an additional \(\mathrm{SiO}_{2}\) layer, as explored in previous studies [21, 8, 22].
### _Nonlinearity and Power Handling_
We conducted an empirical investigation of the nonlinear behavior of the LN resonators by measuring the frequency response at various power levels. An S0 mode resonator with 11 IDT fingers oriented -30\({}^{\circ}\) from the +z axis, and dimensions of 70 \(\mu\)m\(\times\) 44 \(\mu\)m, was subjected to a frequency sweep from 680 MHz to 790 MHz at input power levels ranging from
Fig. 11: Measured Q and \(\mathrm{k}_{\mathrm{eff}}^{2}\) of S0 modes with -30\({}^{\circ}\) IDTs and different number of IDT fingers.
Fig. 12: Measured Temperature response of S0 modes and SH0 modes
Fig. 13: Measured S0 mode temperature response at different power levels.
-12 dBm to 10 dBm, incremented by 1 dBm. The device operated within its linear region, as evidenced by a Lorentzian-shaped frequency response near resonance when the input power was below -1 dBm. As the input power increased beyond this level, a progressively larger resonant frequency shift and distortion of frequency response became evident, signifying the transition of the resonator into its non-linear regime. A bifurcation point was identified at 6 dBm, marking the threshold of instability.
We investigated the power handling capacity of our LN-on-LN resonator devices by conducting a frequency sweep over temperatures ranging from 300 K to 370 K and power levels spanning -12 dBm to 6 dBm. The resonant frequencies of our LN-on-LN devices under these varied conditions are displayed in Fig. 13. It is discernible that for power levels below 0 dBm, the shift in resonant frequencies remains minimal. Above 0 dBm, however, substantial frequency shifts are observable, even though the temperature response maintains its linearity. Further study of the TCF at various power levels, depicted in Fig. 14, shows that the TCF variation remains under 0.1 ppm/K for power levels below 0 dBm, indicating a stable temperature response. In contrast, SAW LN-on-Si devices exhibited significant TCF changes even at lower power levels [11]. We attribute the stability of the measured TCF with respect to input power to the minimization of residual stress in the LN-on-LN thin film under power-induced thermal cycling, owing to the thermally matched substrate.
## V Conclusion
In this work, we pioneered the design, fabrication, and characterization of the first laterally vibrating SH0 and S0 mode resonators on an LN-on-LN platform driven by aluminum IDTs. This was made possible by the development of a novel aluminum-compatible fabrication methodology. Incorporating aluminum electrodes, recognized for their low mechanical loss, enabled us to demonstrate quality factors (Q) exceeding 2000 and high effective electromechanical coupling coefficients (\(\mathrm{k}_{\mathrm{eff}}^{2}\)) surpassing 13%. Our work yielded a figure of merit (FOM) of 294, one of the highest within the same platform and competitive with LN-on-Si outcomes, while retaining several advantages over traditional LN-on-Si devices. In contrast with the thermal instability that limits the LN-on-Si platform, we demonstrated a linear and consistent temperature coefficient of frequency (TCF) across different temperatures. Furthermore, the oscillation frequency of our devices is remarkably insensitive to input power, with TCF variation of less than 0.1 ppm/K within the resonator's linear range. Such traits potentially broaden the applications of LN-on-LN devices, including highly sensitive uncooled sensors based on integrated resonator arrays on a monolithic chip [23, 24, 25].
## Acknowledgment
The devices were fabricated and tested in Birck Nanotechnology Center at Purdue University. The authors would like to thank Hao Tian, Noah Opondo and Ozan Erturk for valuable discussion on fabrication techniques, Mengyue Xu for discussion on lithium niobate devices simulation, Neil Dilley for assistance in handling of AJA ion milling tool and Nithin Raghunathan for training on vapor HF tool.
|
2301.01819 | A Protocol for Intelligible Interaction Between Agents That Learn and
Explain | Recent engineering developments have seen the emergence of Machine Learning
(ML) as a powerful form of data analysis with widespread applicability beyond
its historical roots in the design of autonomous agents. However, relatively
little attention has been paid to the interaction between people and ML
systems. Recent developments on Explainable ML address this by providing visual
and textual information on how the ML system arrived at a conclusion. In this
paper we view the interaction between humans and ML systems within the broader
context of interaction between agents capable of learning and explanation.
Within this setting, we argue that it is more helpful to view the interaction
as characterised by two-way intelligibility of information rather than once-off
explanation of a prediction. We formulate two-way intelligibility as a property
of a communication protocol. Development of the protocol is motivated by a set
of `Intelligibility Axioms' for decision-support systems that use ML with a
human-in-the-loop. The axioms are intended as sufficient criteria to claim
that: (a) information provided by a human is intelligible to an ML system; and
(b) information provided by an ML system is intelligible to a human. The axioms
inform the design of a general synchronous interaction model between agents
capable of learning and explanation. We identify conditions of compatibility
between agents that result in bounded communication, and define Weak and Strong
Two-Way Intelligibility between agents as properties of the communication
protocol. | Ashwin Srinivasan, Michael Bain, A. Baskar, Enrico Coiera | 2023-01-04T20:48:22Z | http://arxiv.org/abs/2301.01819v1 | # A Protocol for Intelligible Interaction Between Agents That Learn and Explain
###### Abstract
Recent engineering developments have seen the emergence of Machine Learning (ML) as a powerful form of data analysis with widespread applicability beyond its historical roots in the design of autonomous agents. However, relatively little attention has been paid to the interaction between people and ML systems. Recent developments on Explainable ML address this by providing visual and textual information on how the ML system arrived at a conclusion. In this paper we view the interaction between humans and ML systems within the broader context of interaction between agents capable of learning and explanation. Within this setting, we argue that it is more helpful to view the interaction as characterised by two-way intelligibility of information rather than once-off explanation of a prediction. We formulate two-way intelligibility as a property of a communication protocol. Development of the protocol is motivated by a set of 'Intelligibility Axioms' for decision-support systems that use ML with a human-in-the-loop. The axioms are intended as sufficient criteria to claim that: (a) information provided by a human is intelligible to an ML system; and (b) information provided by an ML system is intelligible to a human. The axioms inform the design of a general synchronous interaction model between agents capable of learning and explanation. We identify conditions of compatibility between agents that result in bounded communication, and define Weak and Strong Two-Way Intelligibility between agents as properties of the communication protocol.
keywords: Learning Agents, Two-Way Intelligibility, Communication
## 1 Introduction
In the second half of his seminal 1950 paper [44], Alan Turing describes an autonomous agent that has the capacity to alter its programming based on experiments and mistakes. Since then, developments in mathematics and computing have made steady progress and one form of ML - deep neural networks - has been able to achieve startlingly good predictive performance when provided with sufficient data and computational resources.
A difficulty has arisen when the models ML methods construct have to be examined by humans. For example, a deep neural network may predict, with high accuracy, the occurrence of malignancies from X-ray images. If an explanation of how that prediction was arrived at is required by the clinician, then we hit an "intelligibility bottleneck". Some of this arises from a mismatch between what certain ML practitioners view as suitable explanations, and what subject end-users require [2]. If current techniques for explanation are unfit for purpose, they relegate ML systems to the status of drugs for which no definitive biological mechanism is understood, but which nonetheless may be effective [10]. Whilst this may allow ML to be applied for some tasks, it falls short of "human-level" performance as far as explainability goes [4].1
Footnote 1: We recognise that there are clinical settings where explanations may _not_ be required. ARDA, for example, is a highly accurate ML-based tool for diagnosis of diabetic retinopathy (see: [https://health.google/caregivers/arda/](https://health.google/caregivers/arda/)), based on the work reported in [12]. It has been trained using labelled data provided by over 100 clinicians, and has been tested in a clinical trial. It is a device for triage-assistance in settings where a clinician examines 1000s of patients a day, and the tool is considered adequately field-tested. In this paper we are concerned instead with what needs to be done if explanations _are_ needed.
But what does it mean for an explanation from an ML-system to be understandable to the clinician, or, more broadly, to a person interacting with an ML-based system? One could view this as a requirement for humans and ML systems to maximise their mutual knowledge, developed over sequences of communicative interactions [5]. At least some shared understanding of the concepts and terminology in a domain appears to be needed for communication between human and machine, just as between humans. Serious consequences may follow when this is lacking, for example, in the misinterpretation of scientific knowledge in legal proceedings [16; 37], or when the rate of production and complexity of data outpaces the abilities of specialists to assimilate and process them.
In such settings, we would expect human-machine systems to become increasingly collaborative, with neither human nor machine having complete answers to problems. We expect not only that information provided by a machine should be intelligible to a human, but also that any information provided by a human will need to be intelligible to the machine. In this paper, we view this 'two-way intelligibility' requirement within the broader context of two-way intelligible interaction between agents capable of learning and explanation.
The main contributions of the paper are as follows:
1. We propose the notion of Two-Way Intelligibility between agents as a pre-requisite in the design of Explainable Artificial Intelligence (XAI) systems. We characterise Two-Way Intelligibility as the consequence of the interaction between a pair of communicating agents capable of learning and explanation. This includes, but is not limited to, a human and an AI system, and we provide a set of intuitive axioms to specify the special case of human-machine intelligibility;
2. A detailed description of a communication protocol between agents, such that both One-Way and Two-Way Intelligibility can be defined as properties arising from the execution of the
protocol. We define conditions of compatibility between agents under which the communication is bounded; and conditions under which the protocol can be seen as a correct implementation of the intelligibility axioms proposed for human-machine interaction.
**The strange case of Thompson's table**
An early description of the understandability of a machine's explanation to a human specialist is provided by Michie [25]. The description begins with the construction of a black box for a chess endgame (the interjections in brackets are ours):
At the meeting in Toronto in 1977 of the International Federation for Information Processing, Kenneth Thompson of Bell Telephone Laboratories presented a computer program for playing the chess end-game of King and Queen against King and Rook. He had done this by the ultimate in 'hammer and tongs' methods: in the absence of a complete set of rules for playing the end-game, he had previously programmed the machine to work out what to do in every single possible position...All these moves were then loaded into a gigantic 'look-up' table in the machine's memory...Thompson invited [International Masters] to demonstrate winning play for the Queen's side against the machine. To their embarrassment they found they could not win, even after many attempts...The machine repeatedly conducted the defence in ways which to them were so bizarre and counter-intuitive [like separating King and Rook] that [the Chess Masters] were left grasping air...Naturally [they] found the experience upsetting. They wanted to ask the program to explain its strategy, but this of course neither it nor its author could do. The answer in every case was, 'It's in the table.' (pg. 64, [25])
Michie describes how this situation is not very different to the case of ML programs that are unable to explain their decision-making. He sees this as not being especially problematic in some circumstancesa. However in some other cases, involving decision-making in critical areas, he notes that lack of meaningful feedback from the machine can become a serious issue:
Footnote a: Surprisingly, Michie includes the possibility that scientific discovery may even benefit from highly predictive but opaque machine-constructed models, since it would force scientists to develop new explanations for unexpected predictions.
But what if the system were doing something of social importance, such as managing a complex control function in factory automation, transport or defence? Two supervisors, let us imagine, are responsible for intervening manually in the event of malfunction. The system now does the equivalent in industrial or military terms of'separating its King and Rook'. 'Is this a system malfunction?' the supervisors ask each other. They turn to the system for enlightenment. But it simply returns the same answer over and over again...Any socially responsible design for a system must make sure that its decisions are not only scrutable but refutable...("The lunatic black box", pg. 68, [25]).
## 2 A Model for Intelligible Interaction
We motivate the development of a general interaction model between agents that learn and explain by looking first at possible criteria for inferring One-Way Intelligibility of human-machine interaction. These criteria will then inform the design of a communication protocol for intelligible interaction in the more general setting.
### Specifying Intelligibility in Human-Machine Interaction
We motivate the notion of intelligible interaction using a recent research study on identification of Covid-19 patients, based on X-ray images. The automated tool described in [17] uses a hierarchical design in which clinically relevant features are extracted from X-ray images using state-of-the-art deep neural networks. Deep neural networks are used to extract features like ground-glass opacity from the X-rays; and the system also includes a deep network for prediction of possible diseases (like pneumonia). The outputs from the deep networks are used by a symbolic decision-tree learner to arrive at a prediction about Covid-19. Explanations are textual descriptions obtained from the path followed by the decision-tree. Results reported in [17] describe how this neural-symbolic approach compares to an end-to-end monolithic neural approach (the predictive results of the two are comparable). However, our interest here is in the clinical assessment by radiologists of the explanations produced by the symbolic model: Fig. 1 shows an example of a machine's explanation and a clinician's assessment of that explanation. A tabulation of assessments on several "test" images is also shown. From the tabulation we can see: (a) the radiologist does not always think the model is correct (this is despite a supposed predictive accuracy of over 99% claimed for the model); (b) the radiologist is more likely to refute the explanation when he thinks the model is wrong; (c) overall, the radiologist refutes the explanations in \(13/30\approx 43\%\) of the instances and finds the explanations acceptable for the remaining 17 cases (\(\approx 57\%\)).
We would like to say that the machine's explanations are 'intelligible' to the human, since they are all either confirmed or refuted. We propose to capture this notion of intelligibility using the following six axioms that are concerned with communication of information in two categories:2
Figure 1: Top: The machine’s explanation and a senior radiologist’s feedback; and Bottom: A tabulation of the radiologist’s assessment of explanations from the ML-based system on a set of test images.
Footnote 2: The authors are grateful to the anonymous reviewers for providing the results in this paper.
**Human-to-Machine.**: Axioms in this category are concerned with machine-intelligibility of the information provided by a human to the machine.
1. Machine-Confirmation: If the machine ratifies a human-explanation then the human's explanation is intelligible to the machine.
2. Machine-Refutability: If the machine refutes a human-explanation then the human's explanation is intelligible to the machine.
3. Machine-Performance: If the human-explanation improves machine-performance then the human's explanation is intelligible to the machine.
**Machine-to-Human.**: This concerns the human-intelligibility of explanations provided by a machine:
1. Human-Confirmation: If the human ratifies a machine-explanation then the machine's explanation is intelligible to the human.
2. Human-Refutability: If the human refutes a machine-explanation then the machine's explanation is intelligible to the human.
3. Human-Performance: If the machine-explanation improves the human's performance then the machine's explanation is intelligible to the human.
For the instances in Fig. 1, one or the other of the Human-Confirmation axiom or the Human-Refutability axiom will hold. Appendix A contains more examples from the ML literature where conditions of the axioms can be said to hold. At this point, the following clarifications may be helpful:
* The examples in Appendix A are by no means the only ones in the literature. However, some of the conditions have received less attention than the others (machine-refutability and human-performance are examples);
* The axioms are not intended to be a complete definition of machine- or human-intelligibility. Thus it is possible, for example, that none of the conditions for the machine-to-human axioms hold, and the machine's explanation may still be human-intelligible. The axioms also do not specify what, if anything, should be done if one or more of them hold. For example, if a machine's explanation is refuted, then what should the machine do about it?
* Two aspects of the axioms that might escape attention are: (a) although individually the axioms result in an inference of One-Way Intelligibility, taken together they allow an inference of Two-Way Intelligibility; and (b) the inference of intelligibility will depend on the specific human and machine involved in the interaction.
Thus, the axioms can at best be seen as a partial specification for intelligibility within the context of an interaction model. In the next section, we develop a general model of interaction between agents that use agent-specific functions for learning and explanations. We then use this model to identify conditions of collaborative communication between human and machine; and identify two-way intelligibility based on the messages exchanged. We will return in Sec. 3.3 to the notion of the axioms as a specification for intelligibility.
### Interaction between \(\mathtt{LEX}\) Agents
We now develop an interaction model for the more general setting. Let \(\mathcal{A}\) be a set of agents that have capabilities for learning (induction) and explanation (justification).3 We will call such agents \(\mathtt{LEX}\) agents (short for \(\mathtt{Learn}\)-and-\(\mathtt{Explain}\)). Specifically, we assume that the interaction between \(\mathtt{LEX}\) agents will be modelled by communicating finite-state automata which we will also call \(\mathtt{LEX}\) automata. We will assume each \(\mathtt{LEX}\) automaton \(a_{m}\) for the agent \(m\) has access to: a hypothesis \(H_{m}\); and a dataset \(D_{m}\) consisting of 4-tuples \(\{(x_{i},y_{i},e_{i},p_{i})\}_{i=1}^{N}\), where \(x_{i}\) is a data-instance, \(y_{i}\) is a label for \(x_{i}\); \(e_{i}\) is an explanation for \(y_{i}\) given \(x_{i}\); and \(p_{i}\) represents the provenance for the label and explanation (that is, details about the origin of \(y_{i},e_{i}\) for an \(x_{i}\)). Additionally, we will assume each \(\mathtt{LEX}\) automaton \(a_{m}\) also has access to the following automaton-specific functions:
Footnote 3: The capacity for inference (deduction) is taken for granted.
* \(\mathtt{PREDICT}_{m}\) that returns the prediction of a data-instance \(x\) using its hypothesis;
* \(\mathtt{EXPLAIN}_{m}\) that returns an explanation for a data-instance \(x\) using its hypothesis;
* \(\mathtt{LEARN}_{m}\) that learns a possibly new hypothesis given its existing hypothesis, dataset, and a possibly new data-triple;
* \(\mathtt{MATCH}_{m}\) which is true if a pair of predictions \(y,y^{\prime}\) match; and
* \(\mathtt{AGREE}_{m}\) that is true if a pair of explanations \(e,e^{\prime}\) agree with each other.
We call these \(\mathtt{LEX}\)-functions. In the rest of the paper, it will be understood that these functions are automaton-specific and we will drop the subscript on the functions unless we want to emphasise them. We will return to these functions later in the paper. Additionally we will assume a special agent \(\Delta\not\in\mathcal{A}\), called the _oracle_. \(\Delta\) is a non-\(\mathtt{LEX}\) agent, but it will be convenient to model its interaction with other \(\mathtt{LEX}\) automata using the same communication protocol used for \(\mathtt{LEX}\) automata.
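For concreteness, the signatures of the \(\mathtt{LEX}\) functions can be sketched as a Python interface. This is a minimal illustration only; all names here (`Example`, `LexFunctions`, the type aliases) are ours rather than part of the protocol definition.

```python
from dataclasses import dataclass
from typing import Any, Protocol

Instance = Any      # x: a data-instance
Label = Any         # y: a label, or '?'
Explanation = Any   # e: an explanation, or '?' or the oracular marker

@dataclass
class Example:
    """One 4-tuple (x, y, e, p) in an agent's dataset D_m."""
    x: Instance
    y: Label
    e: Explanation
    p: str  # provenance: origin of (y, e) for x

class LexFunctions(Protocol):
    """Automaton-specific LEX functions of Sec. 2.2 (signatures only)."""
    def predict(self, x: Instance, H: Any) -> Label: ...
    def explain(self, x: Instance, y: Label, H: Any) -> Explanation: ...
    def learn(self, H: Any, D: list) -> Any: ...
    def match(self, y1: Label, y2: Label) -> bool: ...
    def agree(self, e1: Explanation, e2: Explanation) -> bool: ...
```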
#### 2.2.1 \(\mathtt{LXP}\): A Communication Protocol for \(\mathtt{LEX}\) Automata
We will adopt a protocol in which messages are exchanged between a pair of \(\mathtt{LEX}\) automata \(a_{m},a_{n}\in\mathcal{A}\) in a _session_.4 In this paper, we will only be focusing on communication that takes place one-instance-at-a-time. The messages in the communication protocol follow the grammar-like rules shown below:
Footnote 4: We will use the terms “agents” and “automata” interchangeably.
SendMessage ::= Send(M,(T,(X,Y,E)))
ReceiveMessage ::= Receive(M,(T,(X,Y,E)))
Send ::= '+'
Receive ::= '-'
M ::= \(m\), \(m\in\mathcal{A}\cup\{\Delta\}\)
T ::= \(t\), \(t\in\{\mathit{Init},\mathit{Ratify},\mathit{Refute},\mathit{Revise},\mathit{ Reject},\mathit{Term}\}\)
X ::= \(x\), \(x\in\mathcal{X}\)
Y ::= \(y\), \(y\in\mathcal{Y}\cup\{\text{'?'}\}\)
E ::= \(e\), \(e\in\mathcal{E}\cup\{\text{'?'},\blacktriangle\}\)
Here \(\epsilon\) denotes an empty string ('do nothing'); \(\mathcal{X}\) is a set of data-instances; \(\mathcal{Y}\) is a set of 'labels' for data-instances, and '?' is to be read as "not known"; \(\mathcal{E}\) is a set of explanations. The explanation \(\blacktriangle\) is to be read as 'oracular statement'.
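Read operationally, each message carries an agent identifier, a tag, and a data-triple; a direct transcription into Python might look as follows (a sketch; the names `Tag` and `Message` are ours):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class Tag(Enum):
    INIT = "Init"
    RATIFY = "Ratify"
    REFUTE = "Refute"
    REVISE = "Revise"
    REJECT = "Reject"
    TERM = "Term"

UNKNOWN = "?"    # label or explanation not known
ORACULAR = "▲"   # the oracular-statement explanation

@dataclass(frozen=True)
class Message:
    """One (M, (T, (X, Y, E))) tuple; the '+'/'-' direction is recorded
    separately in each automaton's local configuration."""
    agent: str  # M: the other agent in the session
    tag: Tag    # T
    x: Any      # X: a data-instance
    y: Any      # Y: a label, possibly UNKNOWN
    e: Any      # E: an explanation, possibly UNKNOWN or ORACULAR
```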
Figure 2(a) shows the messages sent and received by an automaton for an agent (other than \(\Delta\)), and Fig 2(b) shows the corresponding messages sent and received by \(\Delta\). Informally, the figure tells us that every session between a pair of \(\mathtt{LEX}\) automata has to be explicitly initiated and terminated. The session can be terminated by either automaton, and an automaton can only initiate a new session after terminating an existing session.
We will also require that only messages sent by \(\Delta\) can contain \(\blacktriangle\) as an explanation and that the following restriction holds on the \(\mathtt{LEX}\) functions: for any agent \(m\neq\Delta\) and \(D_{m}\), if \(H_{m}=\mathtt{LEARN}_{m}(\cdot,D_{m})\) and \((x,y,\blacktriangle)\in D_{m}\), then \(\mathtt{MATCH}_{m}(\mathtt{PREDICT}_{m}(x,H_{m}),y)=TRUE\). Informally, this assumes the predictions by the oracle are always correct, and therefore all non-oracular agents have to ensure their predictions are consistent with the predictions they have received from the oracle.
To specify the \(\mathtt{LXP}\) protocol fully we need to define the transition system, which we describe next.
#### Guarded Transitions
Let \(S_{mn}(x)\) denote a session between \(a_{m}\) and \(a_{n}\) in \(\mathcal{A}\cup\{\Delta\}\) about a data-instance \(x\in\mathcal{X}\). Strictly, there may be multiple sessions between \(a_{m}\) and \(a_{n}\) involving the same data-instance, and we would need an additional index to capture this. We ignore this here, and will usually omit \(x\) when the context is obvious. We also adopt the convention that the session \(S_{mn}\) is initiated by \(a_{m}\).
\(S_{mn}\) can be represented by the execution of the protocol which results in a sequence of 'configurations' \(\langle\gamma_{mn,1},\gamma_{mn,2},\ldots,\gamma_{mn,k}\rangle\). It is helpful to think of any configuration \(\gamma_{mn,i}\) as being composed of a pair of 'local configurations' of the automaton \(\gamma_{m,i}\) for \(a_{m}\) and \(\gamma_{n,i}\) for \(a_{n}\), and \(\gamma_{mn,i}=(\gamma_{m,i},\gamma_{n,i})\). We will define \(\gamma_{m,i}=(s_{m,i},(H_{m,i},D_{m,i}),\mu_{m,i})\) and \(\gamma_{n,i}=(s_{n,i},(H_{n,i},D_{n,i}),\mu_{n,i})\). Here the \(s_{.,i}\) are states; \(H_{.,i}\) are hypotheses; \(D_{.,i}\) are datasets of 4-tuples; \(\mu_{.,i}\) is the message sent or received. From the grammar rules, messages are of the form \(+(A,(t,(x,y,e)))\) or \(-(A,(t,(x,y,e)))\), where \(A\) is either \(m\) or \(n\) (denoting \(a_{m}\) or \(a_{n}\) for short); \(t\) is a message-tag, \(x\) is a data-instance, \(y\) is a label, and \(e\) is an explanation.
Before defining transitions between configurations, we introduce the guards here. The guard \(\top\) is trivially true in all configurations. The definitions of non-trivial guards are the same for all \(\mathtt{LEX}\) agents, and we define them here for the receiving automaton \((a_{n})\).
**Definition 1** (Guards).: _Let \(a_{n}\) be a \(\mathtt{LEX}\) agent. Let \(\gamma_{mn,i}=(\gamma_{m,i},\gamma_{n,i})\) be a configuration in a session \(S_{mn}\), where \(\gamma_{m,i}=(s_{m,i},(H_{m,i},D_{m,i}),\mu_{m,i})\) and \(\gamma_{n,i}=(s_{n,i},(H_{n,i},D_{n,i}),\mu_{n,i})\). Let \(\mu_{m,i}=+(n,(t_{m},(x,y_{m},e_{m})))\), \(\mu_{n,i}=-(m,(t_{m},(x,y_{m},e_{m})))\), \(y_{n}=\mathtt{PREDICT}_{n}(x,H_{n,i})\), and \(e_{n}=\mathtt{EXPLAIN}_{n}((x,y_{n}),H_{n,i})\). Then we define the guards:_
* \(g_{1}\): \(\mathtt{MATCH}_{n}(y_{n},y_{m})\,\wedge\,\mathtt{AGREE}_{n}(e_{n},e_{m})\)
* \(g_{2}\): \(\mathtt{MATCH}_{n}(y_{n},y_{m})\,\wedge\,\neg\mathtt{AGREE}_{n}(e_{n},e_{m})\)
* \(g_{3}\): \(\neg\mathtt{MATCH}_{n}(y_{n},y_{m})\,\wedge\,\mathtt{AGREE}_{n}(e_{n},e_{m})\)
* \(g_{4}\): \(\neg\mathtt{MATCH}_{n}(y_{n},y_{m})\,\wedge\,\neg\mathtt{AGREE}_{n}(e_{n},e_{m})\)
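Since each guard is a Boolean combination of the receiving automaton's MATCH and AGREE values, evaluating all four is straightforward; a small sketch (the function name is ours):

```python
def guards(match: bool, agree: bool) -> dict:
    """Evaluate g1..g4 of Defn. 1 from the MATCH and AGREE values."""
    g = {
        "g1": match and agree,          # predictions match, explanations agree
        "g2": match and not agree,      # predictions match, explanations differ
        "g3": not match and agree,      # predictions differ, explanations agree
        "g4": not match and not agree,  # predictions differ, explanations differ
    }
    assert sum(g.values()) == 1  # exactly one of the four combinations holds
    return g
```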
**Remark 1**.: _We note the following:_
* _It is not hard to see that at most one of the four guards can be true in a configuration_ \(\gamma_{mn,i}\)_._
* _The guards only apply to the automaton in the_ \(\mathtt{CAN\_SEND}\) _state in_ \(\gamma_{mn,i}\) _(here_ \(a_{n}\)_);_
Figure 2: Messages sent ('+') and received ('-') by: (a) automata for agents other than the oracle; and (b) the oracle. Here \(\top\) stands for a guard condition that is trivially true. RAT, REF, REV and REJ represent the guard conditions used by the guarded transition system, which are described below.
3. _The guards for all automata use only the_LEX _functions_ MATCH _and_ AGREE_. However, since these functions are automaton-specific, the value of the guard function in one automaton may or may not agree with the corresponding value in a different automaton. We will restrict ourselves to the_ compatible _automata, which we define below._
The guards are used to define guarded transition relations (or simply transition relations). Intuitively, transitions can be understood as follows: if a guard \(g\) is \(TRUE\) in a configuration after performing some computation then an action is executed. The action consists of receiving a message or sending a response (except when the received tag is \(Term\)).
**Definition 2** (Guarded Transition Relation).: _A guarded transition relation is a set of 4-tuples \(((\gamma_{m},\gamma_{n}),\Pi,g,(\gamma^{\prime}_{m},\gamma^{\prime}_{n}))\), where \(\gamma_{m},\gamma_{n},\gamma^{\prime}_{m}\) and \(\gamma^{\prime}_{n}\) are (local) configurations; \(\Pi\) is the computation performed by the sending automaton5 to evaluate \(g\) and update \((\gamma_{m},\gamma_{n})\) to \((\gamma^{\prime}_{m},\gamma^{\prime}_{n})\); and \(g\) is a Boolean guard function defined over local configurations._
Footnote 5: The automaton which is in CAN_SEND in \(\gamma_{m}\) or \(\gamma_{n}\).
To address (c) in Remark 1, we focus on the special case of a pair of \(\mathtt{LEX}\) agents that agree with each other on their \(\mathtt{MATCH}\) and \(\mathtt{AGREE}\) functions within a session.
**Definition 3** (Compatible Automata).: _Let \(S_{mn}\) be a session between \(\mathtt{LEX}\) agents \(a_{m}\) and \(a_{n}\). Let \(Y_{m}=\{y_{m}:+(n,(\cdot,(x,y_{m},\cdot)))\}\) be the set of predictions in messages sent by \(a_{m}\) to \(a_{n}\) and \(Y_{n}=\{y_{n}:+(m,(\cdot,(x,y_{n},\cdot)))\}\) be the set of predictions in messages sent by \(a_{n}\) to \(a_{m}\). Let \(E_{m}=\{e_{m}:+(n,(\cdot,(x,\cdot,e_{m})))\}\) be the set of explanations in messages sent by \(a_{m}\) to \(a_{n}\) and \(E_{n}=\{e_{n}:+(m,(\cdot,(x,\cdot,e_{n})))\}\) be the set of explanations in messages sent by \(a_{n}\) to \(a_{m}\). We will say there is a functional agreement on predictions between \(a_{m}\) and \(a_{n}\) in \(S_{mn}\), or \(a_{m}\simeq_{y}a_{n}\) in \(S_{mn}\), if for all \(y_{m}\in Y_{m}\) and \(y_{n}\in Y_{n}\), \(\mathtt{MATCH}_{m}(y_{m},y_{n})=\mathtt{MATCH}_{n}(y_{n},y_{m})\). Similarly we will say there is a functional agreement on explanations between \(a_{m}\) and \(a_{n}\) in \(S_{mn}\), or \(a_{m}\simeq_{e}a_{n}\) in \(S_{mn}\), if for all \(e_{m}\in E_{m}\) and \(e_{n}\in E_{n}\), \(\mathtt{AGREE}_{m}(e_{m},e_{n})=\mathtt{AGREE}_{n}(e_{n},e_{m})\). We will say automata \(a_{m}\) and \(a_{n}\) are compatible in session \(S_{mn}\) iff \(a_{m}\simeq_{y}a_{n}\) and \(a_{m}\simeq_{e}a_{n}\) in \(S_{mn}\)._
We assume that the oracle \(\Delta\) is compatible with any LEX agent.
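Given agent objects exposing `match` and `agree` as in the earlier interface sketch, Defn. 3 can be checked directly over the values actually exchanged in a session (a sketch; the function name is ours):

```python
from itertools import product

def compatible(am, an, Ym, Yn, Em, En) -> bool:
    """Check a_m ≃_y a_n and a_m ≃_e a_n over the predictions (Ym, Yn)
    and explanations (Em, En) exchanged in a session S_mn (Defn. 3)."""
    match_ok = all(am.match(ym, yn) == an.match(yn, ym)
                   for ym, yn in product(Ym, Yn))
    agree_ok = all(am.agree(em, en) == an.agree(en, em)
                   for em, en in product(Em, En))
    return match_ok and agree_ok
```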
A session involving compatible agents will be called a _collaborative_ session. We now examine transitions between automata \(a_{m}\) and \(a_{n}\) in a collaborative session. Assume automaton \(a_{n}\) receives a message \(\mu\) from \(a_{m}\). Then, \(a_{n}\) executes code \(\Pi\); checks the guard \(g\); updates the current configuration \(\gamma\) to a new configuration \(\gamma^{\prime}\); and sends a message \(\mu^{\prime}\) to \(m\). For a pair of agents \(a_{m}\) and \(a_{n}\), neither of which is \(\Delta\), the transitions specify that the message sent by \(a_{n}\) to \(a_{m}\) satisfies the following constraints: if \(g_{1}\) holds, a \(Ratify\) message is sent; if \(g_{2}\) or \(g_{3}\) holds, \(a_{n}\) revises its hypothesis, and a \(Revise\) message is sent if the revised hypothesis satisfies \(g^{\prime}\), otherwise a \(Refute\) message is sent; and if \(g_{4}\) holds, a \(Reject\) message is sent. Appendix B lists all possible transitions involving non-trivial guard functions. Restricting \(a_{m}\) and \(a_{n}\) to being compatible automata eliminates some elements from the set of transitions.
**Definition 4** (Transitions in a Collaborative Session).: _Let \(S_{mn}\) be a session between compatible agents \(a_{m}\) and \(a_{n}\). The transitions for \(S_{mn}\) can be specified as a set of 4-tuples \((\gamma,\Pi,g,\gamma^{\prime})\). Let \(\gamma=((s_{m},(H_{m},D_{m}),+(n,\mu)),(s_{n},(H_{n},D_{n}),-(m,\mu)))\) and \(\gamma^{\prime}=((s^{\prime}_{m},(H_{m},D_{m}),-(n,\mu^{\prime})),(s^{\prime}_{n},(H^{\prime}_{n},D^{\prime}_{n}),+(m,\mu^{\prime})))\), where \(s_{m}=\mathtt{CAN\_RECEIVE},\ s_{n}=\mathtt{CAN\_SEND}\); \(s^{\prime}_{m}=\mathtt{CAN\_SEND}\), \(s^{\prime}_{n}=\mathtt{CAN\_RECEIVE}\). Let \(\Pi=(D^{\prime}_{n}:=D_{n}\cup\{(x,y_{m},e_{m},m)\}\;;\ P\;;\ y_{n}:=\mathtt{PREDICT}(x,H_{n})\;;\ e_{n}:=\mathtt{EXPLAIN}((x,y_{n}),H_{n})\;;\ y^{\prime}_{n}:=\mathtt{PREDICT}(x,H^{\prime}_{n})\;;\ e^{\prime}_{n}:=\mathtt{EXPLAIN}((x,y^{\prime}_{n}),H^{\prime}_{n}))\). Let \(g^{\prime}=\mathtt{MATCH}(y^{\prime}_{n},y_{m})\wedge\mathtt{AGREE}(e^{\prime}_{n},e_{m})\)._
_Using Proposition 2 in Appendix B, the legal transitions for \(a_{n}\) are specified below (for simplicity, we only show \(\mu,P,g\) and \(\mu^{\prime}\); the numbering of transitions is from the complete tabulation in Appendix B)._
| _Trans_ | \(\mu\) (received by \(a_{n}\)) | \(P\) | \(g\) | \(\mu^{\prime}\) (sent by \(a_{n}\)) |
|---|---|---|---|---|
| 0. | No message | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Init,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 1. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{1}\) | \((Ratify,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 2. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 3. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge g^{\prime}\) | \((Revise,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 4. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 5. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge g^{\prime}\) | \((Revise,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 6. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{4}\) | \((Reject,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 7. | \((Ratify,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{1}\) | \((Ratify,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 13. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{1}\) | \((Ratify,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 14. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 15. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge g^{\prime}\) | \((Revise,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 16. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 17. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge g^{\prime}\) | \((Revise,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 18. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{4}\) | \((Reject,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 19. | \((Revise,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{1}\) | \((Ratify,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 30. | \((Reject,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{4}\) | \((Reject,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 31. | \((Ratify,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 32. | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 33. | \((Revise,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 34. | \((Reject,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 35. | \((Init,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(\top\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| 36. | \((Term,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(e_{m}=\blacktriangle\) | No message |
| 37. | \((Term,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(e_{m}\neq\blacktriangle\) | No message |
**Example 1**.: _Suppose \(a_{m}\) initiates a session about a data-instance \(x\), and \(a_{n}\) responds with a \(Refute\) message (using transition 2, say). Suppose, on receiving this message, the initiating automaton can revise its hypothesis to ensure that its prediction \(y^{\prime}_{m}\) matches \(y^{\prime}_{n}\) and its explanation \(e^{\prime}_{m}\) agrees with \(e^{\prime}_{n}\). Then \(a_{m}\) sends the message \(+(n,(Revise,(x,y^{\prime}_{m},e^{\prime}_{m})))\) (using transition 17). Since \(g_{1}\) is now necessarily true, \(a_{n}\) sends \(+(m,(Ratify,(x,y^{\prime},e^{\prime})))\) (using transition 19). \(a_{m}\) then terminates the session (using transition 31)._
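The response logic the table encodes can be summarised operationally: on receiving a message, the automaton evaluates the guards on its current hypothesis, learns a new hypothesis when \(g_{2}\) or \(g_{3}\) holds, and replies with the tag the table dictates. A sketch of one receiving step in Python, using the `guards`, `Tag` and agent-interface sketches above (\(Term\) handling and state bookkeeping are omitted; the function name is ours):

```python
def respond(agent, H, D, msg):
    """One receiving step of a LEX automaton in a collaborative session,
    following the Init/Refute rows of the table in Defn. 4."""
    D2 = D + [(msg.x, msg.y, msg.e, msg.agent)]   # record the received triple
    y = agent.predict(msg.x, H)
    e = agent.explain(msg.x, y, H)
    g = guards(agent.match(y, msg.y), agent.agree(e, msg.e))
    if g["g1"]:                                   # transitions 1, 7, 13, 19
        return H, Tag.RATIFY, y, e
    if g["g4"]:                                   # transitions 6, 18, 30
        return H, Tag.REJECT, y, e
    # g2 or g3: attempt a revision (transitions 2-5, 14-17)
    H2 = agent.learn(H, D2)
    y2 = agent.predict(msg.x, H2)
    e2 = agent.explain(msg.x, y2, H2)
    g_prime = agent.match(y2, msg.y) and agent.agree(e2, msg.e)
    return H2, (Tag.REVISE if g_prime else Tag.REFUTE), y2, e2
```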
We can construct an abstraction of the set of transitions in the form of a 'message-graph'. This is shown in Fig. 3; for simplicity, edges are labelled with the row numbers in the tabulation in Defn. 4.6 It is evident from the self-loops in the graph that communication can become unbounded. To redress this, we alter the \(\mathtt{LXP}\) protocol by replacing the transitions encoding self-loops.
Footnote 6: Informally, for an edge \((v_{1},v_{2})\) in the graph, the label for \(v_{1}\) is the message-tag received, and the label for \(v_{2}\) is the message-tag sent. The edges are labelled with the corresponding transition entries in the table in Defn. 4. Strictly, the edge-label should distinguish which of \(a_{n}\) or \(a_{m}\) is sending and which is receiving, along with the message content. This level of detail is not needed for what follows.
**Definition 5** (Transitions in a Modified Protocol).: _From Remark 3 in Appendix B, the entries for \(7\) and \(30\) can be replaced with transitions \(7^{\prime}\) and \(30^{\prime}\) below. In addition, we rename transitions \(14\) and \(16\) as \(14\)-\(k\) and \(16\)-\(k\) to denote that at most \(k\) occurrences of the transition can occur on any execution; and add the transitions \(14^{\prime}\) and \(16^{\prime}\) to allow termination after \(k\) iterations. The modified set of transitions is shown below._
| _Trans_ | \(\mu\) | \(P\) | \(g\) | \(\mu^{\prime}\) |
|---|---|---|---|---|
| \(7^{\prime}\). | \((Ratify,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{1}\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| \(14\)-\(k\). | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| \(14^{\prime}\). | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{2}\wedge\neg g^{\prime}\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| \(16\)-\(k\). | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge\neg g^{\prime}\) | \((Refute,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| \(16^{\prime}\). | \((Refute,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=\mathtt{LEARN}(H_{n},D^{\prime}_{n})\) | \(g_{3}\wedge\neg g^{\prime}\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |
| \(30^{\prime}\). | \((Reject,(x,y_{m},e_{m}))\) | \(H^{\prime}_{n}:=H_{n}\) | \(g_{4}\) | \((Term,(x,y^{\prime}_{n},e^{\prime}_{n}))\) |

We will call the modified protocol \(\mathtt{LXP(k)}\). The corresponding message-graph is in Fig. 3(c).
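In implementation terms, \(\mathtt{LXP(k)}\) only adds a per-session counter to the response step sketched earlier: consecutive Refute-for-Refute exchanges beyond \(k\) are turned into \(Term\), and Ratify-for-Ratify and Reject-for-Reject responses become \(Term\) immediately. A sketch (all names are ours):

```python
def respond_k(agent, H, D, msg, refutes: int, k: int):
    """LXP(k) wrapper around respond(): bound the Refute loop (Defn. 5)."""
    H2, tag, y, e = respond(agent, H, D, msg)
    if msg.tag == Tag.RATIFY and tag == Tag.RATIFY:    # transition 7'
        tag = Tag.TERM
    if msg.tag == Tag.REJECT and tag == Tag.REJECT:    # transition 30'
        tag = Tag.TERM
    if msg.tag == Tag.REFUTE and tag == Tag.REFUTE:    # transitions 14-k, 16-k
        refutes += 1
        if refutes > k:                                # transitions 14', 16'
            tag = Tag.TERM
    return H2, tag, y, e, refutes
```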
### Protocol Properties
It is straightforward to show that communication between compatible agents using \(\mathtt{LXP}(\mathtt{k})\) is bounded:
**Proposition 1** (Bounded Communication).: _Let Fig. 3(c) represent the message graph of a collaborative session using the \(\mathtt{LXP(k)}\) protocol. Then any communication in the session has bounded length._
Proof.: All sessions commence with \(Init\) and end with \(Term\) message-tags. The result follows straightforwardly from the fact that Fig. 3(c) is a DAG except for the loops \(14\)-\(k\) and \(16\)-\(k\) at \(Refute\). But each of these loops can occur at most \(k\) times. Therefore any path between \(Init\) and \(Term\) is bounded.
We note that without the restriction to compatible agents, it is not possible to guarantee bounded communication (see Fig. 3(a), which contains several cycles). One- and Two-Way Intelligibility can be stated as properties of \(\mathtt{LXP(k)}\).
**Definition 6** (Messages sent in a Session \(S_{mn}\)).: _Let \(S_{mn}\) be a session between agents \(m\) and \(n\) consisting of a sequence of configurations \(\langle\gamma_{1},\gamma_{2},\ldots,\gamma_{k}\rangle\), where \(\gamma_{i}=(\gamma_{m,i},\gamma_{n,i})\). Let \(\gamma_{m,i}=(\cdot,\cdot,\mu_{m,i})\), \(\gamma_{n,i}=(\cdot,\cdot,\mu_{n,i})\), where \(\mu_{m,1}=+(n,(Init,\cdot))\) and \(\mu_{\cdot,k}=+(\cdot,(Term,\cdot))\). Then, the set of messages sent from \(m\) to \(n\) in \(S_{mn}\) is \(M_{mn}=\{\tau:\text{there exists }i\text{ such that }\mu_{m,i}=+(n,\tau)\}\). The set of messages sent from \(n\) to \(m\) in \(S_{mn}\) is \(M_{nm}=\{\tau:\text{there exists }i\text{ such that }\mu_{n,i}=+(m,\tau)\}\). The messages sent in a session \(S_{mn}\) are defined as the pair \((M_{mn},M_{nm})\)._

Figure 3: Message-graph obtained from: (a) the transitions listed in Appendix B; (b) the transitions in the LXP protocol, which is only between compatible agents (Defn. 4); and (c) the transitions defined in the LXP(k) protocol (Defn. 5), in which self-loops in LXP are removed.
**Definition 7** (One-way Intelligibility).: _Let \((M_{mn},M_{nm})\) be the messages sent in a session \(S_{mn}\). If there exists at least one element \((t_{i},\cdot)\in M_{mn}\) s.t. \(t_{i}\in\{\text{Ratify},\text{Refute},\text{Revise}\}\) then we will say \(S_{mn}\) exhibits One-Way Intelligibility for \(m\) using \(\mathtt{LXP}(\mathtt{k})\). Similarly for One-Way Intelligibility for \(n\)._
**Definition 8** (Two-Way Intelligibility).: _Let \((M_{mn},M_{nm})\) be the messages sent in a session \(S_{mn}\). If \(S_{mn}\) exhibits One-Way Intelligibility for \(m\) using \(\mathtt{LXP(k)}\) and One-Way Intelligibility for \(n\) using \(\mathtt{LXP(k)}\), then we will say \(S_{mn}\) exhibits Weak Two-Way Intelligibility for \(m\) and \(n\) using \(\mathtt{LXP(k)}\). If \(S_{mn}\) exhibits Weak Two-Way Intelligibility for \(m\) and \(n\) using \(\mathtt{LXP(k)}\) and there does not exist any \((t,\cdot)\in M_{mn}\cup M_{nm}\) s.t. \(t=Reject\) then we will say that \(S_{mn}\) exhibits Strong Two-Way Intelligibility for \(m\) and \(n\) using \(\mathtt{LXP(k)}\)._
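Both definitions depend only on the tags of the messages exchanged, so a session transcript can be classified mechanically; a sketch (using `Tag` from the message sketch above; note that Strong Two-Way Intelligibility implies the Weak form):

```python
def one_way(tags_sent: set) -> bool:
    """One-Way Intelligibility for the sender of these messages (Defn. 7)."""
    return bool(tags_sent & {Tag.RATIFY, Tag.REFUTE, Tag.REVISE})

def two_way(tags_m: set, tags_n: set) -> str:
    """Classify a session as per Defn. 8; tags_m/tags_n are the tags of
    the messages sent by m and by n respectively."""
    if not (one_way(tags_m) and one_way(tags_n)):
        return "none"
    return "weak" if Tag.REJECT in (tags_m | tags_n) else "strong"
```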
## 3 Limitations and Clarifications
Although \(\mathtt{LXP}(\mathtt{k})\) gives us a basis for inferring intelligibility from interaction, it is only a first step. The reader would have already identified some limitations, notably:
* Execution of \(\mathtt{LXP(k)}\) requires definition of the \(\mathtt{LEX}\) functions. As is evident from Defn. 4, transitions involve checks and updates that require significant local computation. Of these, the functions \(\mathtt{LEARN}\) and \(\mathtt{EXPLAIN}\) require attention. We comment on these below.
* The description of the protocol has been substantially simplified by two assumptions. First, that each session will deal with one data-instance at-a-time, and secondly, that the agents involved in the interaction are compatible. The first restriction means that data containing multiple examples will require multiple sessions. While learning and revision of hypotheses (the \(\mathtt{LEX}\) function \(\mathtt{LEARN}\)) is possible with single examples, for example with an on-line learner, it is more likely that several sessions may be needed to arrive at a suitable hypothesis.7 An alternative to multiple sessions is to modify messages to include multiple instances, along with their predictions and explanations. This would then require all the \(\mathtt{LEX}\) functions to be modified to account for this change. The issue of compatibility is related to the notion of'shared knowledge'. The notion of compatibility we have used implies a shared meaning of when two predictions are similar, and when two explanations agree. It may be possible to have bounded-length sessions without this common understanding. At this stage, we do not know how to achieve this. Footnote 7: A useful side-effect of single-instance sessions is that it allows the communication of instance-specific explanations (“this specific data instance has this label because...”). It has been suggested that this is a more effective way for humans to convey relevant information to a machine-learning engine [27].
* \(\mathtt{LXP}(\mathtt{k})\) is a synchronous protocol, somewhat like a plain old telephone system ("POTS"). That is, interaction between a pair of agents has to be terminated before commencing a new one. More elaborate protocols allowing concurrency may be possible, at the expense of greater
complexity in managing the global configuration of the system. It may require provenance information to be more detailed (like inclusion of session identifiers and session indices, along with the sender's identifier).
* The protocol does not account for noise in the channel, delays, or cost of communication between agents. Implementations will need to account for all of these aspects.
Finally, although not a limitation of \(\mathtt{LXP(k)}\), it is nevertheless useful to note that it is common for protocols to have a preliminary (hand-shaking) phase where some prior information is exchanged. We have not described this aspect in the paper, but it is the phase where \(\mathtt{LEX}\) agents can establish some 'common ground' needed for ensuring compatibility.
### Note on \(\mathtt{LEX}\) Functions
The principal \(\mathtt{LEX}\) functions are in 3 categories: (a) Inference (the function \(\mathtt{PREDICT}\)); (b) Induction (the function \(\mathtt{LEARN}\)); and (c) Justification (the function \(\mathtt{EXPLAIN}\)). The Boolean functions \(\mathtt{MATCH}\) and \(\mathtt{AGREE}\) are defined in terms of the functions in categories (a)-(c). It is our position in this paper that these constitute the basic functionality needed for agents that learn and explain.
On the face of it, it would appear that the \(\mathtt{LXP(k)}\) protocol requires \(\mathtt{LEARN}\) to construct hypotheses one instance at a time (that is, the protocol is restricted to on-line learners). However this is not the case. It is feasible that \(\mathtt{LEARN}\) may only construct a hypothesis after a reasonable number of instances have been accumulated. This could be reflected, for example, in the automaton receiving the instances sending a sequence of messages with \(Refute\) tags, and then a message with a \(Revise\) tag. Thus, communicating data instance-by-instance does not necessarily mean hypotheses also have to be revised instance-by-instance. Assuming that prediction is an important role for a hypothesis, the requirement of a \(\mathtt{PREDICT}\) function is unsurprising. However, the \(\mathtt{MATCH}\) function can still require some attention. It is straightforward if this is defined by the equality relation. But this may not be appropriate for some kinds of predictions, like numeric values. If a pair of numeric values are taken to match if they are within some tolerance, then the definition of \(\mathtt{MATCH}\) may not satisfy some intuitive properties of equality (like transitivity).
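A small worked instance of this point: with a tolerance-based MATCH (the tolerance value below is arbitrary), matching is reflexive and symmetric but not transitive:

```python
def match_tol(y1: float, y2: float, tol: float = 0.5) -> bool:
    """MATCH for numeric predictions: equal up to a tolerance."""
    return abs(y1 - y2) <= tol

# 1.0 matches 1.4, and 1.4 matches 1.8, yet 1.0 does not match 1.8:
assert match_tol(1.0, 1.4) and match_tol(1.4, 1.8) and not match_tol(1.0, 1.8)
```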
\(\mathtt{LXP(k)}\) requires all predictions to be accompanied by explanations. For some kinds of agents, like those based on logic, it is possible to identify \(\mathtt{EXPLAIN}\) with some known descriptors, like proofs, and \(\mathtt{AGREE}\) can be formulated in terms of well-understood logical operations, like consistency checking. For explanations in a less formal setting, like natural language, it is likely that obtaining a definition of \(\mathtt{AGREE}\) may require additional effort, and may require models constructed from data.
**Example 2**: _Let \(a_{m}\) and \(a_{n}\) be agents that use a logic-based representation, who have exchanged predicate-definitions encoding their domain-knowledge. Let \(H_{m}\) be the current hypothesis of \(a_{m}\) and \(H_{n}\) be the hypothesis for \(a_{n}\). Let predictions of a data-instance \(x\) by \(H_{m,n}\) be done by clauses of the form \(predict(X,C)\gets Body\), to be read as "The prediction of any instance X is C if the conditions in \(Body\) are true". Then possible \(\mathtt{LEX}\) functions for \(a_{m}\) (and similarly for \(a_{n}\)) are:_
1. \(y=\mathtt{PREDICT}_{m}(x,H_{m})\equiv(H_{m}\vdash predict(x,y))\) _(where_ \(\vdash\) _is a derivability relation);_
2. \(\mathtt{LEARN}_{m}\) _constructs hypotheses using techniques developed in Inductive Logic Programming_ _[_29_]__;_
3. \(e=\mathtt{EXPLAIN}_{m}((x,y),H_{m})\) _is the clause in_ \(H_{m}\) _used to derive_ \(predict(x,y)\)_;_
4. _If_ \(y_{m}\) _is a prediction by_ \(a_{m}\) _and_ \(y_{n}\) _is a prediction by_ \(a_{n}\) _then_ \(\mathtt{MATCH}_{m}(y_{m},y_{n}):=(y_{m}=y_{n})\)_; and_
_(e) if_ \(e_{m}\) _is a (clausal) explanation from_ \(a_{m}\) _and_ \(e_{n}\) _is a (clausal) explanation from_ \(a_{n}\) _then_ \(\mathtt{AGREE}_{m}(e_{m},e_{n}):=(e_{m}=_{\theta}e_{n})\) _(where_ \(=_{\theta}\) _denotes an equivalence relation based on the_ \(\theta\)_-subsumption as defined in_ _[_31_]__)._
_(These definitions are illustrative, and not the only ones possible with logic-based agents.)_
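A heavily simplified Python rendering of these choices, with instances and clause bodies as sets of ground literals, and clause-equivalence reduced to structural equality (a toy stand-in for the \(=_{\theta}\) relation of the example; all names and the feature literals in the usage comment are ours):

```python
# A clause is (head, body): "predict(X, head) <- body", ground for simplicity.
Clause = tuple

def predict_logic(x: frozenset, H: list) -> str:
    """Return the class of the first clause whose body is satisfied by x."""
    for head, body in H:
        if body <= x:                 # all body literals hold in the instance
            return head
    return "?"

def explain_logic(x: frozenset, y: str, H: list) -> Clause:
    """The clause used to derive predict(x, y)."""
    for head, body in H:
        if head == y and body <= x:
            return (head, body)
    return ("?", frozenset())

def agree_logic(e1: Clause, e2: Clause) -> bool:
    """Toy equivalence: identical head and body (in place of theta-subsumption)."""
    return e1 == e2

# Usage: H = [("covid", frozenset({"ggo", "bilateral"}))]
# predict_logic(frozenset({"ggo", "bilateral", "fever"}), H) -> "covid"
```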
### Role of The Oracle
The oracle \(\Delta\) has been used as the source of infallible information about the label for any data instance \(x\). In any session \(S_{m\Delta}\), \(\Delta\) receives a message \(-(m,(Init,(x,?,?)))\) denoting a query about the label and explanation for data instance \(x\). \(\Delta\) terminates the session with the message \(+(m,(Term,(x,y,\blacktriangle)))\), where \(y\) is the oracle's prediction for the label of \(x\) and \(\blacktriangle\) denotes that the explanation is an oracular statement. In our interaction model, \(\Delta\) never initiates a session; and never sends any message-tags other than \(Term\).
A question that arises is this: if the true labels of data-instances can be obtained directly from the oracle, why doesn't a \(\mathtt{LEX}\) agent communicate just with \(\Delta\)? One aspect we have not considered in developing the communication protocol is the cost of communication. Collaboration between \(\mathtt{LEX}\) agents will be worthwhile if: (a) communication to and from \(\Delta\) is significantly more expensive (in time or money or both) than communication between \(\mathtt{LEX}\) agents; and (b) the \(\mathtt{LEX}\) functions of any one agent can use predictions and explanations from other agents effectively. Of these, (a) is likely to be the case if the oracle is intended to model the acquisition of real-world data by manipulation and experimentation. The extent to which (b) holds will depend on whether agents are able to establish some common knowledge, and fulfil the requirements of compatibility.
The following example shows how collaboration can result in an effective use of either agent's knowledge of the oracle's prediction, thus possibly lessening the cost of communicating to the oracle.
**Example 3**.: _Let \(m,n\) be compatible \(\mathtt{LEX}\) agents with \(\mathtt{MATCH}_{m}(y_{m},y_{n}):=(y_{m}=y_{n})\) and \(\mathtt{MATCH}_{n}(y_{n},y_{m}):=(y_{n}=y_{m})\). Let \(S_{mn}(x)\) be a collaborative session between \(m\) and \(n\) s.t. \(S_{mn}\) terminates with a \(Ratify\) message-tag (strictly, \(Ratify\) followed by \(Term\)), with \(H_{m},H_{n}\) the hypotheses for \(m,n\), and \(D_{m},D_{n}\) the datasets for \(m,n\) respectively. If \((x,y,\blacktriangle,\Delta)\in D_{m}\) or \((x,y,\blacktriangle,\Delta)\in D_{n}\) then \(\mathtt{PREDICT}_{m}(x,H_{m})\ =\ \mathtt{PREDICT}_{n}(x,H_{n})\ =\ y\)._
**Remark 2**.: _If either \(m\) or \(n\) has an oracular prediction for \(x\) and the session ends with \(m,n\) reaching a consensus on prediction and explanation, then both agents will agree with the oracle's prediction for \(x\). This way, it is sufficient for only one of \(m\) or \(n\) to communicate to \(\Delta\) about \(x\). Extending to a set of instances \(X=\{x_{1},x_{2},\ldots,x_{k}\}\), the cost of communicating to the oracle can be reduced for both \(m\) and \(n\) by restricting oracle-communication for each agent to partitions \(X_{m}\) and \(X_{n}\) respectively._
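The cost saving in Remark 2 can be sketched as follows; `query_oracle` is an assumed function returning an oracular \((y,\blacktriangle)\) pair for an instance, and the even/odd split is an arbitrary choice of partition:

```python
def split_oracle_queries(X: list, query_oracle):
    """Each agent queries the oracle only on its own partition of X;
    oracular labels reach the other agent via collaborative sessions
    that end in Ratify (Example 3), halving each agent's oracle cost."""
    Xm, Xn = X[0::2], X[1::2]
    Dm = [(x, *query_oracle(x), "oracle") for x in Xm]
    Dn = [(x, *query_oracle(x), "oracle") for x in Xn]
    return Dm, Dn
```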
### \(\mathtt{LXP(k)}\) and the Intelligibility Axioms
We now reappraise the Intelligibility Axioms in Sec. 2.1 in terms of communication using \(\mathtt{LXP(k)}\) between a human- and a machine-agent. We can treat the axioms as a specification (\(Spec\)) for intelligibility and the protocol as an implementation (\(Impl\)) for intelligibility. Following [15], \(Impl\) correctly implements \(Spec\) iff \(Impl\ \models\ Spec\). We interpret this to mean that whenever intelligibility follows from Defn. 7 using \(\mathtt{LXP(k)}\), intelligibility is inferred using the axioms. It means whenever
the \(\texttt{LXP}(\texttt{k})\) protocol communicates a _Ratify_, _Refute_ or _Revise_ tag, the antecedent of one of the axioms should be true8. See the table below for the mapping.
Footnote 8: Here the mapping from the actions of the protocol to the antecedents of the axioms follows the intended interpretation.
| Axiom Type | Axiom Name | If LXP(k) action is: | Then Axiom's antecedent must be true: |
|---|---|---|---|
| \(H\to M\) | Machine-Confirmation | Machine sends _Ratify_ tag | Human explanation is ratified by machine |
| \(H\to M\) | Machine-Refutability | Machine sends _Refute_ tag | Human explanation is refuted by machine |
| \(H\to M\) | Machine-Performance | Machine sends _Revise_ tag | Human explanation improves machine performance |
| \(M\to H\) | Human-Confirmation | Human sends _Ratify_ tag | Machine explanation is ratified by human |
| \(M\to H\) | Human-Refutability | Human sends _Refute_ tag | Machine explanation is refuted by human |
| \(M\to H\) | Human-Performance | Human sends _Revise_ tag | Machine explanation improves human performance |

Treating the interaction between human and machine as a single session \(S\) of communication using \(\texttt{LXP}(\texttt{k})\), it is evident that:
1. If \(S\) exhibits One-Way Intelligibility for the machine using \(\texttt{LXP}(\texttt{k})\), then we can infer machine-intelligibility using the \(H\to M\) axioms;
2. If \(S\) exhibits One-Way Intelligibility for the human using \(\texttt{LXP}(\texttt{k})\), then we can infer human-intelligibility using the \(M\to H\) axioms;
3. If \(S\) exhibits Weak Two-Way Intelligibility for human and machine, then we can infer machine- and human-intelligibility using the \(H\to M\) and \(M\to H\) axioms.
By adopting \(\texttt{LXP}(\texttt{k})\) as a protocol, we are immediately assuming compatibility constraints on the MATCH and AGREE functions used by human and machine. In fact, the design constraints in the table impose additional constraints relating the antecedents of the axioms, the \(\texttt{LEX}\) functions MATCH and AGREE, and the actions performed by either agent. For example: given the human's and machine's prediction and explanation, the human's MATCH and AGREE functions both being true are taken as sufficient conditions for the machine's explanation being ratified by the human. The most interesting consequence results from the performance axioms. Take the Machine-Performance axiom. If a \(Revise\) tag is sent from the machine to the human, one of MATCH or AGREE for the machine must be false before revision, and both MATCH and AGREE must be true after revision. For this to be sufficient for improvement in machine-performance means that the performance-measure used by the machine depends on the values of MATCH and AGREE before and after revision.
## 4 Related Work
In Appendix A we referred to a number of sources that are relevant to one or the other of the Intelligibility Axioms in Sec. 2.1. In this section we turn to some other relevant work. From the framework developed in this paper, intelligibility can be seen as a ternary relation involving: the information-provider, the information provided, and the information-recipient. In the literature on Explainable ML, this has re-emerged as an important requirement for the acceptability of ML (see [26] for a recent example citing earlier work [14], and [24] for an early identification of this). Furthermore, the explainer and the explainee can be, at different times, the same person or agent.
A large literature has built up over recent years addressing the problems of inscrutable "black-box" models generated by some modern machine learning techniques, which then require mechanisms for "explainability" [11]. There are several excellent reviews available on Explainable Machine Learning (XML), and we refer the reader to them. More broadly, the origins of XML can be found in a prominent DARPA project, launched in 2016, titled "Explainable Artificial Intelligence (XAI)" [7], which is credited with initiating efforts in XML [1]. In fact, XAI itself can be viewed as a continuation of pre-existing research trends: earlier approaches in machine learning largely used models based on knowledge representations developed in artificial intelligence that were designed to be interpretable (such as rules, decision trees or logical theories), whereas only recently has the drive for accuracy on ever-larger datasets led to models that require explanation [34].
The DARPA project used the term "explainability" to denote the use of techniques to generate an explanation for a model [13]. A distinction can therefore be made between _interpretable_ models, which are constrained to work in a way that is (at least, in principle) understandable to humans, and _explainable_ methods, which are applied to the results obtained from black-box models [34]. This difference is similar to the distinction made between model transparency and _post hoc_ explainability for black-box models [21]. However, interpretability is not simply a property of a class of models, but also of the data, feature engineering, and other factors, and, as such, it may be difficult to achieve in applications [21, 34]. Explainability by _post hoc_ methods suffers from problems of approximation, since the explainable model is not usually the model responsible for making the predictions. Recent criticism of _post hoc_ explainability suggests such methods should not be adopted for certain medical [3] and legal [45] applications. In particular, it appears that such explanations may not be able to satisfy the legislative requirements for avoidance of discrimination, for example [45].
In presentations of the problem of explainability, it is usually assumed there is a human to which a machine is providing the explanation; in the terminology we have used in this paper, this means One-Way Intelligibility for the human. Even in this limited setting, considering only _techniques_ for explanation ignores other issues, like the role of the explainee, for whom the explanation is intended, which is contrary to evidence from social science [26]. For instance, the diversity of explainees suggests thinking of explanation in terms of the understanding of the explainee [39]. From this standpoint, [38, 39] propose that explanation should be a bi-directional process rather than a one-off, one-way delivery of an explanation to a recipient. Therefore explainability is a process involving reasoning over interpretable insights that are aligned with the explainee's background knowledge. The proposal is that an explainer should allow explainees to interact with and rebut explanations; in the proposed framework the entire explanatory process is based on a reasoning system enabling communication between agents.
We are aware of at least two threads of work in the ML literature that recognise some of the aspects just described. In Argument-Based ML, or ABML [47, 27], data provided by a human to an ML engine includes explanations formulated in terms of background knowledge shared between human and machine. These explanations constitute 'arguments' for the label for data instances, and the ML system attempts to construct hypotheses that are consistent with the explanations. Learning from explanations was employed by early ML systems like MARVIN [35] that performed
supervised-learning in the original sense of a human guiding the construction of hypotheses by a machine. The learning protocol employed in such systems is naturally interactive, involving human-presentation of data along with explanations (if any), and revision of hypotheses by machine. The interaction can result in Two-Way Intelligibility if interactions consist of human refutations of a machine's explanations, and the machine revising its hypothesis in response. However, no refutation of the human's explanation is envisaged within ABML, nor is any revision of his or her hypothesis.
A second thread of work is that on Explanatory Interactive Learning (XIL: [43; 40]). Although the methods proposed in [39] allow the human to interact with the machine learner, this approach is more fully realised in XIL, with the implementation of a revision capability based on the combination of active machine learning with machine-to-human explainability interaction. The ML component of XIL uses active learning to identify an "informative" (unlabelled) instance. This instance is labelled by the machine's hypothesis, and an explanation is then generated. While the specific mechanism for generating an explanation is not important, XIL does require the prediction and explanation to be characterised into one of 3 categories by a human-in-the-loop: (i) "right for the right reasons"; (ii) "wrong for the wrong reasons"; and (iii) "right for the wrong reasons" [33]. Nothing is done in categories (i) and (ii), but category (iii) initiates a revision of the machine's hypothesis using a range of XAI techniques [42]9.
Footnote 9: The category “wrong for the right reasons” does not appear to have been addressed in XIL.
In work on XIL applications to computer vision, where the learner is implemented by a deep network, the key intermediate step of defining "concepts" was introduced to better communicate with the human and to enable any required revisions [40].10
Footnote 10: Such _concept-based_ explainability methods have also been applied more widely for deep learning [46]. For example, concept-based explanations have recently been applied in an attempt to comprehend the chess knowledge obtained by AlphaZero, and concluded that this approach did reveal a number of relationships between learned concepts and historical game play, from standard opening moves to tactical skills, as assessed by a former world chess champion [23]. This is an illustration of the phenomenon predicted by Michie (see above, Thompson’s table).
The characterisation of predictions and explanations in XIL is similar in spirit to the categories defined by the guard functions in this paper. As with ABML, an XIL system demonstrates Two-Way Intelligibility when the human provides refutations (in XIL, this is done by augmenting the data) and the machine revises its hypothesis. Also in common with ABML, the machine does not provide refutations for the human's prediction and/or explanation, and revision of the human's hypothesis is not envisaged. Finally, although not cast as either ABML or XIL, implementations of incremental learning designed for human-machine collaboration are nevertheless related to aspects of both. Recent examples of collaboration using symbolic ML are in [32] for applications to computer security, and using a combination of neural and symbolic learning for collaborative decision-making systems for medical images [36]. Again, these approaches exhibit a form of two-way intelligibility, since they rely on the human providing refutations and the machine improving its performance as a result.
## 5 Concluding Remarks
In this paper we have sought to look beyond the current practice in Explainable Machine Learning. It is our contention that for problems which are sufficiently complex that neither human nor machine have complete answers, designers of ML-based decision-support tools must be concerned with mutual intelligibility of information. Based on this position, we have sought to characterise
intelligibility as a property of interaction. For the design of ML systems with humans-in-the-loop this means both a human and a machine need to provide explanations in a form that the other can inspect and evaluate critically, and respond meaningfully to that evaluation.
But the requirement is easier to state than to achieve. There are 3 aspects to consider: (a) The representation used by either side for explanations; (b) How to evaluate critically; and (c) The rules of communicating the result of evaluation. In this paper, we have focused on (c), with the goal of inferring intelligibility from the messages exchanged. Nevertheless, we comment on (a) and (b) below, since they are relevant to LEX agents of the kind we consider in this paper.
Aspect (a) is concerned with how human knowledge about the world should be represented in order to be useful to a machine, and _vice versa_. McCarthy [22] envisaged this would be done as statements in a formal language. But this does not have to be necessarily the case. For example, it may be sufficient for information provided by the human to transform some or all of the inputs of the ML system, like the data, utility function, structure or parameters. However, we do not as yet have a way to do this satisfactorily for all kinds of formal languages and all kinds of ML systems. In the short- to medium-term, therefore, we believe this will require the human decision-maker to have a working knowledge of the ML system, or to employ someone who does. In the longer term, neither of these options is practical, and decision-support tools will need mechanisms to receive information in natural language, and perform any manipulations to their inputs internally. Some progress is being made on processing information in a natural language (see for example, [41]), but we are still far from being able to do this well. Assuming we have the human-supplied information in some machine-usable form, the machine may still need to ask the decision-maker questions to clarify any ambiguities or inconsistencies. To resolve these would undoubtedly need the machine to be able to generate its own data, and ask questions about them. How should it represent these questions? Again, in the long run, natural human-computer interaction techniques seem inevitable [6].
The problem in (b) is concerned with the human evaluation of a machine's explanation, and _vice versa_. Human evaluation would clearly be easier if the explanation employed concepts that the human can recognise. In the near-term, the machine can achieve this in one of two ways. First, the machine can elect to employ only those concepts that are already known to the human. The identification of Covid patients in the previous section is an example: the features extracted from chest X-rays were restricted to those identified by a radiologist. A second way is for the machine to show instantiations through some textual or visual means. For example, ML systems that attempted to discover chess concepts showed board positions exemplifying the concept. The chess-expert may then be able to map the machine-identified concept to some part of their chess vocabulary (such as "this is really about Kings in opposition"). In the long-term, as the problems, data and ML systems get more complex, it may not be possible to engineer an adequate set of shared concepts beforehand. Establishing what exactly a concept 'invented' by the machine really means will be a challenging task. The converse problem, of critical evaluation of a human's explanations, may prove more amenable. If the explanations are translatable into a formal language, then tools developed for model- and proof-checking can be adapted to the task. The recent promise of large-scale language models suggests that even if explanations are in a natural language, it may be possible to develop language models for approximating idealised forms of logical reasoning.
Aspect (c)-when and what to communicate-is the most amenable to analysis, given the long history of developing communications protocols. Surprisingly, although there is little dispute that explanations are intended to be communicated, little attention is paid to effective interaction between the agents constructing the explanations. We identify effectivity with intelligibility, and
show how this can be postulated as a property of a communication protocol between agents capable of learning to predict and explaining the predictions. We suggest that the design of 'two-way intelligibility' protocols should be part of the design and analysis of Explainable AI (XAI) systems from the 'ground-up'. It is not coincidental that three of the R's we have identified here -- _Refute_, _Revise_, and _Ratify_ -- are at the heart of advancing understandability in Science. We suggest they may play a similar role in evolving a shared understanding of data by human-machine systems.
**Acknowledgements.** AS is a Visiting Professor at Macquarie University, Sydney and a Visiting Professorial Fellow at UNSW, Sydney. He is also the Class of 1981 Chair Professor at BITS Pilani, Goa, the Head of the Anuradha and Prashant Palakurthi Centre for AI Research (APPCAIR) at BITS Pilani, and a Research Associate at TCS Research. MB acknowledges support in part by Rich Data Co. Pty and the Australian Government's Innovations Connections scheme (awards ICG001855 and ICG001858). EC is supported by an NHMRC investigator grant. Many of the results reported here are from collaborative work done by the authors with colleagues at several institutions. We would especially like to acknowledge the role played by: Tirtharaj Dash, Rishabh Khincha, Soundarya Krishnan, Lovekesh Vig, Arijit Roy, Gautam Shroff, Paul Compton, Ross King and Stephen Muggleton. AS and MB owe a debt of gratitude to Donald Michie, who shaped much of their thinking on the views expressed in this paper.
|
2303.08922 | UHECR Signatures and Sources | Abstract. We discuss recent results on the clustering, composition and
distribution of Ultra-High Energy Cosmic Rays (UHECR) in the sky; from the
energy of several tens of EeV in the dipole anisotropy, up to the highest
energy of a few narrow clusters, those of Hot Spots. Following the early UHECR
composition records showing deviations from protons, we noted that the UHECR
events above 40 EeV can be made not just of any light or heavy nuclei, but
mainly of the lightest ones, such as He, D, Li, Be. The remarkable Virgo absence
and the few localized nearby extragalactic sources, such as CenA, NGC 253 and
M82, are naturally understood: the lightest UHECR nuclei cannot reach us from
the Virgo distance of twenty Mpc, due to the fragility of these nuclei over
distances above a few Mpc. Their deflection and smearing into wide hot spots is
better tuned to the lighter nuclei than to the preferred proton or heavy-nuclei
candidate couriers. We note that these lightest nuclei still suffer a partial
photo-destruction even from such close sources. Therefore, their disruption into
fragments, within multiplet chains of events at a few tens of EeV, has been
expected, and later on observed by the Auger collaboration, nearly a decade ago.
These multiplets strongly correlate with the same CenA, NGC 253 sources. The
statistical weight of such a correlation is recalled. We conclude that the same
role of NGC 253 clustering at lower energies could also feed the Auger dipole
anisotropy at lower energy ranges, integrated by contributions from the nearest
Vela, Crab, LMC and Cas A. In our present UHECR model, based on the lightest
nuclei in local volumes of a few Mpc, the closest AGN, star-burst or very close
SNR are superimposing their signals, frozen in different epochs, distances and
directions, feeding small and wide anisotropies. Possible tests to confirm, or
untangle the current model from alternative ones, are suggested and updated. | Daniele Fargion, Pier Giorgio De Sanctis Lucentini, Maxim Y. Khlopov | 2023-03-15T20:39:40Z | http://arxiv.org/abs/2303.08922v1 | # UHECR Signatures and Sources
###### Abstract
We discuss recent results on the clustering, composition and distribution of Ultra-High Energy Cosmic Rays (UHECR) in the sky, from the energy of several tens of EeV in the dipole anisotropy, up to the highest energy of a few narrow clusters, those of Hot Spots. Following the early UHECR composition records showing deviations from protons, we noted that the UHECR events above 40 EeV can be made not just of any light or heavy nuclei, but mainly of the lightest ones, such as He, D, Li, Be. The remarkable Virgo absence and the few localized nearby extragalactic sources, such as CenA, NGC 253 and M82, are naturally understood: the lightest UHECR nuclei cannot reach us from the Virgo distance of twenty Mpc, due to the fragility of these nuclei over distances above a few Mpc. Their deflection and smearing into wide hot spots is better tuned to the lighter nuclei than to the preferred proton or heavy-nuclei candidate couriers. We note that these lightest nuclei still suffer a partial photo-destruction even from such close sources. Therefore, their disruption into fragments, within multiplet chains of events at a few tens of EeV, has been expected, and later on observed by the Auger collaboration, nearly a decade ago. These multiplets strongly correlate with the same CenA, NGC 253 sources. The statistical weight of such a correlation is recalled. We conclude that the same role of NGC 253 clustering at lower energies could also feed the Auger dipole anisotropy at lower energy ranges. Such lower-energy anisotropy could be fed and integrated by contributions from the nearest Vela, Crab, LMC and Cas A. In our present UHECR model, based on the lightest nuclei in local volumes of a few Mpc, the closest AGN, star-burst or very close SNR are superimposing their signals, frozen in different epochs, distances and directions, feeding small and wide anisotropies. Possible tests to confirm, or untangle, the current model from alternative ones are suggested and updated.
+
Footnote †: e-mail: [email protected]
## 1 Introduction
Cosmic rays (CR) mainly contain charged particles. As charges they are bent, deflected and smeared in their flight through cosmic and galactic magnetic fields, losing most of the astrophysical source imprint. Thus, for a century, the CR origin remained a mystery and a compelling question continually addressed by high-energy astrophysics [1]. Indeed the Ultra High Energy CR (UHECR) nature itself connects the most violent sources of the Universe with the deep inner secrets of nuclear and particle physics.
The extreme boundaries of micro and macro physics find a rare connection in the UHECR-source correlation. We offer here our reading key, based on the maps and composition data discovered in the last decades. UHECR protons above an EeV (\(10^{18}\) eV) are expected to be less deflected because of their harder rigidity. They may more closely represent the location of their source, leading to connections on an astrophysical or cosmic scale.
With increasing energy the UHECR protons are deflected less and less, but above nearly 60 EeV, where they should be somewhat beamed within a few degrees, they also become opaque and constrained by the Cosmic Black Body, due to photopion production. That effect is known as the GZK cutoff [2; 3].
Such UHECR nucleons are then enclosed within one to two hundred Mpc, leading to a small volume bound within our much larger Universe (4 Gpc). This proton scenario, if real, could facilitate the identification of the sources associated with such an inhomogeneous mass distribution at this scale.
UHECRs could also be nuclei, but these are affected by increased deflection and smearing as well as a partial photopion opacity. In addition, the lighter UHECR nuclei undergo additional photo-nuclear destruction, reducing the UHECR propagation distance to a few Mpc. Their fragments at tens of EeV must be lighter and more diffused. Therefore, the path of an incoming UHECR depends not only on the energy but also on the mass, charge and composition of the courier (such as p, He, N, Fe, Ni, ...), on the intensity of the magnetic fields encountered, and on the timing of any in-flight decays.
Indeed, for a more in-depth understanding of the phenomenon, we have to consider not only the distribution of the possible sources, but also the _history_ of the CR, the timing of the detected signals, and the delay due to the random walk [4], each of which sets specific limits on the allowed energy windows.
Candidate sources, whether small, such as SN, GRB and star-bursts, or larger, such as AGN jets, can be found both far from us and close by (even in our galaxy) in a complex, inhomogeneous cosmic structure. We show how this difficult source-correlation search can be approached and partially solved by constraints based on the last two decades of observational discoveries [1] on UHECR composition, their clustering and anisotropies at different energy windows.
## 2 UHECR composition and Virgo absence
In the early 2000s most of the theoretical models were based on a proton composition for UHECR at the GZK energy limits: \(6\cdot 10^{19}\)-\(10^{20}\) eV. In fact, the GZK cutoff was the main feature looked for in the UHECR spectra. Protons at such energies should cluster within tight angles, \(2^{\circ}\) for coherent bending, or near \(8^{\circ}\) for incoherent random bending [5]; they must also be contained in volumes of \(1-2\) hundred Mpc, the GZK volume. Thus they were expected to rise and shine from the nearest and most mass-populated regions, similar to how infrared galaxies paint the infrared sky: the Virgo cluster, at a distance of about 20 Mpc, is the dominant mass in the Auger southern hemisphere sky. Surprisingly, at those energies the UHECR signals in Auger, and later also in Telescope Array (TA) in the northern hemisphere, did not show a sharp signature confirming this expectation: the _surprising Virgo absence_.
Rather, both Auger and TA discovered some wider (\(8^{\circ}-20^{\circ}\)) clustering elsewhere, uncorrelated with Virgo. A first Hot Spot in the southern sky pointed to Cen-A, the nearest AGN in the Auger sky. A second Hot Spot in the northern sky was found by TA, pointing toward the nearby AGN M82. A third clustering, or Hot Spot, was later recognized [6], pointing to NGC 253, a micro-AGN or star-burst galaxy, also in the Auger sky.
A combined map of the Auger and TA signals is presented in equatorial coordinates in Fig. 1. In the center lies the area with the most abundant dark dots: the thousands of galaxies of the Virgo Cluster, at a distance of 18 Mpc, well within the GZK volume [7].
Assuming UHECR protons, the Virgo absence was (and still is) puzzling and inexplicable. Since 2007, harder spectra in Auger composition studies [8], based on the observed profiles of the most energetic UHECR air showers, have disfavored protons above 10 EeV in favour of light and lightest nuclei. Protons have actually only been found to be dominant at sub-EeV energy ranges. This led us to suggest that the absence of Virgo could be explained if UHECRs consist mainly of lighter nuclei, such as D, He, Li, Be [9; 10; 11; 12].
The lightest nuclei are indeed too fragile to survive the 20 Mpc flight from Virgo, due to photo-nuclear disruption. On the other hand, Cen-A, M82 and NGC 253 are just a few Mpc away, much closer than Virgo. These smaller distances allow most UHECRs to survive photo-nuclear disruption and opacity. These sources are among the ideal candidate star-burst or active AGN around us, whose signals are well tuned and observable in the Auger-TA data, as the data of the last decade have shown. Only a decade later (2017), the UHECR signature of the light nuclei was largely confirmed (see note on page 26 of [13]) by detailed slant-depth-averaged models of the UHECR shower maximum, which show the key role of light and even the lightest nuclei.
Therefore, around \(3\cdot 10^{19}\) eV, UHECRs, mainly He-like nuclei, may be cosmic but, as already mentioned, extremely local, just within a few Mpc, almost all in our Local Group [6].
Other near and far cosmic UHECR sources could further add a diluted, homogeneous and isotropic noise. Rarer and heavier nuclei, such as Ni and Fe, may also be present at higher energies (around or above \(7\cdot 10^{19}\) eV), but so bent and smeared that they are often confined within our own galaxy [11]. Ultimately, these near candidates, Cen-A, M82 and NGC 253, contribute the most to the local anisotropy [6].
### Hot Spot and correlated multiplet fragments
The lightest nuclei, whose charge is two to four times that of the proton, undergo a somewhat larger deflection than protons, with a smearing comparable to those observed in the Southern and Northern skies. The charge \(Z=2-4\) of the lightest nuclei explains the observed sizes of the Hot Spots, due to their random incoherent deflection at about \(3\cdot 10^{19}\) eV, assuming He-like nuclei are bent within angles of \(16^{\circ}-32^{\circ}\) [5].
We note that other light but heavier nuclei, like N and O, suffer no photo-nuclear disruption from Virgo, but are smeared over a wider solid angle. These nuclei would cause a signal from Virgo diluted over a large angle of the sky, \(56^{\circ}-64^{\circ}\), with a possibly negligible role in the observed Hot Spots, but with a potential role in the dipole anisotropy. Such a dipole signature toward Virgo is again absent: it points elsewhere, almost half the sky away. Therefore, this is a compelling argument for considering UHECR nuclei consisting mostly of the lightest ones, unable to reach us from Virgo.
Figure 1: The combined map, in equatorial coordinates, based on the Auger and TA data. Note in red the two Hot Spot UHECR clustering signals. The highest UHECR event-rate densities are in the north around the AGN M82 for TA and in the south around Cen-A for Auger. The dark dots circled in red in the center [7] correspond to the closest potential sources included in the GZK volume, pointing to the Virgo cluster. Those sources were ideal for any UHECR proton carrier. The lighter nuclei are much more limited, to a few Mpc, by photo-nuclear destruction. These are the ideal candidates that can explain the absence of Virgo in the Auger map.
The same photo-nuclear disruption can occur for D, He, Li, Be nuclei even from a nearby Cen-A or NGC 253; in part it had to happen, and, as we have noted, it happened. These fragments were expected to be centered on, or distributed along, the main Hot Spot sources in the Auger sky: Cen-A, as well as the additional clustering around NGC 253. M82 is not visible from the Auger sky, but is within the TA sky.
We had foreseen [5; 11] such a tail of fragments around Cen-A a couple of years before its observation by Auger [14] in 2012; see Figure 2. In the figure, three multiplets of events are marked with black dots. Their extrapolated endpoints are marked with a blue cross. We note that two of them are within \(8^{\circ}\) of Cen-A, and the third is within \(8^{\circ}\) of the other relevant source, NGC 253.
One may inquire how often such a correlation may occur by chance. The solid-angle distance of each source from its fragment-tail cross center is about \(7^{\circ}-8^{\circ}\), which represents less than 1% of the Auger sky.
The probability that two such related events occurred within 3 trials, each of which points to the same Cen-A area covering 1% of the sky, is less than \(2.97\cdot 10^{-4}\).
Furthermore, the probability that three signals appear above the two selected sources, in an area that is 2% of the Auger sky, by chance is about \(8\cdot 10^{-6}\). We recall that Cen-A and NGC 253 are the only two such sources in the Auger sky.
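These chance estimates follow from elementary binomial counting; the short check below reproduces the quoted numbers under the simplifying assumption (ours, for illustration only) that the three multiplet endpoints fall independently and uniformly on the Auger sky.

```
from math import comb

# Chance-coincidence check for the quoted probabilities.
p = 0.01                             # one 7-8 degree patch: about 1% of the Auger sky
p_two = comb(3, 2) * p**2 * (1 - p)  # exactly 2 of the 3 endpoints hit the Cen-A patch
print(p_two)                         # ~2.97e-04, the quoted value
print(0.02**3)                       # all 3 endpoints inside the 2% area: 8.0e-06
```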
Thus, the directional clustering of the fragments towards our a priori selected sources offers another strong argument in favor of the current model of lighter nuclei.
### UHECR at AGN 3C 454: a Z resonance?
A few unexplained UHECR events correlated with very distant AGN (such as 3C 454), if confirmed, may require a different mechanism to exceed the GZK cut-off: a ZeV neutrino that scatters on the relic neutrinos, forming the final UHECR and effectively overcoming any GZK cut-off. The calorimeter for ZeV neutrino scattering is the huge halo of light (even sterile) relic cosmic neutrinos spread around the source galaxy or local group. Such scatterings convert the ultrarelativistic energy of the Z boson, through its decay, into secondary nucleons and antinucleons. These nucleons can reach us, overcoming any GZK distance. These UHE ZeV neutrinos [16], complementary to the GZK neutrinos at tens of EeV, could rise as ascending or horizontal air showers in deep valleys [17], induced by a tau decay skimming and escaping from the Earth [18].
## 3 Conclusion
Many scenarios, inspired by many spare parameters, are confusing the UHECR connection to their sources. The UHECR anisotropy [19] and asymmetry [20] show that their main origin is within a very local Universe, not a hundred Mpc away.
UHECR are very probably formed in jets, both in AGN quasars and in micro-quasars (star-forming regions, accretion disks and ejecta), powered by black hole or neutron star binary tidal disruptions. These jets [21] may arise both in AGN and in smaller-size GRB, micro-quasars and SGR, shining at different epochs and locations. The charged hadronic UHECR nuclei flying from those jets are possibly bent and spread by random walk, suffering large time delays. This explains why they are not expected to be correlated with presently optically active AGN at distances of
Figure 3: The UHECR dipole clustering in equatorial coordinates. Blue, green, red, and violet points and curves indicate different energy ranges (4-8, 8-16, 16-32 and \(>\)32 EeV). A few suggested galactic or nearby extragalactic source candidates are labeled. The role of the main extragalactic source, the star-burst NGC 253, is well correlated with the \(8-16\) and \(16-32\) EeV anisotropy energy ranges. An additional minor influence of galactic sources, such as Crab, Vela and LMC, could explain the lower-energy, 4 EeV, anisotropy and its directional variability as the energy increases.
Figure 2: The galactic UHECR map overlaid on the recent 39 EeV anisotropy map, showing the earliest (2011) UHECR events and the associated 20 EeV smeared multiplets, up and down with respect to the Cen A position. We interpret these nearly vertical spreads of events, followed by the fragment signals, as due to the horizontal planar galactic spiral magnetic fields, whose flip up and (later) down during the UHECR flight leads to an up-down deflection of the charged He, as well as of the D, Li, Be lightest UHECR fragments. Let us note, here for the first time, also the presence of a train of UHECR fragments located around the SMC and LMC galactic direction, pointing to the NGC 253 source at the South galactic Pole [14]. The probability that these fragments are correlated by chance with both of the two main UHECR source candidates in the Auger sky, Cen-A and NGC 253, is quite negligible, below 0.001%.
a hundred Mpc [4]. There is a possibility that the heavier UHECRs, such as Ni, Co, Fe, are mostly bent, twisted and even contained in spiral paths within the halos of their own origin. The lightest, _Helium-like_ nuclei, possibly secondary fragments of heavier ones, are quite directional; they are able to escape from those AGN galaxy and star-burst sources, such as the nearby Cen-A, NGC 253 and M82, reaching us with some memory of their origin in the observed few Hot Spots.
These lightest UHECR nuclei at tens of EeV have the virtue of flying only within a space of a very few Mpc, hiding the presence of Virgo and offering only a local-group detection view. Protons only play a role at EeV energies or lower, escaping hundreds of Mpc, rather smeared in their arrival directions. Their bending and overlapping should lead to a homogeneous, isotropic sky, like the one observed in the EeV energy windows. Lastly, the very few and rare heaviest and more energetic Ni-Co-Fe UHECRs can also arise around our galactic halo, perhaps also polluted by the nearest galactic sources.
In summary, the lightest nuclei in UHECR tag just a few sources in the very local universe: Cen-A, M82, NGC 253; AGN, star-burst or micro-quasars. Also at lower energy, in the \(4-30\) EeV range, UHECR feed an Auger dipole anisotropy, mostly linked to NGC 253, see Fig. 3, and partially to Vela, Crab, LMC, SMC. Other minor, unexplained UHECR clusters could, if confirmed, revive the earliest Z boson resonance models, based on UHE ZeV neutrinos scattering on massive relic cosmic ones; see Fig. 4. The eventual multi-peak energy signature [22] in their spectra may in principle, in a far future, test the different neutrino mass splittings or even the recent sterile neutrino mass candidatures. Their final nucleon-antinucleon shower composition signature (not the lightest-nuclei one) is a key test to verify and disentangle their Z boson nature; see Fig. 4, [4].
In conclusion, the early popular proton courier at the GZK energy of \(40-60\) EeV is no longer a viable UHECR candidate [8]. The recent, simplest models based on the nature of nearby star-burst versus AGN sources, while ignoring the role of the UHECR nuclear composition, cannot explain the puzzling absence of Virgo. In a sentence, for the moment, the lightest nuclei [9] and their clustering along a few correlated local sources [6] are the first and main guaranteed messages from the last decade of Auger and TA data.
The three tens-of-EeV multiplets in the Auger sky, two pointing towards Cen-A and one towards NGC 253, as predicted [10; 11; 23] and observed [14] (see Fig. 2), provide key support for the lightest-nuclei model.
## Acknowledgement
The research by M.K. was financially supported by Southern Federal University in the framework of the State contract with the Ministry of Science and Education of Russian Federation.
|
2302.07938 | Scalable Multi-Agent Reinforcement Learning with General Utilities | We study the scalable multi-agent reinforcement learning (MARL) with general
utilities, defined as nonlinear functions of the team's long-term state-action
occupancy measure. The objective is to find a localized policy that maximizes
the average of the team's local utility functions without the full
observability of each agent in the team. By exploiting the spatial correlation
decay property of the network structure, we propose a scalable distributed
policy gradient algorithm with shadow reward and localized policy that consists
of three steps: (1) shadow reward estimation, (2) truncated shadow Q-function
estimation, and (3) truncated policy gradient estimation and policy update. Our
algorithm converges, with high probability, to $\epsilon$-stationarity with
$\widetilde{\mathcal{O}}(\epsilon^{-2})$ samples up to some approximation error
that decreases exponentially in the communication radius. This is the first
result in the literature on multi-agent RL with general utilities that does not
require the full observability. | Donghao Ying, Yuhao Ding, Alec Koppel, Javad Lavaei | 2023-02-15T20:47:43Z | http://arxiv.org/abs/2302.07938v2 | # Scalable Multi-Agent Reinforcement Learning with General Utilities
###### Abstract
We study scalable multi-agent reinforcement learning (MARL) with general utilities, defined as nonlinear functions of the team's long-term state-action occupancy measure. The objective is to find a localized policy that maximizes the average of the team's local utility functions without full observability of each agent in the team. By exploiting the spatial correlation decay property of the network structure, we propose a scalable distributed policy gradient algorithm with shadow reward and localized policy that consists of three steps: (1) shadow reward estimation, (2) truncated shadow Q-function estimation, and (3) truncated policy gradient estimation and policy update. Our algorithm converges, with high probability, to \(\epsilon\)-stationarity with \(\widetilde{\mathcal{O}}(\epsilon^{-2})\) samples up to some approximation error that decreases exponentially in the communication radius. This is the first result in the literature on multi-agent RL with general utilities that does not require full observability.
## 1 Introduction
Many decision-making problems take a form beyond the classic cumulative reward, such as apprenticeship learning [1], diverse skill discovery [2], pure exploration [3], and state marginal matching [4], among others. Such problems can be abstracted as _reinforcement learning (RL) with general utilities_ [5, 6], which focuses on finding a policy to maximize a nonlinear function of the induced state-action occupancy measure. This generalizes standard RL, in which the objective is only an inner product between the state-action occupancy measure induced by the policy and a policy-independent reward for each state-action pair.
Beyond single-agent RL, consider the multi-agent problem where different agents need to interact to obtain a favorable outcome by finding a decision policy that maximizes the global accumulation of all agents' general utilities. This setting captures a wide range of applications, e.g., epidemics [7], social networks [8], finance [9], intelligent transportation [10] and wireless communication networks [11]. Recently, [12] proposed a new mechanism for cooperation that allows agents to incorporate general utilities for multi-agent RL (MARL) with common payoffs among agents. To enable the decentralization of agents' policies under general utilities, [12] defines the local occupancy measure of each agent as a marginalization of the global occupancy measure, and defines the local general utility of the agent as an arbitrary function of its local occupancy measure. Based on these definitions, [12] derives a policy gradient-based algorithm, namely Decentralized Shadow
Reward Actor-Critic, where each agent estimates its policy gradient based on local information and communications with its neighbors.
However, their approach assumes full observability, i.e., each agent should have access to the global states and actions of the team. Such an assumption has two limitations. First, it is expensive and sometimes impossible to communicate with all agents in the team when the size of the team is large. In addition, full observability implies that the policy and critic networks in this approach depend on the global states and actions of the team, which may be a barrier to effective decentralized implementation in practice. Moreover, even though individual state and action spaces are often small, the size of the global state and action spaces can be exponentially large in the number of agents, which can be fundamentally intractable for large numbers of agents [13].
To address these issues, we aim to develop a scalable algorithm for multi-agent RL with general utilities without the full observability assumption. Inspired by the localization idea proposed in [14], our work makes the following contributions:
* We derive a truncated policy gradient estimator using the shadow reward and the localized policy for MARL with general utilities. We further establish the approximation error of the proposed truncated policy gradient estimator based on the spatial correlation decay assumptions;
* We propose a distributed policy gradient algorithm with shadow reward and localized policy that consists of three pieces: (1) shadow reward estimation, (2) truncated shadow Q-function estimation, and (3) truncated policy gradient estimation and policy update.
* We establish that, with high probability, our algorithm requires \(\widetilde{\mathcal{O}}(\epsilon^{-2})\) samples to achieve \(\epsilon\)-stationarity with the error term \(\mathcal{O}\left(n\phi_{0}^{2\kappa}\right)\), where \(\phi_{0}\in(0,1)\), \(n\) is the number of agents, and \(\kappa\) is the communication radius.
It is critical to note that the operating hypotheses we require for developing a localized algorithm for MARL are related to, but distinct from, those of [14] in the following sense: we assume the transition dynamics and policies of all agents are globally correlated and the correlation satisfies a spatial decay property. In contrast, in [14] the agents are considered to act on their own, with their transitions only affected by the nearest neighbors.
### Notations
For a finite set \(\mathcal{S}\), let \(|\mathcal{S}|\) denote its cardinality and let \(\mathrm{TV}(\mu,\mu^{\prime}):=\sup_{A\subseteq\mathcal{S}}|\mu(A)-\mu^{\prime}(A)|\) be the total variation distance between two probability distributions \(\mu\) and \(\mu^{\prime}\) on \(\mathcal{S}\). When the variable \(s\) follows the distribution \(\xi\), we write it as \(s\sim\xi\). Let \(\mathbb{E}[\cdot]\) and \(\mathbb{E}[\cdot\mid\cdot]\), respectively, denote the expectation and conditional expectation of a random variable. Let \(\mathbb{R}\) denote the set of real numbers. For vectors \(x\) and \(y\), we use \(x^{\top}\) to denote the transpose of \(x\) and use \(\langle x,y\rangle\) to denote the inner product \(x^{\top}y\). We use the convention that \(\|x\|_{1}=\sum_{i}|x_{i}|\), \(\|x\|:=\|x\|_{2}=\sqrt{\sum_{i}x_{i}^{2}}\), and \(\|x\|_{\infty}=\max_{i}|x_{i}|\).
## 2 Problem Formulation
Consider an infinite-horizon Markov Decision Process (MDP) over a finite state space \(\mathcal{S}\) and a finite action space \(\mathcal{A}\) with a discount factor \(\gamma\in[0,1)\). Let \(\xi\) be the initial distribution. A policy \(\pi\) is a function that specifies the decision rule of the agent, i.e., the agent takes action \(a\in\mathcal{A}\) with probability \(\pi(a|s)\) in state \(s\in\mathcal{S}\). When action \(a\) is taken, the transition to the next state \(s^{\prime}\)
from state \(s\) follows the probability distribution \(s^{\prime}\sim\mathbb{P}(\cdot|s,a)\). In standard RL, the objective is to maximize the expected (discounted) cumulative reward, i.e.,
\[\max_{\pi}V^{\pi}(r):=\mathbb{E}\bigg{[}\sum_{k=0}^{\infty}\gamma^{k}r\left(s^{ k},a^{k}\right)\bigg{|}a^{k}\sim\pi(\cdot|s^{k}),s^{0}\sim\xi\bigg{]}, \tag{1}\]
where \(r(\cdot,\cdot)\) denotes the reward function and the expectation is taken over all possible trajectories. The value function can also be written as \(V^{\pi}(r)=\langle r,\lambda^{\pi}\rangle\), where \(\lambda^{\pi}\) is the _discounted state-action occupancy measure_ defined as
\[\lambda^{\pi}(s,a)=\sum_{k=0}^{\infty}\gamma^{k}\mathbb{P}\left(s^{k}=s,a^{k}= a\big{|}a^{k}\sim\pi(\cdot|s^{k}),s^{0}\sim\xi\right),\forall(s,a). \tag{2}\]
We consider a more general problem where the objective is to maximize a general function of \(\lambda^{\pi}\), namely
\[\max_{\pi}f(\lambda^{\pi}), \tag{3}\]
where \(f:\mathbb{R}^{|\mathcal{S}|\cdot|\mathcal{A}|}\to\mathbb{R}\) can be a possibly nonlinear function. Such an objective arises in various applications and is commonly referred to as a _general utility_[12, 15]. For instance, in apprenticeship learning [1], the objective is \(f(\lambda^{\pi})=-\operatorname{dist}(\lambda^{\pi},\lambda_{e})\), where \(\lambda_{e}\) corresponds to the expert demonstration and \(\operatorname{dist}(\cdot,\cdot)\) is a distance function. In maximum entropy exploration [3], \(f(\cdot)\) refers to the entropy function such that \(f(\lambda^{\pi})=-\sum_{s}d^{\pi}(s)\log d^{\pi}(s)\), where \(d^{\pi}(s)=(1-\gamma)\sum_{a}\lambda^{\pi}(s,a)\) is the discounted state occupancy measure.
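For concreteness, the following is a minimal tabular sketch of the objects in (1)-(3): the 2-state, 2-action MDP, its kernel \(P\), the policy \(\pi\), and the entropic choice of \(f\) are all illustrative assumptions of ours. The occupancy measure (2) is obtained by solving the linear system it satisfies.

```
import numpy as np

gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])   # toy kernel P[s, a, s'] (assumed)
xi = np.array([1.0, 0.0])                  # initial state distribution
pi = np.array([[0.6, 0.4], [0.2, 0.8]])    # policy pi[s, a]

nS, nA = pi.shape
# State-action kernel: P_sa[(s,a),(s',a')] = P[s,a,s'] * pi[s',a']
P_sa = (P.reshape(nS * nA, nS)[:, :, None] * pi[None, :, :]).reshape(nS * nA, nS * nA)
mu0 = (xi[:, None] * pi).reshape(nS * nA)  # law of (s^0, a^0)

# Eq. (2): lambda = sum_k gamma^k * law of (s^k, a^k), i.e.
# lambda solves lambda = mu0 + gamma * P_sa^T lambda
lam = np.linalg.solve(np.eye(nS * nA) - gamma * P_sa.T, mu0)

d = (1 - gamma) * lam.reshape(nS, nA).sum(axis=1)  # discounted state occupancy
f_entropy = -np.sum(d * np.log(d))                 # f(lambda) in max-entropy exploration
print(lam, f_entropy)
```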
In this work, we study the decentralized version of (3), where the system is decentralized among a network of agents associated with a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) (not densely connected). The vertex set \(\mathcal{N}=\{1,2,\ldots,n\}\) denotes the set of \(n\) agents and the edge set \(\mathcal{E}\) prescribes the communication links among agents. Let \(d(i,j)\) be the distance between agents \(i\) and \(j\) on \(\mathcal{G}\), defined as the length of the shortest path between them. For \(\kappa\geq 0\), we define \(\mathcal{N}_{i}^{\kappa}=\{j\in\mathcal{N}|d(i,j)\leq\kappa\}\) as the set of agents in the neighborhood of radius \(\kappa\) of agent \(i\), with the shorthand notations \(\mathcal{N}_{-i}^{\kappa}:=\mathcal{N}\backslash\mathcal{N}_{i}^{\kappa}\) and \(-i:=\mathcal{N}\backslash\mathcal{N}_{i}^{0}=\mathcal{N}\backslash\{i\}\). The details of the decentralization are as follows:
**Space Decomposition.** The global state and action spaces are the product of local spaces, i.e., \(\mathcal{S}=\mathcal{S}_{1}\times\mathcal{S}_{2}\times\cdots\times\mathcal{S}_{n}\), \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\times\cdots\times\mathcal{A}_{n}\), meaning that for every \(s\in\mathcal{S}\) and \(a\in\mathcal{A}\), we can write \(s=(s_{1},s_{2},\ldots,s_{n})\) and \(a=(a_{1},a_{2},\ldots,a_{n})\). For each subset \(\mathcal{N}^{\prime}\subset\mathcal{N}\), we use \((s_{\mathcal{N}^{\prime}},a_{\mathcal{N}^{\prime}})\) to denote the state-action pair for agents in \(\mathcal{N}^{\prime}\). We assume that each agent has direct access to its own states and actions, while accessing other agents' information requires communication.
**Transition Decomposition.** Given the current global state \(s\) and action \(a\), the local states in the next period are independently generated, i.e., \(\mathbb{P}(s^{\prime}|s,a)=\prod_{i\in\mathcal{N}}\mathbb{P}_{i}(s^{\prime}_{i}|s,a),\;\forall s^{\prime}\in\mathcal{S}\), where \(\mathbb{P}_{i}\) denotes the local transition probability.
**Policy Factorization.** The global policy can be decomposed as \(\pi(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}\left(a_{i}|s_{\mathcal{N}_{i}^{\kappa}}\right),\,\forall(s,a)\), i.e., given the global state \(s\), each agent \(i\) acts independently according to its local policy \(\pi^{i}\), which depends on the states of agents in \(\mathcal{N}_{i}^{\kappa}\). For the policy parameterization, we assume that the local policy of agent \(i\) is parameterized by \(\theta_{i}\), and therefore one can write \(\pi(a|s)=\pi_{\theta}(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}_{\theta_{i}}\left(a_{i}|s_{\mathcal{N}_{i}^{\kappa}}\right)\), where \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n})\in\Theta\) is the global parameter.
**Local Utility.** For each agent \(i\), define its _local discounted state-action occupancy measure_ as
\[\lambda_{i}^{\pi}\left(s_{i},a_{i}\right)=\sum_{k=0}^{\infty}\gamma^{k}\mathbb{P }\left(s_{i}^{k}=s_{i},a_{i}^{k}=a_{i}\big{|}a^{k}\sim\pi(\cdot|s^{k}),s^{0} \sim\xi\right),\forall\left(s_{i},a_{i}\right), \tag{4}\]
which can be viewed as the marginalization of the global occupancy measure, i.e., \(\lambda_{i}^{\pi}(\hat{s}_{i},\hat{a}_{i})=\sum_{s_{i}=\hat{s}_{i},a_{i}=\hat{a }_{i}}\lambda^{\pi}(s,a)\). Then, the global utility function \(f(\cdot)\) can be written as the average of local utilities, i.e., \(f(\lambda^{\pi})=1/n\times\sum_{i\in\mathcal{N}}f_{i}(\lambda_{i}^{\pi})\), where \(f_{i}:\mathbb{R}^{|\mathcal{S}_{i}|\cdot|\mathcal{A}_{i}|}\to\mathbb{R}\) is a function of the local occupancy measure \(\lambda_{i}^{\pi}\) and is private to agent \(i\). Thus, under the parameterization \(\pi_{\theta}\), (3) can be rewritten as
\[\max_{\theta\in\Theta}F(\theta),\text{ where }F(\theta):=f(\lambda^{\pi_{ \theta}})=\frac{1}{n}\cdot\sum_{i\in\mathcal{N}}f_{i}(\lambda_{i}^{\pi_{ \theta}}). \tag{5}\]
Finally, we remark that, by choosing all \(f_{i}(\cdot)\) to be linear, (5) reduces to standard MARL, where each agent \(i\) is associated with a local reward function \(r_{i}:\mathcal{S}_{i}\times\mathcal{A}_{i}\to\mathbb{R}\) and the global reward is defined as \(r(s,a):=1/n\times\sum_{i\in\mathcal{N}}r_{i}(s_{i},a_{i})\).
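As a small numerical illustration of the marginalization in (4) and the averaged objective in (5), the sketch below uses a hypothetical two-agent system with a randomly generated global occupancy tensor; the entropic choice of \(f_{i}\) is our own illustrative assumption, not one mandated by the formulation.

```
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
# Global occupancy over (s_1, s_2, a_1, a_2) with binary local spaces (assumed)
lam_global = rng.random((2, 2, 2, 2))
lam_global *= 1.0 / ((1 - gamma) * lam_global.sum())  # occupancies sum to 1/(1-gamma)

# Eq. (4): lambda_i(s_i, a_i) = sum over the other agent's states/actions
lam_1 = lam_global.sum(axis=(1, 3))   # keep (s_1, a_1)
lam_2 = lam_global.sum(axis=(0, 2))   # keep (s_2, a_2)

def f_i(lam_i):
    """An illustrative entropic local utility of the local occupancy."""
    d_i = (1 - gamma) * lam_i.sum(axis=1)   # local discounted state occupancy
    return -np.sum(d_i * np.log(d_i))

F = 0.5 * (f_i(lam_1) + f_i(lam_2))   # f(lambda) = (1/n) * sum_i f_i(lambda_i), n = 2
print(lam_1, lam_2, F)
```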
## 3 Truncated Policy Gradient Algorithm with Shadow Reward
In RL with cumulative reward, the _policy gradient theorem_[16] applies to computing the gradient of the value function:
\[\nabla_{\theta}V^{\pi_{\theta}}(r)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi_{\theta}},a\sim\pi_{\theta}(\cdot|s)}\left[\psi_{\theta}(a|s)\cdot Q^{\pi_{\theta}}(r;s,a)\right]=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\psi_{\theta}(a^{k}|s^{k})\cdot Q^{\pi_{\theta}}(r;s^{k},a^{k})\right],\]

where \(\psi_{\theta}(a|s):=\nabla_{\theta}\log\pi_{\theta}(a|s)\) is the score function, \(d^{\pi_{\theta}}(s):=(1-\gamma)\sum_{a}\lambda^{\pi_{\theta}}(s,a)\) is the discounted state occupancy measure, and \(Q^{\pi_{\theta}}(r;\cdot,\cdot)\) is the Q-function associated with the reward \(r\). For the general utility objective (5), since \(V^{\pi_{\theta}}(r)=\langle r,\lambda^{\pi_{\theta}}\rangle\), the chain rule yields

\[\nabla_{\theta}F(\theta)=\left(\frac{\partial\lambda^{\pi_{\theta}}}{\partial\theta}\right)^{\top}\nabla_{\lambda}f(\lambda^{\pi_{\theta}})=\nabla_{\theta}V^{\pi_{\theta}}(r)\big{|}_{r=r^{\pi_{\theta}}},\qquad r^{\pi_{\theta}}:=\nabla_{\lambda}f(\lambda^{\pi_{\theta}}),\]

where \(r^{\pi_{\theta}}\) is referred to as the _shadow reward_. Hence, the policy gradient of a general utility takes the same form as above, with the shadow Q-function \(Q_{f}^{\pi_{\theta}}(s,a):=Q^{\pi_{\theta}}(r^{\pi_{\theta}};s,a)\) in place of \(Q^{\pi_{\theta}}(r;s,a)\).
In the decentralized formulation (5), for each agent \(i\), let \(r_{i}^{\pi_{\theta}}:=\nabla_{\lambda_{i}}f_{i}(\lambda_{i}^{\pi_{\theta}})\in \mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}\) be the local shadow reward, which only depends on the local state and action for a given policy \(\pi_{\theta}\), and we define the local shadow Q-function as \(Q^{\pi_{\theta}}_{i}(s,a):=Q^{\pi_{\theta}}(r_{i}^{\pi_{\theta}};s,a)\). Then, it is clear that \(r^{\pi_{\theta}}=1/n\times\sum_{i\in\mathcal{N}}r_{i}^{\pi_{\theta}}\) and \(Q^{\pi_{\theta}}_{f}(s,a)=1/n\times\sum_{i\in\mathcal{N}}Q^{\pi_{\theta}}_{i}( s,a)\), and the gradient of \(F(\theta)\) with respect to agent \(i\)'s local parameter \(\theta_{i}\) can be written as
\[\nabla_{\theta_{i}}F(\theta)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi_{ \theta}},a\sim\pi_{\theta}(\cdot|s)}\left[\psi_{\theta_{i}}(a_{i}|s_{\mathcal{ N}_{i}^{\kappa}})\cdot\frac{1}{n}\sum_{j\in\mathcal{N}}Q^{\pi_{\theta}}_{j}(s,a) \right], \tag{10}\]
where we use the policy factorization to derive that \(\nabla_{\theta_{i}}\log\pi_{\theta}(a|s)=\nabla_{\theta_{i}}\log\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})=:\psi_{\theta_{i}}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\), and we refer to \(\psi_{\theta_{i}}(\cdot|\cdot)\) as the local score function. Thus, updating the local parameter \(\theta_{i}\) with the gradient (10) requires knowing the global state and action as well as the shadow Q-functions of all agents, which can be inefficient in large networks due to the communication cost. In the remainder of the section, we show that an accurate gradient estimator can be designed for all agents while only local communications with neighbors are required, under some correlation decay assumptions.
### Spatial Correlation Decay Assumption
Following [17], we assume that a form of correlation decay property holds for the transition probability [18, 19].
**Assumption 1**.: _For a matrix \(M\in\mathbb{R}^{n\times n}\) whose \((i,j)\) entry is defined as_
\[M_{ij}=\sup_{s_{j},a_{j},s^{\prime}_{j},a^{\prime}_{j},s_{-j},a_{-j}}\mathrm{TV}\left(\mathbb{P}_{i}\left(\cdot|s_{j},s_{-j},a_{j},a_{-j}\right),\mathbb{P}_{i}\left(\cdot|s^{\prime}_{j},s_{-j},a^{\prime}_{j},a_{-j}\right)\right), \tag{11}\]
_assume that there exists \(\beta\geq 0\) such that_
\[\max_{i\in\mathcal{N}}\sum_{j\in\mathcal{N}}e^{\beta d(i,j)}M_{ij}\leq\rho, \tag{12}\]
_with \(\rho<1/\gamma\), where \(\gamma\) is the discount factor._
By definition, the element \(M_{ij}\) characterizes the maximum level of impact of agent \(j\)'s state and action on the local transition probability of agent \(i\). Then, Assumption 1 mainly requires that such impacts decrease exponentially with respect to the distance between agents. Such a decay is usually typical in engineered systems with large networks, e.g., in wireless communication where the strength of signals decreases exponentially with the distance [20, 21].
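For intuition, condition (12) is straightforward to check numerically once \(M\) and the graph are given. The sketch below is purely illustrative: the line graph, the choice \(\beta=0.5\), and the assumed influence matrix \(M\) are our own stand-ins, not derived from any particular MDP.

```
import numpy as np
from collections import deque

# Numerical check of Eq. (12): max_i sum_j e^{beta*d(i,j)} M_ij <= rho < 1/gamma.
def graph_distances(adj):
    """All-pairs shortest-path lengths on an adjacency list, via BFS."""
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for src in range(n):
        dist[src, src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[src, v] == np.inf:
                    dist[src, v] = dist[src, u] + 1
                    queue.append(v)
    return dist

beta, gamma = 0.5, 0.9
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]        # line graph 0-1-2-3-4 (assumed)
d = graph_distances(adj)
M = 0.05 * np.exp(-d)                           # assumed influence decaying with distance
lhs = np.max(np.sum(np.exp(beta * d) * M, axis=1))
print(lhs, lhs < 1.0 / gamma)                   # Assumption 1 needs lhs <= rho < 1/gamma
```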
### Truncated Shadow Q-function
We first introduce the notion of _exponential decay_ for Q-functions [14], which is a form of correlation decay property.
**Definition 1**.: _For \(c\geq 0\) and \(\phi\in(0,1)\), the \((c,\phi)\)-exponential decay property holds if, for every policy \(\pi_{\theta}\), agent \(i\), and state-action pairs \((s,a),(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}\) with \(s_{\mathcal{N}_{i}^{\kappa}}=s^{\prime}_{\mathcal{N}_{i}^{\kappa}}\), \(a_{\mathcal{N}_{i}^{\kappa}}=a^{\prime}_{\mathcal{N}_{i}^{\kappa}}\), the local shadow Q-function satisfies_
\[\left|Q^{\pi_{\theta}}_{i}(s,a)-Q^{\pi_{\theta}}_{i}(s^{\prime},a^{\prime}) \right|\leq c\phi^{\kappa}. \tag{13}\]
The exponential decay property holds when the dependency of each agent's local shadow Q-function on other agents' states and actions exponentially decreases with respect to their distances. Motivated by [14] and [19], for every \(i\), we define \(\widehat{Q}_{i}^{\pi_{\theta}}:\mathcal{S}_{\mathcal{N}_{i}^{\kappa}}\times \mathcal{A}_{\mathcal{N}_{i}^{\kappa}}\rightarrow\mathbb{R}\) to be agent \(i\)'s truncated shadow Q-function, depending only on the states and actions of agents in the neighborhood \(\mathcal{N}_{i}^{\kappa}\):

\[\widehat{Q}_{i}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}}):=Q_{i}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{\kappa}},\bar{s}_{\mathcal{N}_{-i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}},\bar{a}_{\mathcal{N}_{-i}^{\kappa}}), \tag{14}\]

for every \((s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}})\in\mathcal{S}_{\mathcal{N}_{i}^{\kappa}}\times\mathcal{A}_{\mathcal{N}_{i}^{\kappa}}\), where \((\bar{s}_{\mathcal{N}_{-i}^{\kappa}},\bar{a}_{\mathcal{N}_{-i}^{\kappa}})\) is any fixed state-action pair for the agents in \(\mathcal{N}_{-i}^{\kappa}\). That is, the estimator \(\widehat{Q}_{i}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}})\) can be viewed as an approximation of the true shadow Q-function \(Q_{i}^{\pi_{\theta}}(s,a)\) obtained by fixing \((s_{\mathcal{N}_{-i}^{\kappa}},a_{\mathcal{N}_{-i}^{\kappa}})\) at arbitrary values. Compared with \(Q_{i}^{\pi_{\theta}}\), the estimator \(\widehat{Q}_{i}^{\pi_{\theta}}\) depends on much smaller state and action spaces, and it is thus easy to estimate and store.

When the \((c,\phi)\)-exponential decay property holds for Q-functions, it can be intuitively understood that the accuracy of this approximation is of order \(\mathcal{O}(\phi^{\kappa})\). The following lemma shows that, when Assumption 1 holds and the shadow reward is universally bounded, the exponential decay property is satisfied. We are thus capable of proving that \(\widehat{Q}_{i}^{\pi_{\theta}}\) is a satisfactory approximation of \(Q_{i}^{\pi_{\theta}}\).
**Lemma 2**.: _Suppose that Assumption 1 holds and there exists \(M_{f}>0\) such that \(\|\nabla_{\lambda_{i}}f_{i}(\lambda_{i}^{\pi_{\theta}})\|_{\infty}\leq M_{f}\), \(\forall i\in\mathcal{N},\theta\in\Theta\). Then, **(I)** the \((c_{0},\phi_{0})\)-exponential decay property holds with \((c_{0},\phi_{0})=\left(\frac{2\gamma\rho M_{f}}{1-\gamma\rho},e^{-\beta}\right)\), **(II)** the truncated shadow Q-function satisfies \(\sup_{s,a}\left|\widehat{Q}_{i}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}})-Q_{i}^{\pi_{\theta}}(s,a)\right|\leq c_{0}\phi_{0}^{\kappa}\)._
Under the bounded gradient assumption, we can treat the shadow Q-functions as standard Q-functions with bounded reward functions. We refer the reader to [17] for the proof of part _(I)_ in Lemma 2. Then, part _(II)_ follows directly from the definition of the exponential decay property. We note that the set of all possible state-action occupancy measures forms a convex polytope in \(\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) and is therefore a compact set. Thus, requiring the existence of \(M_{f}>0\) in Lemma 2 is not a restrictive assumption and it naturally holds if the gradient \(\nabla_{\lambda}f(\lambda)\) is a continuous mapping on the set of occupancy measures. We additionally remark that a faster rate of the exponential decay property may be proved under extra assumptions, e.g., mixing properties of the underlying Markov chain [14].
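To make Definition 1 and Lemma 2 concrete, the following sketch uses a hypothetical 3-agent line graph with \(\kappa=1\) and a randomly filled table as a stand-in for \(Q_{0}^{\pi_{\theta}}\) (both assumptions of ours, purely for illustration): it builds the truncated table of (14) and measures the worst-case dependence on the out-of-neighborhood pair, the quantity that the \((c,\phi)\)-exponential decay property bounds by \(c\phi^{\kappa}\).

```
import numpy as np

# Agents {0,1,2} on a line, kappa = 1, so N_0^kappa = {0,1}. Q_0 is a random
# stand-in stored as Q_0[s0, s1, s2, a0, a1, a2] with binary local spaces.
rng = np.random.default_rng(1)
Q_0 = rng.random((2, 2, 2, 2, 2, 2))

s2_bar, a2_bar = 0, 0                          # arbitrary fixed pair for agent 2
Q_0_trunc = Q_0[:, :, s2_bar, :, :, a2_bar]    # hat-Q_0 over (s0, s1, a0, a1), Eq. (14)

# Worst-case dependence of Q_0 on agent 2's pair; for a true shadow Q-function,
# the exponential decay property bounds this by c * phi**kappa.
dependence = np.max(Q_0.max(axis=(2, 5)) - Q_0.min(axis=(2, 5)))
print(Q_0_trunc.shape, dependence)
```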
### Truncated Policy Gradient Estimator
In this section, we introduce how the exponential decay property can help design scalable algorithms.
As mentioned earlier, the major challenge in employing the exact policy gradient (10) comes from obtaining the global state-action pairs and the local shadow Q-functions of all agents, which may incur high costs in large networks. Instead, we consider the following truncated policy gradient estimator:
\[\widehat{g}_{i}(\theta)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi_{\theta}}, a\sim\pi_{\theta}(\cdot|s)}\Bigg{[}\psi_{\theta_{i}}(a_{i}|s_{\mathcal{N}_{i}^{ \kappa}})\cdot\frac{1}{n}\sum_{j\in\mathcal{N}_{i}^{\kappa}}\widehat{Q}_{j}^{ \pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}}) \Bigg{]}, \tag{15}\]
Compared to the true policy gradient (10), the estimator \(\widehat{g}_{i}(\theta)\) replaces the shadow Q-functions with their truncated estimators. Furthermore, it only uses the truncated Q-functions of agents in \(\mathcal{N}_{i}^{\kappa}\). In the next proposition, we evaluate the approximation error of \(\widehat{g}_{i}(\theta)\).
**Proposition 1**.: _Let Assumption 1 hold. Suppose that there exist \(M_{f},M_{\psi}>0\) such that \(\|\nabla_{\lambda_{i}}f_{i}(\lambda_{i}^{\pi_{\theta}})\|_{\infty}\leq M_{f}\) and \(\|\psi_{\theta_{i}}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\|\leq M_{\psi}\), \(\forall i\in\mathcal{N},(s,a)\in\mathcal{S}\times\mathcal{A},\theta\in\Theta\). Then, for all \(i\in\mathcal{N},\theta\in\Theta\), we have that_
\[\|\widehat{g}_{i}(\theta)-\nabla_{\theta_{i}}F(\theta)\|\leq\frac{c_{0}\phi_{0 }^{\kappa}M_{\psi}}{1-\gamma}. \tag{16}\]
Proof.: In this proof, we write \(\mathbb{E}_{s\sim d^{\pi_{\theta}},a\sim\pi_{\theta}(\cdot|s)}\) simply as \(\mathbb{E}\). The difference term in (16) can be expanded as
\[\widehat{g}_{i}(\theta)-\nabla_{\theta_{i}}F(\theta) =\frac{1}{n(1-\gamma)}\mathbb{E}\Bigg{[}\psi_{\theta_{i}}(a_{i}|s _{\mathcal{N}_{i}^{\kappa}})\Bigg{(}\sum_{j\in\mathcal{N}_{i}^{\kappa}} \widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j} ^{\kappa}})-\sum_{j\in\mathcal{N}}Q_{j}^{\pi_{\theta}}(s,a)\Bigg{)}\Bigg{]} \tag{17}\] \[=\frac{1}{n(1-\gamma)}\mathbb{E}\Bigg{[}\psi_{\theta_{i}}(a_{i}|s _{\mathcal{N}_{i}^{\kappa}})\sum_{j\in\mathcal{N}}\left(\widehat{Q}_{j}^{\pi_{ \theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})-Q_{j}^{ \pi_{\theta}}(s,a)\right)\Bigg{]}\] \[\quad-\frac{1}{n(1-\gamma)}\mathbb{E}\Bigg{[}\psi_{\theta_{i}}(a _{i}|s_{\mathcal{N}_{i}^{\kappa}})\sum_{j\in\mathcal{N}_{-i}^{\kappa}} \widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j }^{\kappa}})\Bigg{]}.\]
Now, we show that the second term above is actually \(0\). Indeed, for given \(s\in\mathcal{S}\), one can write:
\[\mathbb{E}_{a\sim\pi_{\theta}(\cdot|s)}\Bigg{[}\psi_{\theta_{i}}(a _{i}|s_{\mathcal{N}_{i}^{\kappa}})\sum_{j\in\mathcal{N}_{-i}^{\kappa}} \widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j }^{\kappa}})\Bigg{]} \tag{18}\] \[=\sum_{a}\prod_{k\in\mathcal{N}}\pi_{\theta_{k}}^{k}(a_{k}|s_{ \mathcal{N}_{k}^{\kappa}})\cdot\frac{\nabla_{\theta_{i}}\pi_{\theta_{i}}^{i}(a _{i}|s_{\mathcal{N}_{i}^{\kappa}})}{\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N} _{i}^{\kappa}})}\cdot\sum_{j\in\mathcal{N}_{-i}^{\kappa}}\widehat{Q}_{j}^{\pi_ {\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})\] \[=\sum_{a_{-i}}\Bigg{[}\left(\sum_{a_{i}}\nabla_{\theta_{i}}\pi_{ \theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\right)\cdot\prod_{k\neq i} \pi_{\theta_{k}}^{k}(a_{k}|s_{\mathcal{N}_{k}^{\kappa}})\cdot\sum_{j\in \mathcal{N}_{-i}^{\kappa}}\widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{ \kappa}},a_{\mathcal{N}_{j}^{\kappa}})\Bigg{]}\] \[=0,\]
where we expand the expectation and the score function in the first equality. The second equality holds since \(j\in\mathcal{N}_{-i}^{\kappa}\) implies \(i\notin\mathcal{N}_{j}^{\kappa}\), and thus the summation \(\sum_{j\in\mathcal{N}_{-i}^{\kappa}}\widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})\) is irrelevant to \(a_{i}\). In the last equality, since \(\sum_{a_{i}}\nabla_{\theta_{i}}\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})=\nabla_{\theta_{i}}\left[\sum_{a_{i}}\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\right]=\nabla_{\theta_{i}}1=0\), we conclude that the whole term is equal to zero. Therefore, it holds that
\[\|\widehat{g}_{i}(\theta)-\nabla_{\theta_{i}}F(\theta)\| =\Bigg{\|}\frac{1}{n(1-\gamma)}\mathbb{E}\Bigg{[}\psi_{\theta_{i}}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\sum_{j\in\mathcal{N}}\left(\widehat{Q}_{j}^{\pi_{\theta}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})-Q_{j}^{\pi_{\theta}}(s,a)\right)\Bigg{]}\Bigg{\|} \tag{19}\] \[\leq\frac{1}{n(1-\gamma)}\cdot M_{\psi}\cdot n\cdot c_{0}\phi_{0}^{\kappa}=\frac{c_{0}\phi_{0}^{\kappa}M_{\psi}}{1-\gamma},\]
where we use Lemma 2 to bound the difference between the truncated shadow Q-functions and true shadow Q-functions. This completes the proof.
Proposition 1 shows that the accuracy of the truncated gradient estimator has the order \(\mathcal{O}(\phi_{0}^{\kappa})\), i.e., the error decreases exponentially as the communication radius \(\kappa\) increases. Thus, it indicates a feasible direction for reducing the communication of agents to their \(\kappa\)-neighborhoods.
### Algorithm Design
In this section, we present our method, Distributed Policy Gradient Algorithm with Shadow Reward, for solving problem (5). The algorithm, summarized in Algorithm 1, consists of the following elements:
```
1:Input: Initial policy \(\theta^{0}\); initial distribution \(\xi\); communication radius \(\kappa\); step-sizes \(\{\eta^{t}_{\theta}\}\); batch size \(B\); episode length \(H\).
2:for iteration \(t=0,1,2,\dots\)do
3: Sample \(B\) trajectories \(\tau=\left\{(s^{0},a^{0}),\cdots,(s^{H-1},a^{H-1})\right\}\) with length \(H\), under policy \(\pi_{\theta^{t}}\), initial distribution \(\xi\). Collect them as batch \(\mathcal{B}_{t}\).
4: Each agent \(i\) estimates its local occupancy measure \(\lambda^{\pi_{\theta^{t}}}_{i}\) through \[\widetilde{\lambda}^{t}_{i}=\frac{1}{B}\sum_{\tau\in\mathcal{B}_{t}}\sum_{k=0}^{H-1}\gamma^{k}\cdot\mathbf{e}_{i}\left(s^{k}_{i},a^{k}_{i}\right)\in\mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|},\] (20) and computes the empirical shadow reward \(\widetilde{r}^{t}_{i}=\nabla_{\lambda_{i}}f_{i}(\widetilde{\lambda}^{t}_{i})\).
5: Each agent \(i\) communicates with its neighborhood \(\mathcal{N}^{\kappa}_{i}\) and estimates the truncated shadow Q-function under \(\widetilde{r}^{t}_{i}\), denoted as \(\widetilde{Q}^{t}_{i}\).
6: Each agent \(i\) shares \(\widetilde{Q}^{t}_{i}\) with its neighborhood \(\mathcal{N}^{\kappa}_{i}\) and estimates the truncated policy gradient through \[\widetilde{g}^{t}_{i}=\frac{1}{B}\sum_{\tau\in\mathcal{B}_{t}}\left[\sum_{k=0}^{H-1}\gamma^{k}\psi_{\theta^{t}_{i}}(a^{k}_{i}|s^{k}_{\mathcal{N}^{\kappa}_{i}})\cdot\frac{1}{n}\sum_{j\in\mathcal{N}^{\kappa}_{i}}\widetilde{Q}^{t}_{j}(s^{k}_{\mathcal{N}^{\kappa}_{j}},a^{k}_{\mathcal{N}^{\kappa}_{j}})\right]\!.\] (21)
7: Each agent \(i\) updates the policy through \[\theta^{t+1}_{i}=\theta^{t}_{i}+\eta^{t}_{\theta}\cdot\widetilde{g}^{t}_{i}.\] (22)
8:endfor
```
**Algorithm 1** Distributed Policy Gradient Algorithm With Shadow Reward and Localized Policy
**Shadow Reward Estimation (lines 3-4).** In the beginning of each iteration \(t\), the current policy is simulated to generate a batch of \(B\) trajectories with length \(H\). Since the local policy \(\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}^{\kappa}_{i}})\) of each agent \(i\) only depends on the states of \(\mathcal{N}^{\kappa}_{i}\), the trajectory sampling process complies with the communication requirement. Then, using local state-action information, each agent \(i\) forms an estimation \(\widetilde{\lambda}^{t}_{i}\) of its local occupancy measure through (20), where we define \(\mathbf{e}_{i}\left(s_{i},a_{i}\right)\in\mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}\) as a vector with its \((s_{i},a_{i})\)-th entry equal to one and all other entries equal to zero. Finally, the empirical shadow reward is computed via \(\widetilde{r}^{t}_{i}=\nabla_{\lambda_{i}}f_{i}(\widetilde{\lambda}^{t}_{i})\).
**Truncated Shadow Q-function Estimation (line 5).** In the next stage, each agent \(i\) takes \(\widetilde{r}^{t}_{i}\) as its reward function (pretending it to be the true shadow reward) and communicates with its neighborhood \(\mathcal{N}^{\kappa}_{i}\) to estimate the truncated shadow Q-function \(\widetilde{Q}^{t}_{i}\). We do not specify the estimation process and allow the use of any existing approach for Q-function evaluation, as long as it satisfies the error bound required for the theoretical analysis in Section 4 (see Assumption 4). For example, one can use Temporal Difference (TD) learning [22], which is a model-free method for estimating the Q-function. In TD-learning, all agents iteratively update their estimations along a common trajectory \(\tau=\left\{(s^{0},a^{0}),\cdots,(s^{H-1},a^{H-1})\right\}\) under policy \(\pi_{\theta^{t}}\). For every new global state-action pair \((s^{k},a^{k})\), TD-learning updates the current estimation \(\widetilde{Q}^{t}_{i}\) through
\[\begin{split}\widetilde{Q}^{t}_{i}(s^{k-1}_{\mathcal{N}^{\kappa}_ {i}},a^{k-1}_{\mathcal{N}^{\kappa}_{i}})&\leftarrow(1-\eta^{k-1}_ {Q})\,\widetilde{Q}^{t}_{i}(s^{k-1}_{\mathcal{N}^{\kappa}_{i}},a^{k-1}_{ \mathcal{N}^{\kappa}_{i}})+\eta^{k-1}_{Q}\Big{[}\widetilde{r}^{t}_{i}(s^{k -1}_{i},a^{k-1}_{i})+\gamma\widetilde{Q}^{t}_{i}(s^{k}_{\mathcal{N}^{\kappa}_ {i}},a^{k}_{\mathcal{N}^{\kappa}_{i}})\Big{]},\\ \widetilde{Q}^{t}_{i}(s_{\mathcal{N}^{\kappa}_{i}},a_{ \mathcal{N}^{\kappa}_{i}})&\leftarrow\widetilde{Q}^{t}_{i}(s_{ \mathcal{N}^{\kappa}_{i}},a_{\mathcal{N}^{\kappa}_{i}}),\ \text{ for all }(s_{\mathcal{N}^{\kappa}_{i}},a_{\mathcal{N}^{\kappa}_{i}})\neq(s^{k-1}_{ \mathcal{N}^{\kappa}_{i}},a^{k-1}_{\mathcal{N}^{\kappa}_{i}}),\end{split} \tag{23}\]
where \(\{\eta_{Q}^{k}\}\) are the learning step-sizes. As shown in [14, Theorem 5], the above procedure exhibits an error rate of \(\mathcal{O}(1/\sqrt{H})\) under a local exploration assumption. Together with the error induced by the empirical shadow reward, this implies \(\|\widetilde{Q}_{i}^{t}-\widehat{Q}_{i}^{\pi_{\theta^{t}}}\|_{\infty}=\mathcal{O}(1/\sqrt{H}+\|\widetilde{r}_{i}^{t}-r_{i}^{t}\|_{\infty})\). Besides TD-learning, one can also deploy other model-free or model-based estimators depending on the sampling mechanisms, e.g., [23, 24].
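As a concrete illustration of (23), here is a hedged tabular sketch; the dictionary-based data layout is an assumption, and entries other than the visited one are simply left unchanged, which is what the second line of (23) expresses.

```python
def td_pass(Q_i, r_emp, traj, gamma, etas):
    """One pass of the tabular TD update (23) along a common trajectory.
    Q_i:   dict (s_N, a_N) -> float over agent i's kappa-neighborhood
    r_emp: dict (s_i, a_i) -> empirical shadow reward value
    traj:  list of ((s_N, s_i), (a_N, a_i)) tuples (assumed layout)
    etas:  step-sizes eta_Q^k"""
    for k in range(1, len(traj)):
        (sN0, si0), (aN0, ai0) = traj[k - 1]
        (sN1, _), (aN1, _) = traj[k]
        target = r_emp.get((si0, ai0), 0.0) + gamma * Q_i.get((sN1, aN1), 0.0)
        old = Q_i.get((sN0, aN0), 0.0)
        Q_i[(sN0, aN0)] = (1 - etas[k - 1]) * old + etas[k - 1] * target
        # all entries other than (sN0, aN0) are left unchanged
    return Q_i
```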
**Truncated Policy Gradient Estimation and Policy Update (lines 6-7).** At the final stage, every agent \(i\) exchanges its estimate \(\widetilde{Q}_{i}^{t}\) with the neighborhood \(\mathcal{N}_{i}^{\kappa}\) and evaluates the truncated policy gradient (15) through (21). The new policy is obtained by performing a policy gradient ascent step with the estimated gradient \(\widetilde{g}_{i}^{t}\).
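A minimal sketch of this final stage might look as follows; all names and signatures here are illustrative assumptions, not an interface fixed by the paper.

```python
import numpy as np

def policy_gradient_step(batch, i, theta_i, Qs, score_fn, nbhd, eta, gamma, n):
    """Hedged sketch of lines 6-7: the estimator (21) and the ascent step (22).
    Qs[j] holds neighbor j's estimated truncated Q-function (a dict);
    score_fn(theta_i, a_i, s_Ni) returns psi_{theta_i}(a_i | s_{N_i^kappa});
    nbhd(x, j) restricts a global state/action x to j's kappa-neighborhood."""
    g = np.zeros_like(theta_i)
    for traj in batch:
        for k, (s, a) in enumerate(traj):
            q_avg = sum(Qs[j][(nbhd(s, j), nbhd(a, j))] for j in Qs) / n
            g += gamma ** k * score_fn(theta_i, a[i], nbhd(s, i)) * q_avg
    g /= len(batch)            # the truncated policy gradient estimate (21)
    return theta_i + eta * g   # gradient ascent update (22)
```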
**Remark 1**.: _In contrast to a major line of MARL research, e.g., [12, 25], full observability is not required for executing Algorithm 1, i.e., the agents do not need to have access to the global information, including the global state and action. Instead, for the specified communication radius \(\kappa\), each agent \(i\) only needs to communicate with its neighborhood \(\mathcal{N}_{i}^{\kappa}\) to sample trajectories, estimate its local shadow Q-function, and estimate its truncated policy gradient._
## 4 Convergence Analysis
In this section, we analyze the convergence behavior of Algorithm 1. We first summarize the additional technical assumptions required, some of which have already appeared in the previous section.
**Assumption 2**.: _Let \(\Lambda\) be the set of all possible occupancy measures \(\lambda\). The utility function \(f(\cdot)\) satisfies: **(I)**\(\exists M_{f}>0\) such that \(\|\nabla_{\lambda_{i}}f_{i}(\lambda_{i})\|_{\infty}\leq M_{f}\), \(\forall i\in\mathcal{N}\) and \(\lambda\in\Lambda\). **(II)**\(\exists L_{\lambda}\) such that \(\|\nabla_{\lambda_{i}}f_{i}(\lambda_{i})-\nabla_{\lambda_{i}}f_{i}(\lambda_{i }^{\prime})\|_{\infty}\leq L_{\lambda}\|\lambda_{i}-\lambda_{i}^{\prime}\|\), \(\forall i\in\mathcal{N}\) and \(\lambda,\lambda^{\prime}\in\Lambda\)._
**Assumption 3**.: _The parameterized policy \(\pi_{\theta}\) and the associated occupancy measure \(\lambda^{\pi_{\theta}}\) satisfy: **(I)**\(\exists M_{\psi}>0\) such that the score function \(\|\psi_{\theta_{i}}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\|\leq M_{\psi}\), \(\forall i\in\mathcal{N}\), \((s,a)\in\mathcal{S}\times\mathcal{A}\), \(\theta\in\Theta\). **(II)**\(\exists L_{\theta}>0\) such that the utility function \(F(\theta)=f(\lambda^{\pi_{\theta}})\) is \(L_{\theta}\)-smooth with respect to \(\theta\)._
Besides the bounded gradient and the bounded score function assumptions, we additionally assume that the utility function \(f_{i}(\lambda_{i}^{\pi_{\theta}})\) is smooth with respect to both the occupancy measure \(\lambda_{i}\) and the policy \(\theta\). These assumptions are standard in the literature of reinforcement learning with general utilities [3, 12, 15, 26].
As discussed in Section 3, we do not specify the estimation process for the truncated shadow Q-functions. Instead, we assume that an oracle is used, which produces a bounded-error approximation to the true function. Let \(\widehat{Q}_{r_{i}}^{\pi_{\theta}}(\cdot,\cdot)\in\mathbb{R}^{|\mathcal{S}_ {\mathcal{N}_{i}^{\kappa}}|\times|\mathcal{A}_{\mathcal{N}_{i}^{\kappa}}|}\) denote the \(\kappa\)-truncated local Q-function under reward \(r_{i}\in\mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}\) for agent \(i\).
**Assumption 4**.: _For every \(i\in\mathcal{N}\) and \(\theta\in\Theta\), an approximation \(\widetilde{Q}_{r_{i}}^{\pi_{\theta}}(\cdot,\cdot)\) can be computed for \(\widehat{Q}_{r_{i}}^{\pi_{\theta}}(\cdot,\cdot)\) such that_
\[\sup_{s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}}}\big{|} \widetilde{Q}_{r_{i}}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{ N}_{i}^{\kappa}})-\widehat{Q}_{r_{i}}^{\pi_{\theta}}(s_{\mathcal{N}_{i}^{ \kappa}},a_{\mathcal{N}_{i}^{\kappa}})\big{|}\leq\epsilon_{0}\|r_{i}\|_{\infty}, \tag{24}\]
_where \(\epsilon_{0}>0\) is the approximation error._
Under Assumption 4, we have that the estimator \(\widetilde{Q}_{i}^{t}\) in line 5 of Algorithm 1 satisfies \(\|\widetilde{Q}_{i}^{t}-\widehat{Q}_{\widetilde{r}_{i}^{t}}^{\pi_{\theta^{t}}}\|_{\infty}\leq\epsilon_{0}\|\widetilde{r}_{i}^{t}\|_{\infty}\). This can be achieved, for example, with \(\mathcal{O}(1/(\epsilon_{0})^{2})\) samples by the TD-learning procedure (23).
Before analyzing the convergence of Algorithm 1, we first present a few auxiliary results, which evaluate the estimators \(\widetilde{\lambda}_{i}^{t}\), \(\widetilde{r}_{i}^{t}\), \(\widetilde{Q}_{i}^{t}\), and \(\widetilde{g}_{i}^{t}\).
**Proposition 2**.: _Let \(\delta_{0}\in(0,1/(2n))\) be the failure probability. Under Assumptions 2-4, it holds for every period \(t\geq 0\) that_
* _for each agent_ \(i\in\mathcal{N}\)_, with probability_ \(1-\delta_{0}\)_,_ \[\|\widetilde{\lambda}_{i}^{t}-\lambda_{i}^{\pi_{\theta^{t}}}\|\leq\epsilon_{1}( \delta_{0}),\ \ \|\widetilde{r}_{i}^{t}-r_{i}^{t}\|_{\infty}\leq L_{\lambda}\epsilon_{1}( \delta_{0}).\] (25)
* _with probability_ \(1-n\delta_{0}\)_,_ \[\|\widetilde{Q}_{i}^{t}-\widehat{Q}_{i}^{\pi_{\theta^{t}}}\|_{\infty}\leq \epsilon_{0}M_{f}+\frac{L_{\lambda}\epsilon_{1}(\delta_{0})}{1-\gamma},\ \ \forall i\in \mathcal{N}.\] (26)
* _with probability_ \(1-2n\delta_{0}\)_,_ \[\|\widetilde{g}_{i}^{t}-\widehat{g}_{i}(\theta^{t})\|\leq\epsilon_{2,i}(\delta _{0}),\ \forall i\in\mathcal{N},\] (27) _where_ \[\epsilon_{1}(\delta_{0})=\sqrt{\frac{4+2\gamma^{2H}B-16\log\delta_{0}}{(1- \gamma)^{2}B}},\quad\epsilon_{2,i}(\delta_{0})=\frac{|\mathcal{N}_{i}^{ \kappa}|}{n}\mathcal{O}\left(\epsilon_{0}+\sqrt{\frac{\log(1/\delta_{0})}{B}} +\gamma^{H}\right).\] (28)
Proof.: We refer the reader to [12, Appendix D.1] for the proof of part _(I)_. For part _(II)_, we first recall that \(\widetilde{Q}_{i}^{t}\) is a sample-based estimate of the truncated local Q-function \(\widehat{Q}_{\widetilde{r}_{i}^{t}}^{\pi_{\theta^{t}}}\). By combining the error bound (25) with Assumptions 2 and 4, we obtain that with probability \(1-\delta_{0}\)
\[\begin{split}\|\widetilde{Q}_{i}^{t}-\widehat{Q}_{i}^{\pi_{ \theta^{t}}}\|_{\infty}&\leq\|\widetilde{Q}_{i}^{t}-\widehat{Q}_{ \widetilde{r}_{i}^{t}}^{\pi_{\theta^{t}}}\|_{\infty}+\|\widehat{Q}_{\widetilde {r}_{i}^{t}}^{\pi_{\theta^{t}}}-\widehat{Q}_{i}^{\pi_{\theta^{t}}}\|_{\infty} \\ &\leq\epsilon_{0}\|\widetilde{r}_{i}^{t}\|_{\infty}+\frac{\| \widetilde{r}_{i}^{t}-r_{i}^{t}\|_{\infty}}{1-\gamma}\\ &\leq\epsilon_{0}M_{f}+\frac{L_{\lambda}\epsilon_{1}(\delta_{0}) }{1-\gamma}.\end{split} \tag{29}\]
By applying the union bound, we have that with probability \(1-n\delta_{0}\), the bound (29) holds for all agents \(i\in\mathcal{N}\).
For part _(III)_, let \(\mathcal{F}_{t}\) denote the \(\sigma\)-algebra generated by all the trajectories sampled before the \(t\)-th iteration (so that \(\theta^{t}\) is \(\mathcal{F}_{t}\)-measurable while the fresh batch \(\mathcal{B}_{t}\) is not), and let
\[\widehat{g}_{i}^{t}:=\frac{1}{B}\sum_{\tau\in\mathcal{B}_{t}}\left[\sum_{k=0}^ {H-1}\gamma^{k}\psi_{\theta_{i}^{t}}(a_{i}^{k}|s_{\mathcal{N}_{i}^{\kappa}}^{ k})\frac{1}{n}\sum_{j\in\mathcal{N}_{i}^{\kappa}}\widehat{Q}_{j}^{\pi_{ \theta^{t}}}\big{(}s_{\mathcal{N}_{j}^{\kappa}}^{k},a_{\mathcal{N}_{j}^{\kappa}} ^{k}\big{)}\right], \tag{30}\]
which differs from \(\widetilde{g}_{i}^{t}\) only in the Q-function term. Next, we derive the bound (27) through the decomposition
\[\widetilde{g}_{i}^{t}-\widehat{g}_{i}(\theta^{t})=\left(\widetilde{g}_{i}^{t }-\widehat{g}_{i}^{t}\right)+\left(\widehat{g}_{i}^{t}-\mathbb{E}\left[\widehat {g}_{i}^{t}|\mathcal{F}_{t}\right]\right)+\left(\mathbb{E}\left[\widehat{g}_{ i}^{t}|\mathcal{F}_{t}\right]-\widehat{g}_{i}(\theta^{t})\right). \tag{31}\]
For the first difference, it holds from part _(II)_ and Assumption 3 that with probability \(1-n\delta_{0}\),
\[\begin{split}\|\widetilde{g}_{i}^{t}-\widehat{g}_{i}^{t}\|^{2}& \leq\left[\frac{1}{1-\gamma}\cdot M_{\psi}\cdot\frac{|\mathcal{N}_{i}^{ \kappa}|}{n}\right]^{2}\left(\epsilon_{0}M_{f}+\frac{L_{\lambda}\epsilon_{1}( \delta_{0})}{1-\gamma}\right)^{2}\\ &=\frac{|\mathcal{N}_{i}^{\kappa}|^{2}M_{\psi}^{2}}{n^{2}(1-\gamma) ^{2}}\left(\epsilon_{0}M_{f}+\frac{L_{\lambda}\epsilon_{1}(\delta_{0})}{1- \gamma}\right)^{2}=:C_{1,i}.\end{split} \tag{32}\]
Then, we bound the second term in (31). For a trajectory \(\tau\) and \(k_{1},k_{2}\geq 0\), we define
\[G_{i}^{t}(\tau_{k_{1}}^{k_{2}}):=\sum_{k=k_{1}}^{k_{2}}\gamma^{k}\psi_{\theta_{i }^{t}}(a_{i}^{k}|s_{\mathcal{N}_{i}^{\kappa}}^{k})\cdot\frac{1}{n}\sum_{j \in\mathcal{N}_{i}^{\kappa}}\widehat{Q}_{j}^{\pi_{\theta^{t}}}(s_{\mathcal{N}_{j}^ {\kappa}}^{k},a_{\mathcal{N}_{j}^{\kappa}}^{k}). \tag{33}\]
It is clear from the definition that \(\widehat{g}_{i}^{t}=1/B\cdot\sum_{\tau\in\mathcal{B}_{t}}G_{i}^{t}(\tau_{0}^{H -1})\). By Assumptions 2 and 3, it holds that
\[\mathbb{E}\left[\|G_{i}^{t}(\tau_{0}^{H-1})\|^{2}|\mathcal{F}_{t}\right]\leq \frac{|\mathcal{N}_{i}^{\kappa}|^{2}M_{f}^{2}M_{\psi}^{2}}{n^{2}(1-\gamma)^{4}} \tag{34}\]
Thus, by [27, Lemma 18], we have that with probability \(1-\delta_{0}\)
\[\left\|\widehat{g}_{i}^{t}-\mathbb{E}\left[\widehat{g}_{i}^{t}|\mathcal{F}_{ t}\right]\right\|^{2}\leq\frac{(2-8\log\delta_{0})|\mathcal{N}_{i}^{\kappa}|^{2}M_ {f}^{2}M_{\psi}^{2}}{n^{2}(1-\gamma)^{4}B}=:C_{2,i}. \tag{35}\]
Finally, to bound the third term in (31), we derive that
\[\begin{split}\left\|\mathbb{E}\left[\widehat{g}_{i}^{t}|\mathcal{ F}_{t}\right]-\widehat{g}_{i}(\theta^{t})\right\|^{2}&=\left\| \mathbb{E}\left[G_{i}^{t}(\tau_{0}^{H-1})|\pi_{\theta^{t}},\xi\right]-\widehat{ g}_{i}(\theta^{t})\right\|^{2}\\ &=\left\|\mathbb{E}\left[G_{i}^{t}(\tau_{0}^{\infty})|\pi_{\theta ^{t}},\xi\right]-\widehat{g}_{i}(\theta^{t})-\mathbb{E}\left[G_{i}^{t}(\tau_{ H}^{\infty})|\pi_{\theta^{t}},\xi\right]\right\|^{2}\\ &=\left\|\mathbb{E}\left[G_{i}^{t}(\tau_{H}^{\infty})|\pi_{ \theta^{t}},\xi\right]\right\|^{2}\\ &\leq\frac{\gamma^{2H}|\mathcal{N}_{i}^{\kappa}|^{2}M_{\psi}^{2}M _{f}^{2}}{n^{2}(1-\gamma)^{4}}=:C_{3,i},\end{split} \tag{36}\]
where we use the fact that \(G_{i}^{t}(\tau_{0}^{\infty})\) is an unbiased estimator for \(\widehat{g}_{i}(\theta^{t})\) in the third equality. The inequality in the last line follows from Assumptions 2 and 3.
Putting (31), (32), (35), (36) together, we have that
\[\begin{split}\left\|\widetilde{g}_{i}^{t}-\widehat{g}_{i}(\theta^{ t})\right\|^{2}&\leq 3\Bigg{[}\|\widetilde{g}_{i}^{t}-\widehat{g}_{i}^{t} \|^{2}+\|\widehat{g}_{i}^{t}-\mathbb{E}\left[\widehat{g}_{i}^{t}|\mathcal{F}_{ t}\right]\|^{2}+\|\mathbb{E}\left[\widehat{g}_{i}^{t}|\mathcal{F}_{t}\right]- \widehat{g}_{i}(\theta^{t})\|^{2}\Bigg{]}\\ &\leq 3(C_{1,i}+C_{2,i}+C_{3,i})\\ &=\frac{|\mathcal{N}_{i}^{\kappa}|^{2}}{n^{2}}\mathcal{O}\left( \epsilon_{0}^{2}+\epsilon_{1}(\delta_{0})^{2}+\frac{\log(1/\delta_{0})}{B}+ \gamma^{2H}\right)\\ &=\frac{|\mathcal{N}_{i}^{\kappa}|^{2}}{n^{2}}\mathcal{O}\left( \epsilon_{0}^{2}+\frac{\log(1/\delta_{0})}{B}+\gamma^{2H}\right),\end{split} \tag{37}\]
where we use the definition of \(\epsilon_{1}(\delta_{0})\) in (28). Finally, we note that, by the union bound, (35) holds for all agents \(i\in\mathcal{N}\) with probability \(1-n\delta_{0}\). Since (32) holds with probability \(1-n\delta_{0}\) and (36) is deterministic, we conclude that, with probability \(1-2n\delta_{0}\), the bound (37) holds for all agents. The proof is completed by taking \(\epsilon_{2,i}(\delta_{0}):=\sqrt{3(C_{1,i}+C_{2,i}+C_{3,i})}\).
Proposition 2 evaluates the accuracy of the estimation of the truncated policy gradient. Together with Proposition 1, this provides a probabilistic upper bound on the gradient estimation error \(\|\widetilde{g}_{i}^{t}-\nabla_{\theta_{i}}F(\theta^{t})\|\), which we will use to prove the convergence of Algorithm 1 in the following theorem.
**Theorem 1**.: _Suppose that Assumptions 1-4 hold and the step-sizes satisfy \(\eta_{\theta}^{t}\leq 1/\big{(}4L_{\theta}\big{)}\), \(\forall t\geq 0\). For every \(T>0\), let \(\delta_{0}=\delta/(2nT)\), where \(\delta\in(0,1)\) is the failure probability. Then, with probability \(1-\delta\), it holds that_
\[\frac{\sum_{t=0}^{T-1}\eta_{\theta}^{t}\left\|\nabla_{\theta}F(\theta^{t}) \right\|^{2}}{\sum_{t=0}^{T-1}\eta_{\theta}^{t}}\leq\frac{4\left(F(\theta^{T})-F (\theta^{0})\right)}{\sum_{t=0}^{T-1}\eta_{\theta}^{t}}+3\Delta(\delta_{0}), \tag{38}\]
_where_
\[\Delta(\delta_{0})=\mathcal{O}(n\phi_{0}^{2\kappa})+\sum_{i\in\mathcal{N}}\frac{| \mathcal{N}_{i}^{\kappa}|^{2}}{n^{2}}\mathcal{O}\left(\epsilon_{0}^{2}+\frac{ \log(1/\delta_{0})}{B}+\gamma^{2H}\right). \tag{39}\]
Proof.: By the smoothness of \(F(\theta)\) (Assumption 3), when the step-size satisfies \(\eta_{\theta}^{t}\leq 1/\big{(}4L_{\theta}\big{)}\), the policy update (22) implies
\[F(\theta^{t+1})-F(\theta^{t}) \geq\sum_{i\in\mathcal{N}}\left[\left\langle\nabla_{\theta_{i}}F( \theta^{t}),\eta_{\theta}^{t}\widetilde{g}_{i}^{t}\right\rangle-\frac{L_{ \theta}}{2}\|\eta_{\theta}^{t}\widetilde{g}_{i}^{t}\|^{2}\right] \tag{40}\] \[=\sum_{i\in\mathcal{N}}\left[\eta_{\theta}^{t}\left\langle\nabla_ {\theta_{i}}F(\theta^{t}),\nabla_{\theta_{i}}F(\theta^{t})-(\nabla_{\theta_{ i}}F(\theta^{t})-\widetilde{g}_{i}^{t})\right\rangle\right.\] \[\quad-\frac{L_{\theta}}{2}(\eta_{\theta}^{t})^{2}\left\|\nabla_{ \theta_{i}}F(\theta^{t})-(\nabla_{\theta_{i}}F(\theta^{t})-\widetilde{g}_{i}^{ t})\right\|^{2}\Bigg{]}\] \[\geq\sum_{i\in\mathcal{N}}\left[\eta_{\theta}^{t}\left\|\nabla_{ \theta_{i}}F(\theta^{t})\right\|^{2}-\frac{\eta_{\theta}^{t}}{2}\left(\left\| \nabla_{\theta_{i}}F(\theta^{t})\right\|^{2}+\left\|\nabla_{\theta_{i}}F( \theta^{t})-\widetilde{g}_{i}^{t}\right\|^{2}\right)\right.\] \[\quad\left.-L_{\theta}(\eta_{\theta}^{t})^{2}\left(\left\|\nabla_ {\theta_{i}}F(\theta^{t})\right\|^{2}+\left\|\nabla_{\theta_{i}}F(\theta^{t}) -\widetilde{g}_{i}^{t}\right\|^{2}\right)\right]\]
where we apply the basic inequality \(2\left\langle a,b\right\rangle\leq\|a+b\|^{2}/2\leq\|a\|^{2}+\|b\|^{2}\) in the last inequality. By rearranging the terms in (40) and using the condition \(\eta_{\theta}^{t}\leq 1/\big{(}4L_{\theta}\big{)}\), we have that
\[F(\theta^{t+1})-F(\theta^{t})\geq\frac{\eta_{\theta}^{t}}{4}\left\|\nabla_{ \theta}F(\theta^{t})\right\|^{2}-\frac{3\eta_{\theta}^{t}}{4}\sum_{i\in \mathcal{N}}\left\|\nabla_{\theta_{i}}F(\theta^{t})-\widetilde{g}_{i}^{t} \right\|^{2}, \tag{41}\]
which further implies that
\[\eta_{\theta}^{t}\left\|\nabla_{\theta}F(\theta^{t})\right\|^{2}\leq 4\left(F( \theta^{t+1})-F(\theta^{t})\right)+3\eta_{\theta}^{t}\sum_{i\in\mathcal{N}} \left\|\nabla_{\theta_{i}}F(\theta^{t})-\widetilde{g}_{i}^{t}\right\|^{2}. \tag{42}\]
By Propositions 1 and 2, with probability \(1-2n\delta_{0}\), it holds that
\[\sum_{i\in\mathcal{N}}\left\|\nabla_{\theta_{i}}F(\theta^{t})- \widetilde{g}_{i}^{t}\right\|^{2} \leq\sum_{i\in\mathcal{N}}2\left(\left\|\nabla_{\theta_{i}}F( \theta^{t})-\widehat{g}_{i}(\theta^{t})\right\|^{2}+\left\|\widehat{g}_{i}( \theta^{t})-\widetilde{g}_{i}^{t}\right\|^{2}\right) \tag{43}\] \[\leq 2\sum_{i\in\mathcal{N}}\left[\left(\frac{c_{0}\phi_{0}^{\kappa}M_{\psi}} {1-\gamma}\right)^{2}+\left(\epsilon_{2,i}(\delta_{0})\right)^{2}\right]\] \[=:\Delta(\delta_{0}).\]
The relation (39) follows directly from the definition of \(\epsilon_{2,i}\) in Proposition 2. Applying the union bound again, we have that (43) holds for all \(t=0,1,\ldots,T-1\) with probability \(1-\delta\), where \(\delta=(2nT)\delta_{0}\). Thus, by substituting (43) into (42) and summing over \(t=0,1,\ldots,T-1\), we conclude that with probability \(1-\delta\)
\[\frac{\sum_{t=0}^{T-1}\eta_{\theta}^{t}\left\|\nabla_{\theta}F( \theta^{t})\right\|^{2}}{\sum_{t=0}^{T-1}\eta_{\theta}^{t}} \leq\frac{\sum_{t=0}^{T-1}4\left(F(\theta^{t+1})-F(\theta^{t}) \right)}{\sum_{t=0}^{T-1}\eta_{\theta}^{t}}+\frac{\sum_{t=0}^{T-1}3\eta_{\theta }^{t}\Delta(\delta_{0})}{\sum_{t=0}^{T-1}\eta_{\theta}^{t}} \tag{44}\] \[=\frac{4\left(F(\theta^{T})-F(\theta^{0})\right)}{\sum_{t=0}^{T-1} \eta_{\theta}^{t}}+3\Delta(\delta_{0}),\]
which completes the proof.
Under constant step-sizes \(\eta_{\theta}^{t}\equiv\eta_{\theta}\), the bound (38) becomes
\[\frac{1}{T}\sum_{t=0}^{T-1}\big{\|}\nabla_{\theta}F(\theta^{t})\big{\|}^{2}\leq \frac{4\left(F(\theta^{T})-F(\theta^{0})\right)}{\eta_{\theta}T}+3\Delta(\delta_ {0}), \tag{45}\]
which implies an \(\mathcal{O}(1/T)\) iteration complexity with the approximation error \(3\Delta(\delta_{0})\). As shown in (39), the constant \(\Delta(\delta_{0})\) will be small when the rate of spatial correlation decay is fast, the computational error \(\epsilon_{0}\) for Q-functions is small, and enough samples are used to estimate the local occupancy measure. Notably, when the size of \(\kappa\)-neighborhood \(|\mathcal{N}_{i}^{\kappa}|\) is relatively small for all agents compared to the total number of agents \(n\), the term \(\sum_{i\in\mathcal{N}}|\mathcal{N}_{i}^{\kappa}|^{2}/n^{2}\) approaches \(\mathcal{O}(1/n)\) and \(\Delta(\delta_{0})=\mathcal{O}(n\phi_{0}^{2\kappa})\) approximately holds.
Suppose that an \(\mathcal{O}(1/(\epsilon_{0})^{2})\) oracle is used for the truncated Q-function estimation (line 5 in Algorithm 1), i.e., the approximation (24) is achieved with \(\mathcal{O}(1/(\epsilon_{0})^{2})\) samples. We analyze the sample complexity of Algorithm 1 to compute an \(\epsilon\)-stationary point.
**Theorem 2**.: _Suppose that Assumptions 1-4 hold and an \(\mathcal{O}(1/(\epsilon_{0})^{2})\) oracle is used for the truncated Q-function estimation. For every \(\epsilon>0\) and \(\delta\in(0,1)\), let \(T=\mathcal{O}(\epsilon^{-1})\), \(\eta_{\theta}^{t}\equiv 1/(4L_{\theta})\), \(\epsilon_{0}=\sqrt{\epsilon}\), \(\delta_{0}=\delta/(2nT)\), batch size \(B=\mathcal{O}\left(\log(1/\delta_{0})\epsilon^{-1}\right)\), episode length \(H=\mathcal{O}\left(\log(1/\epsilon)\right)\). Then, with probability \(1-\delta\), it holds that_
\[\frac{1}{T}\sum_{t=0}^{T-1}\big{\|}\nabla_{\theta}F(\theta^{t})\big{\|}^{2}= \mathcal{O}\left(\epsilon+n\phi_{0}^{2\kappa}\right). \tag{46}\]
_The total number of samples required is \(\widetilde{\mathcal{O}}(\epsilon^{-2})\)._
Proof.: The \(\epsilon\)-stationarity (46) follows directly from (38) and (39) in Theorem 1. In every iteration, \(B\times H=\widetilde{\mathcal{O}}(\epsilon^{-1})\) samples are used to estimate the occupancy measure and compute the empirical shadow reward, and another (possibly fresh) \(\mathcal{O}(1/\epsilon_{0}^{2})=\mathcal{O}(\epsilon^{-1})\) samples are used to estimate the truncated Q-function. Since there are \(T=\mathcal{O}(\epsilon^{-1})\) iterations, the total number of samples used is \(\widetilde{\mathcal{O}}(\epsilon^{-2})\).
As discussed in Section 3, the TD-learning procedure (23) is an \(\mathcal{O}(1/(\epsilon_{0})^{2})\) oracle for the truncated Q-function estimation with high probability. Below, we provide a few further remarks.
**Remark 2** (Global Optimality).: _Suppose that the utility function \(f(\lambda)\) is concave in \(\lambda\), which generalizes the linear objective for standard RL. If the policy parameterization satisfies [15, Assumption 5.11], then problem (5) does not have spurious local solutions. Thus, the error bound (38) implies convergence to global optimality._
**Remark 3**.: _The communication radius \(\kappa\) plays an important role in both Theorems 1 and 2. As \(\kappa\) increases, the term \(\phi_{0}^{2\kappa}\) decreases, yet the size of the \(\kappa\)-neighborhood \(|\mathcal{N}_{i}^{\kappa}|\) increases, making the constant \(\sum_{i\in\mathcal{N}}|\mathcal{N}_{i}^{\kappa}|^{2}/n^{2}\) increase. Also, the increase of \(|\mathcal{N}_{i}^{\kappa}|\) will amplify the communication cost and make the estimation of truncated Q-functions less efficient. Thus, finding a good balance is important in determining \(\kappa\)._
**Remark 4**.: _In this work, we focus on the policy search in a class of localized policies, where each local policy \(\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\) only depends on the states of agents in \(\mathcal{N}_{i}^{\kappa}\). It is possible to relax this "hard" requirement to a "soft" requirement. Below, we briefly describe the idea of this extension._
_Consider the factorization \(\pi_{\theta}(a|s)=\prod_{i\in\mathcal{N}}\pi_{\theta_{i}}^{i}\left(a_{i}|s\right)\), where each local policy \(\pi_{\theta_{i}}^{i}\) depends on the global state \(s\). We assume that a form of spatial correlation decay property holds for the local policy
\(\pi^{i}_{\theta_{i}}(a_{i}|s)\) and the associated local score function \(\psi_{\theta_{i}}(a_{i}|s)=\nabla_{\theta_{i}}\log\pi^{i}_{\theta_{i}}(a_{i}|s)\), such that_
\[\begin{split}&\sup_{s,s^{\prime}_{\mathcal{N}^{\kappa}_{-i}}}\mathrm{ TV}\left(\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}^{\kappa}_{i}},s_{\mathcal{N}^{ \kappa}_{-i}}),\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}^{\kappa}_{i}},s^{ \prime}_{\mathcal{N}^{\kappa}_{-i}})\right)\leq c_{1}\phi^{\kappa}_{1},\\ &\sup_{s,s^{\prime}_{\mathcal{N}^{\kappa}_{-i}}}\mathrm{TV}\left( \psi_{\theta_{i}}(\cdot|s_{\mathcal{N}^{\kappa}_{i}},s_{\mathcal{N}^{\kappa}_{ -i}}),\psi_{\theta_{i}}(\cdot|s_{\mathcal{N}^{\kappa}_{i}},s^{\prime}_{ \mathcal{N}^{\kappa}_{-i}})\right)\leq c_{1}\phi^{\kappa}_{1},\end{split} \tag{47}\]
_where \(c_{1}\geq 0\) and \(\phi_{1}\in(0,1)\) are two constants. In light of this decay property, we define the induced truncated policy of \(\pi^{i}_{\theta_{i}}\) as (similar to the definition of truncated Q-function)_
\[\widehat{\pi}^{i}_{\theta_{i}}(a_{i}|s_{\mathcal{N}^{\kappa}_{i}}):=\pi^{i}_ {\theta_{i}}(a_{i}|s_{\mathcal{N}^{\kappa}_{i}},\bar{s}_{\mathcal{N}^{\kappa} _{-i}}),\ \forall s_{\mathcal{N}^{\kappa}_{i}}\in\mathcal{S}_{\mathcal{N}^{\kappa}_{i}}, \tag{48}\]
_where \(\bar{s}_{\mathcal{N}^{\kappa}_{-i}}\) is any fixed state for the agents in \(\mathcal{N}^{\kappa}_{-i}\). Similarly, we define \(\widehat{\psi}_{\theta_{i}}(a_{i}|s_{\mathcal{N}^{\kappa}_{i}})\) as the truncated score function._
_By using the truncated policy and score function, we can still implement Algorithm 1 without violating the observability and communication requirements. Meanwhile, it is important to quantify the information loss in using the truncated policy as an approximation for the true policy, which depends on the global state. Specifically, new approximation errors would arise in the trajectory sampling, and then affect the estimation of shadow rewards, shadow Q-functions, and policy gradients. Under condition (47), the errors in the occupancy measure can be upper-bounded as follows._
**Lemma 3**.: _Suppose that condition (47) holds. Let \(\widehat{\pi}_{\theta}:=\prod_{i\in\mathcal{N}}\widehat{\pi}^{i}_{\theta_{i}}\) be the induced truncated policy of \(\pi_{\theta}\). It holds that_
\[\left\|\lambda^{\widehat{\pi}_{\theta}}_{i}-\lambda^{\pi_{\theta}}_{i}\right\| _{1}\leq\frac{nc_{1}\phi^{\kappa}_{1}}{(1-\gamma)^{2}},\quad\forall i\in\mathcal{N}. \tag{49}\]
_When Assumption 2 holds, we can further show that the errors in the empirical shadow rewards and truncated Q-functions have the same order \(\mathcal{O}(\phi^{\kappa}_{1})\) as the occupancy measure. Together, the same convergence rate and sample complexity as in Theorems 1 and 2 can be proved with an approximation error that has the order \(\mathcal{O}(\phi^{2\kappa}_{0}+\phi^{2\kappa}_{1})\), which accounts for the inaccuracies from both the use of truncated policy gradients and the truncated policies. We refer the reader to [28] for the proof of Lemma 3._
## 5 Conclusions
In this paper, we study the scalable MARL with general utilities, defined as nonlinear functions of the team's long-term state-action occupancy measure. We propose a scalable distributed policy gradient algorithm with shadow reward and localized policy, which has three steps: (1) shadow reward estimation, (2) truncated shadow Q-function estimation, and (3) truncated policy gradient estimation and policy update. By exploiting the spatial correlation decay property of the network structure, we rigorously establish the convergence and sample complexity of the proposed algorithm. Future work includes generalization to the safety-critical setting and considering information asymmetry among the agents.
## Acknowledgment
This work was supported by grants from ARO, AFOSR, ONR and NSF. |
2304.05279 | Negative-Weight Single-Source Shortest Paths in Near-Linear Time: Now
Faster! | In this work we revisit the fundamental Single-Source Shortest Paths (SSSP)
problem with possibly negative edge weights. A recent breakthrough result by
Bernstein, Nanongkai and Wulff-Nilsen established a near-linear $O(m \log^8(n)
\log(W))$-time algorithm for negative-weight SSSP, where $W$ is an upper bound
on the magnitude of the smallest negative-weight edge. In this work we improve
the running time to $O(m \log^2(n) \log(nW) \log\log n)$, which is an
improvement by nearly six log-factors. Some of these log-factors are easy to
shave (e.g. replacing the priority queue used in Dijkstra's algorithm), while
others are significantly more involved (e.g. to find negative cycles we design
an algorithm reminiscent of noisy binary search and analyze it with drift
analysis).
As side results, we obtain an algorithm to compute the minimum cycle mean in
the same running time as well as a new construction for computing Low-Diameter
Decompositions in directed graphs. | Karl Bringmann, Alejandro Cassis, Nick Fischer | 2023-04-11T15:26:28Z | http://arxiv.org/abs/2304.05279v1 | # Negative-Weight Single-Source Shortest Paths in Near-Linear Time: Now Faster!
###### Abstract
In this work we revisit the fundamental Single-Source Shortest Paths (SSSP) problem with possibly negative edge weights. A recent breakthrough result by Bernstein, Nanongkai and Wulff-Nilsen established a near-linear \(O(m\log^{8}(n)\log(W))\)-time algorithm for negative-weight SSSP, where \(W\) is an upper bound on the magnitude of the smallest negative-weight edge. In this work we improve the running time to \(O(m\log^{2}(n)\log(nW)\log\log n)\), which is an improvement by nearly six log-factors. Some of these log-factors are easy to shave (e.g. replacing the priority queue used in Dijkstra's algorithm), while others are significantly more involved (e.g. to find negative cycles we design an algorithm reminiscent of noisy binary search and analyze it with drift analysis).
As side results, we obtain an algorithm to compute the minimum cycle mean in the same running time as well as a new construction for computing Low-Diameter Decompositions in directed graphs.
Shortest paths, Low Diameter Decomposition, Drift analysis
lead to time \(O(m\sqrt{n}\log W)\)[32, 33, 34]; here and throughout, \(W\) is the magnitude of the smallest negative edge weight in the graph.1 Other papers focused on specialized graph classes, leading e.g. to near-linear time algorithms for planar directed graphs [43, 38, 24, 37], and improved algorithms for dense graphs with small weights [53].
Footnote 1: Strictly speaking, \(W\geq 0\) is the smallest number such that all edge weights satisfy \(w(e)\geq-W\). By slight abuse of notation we typically write \(O(\log W)\) to express \(O(\max\{\,1,\log W\,\})\).
An alternative approach is to model SSSP as a minimum-cost flow problem.2 In the last decade, a combination of convex optimization techniques and dynamic algorithms has resulted in a series of advancements in minimum-cost flow computations [21, 7, 59, 58] and thus also for negative-weight SSSP, with running times \(\widetilde{O}(m^{10/7})\)[21], \(\widetilde{O}(m^{4/3})\)[7] and \(\widetilde{O}(m+n^{3/2})\)[59]. This line of research recently culminated in an almost-linear \(m^{1+o(1)}\)-time algorithm by Chen, Kyng, Liu, Peng, Probst Gutenberg and Sachdeva [17].
Footnote 2: To model SSSP as a minimum-cost flow problem, interpret each edge \(e\) with weight \(w(e)\) as an edge with infinite capacity and cost \(w(e)\). Moreover, add an artificial sink vertex \(t\) to the graph, and add unit-capacity cost-\(0\) edges from all vertices \(v\) to \(t\). Then any minimum-cost flow routing \(n\) units from \(s\) to \(t\) corresponds exactly to a shortest path tree in the original graph (assuming that it does not contain a negative-weight cycle).
Finally, at the same time as the breakthrough in computing minimum-cost flows, Bernstein, Nanongkai and Wulff-Nilsen [11] found an astonishing _near-linear_\(O(m\log^{8}(n)\log(W))\)-time algorithm for negative-weight SSSP. We will refer to their algorithm as the _BNW algorithm_. The BNW algorithm is combinatorial and arguably simple, and thus a satisfying answer to the coarse-grained complexity of the negative-weight SSSP problem. However, the story does not end here. In this work, we press further and investigate the following question which was left open by Bernstein et al. [11]:
_Can we further improve the number of log-factors_
_in the running time of negative-weight SSSP?_
For comparison, the _nonnegative_-weights SSSP problem underwent a long series of lower-order improvements in the last century [23, 29, 30, 31, 56, 51, 52, 60, 61, 44, 2, 20, 57], including improvements by log-factors or even just loglog-factors.3 In the same spirit, we initiate the fine-grained study of lower-order factors for negative-weight shortest paths.
Footnote 3: In these papers, the Dijkstra running time \(O(m+n\log n)\) was improved to the current state of the art \(O(m+n\log\log\min\{n,C\})\)[57], where \(C\) is the largest weight in the graph.
### Our Results
In our main result we make significant progress on our driving question, and improve the BNW algorithm by nearly six log-factors:
[Negative-Weight SSSP] There is a Las Vegas algorithm which, given a directed graph \(G\) and a source node \(s\), either computes a shortest path tree from \(s\) or finds a negative cycle in \(G\), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
We obtain this result by optimizing the BNW algorithm, pushing it to its limits. Aaron Bernstein remarked in a presentation of their result that "something like \(\log^{5}\) is inherent to [their] current framework".4 It is thus surprising that we obtain such a dramatic improvement to nearly three log-factors within the same broader framework. Despite this speed-up, our
algorithm is still modular and simple at its core. In the technical overview below we discuss the technical similarities and differences between our algorithm and the BNW algorithm in detail.
Recall that computing shortest paths is only reasonable in graphs without negative cycles (as otherwise two nodes are possibly connected by a path of arbitrarily small negative weight). In light of this, we solve the negative-weight SSSP problem in its strongest possible form in Theorem 1: The algorithm either returns a shortest path tree, or returns a negative cycle as a certificate that no shortest path tree exists. In fact, the subproblem of detecting negative cycles has received considerable attention on its own in the literature (see e.g. the survey [19]).
In the presence of negative cycles, a natural alternative to finding one such cycle is to compute all distances in the graph anyway (where some of the distances are \(-\infty\) or \(\infty\)). This task can be solved in the same running time:
[Negative-Weight Single-Source Distances] There is a Las Vegas algorithm, which, given a directed graph \(G\) and a source \(s\in V(G)\), computes the distances from \(s\) to all other vertices in the graph (these distances are possibly \(-\infty\) or \(\infty\)), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
Owing to the countless practical applications of shortest paths problems, it is an important question to ask whether there is a negative-weights shortest paths algorithm that has a competitive implementation. The typical practical implementation in competitive programming uses optimized variants of Bellman-Ford's algorithm, such as the "Shortest Path Faster Algorithm" [46, 22] (see also [18] for an experimental evaluation of other more sophisticated variants of Bellman-Ford). However, it is easy to find instances for which these algorithms require time \(\Omega(mn)\). It would be exciting if, after decades of competitive programming, there finally was an efficient implementation to deal with these instances. With its nine log-factors, the BNW algorithm does not qualify as a practical candidate. We believe that our work paves the way for a comparably fast implementation.
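For concreteness, a minimal sketch of this queue-based Bellman-Ford variant (SPFA) is given below; it is a hedged textbook illustration, not code from any of the cited papers, and it assumes that no negative cycle is reachable from the source (otherwise the loop would not terminate without an additional relaxation counter).

```python
from collections import deque

def spfa(n, adj, s):
    """Queue-based Bellman-Ford ("Shortest Path Faster Algorithm").
    adj[u] is a list of (v, w) pairs; worst-case running time is Theta(nm)."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    in_queue = [False] * n
    q = deque([s])
    in_queue[s] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:      # relax edge (u, v)
                dist[v] = dist[u] + w
                if not in_queue[v]:        # enqueue v at most once at a time
                    q.append(v)
                    in_queue[v] = True
    return dist
```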
In addition to our main result, we make progress on two closely related problems: Computing the minimum cycle mean, and low-diameter decompositions in directed graphs. We describe these results in the following sections.
#### Minimum Cycle Mean
In a directed weighted graph, the _mean_ of a cycle \(C\) is defined as the ratio \(\bar{w}(C)=w(C)/|C|\) where \(w(C)\) is the total weight of \(C\). The _Minimum Cycle Mean_ problem is to compute, in a given directed weighted graph, the minimum mean across all cycles, \(\min_{C}\bar{w}(C)\). This is a central problem in the context of network flows [1], with applications to verification and reactive systems analysis [15].
There is a large body of literature on computing the Minimum Cycle Mean. In 1978, Karp [36] established an \(O(mn)\)-time algorithm, which remains the fastest strongly polynomial time algorithm to date. In terms of weakly polynomial algorithms, Lawler observed that the problem is equivalent to detecting negative cycles, up to a factor \(O(\log(nW))\)[40, 39]. Indeed, note that one direction is trivial: The graph has a negative cycle if and only if the minimum cycle mean is negative. For the other direction, he provided a reduction to detecting negative cycles on \(O(\log(nW))\) graphs with modified rational edge weights. Thus, following Lawler's observation, any negative-weight SSSP algorithm can be turned into a Minimum Cycle Mean algorithm in a black-box way with running time overhead \(O(\log(nW))\).
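To make this observation concrete, here is a hedged Python sketch of the reduction direction from Minimum Cycle Mean to negative-cycle detection (our own illustration, not the paper's pseudocode). Here `has_negative_cycle` is an assumed black-box detector, and scaling by \(n^{2}\) keeps all intermediate weights integral: a cycle \(C\) satisfies \(\bar{w}(C)<t/n^{2}\) if and only if the graph with weights \(n^{2}w(e)-t\) contains a negative cycle.

```python
def min_cycle_mean_bracket(edges, n, W, has_negative_cycle):
    """Locate the minimum cycle mean lambda* within an interval of length
    1/n^2 using O(log(nW)) negative-cycle tests. `edges` is a list of
    (u, v, w) with integer weights in [-W, W]."""
    lo, hi = -W * n * n, W * n * n + 1   # invariant: lo/n^2 <= lambda* < hi/n^2
    while hi - lo > 1:
        t = (lo + hi) // 2
        shifted = [(u, v, n * n * w - t) for (u, v, w) in edges]
        if has_negative_cycle(shifted):
            hi = t                       # some cycle has mean < t/n^2
        else:
            lo = t                       # all cycle means are >= t/n^2
    # Cycle means are rationals with denominator at most n, and any two
    # distinct ones differ by more than 1/n^2, so the returned interval
    # contains exactly one candidate value, namely lambda*.
    return lo, hi                        # lambda* lies in [lo/n^2, hi/n^2)
```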
There are also results specific to Minimum Cycle Mean computations: Orlin and Ahuja [47] designed an algorithm in time \(O(m\sqrt{n}\log(nW))\) (improving over the baseline
\(O(m\sqrt{n}\log^{2}(nW))\) which follows from the SSSP algorithms by [32, 33, 34]). For the special case of dense graphs with 0-1-weights, an \(O(n^{2})\)-time algorithm is known [14]. Finally, in terms of approximation algorithms it is known how to compute a \((1+\varepsilon)\)-approximation in time \(\widetilde{O}(n^{\omega}\log(W)/\varepsilon)\)[15].
As for negative-weight SSSP, all these algorithms are dominated by the recent BNW algorithm: By Lawler's observation, their algorithm computes the minimum cycle mean in time \(O(m\log^{8}(n)\log^{2}(nW))\). In fact, it is implicit in their work that the running time can be reduced to \(O(m\log^{8}(n)\log(nW))\). Our contribution is again that we reduce the number of log-factors from nine to nearly three:
[Minimum Cycle Mean] There is a Las Vegas algorithm, which given a directed graph \(G\) finds a cycle \(C\) with minimum mean weight \(\bar{w}(C)=\min_{C^{\prime}}\bar{w}(C^{\prime})\), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
#### Directed Low-Diameter Decompositions
A crucial ingredient to the BNW algorithm is a Low-Diameter Decomposition (LDD) in directed graphs. Our SSSP algorithm differs in that regard from the BNW algorithm, and does not explicitly use LDDs. Nevertheless, as a side result of this work we improve the best known LDD in directed graphs.
LDDs have been first studied by Awerbuch almost 40 years ago [3] and have ever since found several applications, mostly for undirected graphs and mostly in distributed, parallel and dynamic settings [5, 6, 4, 42, 8, 12, 45, 48, 27, 16, 10, 28, 11]. The precise definitions in these works mostly differ, but the common idea is to select a small subset of edges \(S\) such that after removing all edges in \(S\) from the graph, the remaining graph has (strongly) connected components with bounded diameter.
For directed graphs, we distinguish two types of LDDs: _Weak_ LDDs ensure that for every strongly connected component \(C\) in the graph \(G\setminus S\), the diameter of \(C\)_in the original graph_ is bounded. A _strong_ LDD exhibits the stronger property that the diameter of \(C\) in the graph \(G\setminus S\) is bounded.
[Directed Low-Diameter Decomposition] A weak Low-Diameter Decomposition with overhead \(\rho\) is a Las Vegas algorithm that, given a directed graph \(G\) with nonnegative edge weights \(w\) and a parameter \(D>0\), computes an edge set \(S\subseteq E(G)\) with the following properties:
* Sparse Hitting: _For any edge \(e\in E\), \(\mathbf{P}(e\in S)\leq O(\frac{w(e)}{D}\cdot\rho+\frac{1}{\mathrm{poly}(n)})\)._
* Weak Diameter: _Every SCC \(C\) in \(G\setminus S\) has weak diameter at most \(D\) (that is, for any two vertices \(u,v\in C\), we have \(\mathrm{dist}_{G}(u,v)\leq D\))._
_We say that the Low-Diameter Decomposition is strong if it additionally satisfies the following stronger property:
* Strong Diameter: _Every SCC \(C\) in \(G\setminus S\) has diameter at most \(D\) (that is, for any two vertices \(u,v\in C\), we have \(\mathrm{dist}_{G\setminus S}(u,v)\leq D\))._
For directed graphs, the state-of-the-art _weak_ LDD was developed by Bernstein, Nanongkai and Wulff-Nilsen [11] as a tool for their shortest paths algorithm. Their result is a weak LDD with polylogarithmic overhead \(O(\log^{2}n)\) running in near-linear time \(O(m\log^{2}n+n\log^{2}n\log\log n)\). In terms of _strong_ LDDs, no comparable result is known. While it is not hard to adapt their algorithm to compute a strong LDD, this augmentation suffers from a slower running time \(\Omega(nm)\). Our contribution is designing the first strong LDD computable in near-linear time, with only slightly worse overhead \(O(\log^{3}n)\):
**Theorem 5** (Strong Low-Diameter Decomposition).: _There is a strong Low-Diameter Decomposition with overhead \(O(\log^{3}n)\), computable in time \(O((m+n\log\log n)\log^{2}n)\) with high probability (and in expectation)._
### Technical Overview
Our algorithm is inspired by the BNW algorithm and follows its general framework, but differs in many aspects. In this section we give a detailed comparison.
#### The Framework
The presentation of our algorithm is modular: We will first focus on the SSSP problem on a restricted class of graphs (to which we will simply refer as _restricted_ graphs; see Definition 6 below). In the second step we demonstrate how to obtain our results for SSSP on general graphs, for finding negative cycles, and for computing the minimum cycle mean by reducing to the restricted problem in a black-box manner.
**Definition 6** (Restricted Graphs).: _An edge-weighted directed graph \(G=(V,E,w)\) with a designated source vertex \(s\in V\) is restricted if it satisfies:_
* _The edge weights are integral and at least_ \(-1\)_._
* _The minimum cycle mean is at least_ \(1\)_._
* _The source_ \(s\) _is connected to every other vertex by an edge of weight_ \(0\)_._
In particular, note that restricted graphs do not contain negative cycles, and therefore it is always possible to compute a shortest path tree. The _Restricted SSSP_ problem is to compute a shortest path tree in a given restricted graph \(G\). We write \(T_{\mathrm{RSSSP}}(m,n)\) for the optimal running time of a Restricted SSSP algorithm with error probability \(\frac{1}{2}\), say.
#### Improvement 1: Faster Restricted SSSP via Better Decompositions
Bernstein et al. [11] proved that \(T_{\mathrm{RSSSP}}(m,n)=O(m\log^{5}n)\). Our first contribution is that we shave nearly three log-factors and improve this bound to \(T_{\mathrm{RSSSP}}(m,n)=O((m+n\log\log n)\log^{2}n)\) (see Theorem 18).
At a high level, the idea of the BNW algorithm is to decompose the graph by finding a subset of edges \(S\) suitable for the following two subtasks: (1) We can recursively compute shortest paths in the graph \(G\setminus S\) obtained by removing the edges in \(S\), and thereby make enough progress to incur in total only a small polylogarithmic overhead in the running time. And (2), given the outcome of the recursive call, we can efficiently "add back" the edges from \(S\) to obtain a correct shortest path tree for \(G\). For the latter task, the crucial property is that \(S\) intersects every shortest path in \(G\) at most \(O(\log n)\) times (in expectation), as then a simple generalization of Dijkstra's and Bellman-Ford's algorithm can adjust the shortest path tree in near-linear time (see Lemma 25).
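To illustrate step (2), here is a minimal sketch of such a Dijkstra/Bellman-Ford hybrid (our own illustration: the paper's Lemma 25 is a refined variant using Thorup's priority queue, which this sketch does not attempt). Dijkstra phases handle the nonnegative edges, and interleaved Bellman-Ford rounds relax the edges of \(S\); if every shortest path contains at most \(k\) edges of \(S\), then \(k+1\) rounds suffice.

```python
import heapq

def dijkstra_bellman_ford(n, adj, special, s):
    """adj[u] lists the nonnegative edges (v, w); `special` is the edge set S
    as triples (u, v, w). Assumes no negative cycle is reachable from s."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    frontier = {s}
    while frontier:
        pq = [(dist[v], v) for v in frontier]   # Dijkstra phase, seeded with
        heapq.heapify(pq)                       # the recently improved vertices
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:                 # nonnegative edges only
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        frontier = set()                        # one Bellman-Ford round over S
        for u, v, w in special:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                frontier.add(v)
    return dist
```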
For our result, we keep the implementation of step (2) mostly intact, except that we use a faster implementation of Dijkstra's algorithm due to Thorup [57] (see Lemma 25). The most significant difference takes place in step (1), where we change how the algorithm selects \(S\). Specifically, Bernstein et al. used a directed Low-Diameter Decomposition to implement the decomposition. We are following the same thought, but derive a more efficient and direct decomposition scheme. To this end, we define the following key parameter:
**Definition 7**.: _Let \(G\) be a restricted graph with designated source \(s\). We define \(\kappa(G)\) as the maximum number of negative edges (that is, edges of weight exactly \(-1\)) in any simple path \(P\) which starts at \(s\) and has nonpositive weight \(w(P)\leq 0\)._
Our new decomposition can be stated as follows.
**Lemma 8** (Decomposition).: _Let \(G\) be a restricted graph with source vertex \(s\in V(G)\) and \(\kappa\geq\kappa(G)\). There is a randomized algorithm \(\textsc{Decompose}(G,\kappa)\) running in expected time \(O((m+n\log\log n)\log n)\) that computes an edge set \(S\subseteq E(G)\) such that:_
1. Progress: _With high probability, for any strongly connected component_ \(C\) _in_ \(G\setminus S\)_, we have (i)_ \(|C|\leq\frac{3}{4}|V(G)|\) _or (ii)_ \(\kappa(G[C\cup\{s\}])\leq\frac{\kappa}{2}\)_._
2. Sparse Hitting: _For any shortest_ \(s\)_-_\(v\)_-path_ \(P\) _in_ \(G\)_, we have_ \(\mathbf{E}(|P\cap S|)\leq O(\log n)\)_._
The sparse hitting property is exactly what we need for (2). With the progress condition, we ensure that \(|V(G)|\cdot\kappa(G)\) reduces by a constant factor when recursing on the strongly connected components of \(G\setminus S\). The recursion tree therefore reaches depth at most \(O(\log(n\cdot\kappa(G)))=O(\log n)\). In summary, with this new idea we can compute shortest paths in restricted graphs in time \(O((m+n\log\log n)\log^{2}n)\).
#### Improvement 2: Faster Scaling
It remains to lift our Restricted SSSP algorithm to the general SSSP problem at the expense of at most one more \(\log\)-factor \(\log(nW)\). In comparison, the BNW algorithm spends four \(\log\)-factors \(O(\log^{3}n\log W)\) here. As a warm-up, we assume that the given graph is promised not to contain a negative cycle.
**Warm-Up: From Restricted Graphs to Graphs without Negative Cycles.** This task is a prime example amenable to the _scaling technique_ from the 80's [32, 33, 34]: By rounding the weights in the given graph \(G\) from \(w(e)\) to \(\lceil\frac{3w(e)}{W+1}\rceil+1\) we ensure that (i) all weights are at least \(-1\) and (ii) the minimum cycle mean is at least \(1\), and thus we turn \(G\) into a restricted graph \(H\) (see Lemma 30). We compute the shortest paths in \(H\) and use the computed distances (by means of a _potential function_) to augment the weights in the original graph \(G\). If \(G\) has smallest weight \(-W\), in this way we can obtain an _equivalent_ graph \(G^{\prime}\) with smallest weight \(-\frac{3}{4}W\), where equivalence is defined as follows:
**Definition 9** (Equivalent Graphs).: _We say that two graphs \(G,G^{\prime}\) over the same set of vertices and edges are equivalent if (1) any shortest path in \(G\) is also a shortest path in \(G^{\prime}\) and vice versa, and (2) for any cycle \(C\), \(w_{G}(C)=w_{G^{\prime}}(C)\)._
Hence, by (1) we continue to compute shortest paths in \(G^{\prime}\). At first glance it seems that repeating this scaling step incurs only a factor \(\log W\) in the running time, but for subtle reasons the overhead is actually \(\log(nW)\). Another issue is that the Restricted SSSP algorithm errs with constant probability. The easy fix loses another \(\log n\) factor due to boosting (this is how Bernstein et al. obtain their algorithm SPMonteCarlo, see [11, Theorem 7.1]). Fortunately, we can "merge" the scaling and boosting steps to reduce the overhead to \(\log(nW)\) in total, see Theorem 29.
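Concretely, the two mechanical ingredients of this warm-up, the rounding step and the potential-based reweighting, can be sketched as follows (a minimal illustration assuming an edge-list representation; the potential \(h\) is obtained from the distances computed in \(H\), and the full details are in Lemma 30, which this sketch does not reproduce).

```python
def ceil_div(a, b):
    """Exact ceiling division for integers (b > 0), avoiding float rounding."""
    return -((-a) // b)

def round_weights(edges, W):
    """Rounding step: w(e) -> ceil(3 w(e) / (W+1)) + 1. Since w(e) >= -W
    implies 3 w(e) / (W+1) > -3, every new weight is at least -1."""
    return [(u, v, ceil_div(3 * w, W + 1) + 1) for (u, v, w) in edges]

def reweight(edges, h):
    """Johnson-style potential reweighting w'(u,v) = w(u,v) + h[u] - h[v].
    On any u-v path the h-terms telescope to h[u] - h[v], and on any cycle
    they cancel entirely, so shortest paths and cycle weights are preserved
    in the sense of Definition 9."""
    return [(u, v, w + h[u] - h[v]) for (u, v, w) in edges]
```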
**From Restricted Graphs to Arbitrary Graphs.** What if \(G\) contains a negative cycle? In this case, our goal is to find and return one such negative cycle. Besides the obvious advantage that it makes the output more informative, this also allows us to strengthen the algorithm from Monte Carlo to Las Vegas, since both a shortest path tree and a negative cycle serve as
_certificates_ that can be efficiently tested. Using the scaling technique as before, we can easily _detect_ whether a given graph contains a negative cycle in time \(O(T_{\text{RSSSP}}(m,n)\cdot\log(nW))\) (even with high probability, see Corollary 3.2), but we cannot _find_ such a cycle.
We give an efficient reduction from finding negative cycles to Restricted SSSP with overhead \(O(\log(nW))\). This reduction is the technically most involved part of our paper. In the following paragraphs we attempt to give a glimpse into the main ideas.
**A Noisy-Binary-Search-Style Problem.** For the rest of this overview, we phrase our core challenge as abstractly as possible, and omit further context. We will use the following notation: given a directed graph \(G\) and an integer \(M\), we write \(G^{+M}\) to denote the graph obtained by adding \(M\) to every edge weight of \(G\). Consider the following task:
[Threshold] Given a weighted graph \(G\), compute the smallest integer \(M^{*}\geq 0\) such that the graph \(G^{+M^{*}}\), which is obtained from \(G\) by adding \(M^{*}\) to all edge weights, does not contain a negative cycle.
Our goal is to solve the Threshold problem in time \(O(T_{\text{RSSSP}}(m,n)\log(nW))\) (from this it follows that we can find negative cycles in the same time, see Lemma 3.2). As a tool, we are allowed to use the following lemma as a black-box (which can be proven similarly to the warm-up case):
**Lemma 11** (Informal).: _There is an \(O(T_{\text{RSSSP}}(m,n))\)-time algorithm that, given a graph \(G\) with minimum weight \(-W\), either returns an equivalent graph \(G^{\prime}\) with minimum weight \(-\frac{3}{4}W\), or returns NegativeCycle. If \(G\) does not contain a negative cycle, then the algorithm returns NegativeCycle with error probability at most \(0.01\)._
Morally, Lemma 11 provides a test for whether a given graph \(G\) contains a negative cycle. A natural idea is therefore to find \(M^{*}\) by binary search, using Lemma 11 as the tester. However, note that this tester is _one-sided_: If \(G\) contains a negative cycle, then the tester is not obliged to detect one. Fortunately, we can turn the tester into a win-win algorithm to compute \(M^{*}\).
We first describe our Threshold algorithm in an idealized setting where we assume that the tester from Lemma 11 has error probability \(0\). We let \(d=\frac{1}{5}W\), and run the tester on the graph \(G^{+d}\). There are two cases:
* _The tester returns NegativeCycle:_ In the idealized setting we can assume that \(G^{+d}\) indeed contains a negative cycle. We therefore compute the threshold of \(G^{+d}\) recursively, and return that value plus \(d\). Note that the minimum weight of \(G^{+d}\) is at least \(-W+d=-\frac{4}{5}W\).
* _The tester returns an equivalent graph \(G^{\prime}\):_ In this case, we recursively compute and return the threshold value of \((G^{\prime})^{-d}\). Note that the graphs \(G\) and \((G^{\prime})^{-d}\) share the same threshold value, as by Definition 9 we have \(w_{(G^{\prime})^{-d}}(C)=w_{G^{\prime}}(C)-d\cdot|C|=w_{G^{+d}}(C)-d\cdot|C|=w_{G}(C)\) for any cycle \(C\). Moreover, since \(G^{+d}\) has smallest weight \(-\frac{4}{5}W\), the equivalent graph \(G^{\prime}\) has smallest weight at least \(-\frac{3}{4}\cdot\frac{4}{5}W=-\frac{3}{5}W\) by Lemma 11. Therefore, \((G^{\prime})^{-d}\) has smallest weight at least \(-\frac{3}{5}W-d=-\frac{4}{5}W\).
In both cases, we recursively compute the threshold of a graph with smallest weight at least \(-\frac{4}{5}W\). Therefore, the recursion reaches depth \(O(\log W)\) until we have reduced the graph to constant minimum weight and the problem becomes easy.
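In code, the idealized recursion can be sketched as follows (our own illustration; `tester`, `shift`, and `base_case` are assumed black boxes, with `tester` being the zero-error idealization of Lemma 11).

```python
def threshold_idealized(G, W, tester, shift, base_case):
    """Idealized Threshold recursion. tester(G) either returns the string
    "NegativeCycle" or an equivalent graph whose smallest weight improved by
    a factor 3/4; shift(G, d) adds d to every edge weight."""
    if W <= 5:                          # constant smallest weight: easy base case
        return base_case(G)
    d = W // 5
    out = tester(shift(G, d))           # test G^{+d}
    if out == "NegativeCycle":
        # G^{+d} still has a negative cycle; its smallest weight is >= -(4/5) W
        return d + threshold_idealized(shift(G, d), W - d,
                                       tester, shift, base_case)
    # out is an equivalent graph G'; (G')^{-d} has the same threshold as G
    # and smallest weight >= -(3/5) W - d = -(4/5) W
    return threshold_idealized(shift(out, -d), W - d,
                               tester, shift, base_case)
```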
The above algorithm works in the idealized setting, but what about the unidealized setting, where the tester can err with constant probability? We could of course first boost the tester to succeed with high probability. In combination with the above algorithm, this would solve the Threshold problem in time \(O(T_{\text{RSSSP}}(m,n)\log(nW)\log n)\).
However, the true complexity of this task lies in avoiding the naive boosting. By precisely understanding the unidealized setting with constant error probability, we improve the running time for Threshold to \(O(T_{\text{RSSSP}}(m,n)\log(nW))\). To this end, it seems that one could apply the technique of _noisy binary search_ (see e.g. [49, 25, 50]). Unfortunately, the known results do not seem applicable to our situation, as Lemma 11 only provides a one-sided tester. Our solution to this final challenge is an innovative combination of the algorithm sketched above with ideas from noisy binary search. The analysis makes use of _drift analysis_ (see e.g. [41]), which involves defining a suitable _drift function_ (a quantity which in expectation decreases by a constant factor in each step and is zero if and only if we found the optimal value \(M^{*}\)) and an application of a _drift theorem_ (see Theorem 44) to prove that the drift function rapidly approaches zero.
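For intuition, the drift theorems used in this type of argument (cf. the classical multiplicative drift theorem from the drift-analysis literature [41]; the exact form of Theorem 44 may differ) have the following flavor: if a stochastic process \((X_{t})_{t\geq 0}\) takes values in \(\{0\}\cup[x_{\min},\infty)\) and satisfies \(\mathbf{E}[X_{t+1}\mid X_{t}]\leq(1-\delta)X_{t}\) for some \(\delta>0\), then the hitting time \(T=\min\{\,t:X_{t}=0\,\}\) satisfies

\[\mathbf{E}[T]\leq\frac{1+\ln(X_{0}/x_{\min})}{\delta}.\]

In particular, if the drift function starts at \(X_{0}\leq\operatorname{poly}(nW)\), its minimum positive value is at least inverse-polynomial, and \(\delta\) is a constant, this yields an expected \(O(\log(nW))\) number of rounds.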
### Summary of Log Shaves
Finally, to ease the comparison with the BNW algorithm, we compactly list where exactly we shave the nearly six log-factors. We start with the improvements in the Restricted SSSP algorithm:
* We use Thorup's priority queue [57] to speed up Dijkstra's algorithm, see Lemma 25. This reduces the cost of one log-factor to a loglog-factor.
* The Sparse Hitting property of our decomposition scheme (Lemma 8) incurs only an \(O(\log n)\) overhead in comparison to the \(O(\log^{2}n)\) overhead due to the Low-Diameter Decomposition in the BNW algorithm.
* The Progress property of our decomposition scheme (Lemma 8) ensures that the recursion depth of our Restricted SSSP algorithm is just \(O(\log n)\). The analogous recursion depth in the BNW algorithm is \(O(\log^{2}n)\) (depth \(\log n\) for reducing the number of nodes \(n\), times depth \(\log n\) for reducing \(\max_{v}\eta_{G}(v)\)).
Next, we summarize the log-factors shaved in the scaling step:
* The BNW algorithm amplifies the success probability of the Restricted SSSP algorithm by repeating it \(O(\log n)\) times. We combine the boosting with the scaling steps, which saves this log-factor.
* We improve the overall reduction from finding a negative cycle to Restricted SSSP. In particular, we give an implementation of Threshold which is faster by two log-factors (see Lemmas 36 and 42). This is where we use an involved analysis via a drift function.
### Open Problems
Our work leaves open several interesting questions. Can our algorithm for negative-weight Single-Source Shortest Paths be improved further? Specifically:
1. _Can the number of \(\log n\) factors be improved further?_ In our algorithm, we suffer three log-factors because of (i) the scaling technique (\(\log(nW)\)) to reduce to restricted graphs, and on restricted graphs (ii) the inherent \(\log n\) overhead of the graph decomposition and (iii) the recursion depth \(\log n\) to progressively reduce \(\kappa(G)\), all of which seem unavoidable. We therefore believe that it is hard to improve upon our algorithm without substantially changing the framework.
2. _Can the loss due to the scaling technique be reduced from \(\log(nW)\) to \(\log W\)?_ The classical scaling technique, as a reduction to graphs with weights at least \(-1\), requires only \(\log W\) iterations [34]. But in our setting, due to the stronger conditions for _restricted_ graphs (and due to the boosting), we need \(\log(nW)\) iterations. Can we do better?
3. _Can the_ \(\log W\) _factor be removed from the running time altogether?_ That is, is there a _strongly polynomial_ algorithm in near-linear time? In terms of non-scaling algorithms, the Bellman-Ford algorithm remains state of the art with running time \(O(nm)\). This question has been asked repeatedly and appears to be very hard.
4. _Can the algorithm be derandomized?_ The fastest deterministic algorithm for negative-weight SSSP remains the \(O(m\sqrt{n}\log(W))\) algorithm by [32, 33, 34] and it is open to find a near-linear-time algorithm.
### Outline
This paper is structured as follows. In Section 2 we give some formal preliminaries. In Section 3 we present our algorithm for negative-weight SSSP in restricted graphs. In Section 4 we extend the algorithm from the previous section to work on general graphs without negative cycles. In Section 5 we remove this assumption and strengthen the algorithm to find negative cycles without worsening the running time. In Section 6 we give our result for computing the minimum cycle mean. Finally, in Section 7 we give our improved results for Low-Diameter Decompositions in directed graphs.
## 2 Preliminaries
We write \([n]=\{\,1,\ldots,n\,\}\) and \(\widetilde{O}(T)=T\cdot(\log T)^{O(1)}\). An event occurs _with high probability_ if it occurs with probability \(1-1/n^{c}\) for an arbitrarily large constant \(c\) (here, \(n\) is the number of vertices in the input graph). Unless further specified, our algorithms are Monte Carlo algorithms that succeed with high probability.
**Directed Graphs.** Throughout we consider directed edge-weighted graphs \(G=(V,E,w)\). Here \(V=V(G)\) is the set of vertices and \(E=E(G)\subseteq V(G)^{2}\) is the set of edges. All edge weights are integers, denoted by \(w(e)=w(u,v)\) for \(e=(u,v)\in E(G)\). We typically set \(n=|V(G)|\) and \(m=|E(G)|\). We write \(G[C]\) to denote the _induced subgraph_ with vertices \(C\subseteq V(G)\) and write \(G\setminus S\) to denote the graph \(G\) after deleting all edges in \(S\subseteq E\). We write \(\deg(v)\) for the (out-)degree of \(v\), that is, the number of edges starting from \(v\).
A _strongly connected component (SCC)_ is a maximal set of vertices \(C\subseteq V(G)\) in which all pairs are reachable from each other. It is known that every directed graph can be decomposed into a collection of SCCs, and the graph obtained by compressing the SCCs into single nodes is acyclic. It is also known that the SCCs can be computed in linear time:
[Strongly Connected Components, [55]] In any directed graph \(G\), the strongly connected components can be identified in time \(O(n+m)\).
For a set of edges \(S\) (such as a path or a cycle), we write \(w(S)=\sum_{e\in S}w(e)\). A negative cycle is a cycle \(C\) with \(w(C)<0\). For vertices \(u,v\), we write \(\operatorname{dist}_{G}(u,v)\) for the length of the shortest \(u\)-\(v\)-path. If there is a negative-weight cycle in some \(u\)-\(v\)-path, we set \(\operatorname{dist}_{G}(u,v)=-\infty\), and if there is no path from \(u\) to \(v\) we set \(\operatorname{dist}_{G}(u,v)=\infty\).
[Balls] For a vertex \(v\), and a nonnegative integer \(r\), we denote the out-ball centered at \(v\) with radius \(r\) by \(B_{G}^{\text{out}}(v,r)=\{\,u\in V(G):\operatorname{dist}_{G}(v,u)\leq r\,\}\). Similarly, we denote the in-ball centered at \(v\) with radius \(r\) by \(B_{G}^{\text{in}}(v,r)=\{\,u\in V(G):\operatorname{dist}_{G}(u,v)\leq r\,\}\). Further, we write \(\partial B_{G}^{\text{out}}(v,r)=\{\,(u,w)\in E:u\in B^{\text{out}}(v,r)\wedge w \notin B^{\text{out}}(v,r)\,\}\) to denote the boundary edges of an out-ball, and \(\partial B_{G}^{\text{in}}(v,r)=\{\,(u,w)\in E:u\notin B^{\text{in}}(v,r)\wedge w \in B^{\text{in}}(v,r)\,\}\) for an in-ball.
In all these notations, we occasionally drop the subscript \(G\) if it is clear from context.
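For concreteness, an out-ball can be computed by running Dijkstra's algorithm truncated at radius \(r\); the following minimal sketch uses a plain binary heap (in the stated running time bounds, Thorup's priority queue [57] would take its place). Graphs are assumed to be adjacency dicts `{u: [(v, w), ...]}` with nonnegative weights.

```python
import heapq

def out_ball(G, v, r):
    """Return B^out(v, r) = {u : dist(v, u) <= r}."""
    dist = {v: 0}
    heap = [(0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for x, w in G.get(u, []):
            nd = d + w
            if nd <= r and nd < dist.get(x, float("inf")):
                dist[x] = nd
                heapq.heappush(heap, (nd, x))
    return set(dist)
```

The in-ball \(B^{\text{in}}(v,r)\) is obtained by running the same procedure on the reversed graph.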
The _Single-Source Shortest Paths (SSSP)_ problem is to compute the distances \(\operatorname{dist}_{G}(s,v)\) from a designated source vertex \(s\in V(G)\) to all other vertices \(v\in V(G)\). When \(G\) does not contain negative cycles, this is equivalent to computing a _shortest path tree_ from \(s\) (that is, a tree in which every \(s\)-to-\(v\) path is a shortest path in \(G\)). For graphs with _nonnegative_ edge weights, Dijkstra's classical algorithm solves the SSSP problem in near-linear time. We use the following result by Thorup, which replaces the \(\log n\) overhead by \(\log\log n\) (in the RAM model, see the paragraph on the machine model below).
[Dijkstra's Algorithm, [23, 57]] In any directed graph \(G\) with nonnegative edge weights, the SSSP problem can be solved in time \(O(m+n\log\log n)\).
[Bellman-Ford's Algorithm, [54, 26, 9, 46]] In any directed graph \(G\), the SSSP problem can be solved in time \(O(mn)\).
**Potentials.** Let \(G\) be a directed graph. We refer to functions \(\phi:V(G)\to\mathbf{Z}\) as _potential functions_. We write \(G_{\phi}\) for the graph obtained from \(G\) by changing the edge weights to \(w_{\phi}(u,v)=w(u,v)+\phi(u)-\phi(v)\).
[Equivalent Graphs] We say that two graphs \(G,G^{\prime}\) over the same set of vertices and edges are equivalent if (1) any shortest path in \(G\) is also a shortest path in \(G^{\prime}\) and vice versa, and (2) for any cycle \(C\), \(w_{G}(C)=w_{G^{\prime}}(C)\).
[Johnson's Trick, [35]] Let \(G\) be a directed graph, and let \(\phi\) be an arbitrary potential function. Then \(w_{\phi}(P)=w(P)+\phi(u)-\phi(v)\) for any \(u\)-\(v\)-path \(P\), and \(w_{\phi}(C)=w(C)\) for any cycle \(C\). It follows that \(G\) and \(G_{\phi}\) are equivalent.
[[35]] Let \(G\) be a directed graph without negative cycles and let \(s\in V\) be a source vertex that can reach every other node. Then, for the potential \(\phi\) defined as \(\phi(v)=\operatorname{dist}_{G}(s,v)\), it holds that \(w_{\phi}(e)\geq 0\) for all edges \(e\in E\).
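A minimal sketch of reweighting by a potential, with graphs again as adjacency dicts (the example potential is chosen by hand for illustration):

```python
def apply_potential(G, phi):
    """Return G_phi with w_phi(u, v) = w(u, v) + phi(u) - phi(v).
    Path weights change only by phi(source) - phi(target) and cycle
    weights are unchanged, so G and G_phi are equivalent (Johnson's
    trick)."""
    return {u: [(v, w + phi[u] - phi[v]) for (v, w) in nbrs]
            for u, nbrs in G.items()}

# With phi(v) = dist_G(s, v) in a graph without negative cycles,
# all reweighted edges become nonnegative by the triangle inequality:
G = {"s": [("a", 2), ("b", 5)], "a": [("b", -2)], "b": []}
phi = {"s": 0, "a": 2, "b": 0}    # shortest-path distances from s
print(apply_potential(G, phi))    # every reweighted edge is >= 0
```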
**Machine Model.** We work in the standard word RAM model with word size \(\Theta(\log n+\log M)\), where \(n\) is the number of vertices and \(M\) is an upper bound on the largest edge weight in absolute value. That is, we assume that we can store vertex identifiers and edge weights in a single machine word, and perform basic operations in unit time.
## 3 SSSP on Restricted Graphs
In this section we give an efficient algorithm for SSSP on restricted graphs (recall Definition 6). Specifically, we prove the following theorem:
[Restricted SSSP] In a restricted graph \(G\) with source vertex \(s\in V(G)\), we can compute a shortest path tree from \(s\) in time \(O((m+n\log\log n)\log^{2}n)\) with constant error probability \(\frac{1}{2}\). (If the algorithm does not succeed, it returns Fail.)
We develop this algorithm in two steps: First, we prove our decomposition scheme for restricted graphs (Section 3.1) and then we use the decomposition scheme to build an SSSP algorithm for restricted graphs (Section 3.2).
### Decomposition for Restricted Graphs
In this section, we prove the decomposition lemma:
[Decomposition] Let \(G\) be a restricted graph with source vertex \(s\in V(G)\) and \(\kappa\geq\kappa(G)\). There is a randomized algorithm \(\textsc{Decompose}(G,\kappa)\) running in expected time \(O((m+n\log\log n)\log n)\) that computes an edge set \(S\subseteq E(G)\) such that:
1. Progress: _With high probability, for any strongly connected component \(C\) in \(G\setminus S\), we have (i) \(|C|\leq\frac{3}{4}|V(G)|\) or (ii) \(\kappa(G[C\cup\{\,s\,\}])\leq\frac{\kappa}{2}\)._
2. Sparse Hitting: _For any shortest \(s\)-\(v\)-path \(P\) in \(G\), we have \(\mathbf{E}(|P\cap S|)\leq O(\log n)\)._
For the proof, we introduce some notation. Let \(G_{\geq 0}\) denote the graph obtained by replacing negative edge weights by \(0\) in the graph \(G\). A vertex \(v\) is _out-heavy_ if \(|B^{out}_{G_{\geq 0}}(v,\frac{\kappa}{4})|>\frac{n}{2}\) and _out-light_ if \(|B^{out}_{G_{\geq 0}}(v,\frac{\kappa}{4})|\leq\frac{3n}{4}\). Note that there can be vertices which are both out-heavy and out-light. We similarly define _in-light_ and _in-heavy_ vertices with "\(B^{in}_{G_{\geq 0}}\)" in place of "\(B^{out}_{G_{\geq 0}}\)".
[Heavy-Light Classification] There is an algorithm which, given a directed graph \(G\), labels every vertex correctly as either in-light or in-heavy (vertices which are both in-light and in-heavy may receive either label). The algorithm runs in time \(O((m+n\log\log n)\log n)\) and succeeds with high probability.
Note that by applying this lemma to the graph \(G^{rev}\) obtained by flipping the edge orientations, we can similarly classify vertices into out-light and out-heavy. We omit the proof for now as it follows easily from Lemma 4 which we state and prove in Section 7.
We are ready to state the decomposition algorithm: First, label each vertex as out-light or out-heavy and as in-light or in-heavy using the previous lemma. Then, as long as \(G\) contains a vertex \(v\) which is labeled out-light or in-light (say it is out-light), we will carve out a ball around \(v\). To this end, we sample a radius \(r\) from the geometric distribution \(\mathrm{Geom}(20\log n/\kappa)\), we cut the edges \(\partial B^{out}_{G_{\geq 0}}(v,r)\) (that is, the set of edges leaving \(B^{out}_{G_{\geq 0}}(v,r)\)) and we remove all vertices in \(B^{out}_{G_{\geq 0}}(v,r)\) from the graph. We summarize the procedure in Algorithm 1. In what follows, we prove correctness of this algorithm.
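Since the pseudocode of Algorithm 1 is not reproduced here, the following hedged Python sketch illustrates the carving loop (only the out-light case is shown; the in-light case runs symmetrically on the reversed graph). The `labels` input stands in for Lemma 19, and `ball_and_boundary` is a hypothetical helper that explores \(B^{out}_{G_{\geq 0}}(v,r)\) among the remaining vertices and returns the ball together with its outgoing boundary edges.

```python
import math, random

def sample_geometric(p):
    """Geom(p): number of independent p-coin tosses up to the first heads."""
    u = 1.0 - random.random()  # u in (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p)))

def decompose(G, kappa, labels, ball_and_boundary):
    """Sketch of Decompose(G, kappa): carve balls around light vertices."""
    n = len(G)
    p = min(0.5, 20.0 * math.log(max(n, 2)) / kappa)
    S = set()
    remaining = set(G)
    light = {v for v in remaining if labels.get(v) == "out-light"}
    while light:
        v = light.pop()
        r = sample_geometric(p)
        ball, boundary = ball_and_boundary(G, remaining, v, r)
        S |= boundary          # cut the edges leaving the ball
        remaining -= ball      # remove the ball's vertices from the graph
        light -= ball
    return S
```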
[Sparse Hitting of Algorithm 1] Let \(P\) be a shortest \(s\)-\(v\)-path in \(G\) and let \(S\) be the output of \(\textsc{Decompose}(G,\kappa)\). Then \(\mathbf{E}(|P\cap S|)\leq O(\log n)\).
Proof.: Focus on any edge \(e=(x,y)\in E(G)\). We analyze the probability that \(e\in S\). We first analyze the probability of \(e\) being included into \(S\) in Line 7 (and the same analysis applies to the case where the edge is included in Line 11). Focus on any iteration of the loop in Line 5 for some out-light vertex \(v\). There are three options:
* \(x,y\not\in B^{out}_{G\geq 0}(v,r)\): The edge \(e\) is not touched in this iteration. It might or might not be included in later iterations.
* \(x\in B^{out}_{G\geq 0}(v,r)\) and \(y\not\in B^{out}_{G\geq 0}(v,r)\): The edge \(e\) is contained in \(\partial B^{out}_{G\geq 0}(v,r)\) and thus definitely included into \(S\).
* \(y\in B^{out}_{G_{\geq 0}}(v,r)\): The edge \(e\) is definitely not included into \(S\). Indeed, \(e\not\in\partial B^{out}_{G_{\geq 0}}(v,r)\), so we do not include \(e\) into \(S\) in this iteration. Moreover, as we remove \(y\) from \(G\) after this iteration, we will never consider the edge \(e\) again.

Recall that the radius \(r\) is sampled from the geometric distribution \(\operatorname{Geom}(p)\) for \(p:=20\log n/\kappa\). Therefore, we have that

\[\mathbf{P}(e\in S)\leq\max_{v\in V}\;\operatorname*{\mathbf{P}}_{r\sim\operatorname{Geom}(p)}(y\not\in B^{out}_{G_{\geq 0}}(v,r)\mid x\in B^{out}_{G_{\geq 0}}(v,r))\]
\[=\max_{v\in V}\;\operatorname*{\mathbf{P}}_{r\sim\operatorname{Geom}(p)}(r<\operatorname{dist}_{G_{\geq 0}}(v,y)\mid r\geq\operatorname{dist}_{G_{\geq 0}}(v,x))\]
\[\leq\max_{v\in V}\;\operatorname*{\mathbf{P}}_{r\sim\operatorname{Geom}(p)}(r<\operatorname{dist}_{G_{\geq 0}}(v,x)+w_{G_{\geq 0}}(e)\mid r\geq\operatorname{dist}_{G_{\geq 0}}(v,x)).\]

By the memoryless property of geometric distributions, we may replace \(r\) by the (nonnegative) random variable \(r^{\prime}:=r-\operatorname{dist}_{G_{\geq 0}}(v,x)\):

\[\mathbf{P}(e\in S)\leq\max_{v\in V}\;\operatorname*{\mathbf{P}}_{r^{\prime}\sim\operatorname{Geom}(p)}(r^{\prime}<w_{G_{\geq 0}}(e))=\operatorname*{\mathbf{P}}_{r^{\prime}\sim\operatorname{Geom}(p)}(r^{\prime}<w_{G_{\geq 0}}(e))\leq p\cdot w_{G_{\geq 0}}(e).\]

The last inequality follows since we can interpret \(r^{\prime}\sim\operatorname{Geom}(p)\) as the number of coin tosses until we obtain heads, where each toss is independent and lands heads with probability \(p\). Thus, by a union bound, \(\mathbf{P}(r^{\prime}<w_{G_{\geq 0}}(e))\) is upper bounded by the probability that at least one of \(w_{G_{\geq 0}}(e)\) coin tosses lands heads.
Now consider a shortest \(s\)-\(v\)-path \(P\) in \(G\). Recall that \(w_{G}(P)\leq 0\), since \(G\) is a restricted graph. Hence, \(P\) contains at most \(\kappa(G)\leq\kappa\) edges with negative weight (i.e., with weight exactly \(-1\)). It follows that \(w_{G\geq 0}(P)\leq\kappa\) and thus finally:
\[\mathbf{E}(|P\cap S|)=\sum_{e\in P}\mathbf{P}(e\in S)\leq\sum_{e\in P}p\cdot w_{G_{\geq 0}}(e)=p\cdot w_{G_{\geq 0}}(P)\leq p\kappa=O(\log n).\qed\]
In what follows, we will need the following lemma.
Let \(G\) be a directed graph. Then \(\min_{C}\bar{w}(C)=\min_{Z}\bar{w}(Z)\) where \(C\) ranges over all cycles and \(Z\) ranges over all closed walks in \(G\).
Proof.: Write \(c=\min_{C}\bar{w}(C)\) and \(z=\min_{Z}\bar{w}(Z)\). Since every cycle is in particular a closed walk, we have \(z\leq c\), so it suffices to prove that \(c\leq z\). Take the closed walk \(Z\) witnessing \(z\) with the minimum number of edges. If \(Z\) is a cycle, then we clearly have \(c\leq z\). Otherwise, \(Z\) must revisit at least one vertex and can therefore be split into two closed walks \(Z_{1},Z_{2}\). By the minimality of \(Z\) we have \(\bar{w}(Z_{1}),\bar{w}(Z_{2})>z\). But note that
\[z\cdot|Z|=w(Z)=w(Z_{1})+w(Z_{2})>z\cdot|Z_{1}|+z\cdot|Z_{2}|=z\cdot|Z|,\]
a contradiction.
**Lemma 22** (Progress of Algorithm 1).: _Let \(S\) be the output of Decompose\((G,\kappa)\). Then, with high probability, any strongly connected component \(C\) in \(G\setminus S\) satisfies (i) \(|C|\leq\frac{3}{4}|V(G)|\) or (ii) \(\kappa(G[C\cup\{\,s\,\}])\leq\frac{\kappa}{2}\)._
Proof.: Throughout, condition on the event that the heavy-light classification was successful (which happens with high probability). Observe that whenever we carve out a ball \(B^{out}_{G_{\geq 0}}(v,r)\) and include its outgoing edges \(\partial B^{out}_{G_{\geq 0}}(v,r)\) into \(S\), then any two vertices \(x\in B^{out}_{G_{\geq 0}}(v,r)\) and \(y\not\in B^{out}_{G_{\geq 0}}(v,r)\) cannot be part of the same strongly connected component in \(G\setminus S\) (as there is no path from \(x\) to \(y\)). The same argument applies to \(B^{in}_{G_{\geq 0}}(v,r)\).
Therefore, there are only two types of strongly connected components: (i) Those contained in \(B^{out}_{G_{\geq 0}}(v,r)\) or \(B^{in}_{G_{\geq 0}}(v,r)\), and (ii) those in the remaining graph after it no longer contains light vertices. We argue that each component of type (i) satisfies \(|C|\leq\frac{3}{4}|V(G)|\) (with high probability) and that each component of type (ii) satisfies \(\kappa(G[C\cup\{\,s\,\}])\leq\frac{\kappa}{2}\).
In case (i) we have \(|C|\leq|B^{out}_{G_{\geq 0}}(v,r)|\). Since \(v\) is out-light, it follows that \(|C|\leq\frac{3}{4}|V(G)|\) whenever \(r\leq\frac{\kappa}{4}\). This event happens with high probability as:
\[\underset{r\sim\mathrm{Geom}(20\log n/\kappa)}{\mathbf{P}}\left(r>\frac{\kappa}{4}\right)\leq\left(1-\frac{20\log n}{\kappa}\right)^{\frac{\kappa}{4}}\leq\exp(-5\log n)\leq n^{-5}.\]
The number of iterations is bounded by \(n\), thus by a union bound we never have \(r>\frac{\kappa}{4}\) with probability at least \(1-n^{-4}\). A similar argument applies if we carve \(B^{in}_{G_{\geq 0}}(v,r)\) when \(v\) is in-light.
Next, focus on case (ii). Let \(C\) be a strongly connected component in the remaining graph \(G\) after carving out all balls centered at light vertices. Suppose that \(\kappa(G[C\cup\{\,s\,\}])>\frac{\kappa}{2}\). We will construct a closed walk \(Z\) in \(G\) with mean weight \(\bar{w}(Z)<1\), which by Lemma 21 contradicts the assumption that \(G\) is restricted. Let \(P\) be the \(s\)-\(v\)-path in \(G[C\cup\{\,s\,\}]\) of nonpositive weight witnessing the largest number of negative edges (i.e., the path that witnesses \(\kappa(G[C\cup\{\,s\,\}])\)), and let \(u\) be the first vertex (after \(s\)) on that path \(P\). Let \(P_{1}\) be the \(u\)-\(v\)-path obtained by removing the \(s\)-\(u\)-edge from \(P\). Since the \(s\)-\(u\)-edge has weight \(0\), we have that \(w(P_{1})\leq 0\) and that \(P_{1}\) contains more than \(\frac{\kappa}{2}\) negative-weight edges. Since \(u,v\) are both out-heavy and in-heavy vertices in the original graph \(G\), we have that \(|B^{out}_{G_{\geq 0}}(v,\frac{\kappa}{4})|,|B^{in}_{G_{\geq 0}}(u,\frac{\kappa}{4})|>\frac{n}{2}\). It follows that these two balls must intersect and that there exists a \(v\)-\(u\)-path \(P_{2}\) of weight \(w(P_{2})\leq\frac{\kappa}{4}+\frac{\kappa}{4}=\frac{\kappa}{2}\) (measured in \(G_{\geq 0}\), hence also in \(G\)). Combining \(P_{1}\) and \(P_{2}\), we obtain a closed walk \(Z\) with total weight \(w(Z)\leq\frac{\kappa}{2}\) containing more than \(\frac{\kappa}{2}\) (negative-weight) edges. It follows that \(\bar{w}(Z)<1\), yielding the claimed contradiction.
Proof of Lemma 8.: The correctness is immediate by the previous lemmas: Lemma 22 proves the progress property, and Lemma 20 the sparse hitting property. Next, we analyze the running time. Computing the heavy-light classification takes time \(O((m+n\log\log n)\log n)\) due to Lemma 19. Sampling each radius \(r\) from the geometric distribution \(\mathrm{Geom}(20\log n/\kappa)\) runs in expected constant time in the word RAM with word size \(\Omega(\log n)\)[13], so the overhead for sampling the radii is \(O(n)\) in expectation. To compute the balls we use Dijkstra's algorithm. Using Thorup's priority queue [57], each vertex explored in Dijkstra's takes time \(O(\log\log n)\) and each edge time \(O(1)\). Since every vertex contained in some ball is removed from subsequent iterations, a vertex participates in at most one ball. Note that a naive implementation of this would reinitialize the priority queue and distance array at each iteration of the while-loop. To avoid this, we initialize the priority queue and array of distances once, before the execution of the while-loops. Then, at the end of an iteration of the while-loop we reinitialize them in time proportional to the removed vertices and edges
(this is the same approach as in the BNW algorithm [11]). Thus, the overall time to compute all the balls is indeed \(O(m+n\log\log n)\).
### Proof of Theorem 18
With the graph decomposition in hand, we can present our full algorithm for Restricted SSSP. The overall structure closely follows the BNW algorithm (see [11, Algorithm 1]).
We start with the following crucial definition.
Let \(G\) be a directed graph with a designated source vertex \(s\). For any vertex \(v\in V(G)\), we denote by \(\eta_{G}(v)\) the smallest number of negative-weight edges in any shortest \(s\)-\(v\)-path.
The next proposition captures the relationship between the parameters \(\kappa(G)\) and \(\eta_{G}(\cdot)\) when \(G\) is restricted (see Definitions 7 and 23).
Let \(G\) be a restricted graph with source vertex \(s\). Then, for every vertex \(v\in V\) it holds that \(\eta_{G}(v)\leq\kappa(G)\).
Proof.: Fix a vertex \(v\). Let \(P\) be a shortest \(s\)-\(v\) path witnessing \(\eta_{G}(v)\) (see Definition 23). Since \(G\) is restricted, it does not contain negative cycles and thus \(P\) is a simple path. Furthermore, since there is an edge from \(s\) to \(v\) of weight \(0\), it follows that \(w_{G}(P)\leq 0\). Recall that \(\kappa(G)\) is the maximum number of negative edges in any simple path which starts at \(s\) and has nonpositive weight (see Definition 7). Therefore, it follows that \(\eta_{G}(v)\leq\kappa(G)\).
Next, we use two lemmas from [11]:
[Dijkstra with Negative Weights, similar to [11, Lemma 3.3]] Let \(G\) be a directed graph with source vertex \(s\in V(G)\) that does not contain a negative cycle. There is an algorithm that computes a shortest path tree from \(s\) in time \(O(\sum_{v}(\deg(v)+\log\log n)\cdot\eta_{G}(v))\). (If \(G\) contains a negative cycle, the algorithm does not terminate.)
The main differences to [11, Lemma 3.3] are that we use a faster priority queue for Dijkstra and that [11, Lemma 3.3] is restricted to graphs of constant maximum degree. Therefore, we devote Appendix A to a self-contained proof of Lemma 25.
[DAG Edges, [11, Lemma 3.2]] Let \(G\) be a directed graph with nonnegative edge weights inside its SCCs. Then we can compute a potential function \(\phi\) such that \(G_{\phi}\) has nonnegative edge weights (everywhere) in time \(O(n+m)\).
Proof Sketch.: For the complete proof, see [11, Lemma 3.2]. The idea is to treat the graph as a DAG of SCCs, and to assign a potential function \(\phi\) to every SCC such that the DAG edges become nonnegative. One way to achieve this is by computing a topological ordering, and by assigning \(\phi(v)\) to be \(-W\) times the rank of \(v\)'s SCC in that ordering (here, \(-W\) is the smallest weight in \(G\)); within an SCC the potential is constant, so the nonnegative weights inside SCCs are unaffected. Then \(G_{\phi}\) satisfies the claim.
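A minimal sketch of this potential assignment, assuming the SCCs are already given in topological order of the condensation (the SCCs themselves can be found in linear time, as noted in Section 2; the function name is illustrative):

```python
def dag_edge_potential(sccs_in_topo_order, W):
    """Assign phi(v) = -W * rank of v's SCC, where -W is the smallest
    edge weight. For a DAG edge (u, v) with rank(u) < rank(v) we get
    phi(u) - phi(v) = W * (rank(v) - rank(u)) >= W, hence the reweighted
    edge w(u, v) + phi(u) - phi(v) >= -W + W = 0. Inside an SCC the
    potential is constant, so nonnegative weights there are preserved."""
    phi = {}
    for rank, component in enumerate(sccs_in_topo_order):
        for v in component:
            phi[v] = -W * rank
    return phi
```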
**The Algorithm.** We are ready to state the algorithm; see Algorithm 2 for the pseudocode. Recall that \(\kappa(G)\) is the maximum number of negative edges in any path \(P\) starting at \(s\) with \(w(P)\leq 0\) (Definition 7). If \(\kappa(G)\leq 2\), we run Lemma 25 to compute the distances from \(s\). Otherwise, we start with applying our graph decomposition. That is, we compute a set of edges \(S\), such that any strongly connected component \(C\) in the graph \(G\setminus S\) is either small or has an improved \(\kappa\)-value. This constitutes enough progress to solve the induced graphs \(G[C\cup\{\,s\,\}]\) recursively. The recursive calls produce shortest path trees and thereby a potential function \(\phi_{1}\) such that \(G_{\phi_{1}}\) has nonnegative edge weights inside each SCC. We then add back the missing edges by first calling Lemma 26 (to fix the edges \(e\not\in S\) between strongly connected components) and then Lemma 25 (to fix the edges \(e\in S\)).
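To make the recursive structure concrete, here is a hedged Python skeleton of Algorithm 2. The object `H` bundles hypothetical helpers for the subroutines of Lemmas 8, 25 and 26 together with basic graph utilities; none of these names are taken from the paper.

```python
def restricted_sssp(G, s, kappa, H):
    """Skeleton: shortest path tree from s in a restricted graph G
    with kappa(G) <= kappa."""
    if kappa <= 2:
        return H.lazy_dijkstra(G, s)          # Lemma 25 directly
    n = H.num_vertices(G)
    S = H.decompose(G, kappa)                 # Lemma 8
    phi1 = {}
    for C in H.sccs(H.remove_edges(G, S)):
        # small components keep kappa; large ones have it halved
        kappa_i = kappa if len(C) <= 3 * n // 4 else kappa // 2
        tree = restricted_sssp(H.induced(G, C | {s}), s, kappa_i, H)
        phi1.update(H.distances(tree))        # phi1(v) = dist(s, v)
    G1 = H.apply_potential(G, phi1)   # nonnegative inside each SCC
    phi2 = H.fix_dag_edges(G1)        # Lemma 26: fix edges between SCCs
    G2 = H.apply_potential(G1, phi2)  # negative edges remain only in S
    return H.lazy_dijkstra(G2, s)     # Lemma 25: handle the edges in S
```

The correctness proof is easy: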
[Correctness of Algorithm 2] Let \(G\) be an arbitrary directed graph (not necessarily restricted), and let \(\kappa\) be arbitrary. Then, if RestrictedSSSP\((G,\kappa)\) terminates, it correctly computes a shortest path tree from the designated source vertex \(s\).
Proof.: If \(\kappa\leq 2\) and the call in Line 3 terminates, then it correctly computes a shortest path tree due to Lemma 25. If \(\kappa>2\), then in Line 10 we compute a potential function \(\phi_{2}\) and in Line 11 we run Lemma 25 to compute a shortest path tree in the graph \(G_{\phi_{2}}\). Assuming that Lemma 25 terminates, this computation is correct since \(G_{\phi_{2}}\) is equivalent to \(G\).
[Running Time of Algorithm 2] Let \(G\) be a restricted graph with \(\kappa(G)\leq\kappa\). Then RestrictedSSSP\((G,\kappa)\) runs in expected time \(O((m+n\log\log n)\log^{2}n)\).
Proof.: We first analyze the running time of a single call to Algorithm 2, ignoring the time spent in recursive calls. For the base case, when \(\kappa(G)\leq 2\), the running time of Line 3 is \(O(m+n\log\log n)\) by Lemma 25 and Proposition 24. Otherwise, the call to Decompose\((G,\kappa)\) in Line 4 runs in time \(O((m+n\log\log n)\log n)\) by Lemma 8. Computing the strongly connected components in \(G\setminus S\) is in linear time \(O(m+n)\), and so is the call to Lemma 26 in Line 10.
Analyzing the running time of Line 11 takes some more effort. Recall that \(\eta_{G_{\phi_{2}}}(v)\) is the minimum number of negative edges in any \(s\)-\(v\) path in \(G_{\phi_{2}}\) (see Definition 23). Our intermediate goal is to bound \(\mathbf{E}(\eta_{G_{\phi_{2}}}(v))=O(\log n)\) for all vertices \(v\). Let \(S\) be the set of edges computed by the decomposition, as in the algorithm. We proceed in three steps:
* _Claim 1:_\(G_{\phi_{1}}\setminus S\) _has nonnegative edges inside its SCCs._ The recursive calls in Line 8 correctly compute the distances by Lemma 27. Hence, for any two nodes \(u,v\in C_{i}\), we have that \(w_{\phi_{1}}(u,v)=w(u,v)+\operatorname{dist}_{G[C_{i}\cup\{\,s\,\}]}(s,u)- \operatorname{dist}_{G[C_{i}\cup\{\,s\,\}]}(s,v)\geq 0\), by the triangle inequality.
* _Claim 2:_\(G_{\phi_{2}}\setminus S\) _has only nonnegative edges._ This is immediate by Lemma 26.
* _Claim 3: For every node_ \(v\) _we have_ \(\mathbf{E}(\eta_{G_{\phi_{2}}}(v))\leq O(\log n)\). Let \(P\) be a shortest \(s\)-\(v\)-path in \(G\). Since \(G\) and \(G_{\phi_{2}}\) are equivalent, \(P\) is also a shortest path in \(G_{\phi_{2}}\). By the previous claim, the only candidate negative edges in \(P\) are the edges in \(S\). Therefore, we have that \(\mathbf{E}(\eta_{G_{\phi_{2}}}(v))\leq\mathbf{E}(|P\cap S|)=O(\log n)\), by Lemma 8.
The expected running time of Line 11 is thus bounded by
\[O\left(\sum_{v\in V(G)}(\deg(v)+\log\log n)\cdot\mathbf{E}(\eta_ {G_{\phi_{2}}}(v))\right)\] \[\quad=O\left(\sum_{v\in V(G)}(\deg(v)+\log\log n)\cdot\log n\right)\] \[\quad=O((m+n\log\log n)\log n).\]
Therefore, a single execution of Algorithm 2 runs in time \(O((m+n\log\log n)\log n)\); let \(c\) denote the hidden constant in the \(O\)-notation.
We finally analyze the total running time, taking into account the recursive calls. We inductively prove that the running time is bounded by \(c(m+n\log\log n)\log n\cdot\log_{4/3}(n\kappa)\).
We claim that for each recursive call on a subgraph \(G[C_{i}\cup\{\,s\,\}]\), where \(C_{i}\) is a strongly connected component in \(G\setminus S\), it holds that (i) \(G[C_{i}\cup\{\,s\,\}]\) is a restricted graph and that (ii) \(\kappa(G[C_{i}\cup\{\,s\,\}])\leq\kappa_{i}\). To see (i), observe that any subgraph of \(G\) containing \(s\) is also restricted. To show (ii), we distinguish two cases: Either \(|C_{i}|\leq\frac{3n}{4}\), in which case we trivially have \(\kappa(G[C_{i}\cup\{\,s\,\}])\leq\kappa(G)\leq\kappa=\kappa_{i}\). Or \(|C_{i}|>\frac{3n}{4}\), and in this case Lemma 8 guarantees that \(\kappa(G[C_{i}\cup\{\,s\,\}])\leq\frac{\kappa}{2}=\kappa_{i}\). It follows by induction that each recursive call runs in time \(c\cdot(|E(G[C_{i}\cup\{\,s\,\}])|+|C_{i}|\log\log n)\log n\cdot\log_{4/3}(|C_{i}|\kappa_{i})\). Moreover, observe that in either case we have \(|C_{i}|\kappa_{i}\leq\frac{3}{4}n\kappa\). Therefore the total time can be bounded by
\[c(m+n\log\log n)\log n+\sum_{i=1}^{\ell}c\cdot(|E(G[C_{i}\cup\{\,s\,\}])|+|C_{i}|\log\log n)\log n\cdot\log_{4/3}(|C_{i}|\kappa_{i})\]
\[\quad\leq c(m+n\log\log n)\log n+\sum_{i=1}^{\ell}c\cdot(|E(G[C_{i}\cup\{\,s\,\}])|+|C_{i}|\log\log n)\log n\cdot(\log_{4/3}(n\kappa)-1)\]
\[\quad\leq c(m+n\log\log n)\log n+c(m+n\log\log n)\log n\cdot(\log_{4/3}(n\kappa)-1)\]
\[\quad=c(m+n\log\log n)\log n\cdot\log_{4/3}(n\kappa),\]
where in the third step we used that \(\sum_{i}|E(G[C_{i}\cup\{\,s\,\}])|\leq m\) and that \(\sum_{i}|C_{i}|\leq n\). This completes the running time analysis.
Proof of Theorem 18.: This proof is almost immediate from the previous two Lemmas 27 and 28. In combination, these lemmas prove that Algorithm 2 is a Las Vegas algorithm for the Restricted SSSP problem which runs in expected time \(O((m+n\log\log n)\log^{2}n)\). By interrupting the algorithm after twice its expected running time (and returning Fail in that case), we obtain a Monte Carlo algorithm with worst-case running time \(O((m+n\log\log n)\log^{2}n)\) and error probability \(\frac{1}{2}\) as claimed.
We remark that Algorithm 2 is correct even if the input graph \(G\) is not restricted. In particular, whenever \(G\) contains a negative cycle, no shortest path tree exists and the algorithm cannot terminate.
## 4 SSSP on Graphs without Negative Cycles
In this section we present the \(O((m+n\log\log n)\log^{2}(n)\log(nW))\)-time algorithm for SSSP on graphs \(G\) without negative cycles. Later in Section 5, we will remove the assumption that \(G\) does not contain negative cycles, and strengthen the algorithm to find a negative cycle if it exists.
The main idea is to use _scaling_ and some tricks for probability amplification in order to extend our algorithm for restricted graphs developed in Section 3. More precisely, we use the standard _scaling technique_[32, 33, 34, 11] to reduce the computation of SSSP in an arbitrary graph (without negative cycles) to the case of restricted graphs. Formally, we prove the following theorem:
[Scaling Algorithm for SSSP] There is a Las Vegas algorithm which, given a directed graph \(G\) without negative cycles and with a source vertex \(s\in V(G)\), computes a shortest path tree from \(s\), running in time \(O(T_{\textsc{RSSSP}}(m,n)\cdot\log(nW))\) with high probability (and in expectation).
**One-Step Scaling.** The idea of the scaling algorithm is to increase the smallest weight in \(G\) step-by-step, while maintaining an equivalent graph. The following lemma gives the implementation of one such scaling step as a direct reduction to Restricted SSSP.
[One-Step Scaling] Let \(G\) be a directed graph that does not contain a negative cycle and with minimum weight greater than \(-3W\) (for some integer \(W\geq 1\)). There is an algorithm \(\textsc{Scale}(G)\) computing \(\phi\) such that \(G_{\phi}\) has minimum weight greater than \(-2W\), which succeeds with constant probability (if the algorithm does not succeed, it returns Fail) and runs in time \(O(T_{\textsc{RSSSP}}(m,n))\).
Proof.: We construct a restricted graph \(H\) as a copy of \(G\) with modified edge weights \(w_{H}(e)=\lceil w_{G}(e)/W\rceil+1\). We also add a source vertex \(s\) to \(H\), and put edges of weight \(0\) from \(s\) to all other vertices. We compute a shortest path tree from \(s\) in \(H\) using Theorem 18, and return the potential \(\phi\) defined by \(\phi(v)=W\cdot\operatorname{dist}_{H}(s,v)\). For the pseudocode, see Algorithm 3. Note that the running time is dominated by computing shortest paths in a restricted graph.
To prove that the algorithm is correct, we first check that \(H\) is indeed restricted (see Definition 6):
* Each edge weight satisfies \(w_{H}(e)=\lceil w_{G}(e)/W\rceil+1\geq\lceil(-3W+1)/W\rceil+1=-1\).
* Consider any cycle \(C\) in \(H\). Recall that \(w_{G}(C)\geq 0\) (as \(G\) does not contain negative cycles), and thus \[\bar{w}_{H}(C)=\frac{w_{H}(C)}{|C|}=\frac{1}{|C|}\sum_{e\in C}w_{H}(e)=1+\frac{1}{|C|}\sum_{e\in C}\left\lceil\frac{w_{G}(e)}{W}\right\rceil\geq 1+\frac{w_{G}(C)}{W|C|}\geq 1.\] In particular, the minimum cycle mean in \(H\) is at least \(1\).
* Finally, we have artificially added a source vertex \(s\) to \(H\) with weight-\(0\) edges to all other vertices.
It remains to prove that the potential \(\phi\) defined by \(\phi(v)=W\cdot\operatorname{dist}_{H}(s,v)\) satisfies that \(G_{\phi}\) has minimum edge weight more than \(-2W\). Consider any edge \(e=(u,v)\). Since by definition \(w_{H}(e)<w_{G}(e)\cdot\frac{1}{W}+2\), we have that \(w_{G}(e)>W\cdot(w_{H}(e)-2)\). It follows that
\[w_{G_{\phi}}(e) =w_{G}(e)+\phi(u)-\phi(v)\] \[=w_{G}(e)+W\cdot\operatorname{dist}_{H}(s,u)-W\cdot\operatorname {dist}_{H}(s,v)\] \[>-2W+W\cdot w_{H}(e)+W\cdot\operatorname{dist}_{H}(s,u)-W\cdot \operatorname{dist}_{H}(s,v)\] \[\geq-2W.\] In the last step we have used the triangle inequality \(\operatorname{dist}_{H}(s,v)\leq\operatorname{dist}_{H}(s,u)+w_{H}(u,v)\).
Finally, we argue that the algorithm succeeds with constant probability. Observe that the algorithm succeeds if the computation of the shortest path tree from \(s\) succeeds in Line 4 (indeed, all other steps are deterministic). Since \(H\) is restricted, Theorem 18 guarantees that this holds with constant probability, and if it does not succeed it returns Fail, completing the proof.
**The Complete Scaling Algorithm.** We are ready to state the algorithm \(\operatorname{SSSP}(G,s)\) which implements Theorem 29. We construct a graph \(G_{0}\) by multiplying every edge weight of \(G\) by \(4n\). Then, for \(i=0,\ldots,L-1\) where \(L=\Theta(\log(nW))\), we call \(\textsc{Scale}(G_{i})\) (we repeat the call until it succeeds) to obtain a potential \(\phi_{i}\) and set \(G_{i+1}:=(G_{i})_{\phi_{i}}\). Next, we construct a graph \(G^{*}\) as a copy of \(G_{L}\), with every negative edge weight replaced by \(0\). Finally, we compute a shortest path tree in \(G^{*}\) using Dijkstra's algorithm. For the details, see the pseudocode in Algorithm 4.
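As an illustration, here is a hedged sketch of this outer loop (the helper names bundled in `H`, as well as the concrete number of rounds, are assumptions of the sketch; \(-W\) lower-bounds the edge weights of \(G\)):

```python
import math

def sssp_by_scaling(G, s, W, H):
    """Sketch of Algorithm 4."""
    n = H.num_vertices(G)
    G_i = H.scale_weights(G, 4 * n)        # multiply all weights by 4n
    # Theta(log(nW)) rounds; the constant is unspecified in this sketch
    L = math.ceil(10 * math.log2(max(4, n * W)))
    for _ in range(L):
        phi = H.scale(G_i)                 # one-step scaling ...
        while phi is None:                 # ... repeated until success
            phi = H.scale(G_i)
        G_i = H.apply_potential(G_i, phi)  # equivalent, less negative
    G_star = H.clip_negative_to_zero(G_i)  # weights of G_L are >= -3
    return H.dijkstra(G_star, s)
```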
[Running Time of Algorithm 4] If \(G\) does not contain a negative cycle, then \(\operatorname{SSSP}(G,s)\) runs in time \(O(T_{\operatorname{RSSSP}}(m,n)\cdot\log(nW))\) with high probability (and in expectation).
Proof.: We analyze the running time of the for-loop, which runs for \(L=O(\log(nW))\) iterations. Each iteration repeatedly calls \(\textsc{Scale}(G_{i})\) until one such call succeeds. By the One-Step Scaling lemma, a
single call succeeds with constant probability (say, \(\frac{1}{2}\)) and runs in time \(O(T_{\mathrm{RSSSP}}(m,n))\). We can therefore model the running time of the \(i\)-th iteration by \(O(X_{i}\cdot T_{\mathrm{RSSSP}}(m,n))\) where \(X_{i}\sim\mathrm{Geom}(\frac{1}{2})\) is a geometric random variable. Therefore, by Chernoff's bound, the time of the for-loop is bounded by \(O(\sum_{i=0}^{L-1}X_{i}\cdot T_{\mathrm{RSSSP}}(m,n))=O(T_{\mathrm{RSSSP}}(m,n) \cdot L)\) with probability at least \(1-\exp(-\Omega(L))\geq 1-n^{-\Omega(1)}\). Finally, observe that \(T_{\mathrm{RSSSP}}(m,n)=\Omega(m+n)\), and therefore the call to Dijkstra's algorithm in Line 8 is dominated by the time spent in the for-loop.
[Correctness of Algorithm 4] If \(G\) does not contain a negative cycle, then Algorithm 4 correctly computes a shortest path tree from \(s\).
Proof.: Consider an execution of Algorithm 4. We prove that any shortest path in \(G^{*}\) is a shortest path in \(G\), and hence the shortest path tree from \(s\) computed in \(G^{*}\) is also a shortest path tree from \(s\) in \(G\), implying correctness. We proceed in three steps:
* As \(G_{0}\) is a copy of \(G\) with scaled edge weights \(w_{G_{0}}(e)=4n\cdot w_{G}(e)\), any path \(P\) also has scaled weight \(w_{G_{0}}(P)=4n\cdot w_{G}(P)\) and therefore \(G\) and \(G_{0}\) are equivalent.
* Since the graphs \(G_{0},\ldots,G_{L}\) are obtained from each other by adding potential functions, they are equivalent (see Lemma 16). Moreover, by the guarantee of the One-Step Scaling lemma, the magnitude of the smallest weight \(-W\) shrinks by a factor \(\frac{2}{3}\) in every step until \(G_{L}\) has smallest weight at least \(-3\). Here we use that \(L=\Omega(\log(nW))\) for sufficiently large hidden constant.
* \(G^{*}\) is the graph obtained from \(G_{L}\) by replacing negative-weight edges by \(0\)-weight edges. Consider any non-shortest \(u\)-\(v\)-path \(P^{\prime}\) in \(G_{L}\). We will show that \(P^{\prime}\) is also not a shortest \(u\)-\(v\) path in \(G^{*}\), which completes the argument. Towards that end, let \(P\) be any shortest \(u\)-\(v\)-path. Recall that \(G_{L}\) equals \((G_{0})_{\phi}\) for some potential function \(\phi\). Therefore: \[w_{G_{L}}(P^{\prime})-w_{G_{L}}(P)\] \[\qquad=w_{G_{0}}(P^{\prime})+\phi(u)-\phi(v)-w_{G_{0}}(P)-\phi(u) +\phi(v)\] \[\qquad=w_{G_{0}}(P^{\prime})-w_{G_{0}}(P)\] \[\qquad\geq 4n,\] where the last inequality uses that the weights of \(P\) and \(P^{\prime}\) in \(G_{0}\) differ by at least \(4n\) (this is why we scaled the edge weights by \(4n\) in \(G_{0}\)). Finally, recall that by transitioning to \(G^{*}\) we can increase the weight of any path by at most \(3\cdot(n-1)\). It follows that \[w_{G^{*}}(P^{\prime})-w_{G^{*}}(P)\geq w_{G_{L}}(P^{\prime})-w_{G_{L}}(P)-3 \cdot(n-1)\geq 4n-3\cdot(n-1)>0,\] and therefore, \(P^{\prime}\) is not a shortest \(u\)-\(v\)-path in \(G^{*}\). Hence, a shortest path in \(G^{*}\) is also a shortest path in \(G_{L}\), and since \(G_{L}\) is equivalent to \(G\), it is also a shortest path in \(G\).
The proof of Theorem 29 is immediate by combining the two preceding lemmas (the running time and the correctness of Algorithm 4).
We end this section with the following lemma, which will be useful in the next section.
Let \(G\) be a directed weighted graph and \(s\in V(G)\). If \(\mathrm{SSSP}(G,s)\) terminates, then \(G\) does not contain negative cycles.
Proof.: Assume for the sake of contradiction that \(G\) has a negative cycle \(C\) and that \(\mathrm{SSSP}(G,s)\) terminates. Consider the graph \(G_{L}\) which is constructed in the last iteration of the for-loop in Line 4. Note that \(G_{L}\) is equivalent to \(G_{0}\), since it was obtained by adding potential functions. Observe that the weight of \(C\) in \(G_{0}\) and \(G_{L}\) is at most \(-4n\), since it was negative in \(G\) and we scaled by a factor \(4n\) (see Algorithm 4). Recall that we chose \(L=\Theta(\log(nW))\) with large enough hidden constant so that the smallest weight in \(G_{L}\) is at least \(-3\). This implies that the weight of the minimum cycle in \(G_{L}\) is at least \(-3n\), a contradiction.
## 5 Finding Negative Cycles
In Section 4 we developed an algorithm to compute a shortest path tree with high probability in a graph without negative cycles. In this section, we extend that result to _find_ a negative cycle if it exists. As a warm-up, we observe that the SSSP algorithm developed in Theorem 4 can be used to _detect_ the presence of a negative cycle with high probability:
Let \(G\) be a directed graph. Then, there is an algorithm \(\textsc{DetectNegCycle}(G)\) with the following properties:
* If \(G\) has a negative cycle, then the algorithm reports NegCycle.
* If \(G\) does not have a negative cycle, then with high probability it returns NoNegCycle.
* It runs in time \(O(T_{\textsc{RSSSP}}(m,n)\log(nW))\).
Proof.: The algorithm adds a dummy source \(s\) connected with \(0\)-weight edges to all vertices in \(G\) and runs SSSP\((G,s)\). If it finishes within its time budget, we return NoNegCycle, otherwise we interrupt the computation and return NegCycle. The running time follows immediately by the guarantee of Theorem 29.
Now we argue about correctness. If \(G\) contains no negative cycles, then the algorithm returns NoNegCycle with high probability due to Theorem 29. If \(G\) contains a negative cycle, then the preceding lemma implies that SSSP\((G,s)\) does not terminate, so in this case we always report NegCycle.
_Finding_ the negative cycle though, requires some more work. Towards this end, we follow the ideas of [11]. They reduced the problem of finding a negative cycle to a problem called Threshold, which we define next. We will use the following notation: given a directed graph \(G\) and an integer \(M\), we write \(G^{+M}\) to denote the graph obtained by adding \(M\) to every edge weight of \(G\).
[Threshold] Given a directed graph \(G\), \(\textsc{Threshold}(G)\) is the smallest integer \(M^{*}\geq 0\) such that \(G^{+M^{*}}\) contains no negative cycle.
For a graph \(G\), we write \(T_{\textsc{Threshold}}(m,n)\) for the optimal running time of an algorithm computing Threshold\((G)\) with high probability.
The remainder of the section is organized as follows: in Section 5.1 we give the reduction from finding negative cycles to Threshold. In Section 5.2 we give an implementation of Threshold which has an extra log-factor compared to the promised Theorem 4, but it has the benefit of being simple. Finally, in Section 5.3 we give a faster (but more involved) implementation of Threshold which yields Theorem 4.
### Reduction to Threshold
In this section we restate the reduction given by Bernstein et al. in [11, Section 7.1] from finding a negative cycle if it exists, to Threshold and RestrictedSSSP (see their algorithm SPLasVegas).
[Finding Negative Cycles] Let \(G\) be a directed graph with a negative cycle. There is a Las Vegas algorithm \(\textsc{FindNegCycle}(G)\) which finds a negative cycle in \(G\), and runs in time \(O(T_{\textsc{RSSSP}}(m,n)\log(nW)+T_{\textsc{Threshold}}(m,n))\) with high probability.
Proof.: See the pseudocode in Algorithm 5 for a concise description. We start by defining a graph \(G_{0}\) which is a copy of \(G\) but with edge weights multiplied by \(n^{3}+1\). Then we compute \(M^{*}\) using \(\textsc{Threshold}(G_{0})\), and let \(G_{1}\) be \(G_{0}^{+M^{*}}\). Next, we add a dummy source \(s\) to \(G_{1}\) connected with \(0\)-weight edges to all other vertices, and run SSSP on the resulting graph from \(s\). We then use the distances computed to construct a potential \(\phi\), and construct a graph \(G_{2}\) by applying the potential \(\phi\) to \(G_{1}\) and subsequently removing all the edges with weight larger than \(n\). Finally, we check if \(G_{2}\) contains any cycle (of any weight) and if so, check it has negative weight in the original graph \(G\) and return it. Otherwise, we restart the algorithm from the beginning.
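A hedged sketch of this reduction (all helper names on `H` are assumptions standing in for the subroutines named above; cf. Algorithm 5):

```python
def find_neg_cycle(G, H):
    """Sketch of FindNegCycle for a graph G containing a negative cycle."""
    n = H.num_vertices(G)
    while True:                                  # restart on failure
        G0 = H.scale_weights(G, n ** 3 + 1)
        M = H.threshold(G0)                      # w.h.p. equals M*
        G1 = H.shift_weights(G0, M)              # G1 = G0^{+M}
        phi = H.sssp_from_dummy_source(G1)       # distances as potential
        G2 = H.apply_potential(G1, phi)          # nonnegative weights
        G2 = H.remove_edges_heavier_than(G2, n)  # drop weights > n
        C = H.find_any_cycle(G2)                 # e.g. by DFS
        if C is not None and H.cycle_weight(G, C) < 0:
            return C                             # verified in G itself
```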
The correctness is obvious: When the algorithm terminates, it clearly returns a negative cycle. The interesting part is to show that with high probability the algorithm finds a negative cycle \(C\) without restarting. The call to \(\textsc{Threshold}(G_{0})\) in Line 3 returns, with high probability, the smallest \(M^{*}\geq 0\) such that \(G_{0}^{+M^{*}}\) contains no negative cycle. In this case, by definition, \(G_{1}\) does not contain a negative cycle, and therefore by Theorem 29 the call to \(\textsc{SSSP}(G_{1},s)\) correctly computes a shortest path tree from \(s\). From now on, we condition on these two events.
\(\rhd\) Claim 37.: It holds that \(M^{*}>n^{2}\).
Proof.: Let \(C\) be a simple cycle in \(G\) with minimum (negative) weight. Since \(G_{1}=G_{0}^{+M^{*}}\) contains no negative cycles, the weight of \(C\) in \(G_{1}\) is \(0\leq w_{1}(C)=w_{0}(C)+M^{*}|C|\). The claim follows by noting that \(w_{0}(C)<-n^{3}\) due to the scaling in Line 2, and that \(|C|\leq n\) because \(C\) is simple. \(\lhd\)
Next, we argue that a cycle of minimum mean weight in \(G\) remains a cycle in \(G_{2}\), and conversely that any simple cycle in \(G_{2}\) corresponds to a negative weight cycle in \(G\). Note that this is enough to prove that the algorithm terminates with high probability without a restart.
\(\rhd\) Claim 38.: Let \(C\) be a simple cycle in \(G\) of minimum mean weight. Then, \(C\) is a cycle in \(G_{2}\).
Proof.: First note that the weight of \(C\) in \(G_{0}^{+M^{*}}\) (and thus also in \(G_{1}\)) is at most \(n\). Indeed, scaling by \(n^{3}+1\) preserves the order of mean weights, so \(C\) also attains the minimum cycle mean in \(G_{0}\); since \(M^{*}\) is the smallest integer such that \(G_{0}^{+M^{*}}\) contains no negative cycles, the graph \(G_{0}^{+(M^{*}-1)}\) contains a negative cycle, and then the minimum-mean cycle \(C\) is negative there as well, i.e., \(w_{0}(C)+(M^{*}-1)|C|<0\) and hence \(w_{1}(C)=w_{0}(C)+M^{*}|C|<|C|\leq n\). Second, note that since Line 5 correctly computes a shortest path tree in \(G_{1}\), it holds that the edge weights in \((G_{1})_{\phi}\) are all non-negative (by Lemma 17). Moreover, the weight of \(C\) in \((G_{1})_{\phi}\) is the same as in \(G_{1}\) (by Lemma 16). Thus, we conclude that the removal of the edges of weight greater than \(n\) in \((G_{1})_{\phi}\) to obtain \(G_{2}\) leaves \(C\) untouched.
\(\rhd\) Claim 39.: Any cycle \(C^{\prime}\) in \(G_{2}\) has negative weight in \(G\).
Proof.: Note that \(w_{2}(C^{\prime})\leq n^{2}\) since every edge in \(G_{2}\) has weight at most \(n\). Moreover, since \(G_{2}\) is obtained from \(G_{1}\) by adding a potential, it holds that \(w_{2}(C^{\prime})=w_{1}(C^{\prime})\) (by Lemma 16). Therefore, \(w_{0}(C^{\prime})=w_{1}(C^{\prime})-M^{*}|C^{\prime}|\leq n^{2}-M^{*}<0\), where the last inequality holds since \(M^{*}>n^{2}\) by Claim 37. Since the weights in \(G_{0}\) are positive multiples of the weights in \(G\), the cycle \(C^{\prime}\) is also negative in \(G\).
Finally, we analyze the running time. The call to Threshold\((G_{0})\) succeeds with high probability (see Definition 3.2). Conditioned on this, \(G_{1}\) contains no negative cycles. Thus by Theorem 29, the call to SSSP\((G_{1},s)\) runs in time \(O(T_{\textsc{RSSSP}}(m,n)\log(nW))\) with high probability. Note that the remaining steps of the algorithm take time \(O(m)\). Therefore, we conclude that the overall running time is \(O(T_{\textsc{RSSSP}}(m,n)\log(nW)+T_{\textsc{Threshold}}(m,n))\) with high probability.
### Simple Implementation of Threshold
In this section we give a simple implementation of Threshold which, combined with Lemma 36, yields an algorithm to find negative cycles in time \(O(T_{\textsc{RSSSP}}(m,n)\log n\log(nW))\). This procedure shaves one log-factor compared to [11] (see their algorithm FindThresh in Lemma 7.1). Later, in Section 5.3, we give an improved but more intricate algorithm.
As a building block, we will use the routine Scale from the One-Step Scaling lemma. The following lemma boosts the probability of success of Scale and uses a different parameterization of the minimum weight in the input graph, which will streamline our presentation.
[Test Scale] Let \(G\) be a directed graph with minimum weight at least \(-W\) where \(W\geq 24\), and let \(0<\delta<1\) be a parameter. There is an algorithm TestScale\((G,\delta)\) with the following properties:
* If \(G\) does not contain a negative cycle, then with probability at least \(1-\delta\) it succeeds and returns a potential \(\phi\) such that \(G_{\phi}\) has minimum weight at least \(-\frac{3}{4}W\). If it does not succeed, it returns Fail.
* It runs in time \(O(T_{\textsc{RSSSP}}(m,n)\cdot\log(1/\delta))\).
Proof.: We run Scale\((G)\) (see the One-Step Scaling lemma) for \(O(\log(1/\delta))\) repetitions. Each execution either returns a potential \(\phi\), or it fails. We return Fail if and only if _all_ these repetitions fail. The running time analysis is immediate from the One-Step Scaling lemma.
Now we analyze correctness. First we look at the success probability. The One-Step Scaling lemma guarantees that if \(G\) does not contain a negative cycle, then each invocation of Scale\((G)\) returns a potential \(\phi\) with constant probability. Thus, in this case, the probability that all \(O(\log(1/\delta))\) repetitions fail and we return Fail is at most \(\delta\), as stated. Next, we analyze the increase in the minimum weight of \(G_{\phi}\). Recall that the minimum weight in \(G\) is at least \(-W\). Let \(k\) be the largest integer such that \(W\geq 3k\), and let \(-W^{\prime}\) denote the minimum weight of \(G_{\phi}\). In particular, the minimum weight in \(G\) is greater than \(-3(k+1)\), so the One-Step Scaling lemma guarantees that
\[-W^{\prime}>-2(k+1)\geq-\tfrac{2}{3}W-2\geq-\tfrac{2}{3}W-\tfrac{1}{12}W=- \tfrac{3}{4}W,\]
where the last inequality uses the assumption that \(W\geq 24\).
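Operationally, the boosting is a plain repetition loop; a minimal sketch, with `scale` standing in for the One-Step Scaling routine and the repetition constant chosen arbitrarily for the sketch:

```python
import math

def test_scale(G, delta, scale):
    """Repeat the constant-success-probability routine `scale` for
    O(log(1/delta)) rounds; return the first potential found."""
    for _ in range(max(1, math.ceil(4 * math.log(1.0 / delta)))):
        phi = scale(G)      # returns a potential or None (= Fail)
        if phi is not None:
            return phi
    return None             # all repetitions failed
```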
[Slow Threshold] Let \(G\) be a directed graph. There is an algorithm computing Threshold\((G)\) (Definition 3.2) which succeeds with high probability and runs in worst-case time \(O(T_{\textsc{RSSSP}}(m,n)\log n\log(nW))\).
Proof.: We summarize the pseudocode in Algorithm 6. Let \(-W\) be the smallest weight in \(G\). If \(W\leq 48\) (i.e., all weights are at least \(-48\)) we clearly have that the correct answer lies in the range \(0\leq M^{*}\leq 48\). We brute-force the answer by exhaustively checking which graph \(G^{+47},\ldots,G^{+0}\) is the first one containing a negative cycle. For this test we use the algorithm DetectNegCycle\((G)\). Corollary 34 guarantees that it reports correct answers with high probability.
If \(W>48\), we make progress by reducing the problem to another instance with larger minimum weight. Let \(M=\lceil\frac{W}{2}\rceil\), and run \(\textsc{TestScale}(G^{+M},\delta)\) for \(\delta:=1/n^{10}\). We distinguish two cases based on the outcome of TestScale:
* Case 1: \(\textsc{TestScale}(G^{+M},\delta)=\phi\) for a potential function \(\phi\). Then recursively compute and return \(\textsc{SlowThreshold}(G_{\phi})\). First note that this is correct, i.e., that the answer is unchanged by recursing on \(G_{\phi}\), since the potential does not change the weight of any cycle (see Lemma 16). Second, note that we make progress by increasing the smallest weight in \(G_{\phi}\) to at least \(-\frac{11}{12}W\): To see this, note that the minimum weight of \(G^{+M}\) is at least \(-\frac{1}{2}W\), and thus, Lemma 40 guarantees that the smallest weight in \(G_{\phi}^{+M}\) is at least \(-\frac{3}{8}W\). Therefore, it follows that the smallest weight in \(G_{\phi}\) is at least \[-\tfrac{3}{8}W-M=-\tfrac{3}{8}W-\lceil\tfrac{1}{2}W\rceil\geq-\tfrac{7}{8}W-1>-\tfrac{7}{8}W-\tfrac{1}{24}W=-\tfrac{11}{12}W,\] where the second inequality uses the assumption that \(W>24\).
* Case 2: \(\textsc{TestScale}(G^{+M},\delta)=\textsc{Fail}\). By Lemma 40, if \(G^{+M}\) does not contain a negative cycle then with high probability the output is not Fail. Conditioned on this event, we conclude that \(G^{+M}\) contains a negative cycle. Thus, we know that the optimal answer \(M^{*}\) satisfies \(M^{*}\geq M\), and therefore we return \(M+\textsc{SlowThreshold}(G^{+M})\). Note that this also improves the most negative edge weight to \(-W+M\geq-\frac{11}{12}W\).
We claim that the running time is bounded by \(O(T_{\textsc{RSSSP}}(m,n)\log n\log(nW))\). To see this, note that in the base case, when \(W\leq 48\), the algorithm calls DetectNegCycle\((G)\) and therefore takes time \(O(T_{\textsc{RSSSP}}(m,n)\cdot\log(nW))\) (see Corollary 34). We claim that the higher levels of the recursion take time \(O(T_{\textsc{RSSSP}}(m,n)\log n\log W)\) in total. Note that each such level takes time \(O(T_{\textsc{RSSSP}}(m,n)\cdot\log n)\) due to the call to TestScale (Lemma 40) and thus, it suffices to bound the recursion depth by \(O(\log W)\). To this end, observe that we always recur on graphs for which \(W\) has decreased by a constant factor.
Finally note that each call to TestScale succeeds with high probability, and we make one call for each of the \(O(\log W)\) recursive calls. Thus, by a union bound the algorithm succeeds
with high probability. (Strictly speaking, for this union bound we assume that \(\log W\leq n\); if instead \(\log W>n\), we can simply use Bellman-Ford's algorithm.)
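In summary, a hedged sketch of the recursion (helper names on `H` are assumptions; the parameter \(\delta=1/n^{10}\) is abbreviated by a fixed small constant in the sketch):

```python
def slow_threshold(G, H):
    """Sketch of Algorithm 6: the smallest M >= 0 such that G^{+M}
    contains no negative cycle (w.h.p.)."""
    W = max(0, -H.min_weight(G))
    if W <= 48:                          # base case: brute force
        M = W                            # G^{+W} has no negative cycle
        while M > 0 and not H.detect_neg_cycle(H.shift_weights(G, M - 1)):
            M -= 1
        return M
    M = (W + 1) // 2                     # M = ceil(W / 2)
    phi = H.test_scale(H.shift_weights(G, M), delta=1e-10)
    if phi is not None:                  # Case 1: minimum weight improved
        return slow_threshold(H.apply_potential(G, phi), H)
    return M + slow_threshold(H.shift_weights(G, M), H)  # Case 2
```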
### Fast Implementation of Threshold
In this section we give the fast implementation of Threshold.
[Fast Threshold] Let \(G\) be a directed graph. There is an algorithm computing \(\textsc{Threshold}(G)\) (see Definition 3.2) which succeeds with high probability, and runs in worst-case time \(O(T_{\textsc{RSSSP}}(m,n)\log(nW))\).
The algorithm is intricate, so we start with a high level description to convey some intuition.
**High-Level Idea.** Let \(\Delta\) be a parameter and let \(M^{*}\geq 0\) be the right threshold. Let us look at what happens if we make a call to \(\textsc{TestScale}(G^{+W-\Delta},\delta)\), where \(1-\delta\) is the success probability and \(-W\) is the minimum edge weight in \(G\). If \(G^{+W-\Delta}\) does not have negative cycles, then Lemma 40 guarantees that with probability at least \(1-\delta\) we obtain a potential \(\phi\). On the other hand, if \(G^{+W-\Delta}\) contains a negative cycle, then we have _no guarantee_ from Lemma 40. That is, the algorithm might return a potential, or it might return Fail. The upside is that as long as we obtain a potential, regardless of whether there is a negative cycle or not, we can make progress by (additively) increasing the minimum edge weight by \(\approx\Delta\). Moreover, if we obtain Fail, then we conclude that with probability at least \(1-\delta\) the graph \(G^{+W-\Delta}\) contains a negative cycle. This suggests the following idea. We make a call to \(\textsc{TestScale}(G^{+W-\Delta},\delta)\), and consider the two outcomes:
1. \(\textsc{TestScale}(G^{+W-\Delta},\delta)=\phi\). Then, we set \(G:=G_{\phi}\) and increase \(\Delta:=2\Delta\).
2. \(\textsc{TestScale}(G^{+W-\Delta},\delta)=\textsc{Fail}\). Then, we decrease \(\Delta:=\Delta/2\).
If we are in Case 1, then the minimum edge weight \(-W^{\prime}\) of \(G_{\phi}\) is increased by \(\approx\Delta\). This in turn decreases the gap \(W^{\prime}-M^{*}\) (note that at all times \(M^{*}\leq W^{\prime}\)). Thus, larger \(\Delta\) implies larger progress in decreasing \(W^{\prime}-M^{*}\). This is why in this case we double \(\Delta\). On the other hand, if we are in Case 2 then by the guarantee of Lemma 40, we conclude that with probability at least \(1-\delta\) the graph \(G^{+W-\Delta}\) contains a negative cycle. Intuitively, this means that \(\Delta\) is too large. Therefore, we halve \(\Delta\) to eventually make progress in Case 1 again.
In short, we know that when \(G^{+W-\Delta}\) does not have negative cycles, or equivalently \(W-M^{*}\geq\Delta\), then with probability at least \(1-\delta\) we will make progress in Case 1 by decreasing the gap \(W-M^{*}\). On the other hand, if we are in Case 2 and \(G^{+W-\Delta}\) has a negative cycle, or equivalently \(W-M^{*}<\Delta\), then we will make progress by decreasing \(\Delta\).
Perhaps surprisingly, we will show that this idea can be implemented by choosing \(\delta=0.01\), and not \(1/\operatorname{poly}(n)\) as in the simple implementation from Section 5.2 (which was the reason for the extra \(O(\log n)\)-factor there). For this, we will formalize the progress as a _drift function_ that decreases in expectation in each iteration, and then apply a _drift theorem_ (see Theorem 44).
**The Algorithm.** Now we formalize this approach. We proceed in an iterative way. At iteration \(t\), we have a graph \(G_{t}\) with minimum weight \(-W_{t}\), and we maintain a parameter \(\Delta_{t}\). We make a call to \(\textsc{TestScale}(G_{t}^{+W_{t}-\Delta_{t}},\delta)\) with \(\delta:=0.01\). If we obtain a potential \(\phi\) as answer, we set \(G_{t+1}:=(G_{t})_{\phi}\) and \(\Delta_{t+1}:=2\Delta_{t}\). Otherwise, we set \(G_{t+1}:=G_{t}\) and \(\Delta_{t+1}:=\frac{1}{2}\Delta_{t}\). After \(T=\Theta(\log(nW))\) iterations, we stop and return \(W_{T}\) as the answer. The complete pseudocode (which additionally handles some corner cases) is in Algorithm 7.
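A hedged sketch of this main loop (the corner cases of Algorithm 7 are simplified away; helper names on `H` are assumptions):

```python
def fast_threshold(G, H, T):
    """Sketch of Algorithm 7's main loop; T = Theta(log(nW))."""
    W = max(0, -H.min_weight(G))
    Delta = 1
    for _ in range(T):
        if W <= 24:                      # solve small instances directly
            return H.brute_force_threshold(G)
        phi = H.test_scale(H.shift_weights(G, W - Delta), delta=0.01)
        if phi is not None:              # progress on the gap W - M*
            G = H.apply_potential(G, phi)
            W = max(0, -H.min_weight(G)) # decreases by at least Delta/4
            Delta *= 2
        else:                            # Delta is (likely) too large
            Delta = max(1, Delta // 2)
    return W                             # w.h.p. equal to M* by now
```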
To quantify the progress made by the algorithm, we define the following _drift function_ at iteration \(t\):
\[D_{t}:=(W_{t}-M^{*})^{20}\cdot\max\left\{\frac{2\Delta_{t}}{W_{t}-M^{*}},\frac{W_ {t}-M^{*}}{2\Delta_{t}}\right\}, \tag{1}\]
Observe that we always have \(\Delta_{t}\geq 1\) and \(W_{t}\geq M^{*}\) throughout the algorithm. To cover the case \(W_{t}=M^{*}\) (where the above expression leads to a division by \(0\)), formally we actually define the drift function by
\[D_{t}:=\max\left\{(W_{t}-M^{*})^{19}\cdot 2\Delta_{t},\frac{(W_{t}-M^{*})^{2 1}}{2\Delta_{t}}\right\}. \tag{2}\]
For the sake of readability, in the following we work with (1), with the understanding that formally we mean (2).
We will show that \(D_{t}\) decreases by a constant factor (in expectation) in each iteration of the for-loop in Line 4. Note that when \(D_{t}\) reaches \(0\), then we have that \(W_{t}=M^{*}\), so we are done.
[Negative Drift] For any \(d>0\) and \(t\geq 0\) it holds that
\[\mathbf{E}(D_{t+1}\mid D_{t}=d)\leq 0.7\cdot d.\]
Before proving Lemma 43, let us see how to obtain Lemma 42 from it. For this, we will use the following tool: [Multiplicative Drift, see e.g. [41, Theorem 18]] Let \((X_{t})_{t\geq 0}\) be a sequence of non-negative random variables with a finite state space \(\mathcal{S}\) of non-negative integers. Suppose that \(X_{0}=s_{0}\), and there exists \(\delta>0\) such that for all \(s\in\mathcal{S}\setminus\{0\}\) and all \(t\geq 0\), \(\mathbf{E}(X_{t+1}\mid X_{t}=s)\leq(1-\delta)s\). Then, for all \(r\geq 0\),
\[\mathbf{P}(X_{r}>0)\leq e^{-\delta\cdot r}\cdot s_{0}.\]
Proof.: By Markov's inequality, \(\mathbf{P}(X_{r}>0)=\mathbf{P}(X_{r}\geq 1)\leq\mathbf{E}(X_{r})\). By applying the bound \(\mathbf{E}(X_{t+1}\mid X_{t}=s)\leq(1-\delta)s\) for \(r\) times, we obtain that
\[\mathbf{P}(X_{r}>0)\leq(1-\delta)^{r}\cdot s_{0}\leq\exp(-\delta r)\cdot s_{0}.\]
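As a quick numeric illustration (our toy example, not part of the paper's argument), the following Python snippet simulates binomial halving, which satisfies the drift condition with \(\delta=1/2\), and compares the empirical survival probability with the bound of Theorem 44.

```python
import math
import random

def binomial_halving(s0, rounds):
    # Each of the X_t units survives independently with probability 1/2, so
    # E[X_{t+1} | X_t = s] = s/2, i.e. the drift condition holds with delta = 1/2.
    x = s0
    for _ in range(rounds):
        if x == 0:
            break
        x = sum(1 for _ in range(x) if random.random() < 0.5)
    return x

s0, r, trials = 1000, 20, 5000
empirical = sum(binomial_halving(s0, r) > 0 for _ in range(trials)) / trials
bound = math.exp(-0.5 * r) * s0      # Theorem 44 with delta = 1/2
print(f"P(X_{r} > 0): empirical ~ {empirical:.4f} vs. bound {bound:.4f}")
```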
Proof of Lemma 42.: See Algorithm 7 for the pseudocode. First we analyze the running time. During each iteration of the for-loop, it either holds that \(W_{t}\leq 24\) and we solve the problem directly using at most \(24\) calls to DetectNegCycle, or we make a call to TestScale. Each call to TestScale takes time \(O(T_{\mathrm{RSSSP}}(m,n))\) by Lemma 40, and the calls to DetectNegCycle are made only once (in the final iteration), taking total time \(O(T_{\mathrm{RSSSP}}(m,n)\log n)\) by Corollary 34. Since \(T=\Theta(\log(nW))\), the overall running time is bounded by \(O(T_{\mathrm{RSSSP}}(m,n)\log n+T_{\mathrm{RSSSP}}(m,n)\log(nW))\), as claimed.
Now we analyze correctness. Note that at every iteration, \(G_{t}\) is equivalent to \(G\) since the only way we modify the graph is by adding potentials (see Lemma 16). Thus, if at some point we have that \(W_{t}\leq 24\) then the correct answer lies in the range \(0\leq M^{*}\leq 24\). The for-loop in Line 7 exhaustively checks which is the correct value by making calls to DetectNegCycle. By Corollary 34, this is correct with high probability.
Now suppose the algorithm does not terminate in Line 8. We claim that the final drift \(D_{T}\) is zero with high probability. Note that this implies correctness, since \(D_{T}=0\) if and only if \(W_{T}=M^{*}\) (to see this, observe that \(\Delta_{T}\geq 1\) due to Line 12). To prove the claim, we will use Theorem 44. Note that Lemma 43 gives us that \(\mathbf{E}(D_{t+1}\mid D_{t}=d)\leq 0.7d\). Moreover, we can bound the initial drift \(D_{0}\) as
\[D_{0}=(W-M^{*})^{20}\cdot\max\left\{\frac{2\Delta_{0}}{W-M^{*}},\frac{W-M^{*}} {2\Delta_{0}}\right\}\leq(W-M^{*})^{21}\cdot 2\Delta_{0}\leq 4W^{21}.\]
Hence, Theorem 44 (applied with \(\delta=0.3\)) yields that \(\mathbf{P}(D_{T}>0)\leq\exp(-0.3\cdot T)\cdot 4W^{21}\). Since \(T=\Theta(\log(nW))\), we conclude that \(\mathbf{P}(D_{T}>0)\leq n^{-\Omega(1)}\), which finishes the proof.
Proof of Lemma 43.: Focus on iteration \(t\) of the for-loop in Line 4. Let \(E_{1}\) be the event that we obtain a potential \(\phi\) (i.e. that the if-statement in Line 9 succeeds) and let \(E_{2}:=\neg E_{1}\) be the complement. We start by observing how the parameters \(W_{t+1}\) and \(\Delta_{t+1}\) change depending on whether \(E_{1}\) or \(E_{2}\) occur.
\(\rhd\)Claim 45.: If \(E_{1}\) occurs, then \(W_{t+1}\leq W_{t}-\frac{\Delta_{t}}{4}\), and \(\Delta_{t+1}=2\Delta_{t}\).
Proof.: If the call to TestScale in Line 9 returns a potential \(\phi\), then we set \(G_{t+1}=(G_{t})_{\phi}\) and \(\Delta_{t+1}=2\Delta_{t}\). Observe that the minimum weight of \(G_{t}^{+W_{t}-\Delta_{t}}\) is \(-\Delta_{t}\). Hence, Lemma 40 guarantees that the minimum weight of \((G_{t})_{\phi}^{+W_{t}-\Delta_{t}}\) is at least \(-\frac{3}{4}\Delta_{t}\). Since \(G_{t+1}=(G_{t})_{\phi}\) is defined by subtracting \(W_{t}-\Delta_{t}\) from every edge weight in \((G_{t})_{\phi}^{+W_{t}-\Delta_{t}}\), we obtain that \(-W_{t+1}\geq-W_{t}+\frac{1}{4}\Delta_{t}\).
\(\rhd\)Claim 46.: If \(E_{2}\) occurs, then \(W_{t+1}=W_{t}\) and \(\Delta_{t+1}=\max\{1,\Delta_{t}/2\}\) and \(D_{t+1}\leq 2D_{t}\).
Proof.: The first two statements are immediate by Line 12. Towards the third statement, for the function \(f(x):=\max\{x,1/x\}\) we observe that if \(x,y>0\) differ by at most a factor \(2\) then also \(f(x),f(y)\) differ by at most a factor \(2\). Now we use that \(D_{t}=(W_{t}-M^{*})^{20}\cdot f(2\Delta_{t}/(W_{t}-M^{*}))\). Since \(\Delta_{t}\geq 1\), it holds that \(\Delta_{t},\Delta_{t+1}\) differ by at most a factor \(2\), and thus \(D_{t},D_{t+1}\) differ by at most a factor \(2\).
With these claims, we proceed to bound the drift \(D_{t+1}\) when \(D_{t}>0\). Recall that we defined
\[D_{t}=(W_{t}-M^{*})^{20}\cdot\max\left\{\frac{2\Delta_{t}}{W_{t}-M^{*}},\frac{W_ {t}-M^{*}}{2\Delta_{t}}\right\}. \tag{1}\]
Note that it always holds that \(W_{t}\geq M^{*}\) and \(W_{t+1}\geq M^{*}\). Moreover, since \(D_{t}>0\), we can assume that \(W_{t}-M^{*}>0\), since otherwise \(W_{t}-M^{*}=0\) and hence \(D_{t}=0\). We proceed making a case distinction based on the term that achieves the maximum in (1).
**Case 1**: \(\Delta_{t}\geq\frac{1}{2}(W_{t}-M^{*})\): Then, we have that \(D_{t}=(W_{t}-M^{*})^{19}\cdot 2\Delta_{t}\). If \(E_{1}\) occurs, then by Claim 45 it holds that \(\Delta_{t+1}\geq\Delta_{t}\geq\frac{1}{2}(W_{t}-M^{*})\geq\frac{1}{2}(W_{t+1}- M^{*})\). Therefore, using (1) we can bound the drift \(D_{t+1}\) by
\[D_{t+1} =(W_{t+1}-M^{*})^{19}\cdot 2\Delta_{t+1}\] \[\leq(W_{t}-M^{*}-\frac{\Delta_{t}}{4})^{19}\cdot 4\Delta_{t}\] \[\leq(W_{t}-M^{*}-\frac{1}{8}(W_{t}-M^{*}))^{19}\cdot 4\Delta_{t}\] \[\leq(\frac{7}{8})^{19}\cdot 2D_{t}\leq 0.16D_{t},\]
where we used Claim 45 in the first inequality, and the second inequality follows since by the assumption of Case 1 we have that \(\frac{\Delta_{t}}{4}\geq\frac{1}{8}(W_{t}-M^{*})\).
If \(E_{2}\) occurs instead, we make a further case distinction:
**Case 1.1**: \(\Delta_{t}>W_{t}-M^{*}\): Note that if \(\Delta_{t}=1\), then since \(W_{t}\) and \(M^{*}\) are integers it follows that \(W_{t}=M^{*}\), and consequently \(D_{t}=0\), which contradicts the assumption that \(D_{t}>0\). Therefore, we can assume that \(\Delta_{t}\geq 2\). In particular, by Claim 46 we have \(\Delta_{t+1}=\frac{1}{2}\Delta_{t}>\frac{1}{2}(W_{t}-M^{*})=\frac{1}{2}(W_{t+1 }-M^{*})\). Thus, by (1) we can express the drift \(D_{t+1}\) as
\[D_{t+1}=(W_{t+1}-M^{*})^{19}\cdot 2\Delta_{t+1}=(W_{t}-M^{*})^{19}\cdot \Delta_{t}=\frac{D_{t}}{2}.\]
**Case 1.2**: \(\Delta_{t}\leq W_{t}-M^{*}\): Observe that in this case \(G^{+W_{t}-\Delta_{t}}\) contains no negative cycle (since \(W_{t}-\Delta_{t}\geq M^{*}\)). Moreover, we can assume that \(W_{t}>24\) since otherwise the problem is solved directly in Line 7. Therefore, by Lemma 40 we have that \(\mathbf{P}(E_{2})\leq 0.01\). Finally, by Claim 46 we have \(D_{t+1}\leq 2D_{t}\).
Combining the above, we conclude that for Case 1 it holds that
\[\mathbf{E}(D_{t+1}\mid D_{t}) \leq\mathbf{P}(E_{1})\,\mathbf{E}(D_{t+1}\mid D_{t},E_{1})+ \mathbf{P}(E_{2})\,\mathbf{E}(D_{t+1}\mid D_{t},E_{2})\] \[\leq 1\cdot 0.16D_{t}+\max\left\{1\cdot\frac{1}{2}D_{t},0.01 \cdot 2D_{t}\right\}\leq 0.66D_{t}.\]
**Case 2**: \(\Delta_{t}<\frac{1}{2}(W_{t}-M^{*})\): Then, it holds that \(D_{t}=(W_{t}-M^{*})^{21}/(2\Delta_{t})\). If \(E_{2}\) occurs, then by the same argument as in Case 1.2 we have that \(D_{t+1}\leq 2D_{t}\) and \(\mathbf{P}(E_{2})\leq 0.01\).
If \(E_{1}\) occurs instead, then we make a further case distinction:
**Case 2.1**: \(\Delta_{t+1}<\frac{1}{2}(W_{t+1}-M^{*})\): Then using (1), it holds that
\[D_{t+1}=\frac{(W_{t+1}-M^{*})^{21}}{2\Delta_{t+1}}\leq\frac{(W_{t}-M^{*})^{21} }{4\Delta_{t}}=\frac{D_{t}}{2},\]
where the inequality holds due to Claim 45.
**Case 2.2**: \(\Delta_{t+1}\geq\frac{1}{2}(W_{t+1}-M^{*})\): Then it holds that \(D_{t+1}=(W_{t+1}-M^{*})^{19}\cdot 2\Delta_{t+1}\). Since by the assumption of Case 2 we have \((W_{t}-M^{*})/(2\Delta_{t})\geq 1\) and by Claim 45 we have \(\Delta_{t+1}=2\Delta_{t}\), we can bound \(D_{t+1}\) as
\[D_{t+1} =(W_{t+1}-M^{*})^{19}\cdot 2\Delta_{t+1}\] \[\leq(W_{t+1}-M^{*})^{19}\cdot 4\Delta_{t}\cdot\left(\frac{W_{t}-M^{*} }{2\Delta_{t}}\right)^{2}\] \[=(W_{t+1}-M^{*})^{19}\cdot(W_{t}-M^{*})^{2}\cdot\frac{1}{\Delta_{ t}}. \tag{3}\]
By Claim 45, we have that \(W_{t+1}\leq W_{t}-\frac{\Delta_{t}}{4}\). Hence, we can bound \(W_{t+1}-M^{*}\) as
\[W_{t+1}-M^{*} =\tfrac{16}{17}(W_{t+1}-M^{*})+\tfrac{1}{17}(W_{t+1}-M^{*})\] \[\leq\tfrac{16}{17}(W_{t}-M^{*}-\tfrac{\Delta_{t}}{4})+\tfrac{1}{1 7}(W_{t+1}-M^{*})\] \[=\tfrac{16}{17}(W_{t}-M^{*})-\tfrac{16}{17}\cdot\tfrac{\Delta_{t} }{4}+\tfrac{1}{17}(W_{t+1}-M^{*}). \tag{4}\]
By Claim 45 and the assumption of Case 2.2, we have that \(2\Delta_{t}=\Delta_{t+1}\geq\tfrac{1}{2}(W_{t+1}-M^{*})\). This implies that \(\tfrac{\Delta_{t}}{4}\geq\tfrac{1}{16}(W_{t+1}-M^{*})\). Plugging this into (4), we obtain that
\[W_{t+1}-M^{*} \leq\tfrac{16}{17}(W_{t}-M^{*})-\tfrac{1}{17}(W_{t+1}-M^{*})+ \tfrac{1}{17}(W_{t+1}-M^{*})\] \[=\tfrac{16}{17}(W_{t}-M^{*}). \tag{5}\]
Finally, we combine (3) and (5) to obtain that
\[D_{t+1} \leq(W_{t+1}-M^{*})^{19}(W_{t}-M^{*})^{2}\cdot\tfrac{1}{\Delta_{t}}\] \[\leq(\tfrac{16}{17})^{19}(W_{t}-M^{*})^{21}\cdot\tfrac{1}{\Delta_ {t}}\] \[=(\tfrac{16}{17})^{19}\cdot 2\cdot D_{t}\] \[\leq 0.65D_{t}\]
Combining the subcases considered, we conclude that for Case 2 it holds that
\[\operatorname{\mathbf{E}}(D_{t+1}\mid D_{t}) \leq\operatorname{\mathbf{P}}(E_{1})\operatorname{\mathbf{E}}(D_ {t+1}\mid D_{t},E_{1})+\operatorname{\mathbf{P}}(E_{2})\operatorname{ \mathbf{E}}(D_{t+1}\mid D_{t},E_{2})\] \[\leq 1\cdot\max\left\{\tfrac{1}{2}D_{t},0.65D_{t}\right\}+0.01 \cdot 2D_{t}\leq 0.67\cdot D_{t}.\]
Since cases 1 and 2 are exhaustive, the proof is concluded.
### Putting Everything Together
Now we put the pieces together to prove our main theorem.
[Negative-Weight SSSP] There is a Las Vegas algorithm which, given a directed graph \(G\) and a source node \(s\), either computes a shortest path tree from \(s\) or finds a negative cycle in \(G\), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
Proof.: The algorithm alternatingly runs the following two steps (sketched in code below), and interrupts each step after it exceeds a time budget of \(O((m+n\log\log n)\log^{2}n\log(nW))\):
1. Run SSSP\((G,s)\). If this algorithm finishes in time and returns a shortest path tree, we check that the shortest path tree is correct (by relaxing all edges and testing whether any distance in the tree changes) and return this shortest path tree in the positive case. Otherwise, we continue with step 2.
2. Run FindNegCycle\((G)\) (using Lemma 42 to implement Threshold). If this algorithm finishes in time and returns a negative cycle, we verify that the output is indeed a negative cycle and return this negative cycle in the positive case. Otherwise, we continue with step 1.
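In code, the interleaving could be organized as follows; `sssp`, `find_neg_cycle`, and the two verification helpers are placeholders for the paper's subroutines, and the budget is enforced in a simplified way (a completed run whose duration exceeded the budget is discarded rather than interrupted).

```python
import time

def run_with_budget(fn, budget_seconds):
    # Simplified budget enforcement: run to completion and discard late results.
    # (A faithful implementation would interrupt fn once the budget is spent.)
    start = time.monotonic()
    result = fn()
    return result if time.monotonic() - start <= budget_seconds else None

def las_vegas_sssp(G, s, sssp, find_neg_cycle, verify_tree, verify_cycle, budget):
    """Alternate the two Monte Carlo routines until one of them produces a
    verified answer; all callables are stand-ins for the paper's subroutines."""
    while True:
        tree = run_with_budget(lambda: sssp(G, s), budget)            # step 1
        if tree is not None and verify_tree(G, s, tree):
            return ("tree", tree)
        cycle = run_with_budget(lambda: find_neg_cycle(G), budget)    # step 2
        if cycle is not None and verify_cycle(G, cycle):
            return ("cycle", cycle)
```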
The algorithm is clearly correct: Whenever it terminates, it reports a correct solution. Let us focus on the running time. We distinguish two cases: First, assume that \(G\) does _not_ contain a negative cycle. Then, by the guarantee of our Monte Carlo SSSP algorithm, step 1 runs in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability and is not interrupted in this case. Moreover, the SSSP algorithm returns a
correct shortest path tree with high probability, and thereby terminates the algorithm after just one iteration of step 1.
On the other hand, suppose that \(G\) contains a negative cycle. The algorithm runs step 1, which is wasted effort in this case, but costs only time \(O((m+n\log\log n)\log^{2}n\log(nW))\). Afterwards, by Lemmas 36 and 42, a single execution of step 2 runs within the time budget with high probability. Moreover, since the algorithm of Lemma 36 is Las Vegas, it returns a true negative cycle and the algorithm terminates.
The previous two paragraphs prove that the algorithm terminates after successively running step 1 and step 2 in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability. Since we independently repeat these steps until the algorithm terminates, the same bound applies to the expected running time.
Next, we prove Theorem 2 using the previous Theorem 1 as a black-box.
[Negative-Weight Single-Source Distances] There is a Las Vegas algorithm, which, given a directed graph \(G\) and a source \(s\in V(G)\), computes the distances from \(s\) to all other vertices in the graph (these distances are possibly \(-\infty\) or \(\infty\)), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
Proof.: First, remove all vertices from the graph not reachable from \(s\) and return distance \(\infty\) for each such vertex. Then compute the set of strongly connected components \(C_{1},\ldots,C_{\ell}\) in \(G\) in time \(O(m+n)\). For every SCC \(C_{i}\), run our SSSP algorithm from Theorem 1 on \(G[C_{i}]\) to detect whether it contains a negative cycle. For every vertex contained in an SCC with a negative cycle, we return distance \(-\infty\) (as this SCC is reachable from \(s\) and contains a negative cycle, we can loop indefinitely). Similarly, report \(-\infty\) for all vertices reachable from one of the \(-\infty\)-distance vertices. After removing all vertices at distance \(-\infty\), the remaining graph no longer contains a negative cycle. We may therefore run the SSSP algorithm on the remaining graph to compute the missing distances.
Let \(n_{i}\) and \(m_{i}\) denote the number of vertices and edges in the subgraph \(G[C_{i}]\). Then the total running time is
\[O\left(T_{\mathrm{SSSP}}(m,n,W)+\sum_{i}T_{\mathrm{SSSP}}(m_{i}, n_{i},W)\right)\] \[\quad=O\left(\left(m+n\log\log n+\sum_{i}m_{i}+\sum_{i}n_{i}\log \log n\right)\log^{2}n\log(nW)\right)\] \[\quad=O((m+n\log\log n)\log^{2}n\log(nW)),\]
using that \(\sum_{i}m_{i}\leq m\) and that \(\sum_{i}n_{i}\leq n\).
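Schematically, the reduction in the proof above can be organized as in the following Python sketch, where `sssp` is a placeholder for the algorithm of Theorem 1 and `sccs` for any standard strongly-connected-components routine; the graph representation and the return conventions are our own assumptions.

```python
from math import inf

def single_source_distances(G, s, sssp, sccs):
    """Sketch of the reduction. G: dict u -> list of (v, w), with every vertex
    appearing as a key. `sssp(H, src)` is a placeholder for Theorem 1, assumed
    to return either ("cycle", C) or ("dists", d); `sccs(H)` returns vertex
    sets. Corner cases (e.g. s itself on a negative cycle) are glossed over."""
    def reachable(H, src):
        seen, stack = {src}, [src]       # plain DFS over the edge lists
        while stack:
            u = stack.pop()
            for v, _ in H.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    dist = {v: inf for v in G}           # vertices unreachable from s keep +inf
    H = {u: G[u] for u in reachable(G, s)}
    neg = set()
    for C in sccs(H):                    # SCCs that contain a negative cycle
        sub = {u: [(v, w) for (v, w) in H[u] if v in C] for u in C}
        if sssp(sub, next(iter(C)))[0] == "cycle":
            neg |= C
    for v in list(neg):                  # -inf spreads along reachability
        neg |= reachable(H, v)
    rest = {u: [(v, w) for (v, w) in H[u] if v not in neg]
            for u in H if u not in neg}  # now free of negative cycles
    dist.update(dict.fromkeys(neg, -inf))
    dist.update(sssp(rest, s)[1])        # one final SSSP run fills the rest
    return dist
```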
## 6 Minimum Cycle Mean
In this section we prove Theorem 3, i.e., we present the \(O((m+n\log\log n)\log^{2}(n)\log(nW))\)-time algorithm to compute the minimum cycle mean of a graph \(G\).
Given a directed graph \(G\), we denote by \(\mu^{*}(G)\) the value of the minimum cycle mean, i.e., \(\mu^{*}(G):=\min_{C}\bar{w}(C)\). To develop our algorithm, the following characterization of the minimum cycle mean will be useful:
Let \(G\) be a directed graph. Then,
\[\mu^{*}(G)=-\min\{Q\in\mathbf{Q}\mid G^{+Q}\text{ contains no negative cycle}\}.\]
Proof.: By definition, we have that \(\mu^{*}(G)=\min_{C}\bar{w}(C)\). Equivalently, \(\mu^{*}(G)\) is the largest rational number \(\mu\) such that \(\mu\leq w(C)/|C|\) holds for all cycles \(C\) in \(G\). In particular, \(w(C)-\mu\cdot|C|\geq 0\) holds for all cycles \(C\), which is equivalent to \(G^{-\mu}\) not having negative cycles. Substituting \(Q:=-\mu\), the largest feasible \(\mu\) corresponds to the smallest \(Q\) such that \(G^{+Q}\) contains no negative cycle, which proves the claim.
Recall that \(\textsc{Threshold}(G)\) computes the minimum integer \(M^{*}\geq 0\) such that \(G^{+M^{*}}\) contains no negative cycle (Definition 35). This is very similar to the characterization of the minimum cycle mean given by Lemma 47, except that the latter minimizes over rational numbers that are not necessarily non-negative. To overcome this, we will use the following simple propositions:
Let \(G\) be a directed graph and let \(a\geq 1,b\geq 0\) be integers. Let \(H\) be a copy of \(G\) where each edge has weight \(w_{H}(e):=a\cdot w_{G}(e)+b\). Let \(C\) be any cycle in \(G\). Then, \(\bar{w}_{H}(C)=a\cdot\bar{w}_{G}(C)+b\).
Proof.: Note that the weight of \(C\) in \(H\) is exactly \(w_{H}(C)=a\cdot w_{G}(C)+b\cdot|C|\). Therefore, the cycle mean of \(C\) in \(H\) equals \(\bar{w}_{H}(C)=a\cdot w_{G}(C)/|C|+b=a\cdot\bar{w}_{G}(C)+b\).
Let \(C\) and \(C^{\prime}\) be two cycles in a directed graph \(G\) with distinct means, i.e. \(\bar{w}(C)\neq\bar{w}(C^{\prime})\). Then, \(|\bar{w}(C)-\bar{w}(C^{\prime})|\geq 1/n^{2}\).
Proof.: By definition, we can express \(|\bar{w}(C)-\bar{w}(C^{\prime})|\) as
\[\left|\frac{w(C)}{|C|}-\frac{w(C^{\prime})}{|C^{\prime}|}\right|=\left|\frac{w (C)|C^{\prime}|-w(C^{\prime})|C|}{|C|\cdot|C^{\prime}|}\right|\geq\frac{1}{|C| |C^{\prime}|},\]
where we used that \(\bar{w}(C)\neq\bar{w}(C^{\prime})\). Since \(|C|,|C^{\prime}|\leq n\), we have that this is at least \(1/n^{2}\).
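Taken together, Lemma 47 and Propositions 48 and 49 already suggest a simple (but slow) algorithm, sketched below in Python with Bellman-Ford as the negative-cycle oracle and a binary search in place of the fast Threshold routine; the names and representation are our own. Since distinct cycle means differ by at least \(1/n^{2}\), the returned bracket of width \(1/n^{2}\) determines \(\mu^{*}(G)\) among the attainable means.

```python
from fractions import Fraction

def has_negative_cycle(n, edges, Q=0):
    """Bellman-Ford from a virtual source joined to every vertex by weight 0;
    edges: list of (u, v, w) with u, v in range(n). Tests G^{+Q}."""
    dist = [0] * n
    for _ in range(n + 1):
        changed = False
        for (u, v, w) in edges:
            if dist[u] + w + Q < dist[v]:
                dist[v] = dist[u] + w + Q
                changed = True
        if not changed:
            return False
    return True                            # still relaxing after n + 1 rounds

def min_cycle_mean_bracket(n, edges):
    """Returns (lo, hi) with lo <= mu*(G) < hi and hi - lo = 1/n^2."""
    L = max(w for (_, _, w) in edges)
    H = [(u, v, n * n * w - n ** 3 * L) for (u, v, w) in edges]
    W = max(0, -min(w for (_, _, w) in H))
    lo, hi = 0, W                          # H^{+W} has only nonnegative edges
    while lo < hi:                         # binary search for the smallest
        mid = (lo + hi) // 2               # feasible M (the threshold M*)
        if has_negative_cycle(n, H, mid):
            lo = mid + 1
        else:
            hi = mid
    M = lo                                 # -M <= mu*(H) < -M + 1
    # mu*(G) = (mu*(H) + n^3 L) / n^2 by Proposition 48
    return (Fraction(-M + n ** 3 * L, n * n),
            Fraction(-M + 1 + n ** 3 * L, n * n))
```

For the paper's near-linear running time, the Bellman-Ford oracle and the binary search are replaced by the fast Threshold and FindNegCycle routines, as shown in the proofs below.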
We will use the following lemma, which is a Las Vegas implementation of Lemma 42.
Let \(G\) be a directed graph. There is a Las Vegas algorithm which computes \(\textsc{Threshold}(G)\) (see Definition 35) and runs in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
Proof.: The algorithm computes \(M^{*}=\textsc{Threshold}(G)\) using Lemma 42. By definition, this returns the smallest integer \(M^{*}\) such that \(G^{+M^{*}}\) contains no negative cycles with high probability (recall Definition 35). To turn it into a Las Vegas algorithm, we need to verify that the output is correct. For this, we add a source vertex \(s\) connected with \(0\)-weight edges to all other vertices and use Theorem 1 to test that \(G^{+M^{*}}\) contains no negative cycle and that \(G^{+M^{*}-1}\) contains a negative cycle. If either test fails, the algorithm restarts.
The correctness of this procedure follows since Theorem 1 is a Las Vegas algorithm. For the running time, observe that the call to Lemma 42 (using the bound on \(T_{\mathrm{RSSSP}}(m,n)\) of Theorem 18) and the calls to Theorem 1 run in time \(O((m+n\log\log n)\log^{2}n\log(nW))\). Moreover, Lemma 42 guarantees that the value \(M^{*}\) is correct with high probability. Thus, the algorithm terminates in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability.
[Minimum Cycle Mean] There is a Las Vegas algorithm which, given a directed graph \(G\), finds a cycle \(C\) with minimum mean weight \(\bar{w}(C)=\min_{C^{\prime}}\bar{w}(C^{\prime})\), running in time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability (and in expectation).
Proof.: We construct a graph \(H\) by modifying each edge weight of \(G\) to \(n^{2}w(e)-n^{3}L\), where \(L\) is the largest edge-weight in \(G\). Then, we compute \(M^{*}:=\textsc{Threshold}(H)\) using Lemma 50. Finally, we find a negative cycle in \(H^{+M^{*}-1}\) using Lemma 36. See Algorithm 8 for the pseudocode.
The running time is dominated by the calls to Threshold and FindNegCycle. Using Lemma 50 the call to Threshold takes time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability. By Lemma 36 the call to FindNegCycle (using Lemma 42 to implement Threshold and Theorem 18 to bound \(T_{\mathrm{RSSSP}}(m,n)\)) takes time \(O((m+n\log\log n)\log^{2}n\log(nW))\) with high probability as well. Thus, the algorithm runs in the claimed running time.
To analyze the correctness, note that Proposition 48 implies that a cycle \(C\) is the minimizer of \(\bar{w}_{G}(C)\) if and only if it is the minimizer of \(\bar{w}_{H}(C)\). Thus, it suffices to find a cycle of minimum mean in \(H\). We will argue that the cycle found by the algorithm is the minimizer.
\(\rhd\) Claim 51.: The value \(M^{*}\) computed in Line 4 satisfies \(M^{*}=\lceil-\mu^{*}(H)\rceil\).
Proof.: We observe that the minimum cycle mean in \(H\) is non-positive, i.e., \(\mu^{*}(H)\leq 0\). To see this, note that any cycle \(C\) in \(G\) has weight at most \(w_{G}(C)\leq nL\). Thus, by the way we set the weights in \(H\), any cycle in \(H\) has weight \(w_{H}(C)=n^{2}w_{G}(C)-n^{3}L|C|\leq n^{3}L-n^{3}L=0\). This means that in Lemma 47 we can minimize over \(Q\geq 0\), i.e. that
\[\mu^{*}(H)=-\min\{0\leq Q\in\mathbf{Q}\mid H^{+Q}\text{ contains no negative cycle}\}. \tag{6}\]
Recall that by definition of Threshold\((H)\), \(M^{*}\) is the smallest non-negative integer such that \(H^{+M^{*}}\) has no negative cycles, i.e.
\[M^{*}=\min\{0\leq M\in\mathbf{Z}\mid H^{+M}\text{ contains no negative cycle}\}. \tag{7}\]
Combining (6) and (7), we conclude that \(M^{*}=\lceil-\mu^{*}(H)\rceil\), as claimed. \(\lhd\)
It follows that \(H^{+M^{*}-1}\) indeed contains a negative cycle. By Lemma 50, the call to Threshold is correct. Hence, \(H^{+M^{*}-1}\) contains a negative cycle and the call to FindNegCycle is correct by Lemma 36. Let \(C\) be the cycle obtained in Line 5. Since it has negative weight in \(H^{+M^{*}-1}\), its weight in \(H\) is less than \(-|C|(M^{*}-1)\). Hence, it holds that \(\bar{w}_{H}(C)<-M^{*}+1\). Moreover, since \(H^{+M^{*}}\) contains no negative cycle, every cycle \(C^{\prime}\) has mean weight \(\bar{w}_{H}(C^{\prime})\geq-M^{*}\).
Now consider a minimum mean cycle \(C^{\prime}\). As we have seen, we have
\[-M^{*}\leq\bar{w}_{H}(C^{\prime})\leq\bar{w}_{H}(C)<-M^{*}+1. \tag{8}\]
Assume for the sake of contradiction that \(\bar{w}_{H}(C)\neq\bar{w}_{H}(C^{\prime})\). Then by Proposition 49 we have that \(|\bar{w}_{G}(C)-\bar{w}_{G}(C^{\prime})|\geq 1/n^{2}\), and by Proposition 48 it holds that \(\bar{w}_{H}(C)=n^{2}\cdot\bar{w}_{G}(C)-n^{3}L\) and \(\bar{w}_{H}(C^{\prime})=n^{2}\cdot\bar{w}_{G}(C^{\prime})-n^{3}L\). Combining these facts, we obtain that \(|\bar{w}_{H}(C)-\bar{w}_{H}(C^{\prime})|\geq 1\). This contradicts Equation (8). Hence, we obtain \(\bar{w}_{H}(C)=\bar{w}_{H}(C^{\prime})\), so the computed cycle \(C\) is a minimizer of \(\bar{w}_{H}(C)\) and thus also of \(\bar{w}_{G}(C)\).
## 7 Low-Diameter Decompositions
In this section we establish our strong Low-Diameter Decomposition (LDD). Recall that in a strong LDD (as defined in Definition 4), the goal is to select a small set of edges \(S\) such that after removing the edges in \(S\), each strongly connected component in the remaining graph has bounded diameter. Our result is the following theorem, which proves that strong LDDs exist (which was known by [11]) and can be efficiently computed (which was open):
[Strong Low-Diameter Decomposition] There is a strong Low-Diameter Decomposition with overhead \(O(\log^{3}n)\), computable in time \(O((m+n\log\log n)\log^{2}n)\) with high probability (and in expectation).
### Heavy and Light Vertices
In the algorithm we will distinguish between _heavy_ and _light_ vertices, depending on how large the out- and in-balls of these vertices are. To classify vertices as heavy or light, we rely on the following simple lemmas:
[Estimate Ball Sizes] Let \(\varepsilon>0\). Given a directed graph \(G\) with nonnegative edge weights and \(r>0\), we can approximate \(|B^{out}(v,r)|\) with additive error \(\varepsilon n\) for each vertex \(v\). With high probability, the algorithm succeeds and runs in time \(O(\varepsilon^{-2}\log n\cdot(m+n\log\log n))\).
Proof.: Sample random vertices \(v_{1},\ldots,v_{k}\in V(G)\) (with repetition) for \(k:=5\varepsilon^{-2}\log n\). Compute \(B^{in}(v_{i},r)\) for all \(i\in[k]\). Using Dijkstra's algorithm with Thorup's priority queue [23, 57], this step runs in time \(O(k\cdot(m+n\log\log n))=O(\varepsilon^{-2}\log n\cdot(m+n\log\log n))\). Now return for each vertex \(v\), the estimate
\[b(v):=\frac{n}{k}\cdot|\{\,i\in[k]:v\in B^{in}(v_{i},r)\,\}|.\]
We claim that this estimate is accurate. Let \(I_{i}\) denote the indicator variable whether \(v_{i}\in B^{out}(v,r)\), and let \(I:=\sum_{i=1}^{k}I_{i}\). Then the random variable \(b(v)\) is exactly
\[b(v)=\frac{n}{k}\cdot I.\]
Note that \(\mathbf{P}(I_{i}=1)=|B^{out}(v,r)|/n\). In expectation we therefore have
\[\mathbf{E}(b(v))=\frac{n}{k}\cdot\sum_{i=1}^{k}\mathbf{P}(I_{i}=1)=\frac{n}{k} \cdot\frac{k}{n}\cdot|B^{out}(v,r)|=|B^{out}(v,r)|.\]
Using Chernoff's bound we have \(\mathbf{P}(|I-\mathbf{E}(I)|>a)<2\exp(-2a^{2}/k)\). For \(a:=\varepsilon k\) we obtain
\[\mathbf{P}(|b(v)-\mathbf{E}(b(v))|>\varepsilon n)=\mathbf{P}(|I-\mathbf{E}(I)| >\varepsilon k)<2\exp(-2\varepsilon^{2}k)\leq 2n^{-10}.\]
Hence, with high probability the computed estimates are accurate.
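A direct Python transcription of this estimator (with a binary-heap Dijkstra in place of Thorup's queue; the graph representation is our own) might look as follows.

```python
import heapq
import math
import random

def ball(G, src, r):
    """Vertices within distance r of src; G: dict u -> list of (v, w), w >= 0."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                          # stale heap entry
        for v, w in G.get(u, ()):
            if d + w <= r and d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return set(dist)

def estimate_out_ball_sizes(G, G_rev, r, eps):
    """For every v, estimate |B_out(v, r)| with additive error eps*n (w.h.p.):
    sample k centers and compute only their in-balls (out-balls in G_rev)."""
    n = len(G)
    k = max(1, math.ceil(5 * eps ** -2 * math.log(n)))
    vertices = list(G)
    hits = dict.fromkeys(G, 0)
    for _ in range(k):
        c = random.choice(vertices)           # sampled with repetition
        for v in ball(G_rev, c, r):           # v in B_in(c, r) iff c in B_out(v, r)
            hits[v] += 1
    return {v: n * hits[v] / k for v in G}
```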
[Heavy/Light Classification] There is an algorithm \(\textsc{Light}(G,r)\) that, given a directed graph \(G\) and a radius \(r\), returns a set \(L\subseteq V(G)\) with the following properties:
* For all \(v\in L\), it holds that \(|B^{out}(v,r)|\leq\frac{7}{8}n\).
* _For all_ \(v\in V(G)\setminus L\)_, it holds that_ \(|B^{\mathit{out}}(v,r)|\geq\frac{3}{4}n\)_._
* \(\textsc{Light}(G,r)\) _runs in time_ \(O((m+n\log\log n)\log n)\)_._
Proof.: We run the previous Lemma 52 with parameter \(\varepsilon:=\frac{1}{16}\), and let \(L\) be the subset of vertices with estimated ball sizes at most \(\frac{13}{16}n\). With high probability, the estimates have additive error at most \(\varepsilon n=\frac{1}{16}n\). Therefore any vertex \(v\in L\) satisfies \(|B^{\mathit{out}}(v,r)|\leq\frac{13}{16}n+\frac{1}{16}n=\frac{7}{8}n\) and any vertex \(v\in V(G)\setminus L\) satisfies \(|B^{\mathit{out}}(v,r)|\geq\frac{13}{16}n-\frac{1}{16}n=\frac{3}{4}n\). The running time is dominated by Lemma 52 which runs in time \(O((m+n\log\log n)\log n)\) as claimed.
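On top of the estimator above, the classification itself is then a simple threshold test (again a sketch under the same assumptions):

```python
def light_set(estimates, n):
    # Lemma 53 with eps = 1/16: keep vertices whose estimated out-ball size is
    # at most (13/16) n; true sizes are then <= 7n/8 inside L and >= 3n/4 outside.
    return {v for v, b in estimates.items() if b <= 13 * n / 16}
```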
### The Strong Low-Diameter Decomposition
The strong LDD works as follows: Let \(R=\frac{D}{10\log n}\). First, we run Lemma 53 on \(G\) with radius \(R\) to compute a set \(L^{\mathit{out}}\) and we run Lemma 53 on the reversed graph with radius \(R\) to compute a set \(L^{\mathit{in}}\). We refer to the vertices in \(L^{\mathit{out}}\) as _out-light_, to the vertices in \(L^{\mathit{in}}\) as _in-light_, and to the vertices in \(V(G)\setminus(L^{\mathit{out}}\cup L^{\mathit{in}})\) as _heavy_. Then we distinguish two cases:
* _The heavy case:_ If there is a heavy vertex \(v\in V(G)\setminus(L^{\mathit{out}}\cup L^{\mathit{in}})\), we compute the set of vertices \(W\) that both reach \(v\) and are reachable from \(v\) within distance \(R\), i.e., \(W=B^{\mathit{out}}(v,R)\cap B^{\mathit{in}}(v,R)\). Let \(T^{\mathit{out}},T^{\mathit{in}}\) denote the shortest path trees from \(v\) to \(W\) and from \(W\) to \(v\), respectively. Let \(C\) be the union of vertices in \(T^{\mathit{out}}\) and \(T^{\mathit{in}}\). We _collapse_\(C\) (that is, we replace all vertices in \(C\) by a single super-vertex) and consider the remaining (multi-)graph \(G/C\). We recursively compute the strong LDD in \(G/C\), resulting in a set of edges \(S\). In \(S\) we uncollapse all edges involving the super-vertex (i.e., for any edge \((v,u)\in E(G)\) which became an edge \((C,u)\) in the collapsed graph, we revert \((C,u)\) back to \((v,u)\)) and return \(S\).
* _The light case:_ If there is no heavy vertex, then each vertex is out-light or in-light. For each vertex \(v\) (which is out-light, say) we can therefore proceed in the standard way: Sample a radius \(r\) from a geometric distribution with parameter \(O(\log n/D)\), cut the edges leaving \(B^{\mathit{out}}(v,r)\) and recur on both the inside and the outside of the ball \(B^{\mathit{out}}(v,r)\).
We summarize the pseudocode with the precise parameters in Algorithm 9. Throughout this section, we denote by \(n_{0}\) the size of the original graph and by \(n\) the size of the current graph \(G\) (in the current recursive call of the algorithm).
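To convey the recursive structure in code, the following toy Python sketch implements only the light case: every vertex is treated as out-light, the in-light direction and the heavy case are omitted, and the Fail event is replaced by capping the radius at \(R\), so the stated guarantees do not literally apply to this simplified version.

```python
import heapq
import math
import random

def ball_out(G, src, r):
    """B_out(src, r) by Dijkstra; G: dict u -> list of (v, w) with w >= 0."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in G.get(u, ()):
            if d + w <= r and d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return set(dist)

def induced(G, B):
    return {u: [(v, w) for (v, w) in G[u] if v in B] for u in B}

def carve_light(G, D, n0):
    """Toy version of the light case of Algorithm 9; returns the cut set S."""
    if len(G) <= 1 or D <= 0:
        return set()
    log_n0 = math.log2(max(2, n0))
    R = D / (10 * log_n0)
    q = min(0.9, 100 * log_n0 ** 2 / D)   # ~ R^{-1} * 10 log(n0), both logs n0
    S, remaining = set(), dict(G)
    while remaining:
        v = next(iter(remaining))
        r = min(random.expovariate(q), R)  # exponential stand-in for Geom(q),
        B = ball_out(remaining, v, r)      # capped instead of Failing at r > R
        S |= {(u, x) for u in B for (x, w) in remaining[u] if x not in B}
        if len(B) < len(remaining):        # recurse inside the carved ball
            S |= carve_light(induced(remaining, B), D, n0)
        remaining = induced(remaining, set(remaining) - B)
    return S
```

In the full algorithm, the light-vertex guarantee \(|B^{out}(v,r)|\leq\frac{7}{8}n\) is what makes the inner recursion shrink geometrically; the guard in the sketch merely prevents non-termination when that guarantee is unavailable.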
[Strong Diameter of Algorithm 9] With high probability, \(\textsc{StrongLDD}(G,D)\) either returns \(\textsc{Fail}\) or a set of edges \(S\subseteq E(G)\) such that every strongly connected component \(C\) of \(G\setminus S\) has diameter at most \(D\), i.e., \(\max_{u,v\in C}\mathrm{dist}_{G[C]}(u,v)\leq D\).
Proof.: With high probability, the heavy-light classification works correctly in the execution of \(\textsc{StrongLDD}(G,D)\) (and all recursive calls). We condition on this event and treat the classification as perfect.
As before, we have to distinguish the heavy and the light case. In the heavy case, let \(v\) be the heavy vertex and let \(W,T^{\mathit{out}},T^{\mathit{in}},C\) be as in the algorithm. We claim that the induced subgraph \(G[C]\) has diameter at most \(4R\). Take any vertex \(x\in C\); it suffices to prove that both \(\mathrm{dist}_{G[C]}(v,x)\leq 2R\) and \(\mathrm{dist}_{G[C]}(x,v)\leq 2R\). We show the former claim and omit the latter. There are two easy cases: Either we have \(x\in T^{\mathit{out}}\) in which case we immediately have that \(\mathrm{dist}_{G[C]}(v,x)\leq R\) (as any path in \(T^{\mathit{out}}\) has length at most \(R\)). Or we have \(x\in T^{\mathit{in}}\), in which case there exists some intermediate vertex \(y\in W\) with \(\mathrm{dist}_{G[C]}(y,x)\leq R\). But then also \(\mathrm{dist}_{G[C]}(v,y)\leq R\) and in combination we obtain \(\mathrm{dist}_{G[C]}(v,x)\leq 2R\) as claimed.
Recall that the algorithm collapses the vertices in \(C\), and computes a strong LDD \(S\) on the remaining multigraph with parameter \(D-4R\). We assume by induction that the recursive call computes a correct strong decomposition (for \(G/C\)). To see that the decomposition is also correct for \(G\), take any two vertices \(u,v\) in the same strongly connected component in \(G\setminus S\). We have that \(\operatorname{dist}_{(G/C)\setminus S}(u,v)\leq D-4R\). If the shortest \(u\)-\(v\)-path in \(G/C\) does not touch the supervertex, then we immediately have \(\operatorname{dist}_{G\setminus S}(u,v)\leq D-4R\leq D\). If the shortest path touches the supervertex, then we can replace the path through \(C\) by a path of length \(\operatorname{diam}(G[C])\leq 4R\). It follows that \(\operatorname{dist}_{G\setminus S}(u,v)\leq D-4R+4R\leq D\).
The correctness of the light case is exactly as in the known LDD by [11], and similar to Lemma 8: For every ball \(B^{out}(v,r)\) (or \(B^{in}(v,r)\)) that the algorithm carves out, we remove all outgoing edges \(\partial B^{out}(v,r)\) (or all incoming edges \(\partial B^{in}(v,r)\), respectively). Thus, two vertices \(x,y\) in the remaining graph are part of the same strongly connected component only if both \(x,y\in B^{out}(v,r)\) or both \(x,y\not\in B^{out}(v,r)\). The algorithm continues the loop on all vertices outside \(B^{out}(v,r)\) and recurs inside \(B^{out}(v,r)\). By induction, both calls succeed and reduce the diameter to at most \(D\).
Eventually the algorithm reaches a base case where \(G\) contains only a constant number of nodes and edges--in this case, we can select \(S\) to be the whole set of edges.
[Sparse Hitting of Algorithm 9] For any edge \(e\in E(G)\), the probability that \(e\) is contained in the output of \(\textsc{StrongLDD}(G,D)\) is at most \(O(\frac{w(e)}{D}\cdot\log^{3}(n_{0})+\frac{1}{\operatorname{poly}(n)})\).
Proof.: In this proof we condition on the event that the initially computed heavy/light classification is correct. Since this event happens with high probability, we only increase the hitting probabilities by \(\frac{1}{\operatorname{poly}(n)}\) for all edges.
Let \(p(n,w,D)\) be an upper bound on the probability that an edge of weight \(w\) is contained in the output of \(\textsc{StrongLDD}(G,D)\), where \(G\) is an \(n\)-vertex graph. We inductively prove that \(p(n,w,D)\leq\frac{w}{D}\cdot 1000\log(n_{0})\log^{2}(n)\) which is as claimed. We distinguish the heavy and light case in Algorithm 9.
**The Light Case.** Suppose that the algorithm enters the light case (that is, there is no vertex classified as heavy). Focus on some edge \(e=(x,y)\) of weight \(w=w(e)\). We distinguish three cases for each iteration. Suppose that the current iteration selects an out-light vertex \(v\).
* \(x,y\not\in B^{out}(v,r)\): The edge \(e\) is not touched in this iteration and remains a part of the graph \(G\). It may or may not be included in the output, depending on the future iterations.
* \(x\in B^{out}(v,r)\) and \(y\not\in B^{out}(v,r)\): In this case \(e\in\partial B^{out}(v,r)\) and thus the edge is included into \(S\).
* \(y\in B^{out}(v,r)\): The edge is not included in \(\partial B^{out}(v,r)\). It may however be included in the recursive call on \(B^{out}(v,r)\). In the recursive call we have that \(|B^{out}(v,r)|\leq|B^{out}(v,R)|\leq\frac{7n}{8}\), as \(r\leq R\) (in the opposite case the algorithm fails and no edge is returned) and by Lemma 53 as \(v\) is out-light.
Combining these cases, we obtain the following recursion for \(p(n,w,D)\). In the calculation we abbreviate \(q:=R^{-1}\cdot 10\log(n_{0})\):
\[p(n,w,D)\leq\max_{v\in V(G)}\operatorname*{\mathbf{P}}_{r\sim \operatorname{Geom}(q)}(y\not\in B^{out}(v,r)\mid x\in B^{out}(v,r))+p(\frac{7 n}{8},w,D)\] \[\qquad\leq\max_{v\in V(G)}\operatorname*{\mathbf{P}}_{r\sim \operatorname{Geom}(q)}(r<\operatorname{dist}(v,y)\mid r\geq\operatorname{ dist}(v,x))+p(\frac{7n}{8},w,D)\] \[\qquad\leq\max_{v\in V(G)}\operatorname*{\mathbf{P}}_{r\sim \operatorname{Geom}(q)}(r<\operatorname{dist}(v,x)+w\mid r\geq\operatorname{ dist}(v,x))+p(\frac{7n}{8},w,D)\]
Let \(r^{\prime}:=r-\operatorname{dist}(v,x)\). Conditioned on the event \(r\geq\operatorname{dist}(v,x)\), \(r^{\prime}\) is a nonnegative random variable and, by the memoryless property of geometric distributions, \(r^{\prime}\) is sampled from \(\operatorname{Geom}(q)\), too:
\[\leq\max_{v\in V(G)}\operatorname*{\mathbf{P}}_{r^{\prime}\sim \operatorname{Geom}(q)}(r^{\prime}<w)+p(\frac{7n}{8},w,D)\] \[\leq wq+p(\frac{7n}{8},w,D)\] \[\leq\frac{w}{D}\cdot 100\log(n_{0})\log(n)+p(\frac{7n}{8},w,D).\]
In the last step, we have plugged in \(q=R^{-1}\cdot 10\log(n_{0})=\frac{1}{D}\cdot 100\log(n_{0})\log(n)\). It follows by induction that \(p(n,w,D)\leq\frac{w}{D}\cdot 100\log(n_{0})\log(n)\log_{8/7}(n)\leq\frac{w}{D} \cdot 1000\log(n_{0})\log^{2}(n)\).
The same analysis applies also to the in-balls with "\(B^{in}\)" in place of "\(B^{out}\)".
**The Heavy Case.** In the heavy case, the algorithm selects a heavy vertex \(v\), computes the sets \(W=B^{out}(v,R)\cap B^{in}(v,R)\) and \(C\supseteq W\) and recurs on the graph \(G/C\) in which we contract the vertex set \(C\) to a single vertex. We have \(|B^{out}(v,R)|,|B^{in}(v,R)|>\frac{3n}{4}\) by Lemma 53 since \(v\) is heavy. It follows that \(|C|\geq|W|>\frac{n}{2}\) and therefore the contracted
graph has size \(|V(G/C)|\leq\frac{n}{2}\). As we call the algorithm recursively with parameter \(D-4R\) where \(R=\frac{D}{10\log n}\), we obtain the following recurrence:
\[p(n,w,D)\leq p(\tfrac{n}{2},w,D-4R).\]
Using the induction hypothesis, we obtain:
\[p(n,w,D) \leq\frac{w}{D-4R}\cdot 1000\log(n_{0})\log^{2}(\tfrac{n}{2})\] \[\leq\frac{w}{D}\cdot\frac{1}{1-\frac{4}{10\log n}}\cdot 1000\log(n_{0})\log^{2}(\tfrac{n}{2})\] \[=\frac{w}{D}\cdot\frac{\log(n)}{\log(n)-\frac{2}{5}}\cdot 1000\log(n_{0})\cdot(\log(n)-1)^{2}\] \[\leq\frac{w}{D}\cdot 1000\log(n_{0})\cdot\log^{2}(n),\]

where the last step uses \((\log(n)-1)^{2}\leq\log(n)\cdot(\log(n)-\frac{2}{5})\), which holds for all \(n\geq 2\).\(\qed\)
[Running Time of Algorithm 9] The algorithm \(\textsc{StrongLDD}(G,D)\) runs in time \(O((m+n_{0}\log\log n_{0})\log^{2}(n_{0}))\).
Proof.: First focus on a single call of the algorithm and ignore the cost of recursive calls. It takes time \(O((m+n_{0}\log\log n_{0})\log(n_{0}))\) to compute the heavy-light classification. In the heavy case, we can compute \(W,T^{out},T^{in},C\) in Dijkstra-time \(O(m+n_{0}\log\log n_{0})\). In the light case, we can also carve out all balls \(B^{out}(v,r)\) and \(B^{in}(v,r)\) in total time \(O(m+n_{0}\log\log(n_{0}))\), although the formal analysis is more involved: Observe that we explore each vertex at most once, spending time \(O(\log\log n_{0})\), and that we explore each edge at most once, spending time \(O(1)\). Since the analysis follows the standard charging argument for Dijkstra-style explorations, we omit further details.
As the algorithm recurs on disjoint subgraphs of \(G\), where the number of nodes in each subgraph is at most a constant fraction of the original number of nodes, the running time becomes \(O((m+n_{0}\log\log n_{0})\log^{2}(n_{0}))\).
[Failure Probability of Algorithm 9] \(\textsc{StrongLDD}(G,D)\) returns Fail with probability at most \(O(n_{0}^{-8})\).
Proof.: As shown in detail in the previous lemmas, with every recursive call the number of vertices reduces by a constant factor, and thus the recursion reaches depth at most \(O(\log n_{0})\). In each recursive call, the ball-carving loops (over out-light and in-light vertices) run at most \(n_{0}\) times. For each execution, the error event is that \(r>R\), where \(r\sim\operatorname{Geom}(R^{-1}\cdot 10\log(n_{0}))\). This event happens with probability at most \(\exp(-10\log(n_{0}))\leq n_{0}^{-10}\), and therefore the algorithm returns Fail with probability at most \(O(n_{0}\log n_{0})\cdot n_{0}^{-10}\leq O(n_{0}^{-8})\).
Proof of the Strong Low-Diameter Decomposition theorem.: To compute the claimed strong LDD we call \(\textsc{StrongLDD}(G,\frac{1}{2}D)\) with the following two modifications:
First, whenever some recursive call returns Fail, we simply restart the whole algorithm.
Second, we test whether the returned set of edges \(S\subseteq E(G)\) satisfies the Strong Diameter property. To this end, we compute the strongly connected components in \(G\setminus S\) and compute, for any such component \(C\), a \(2\)-approximation of its diameter. By a standard argument, such a \(2\)-approximation can be obtained in Dijkstra-time by (1) selecting an arbitrary node \(v\), (2) computing \(d^{out}:=\max_{u\in V(G)}d_{G}(v,u)\) by solving SSSP on \(G\), (3) computing \(d^{in}:=\max_{u\in V(G)}d_{G}(u,v)\) by solving SSSP on the reversed graph of \(G\), and returning \(\max\{d^{in},d^{out}\}\). If the diameter approximations are at most \(\frac{D}{2}\) in all components, we return \(S\). Otherwise, we restart the whole algorithm.
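A minimal Python sketch of this standard \(2\)-approximation (for a strongly connected component with nonnegative weights; the graph representation is ours):

```python
import heapq
import math

def dijkstra(G, s):
    """Distances from s; G: dict u -> list of (v, w) with w >= 0."""
    dist, heap = {s: 0.0}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in G.get(u, ()):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def approx_diameter(G, G_rev):
    """Returns d_hat with d_hat <= diam(G) <= 2 * d_hat for strongly connected
    G: for any pair (x, y), dist(x, y) <= dist(x, v) + dist(v, y)."""
    v = next(iter(G))
    d_out = max(dijkstra(G, v).values())       # max_u dist(v, u)
    d_in = max(dijkstra(G_rev, v).values())    # max_u dist(u, v)
    return max(d_in, d_out)
```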
This algorithm indeed never fails to satisfy the Strong Diameter property: Since the diameter approximations have approximation factor at most \(2\), we have certified that the diameter of any strongly connected component is at most \(D\) in the graph \(G\setminus S\). Moreover, with high probability the execution of Algorithm 9 passes both tests (by Lemmas 54 and 57), and therefore we expect to repeat the algorithm \(O(1)\) times. Since the repetitions are independent of each other, the edge hitting probability increases only by a constant factor and remains \(O(\frac{w(e)}{D}\cdot\log^{3}(n_{0}))\) by Lemma 55.
Finally, consider the running time. As argued before, with high probability we avoid restarting Algorithm 9 altogether. Thus, with high probability the algorithm runs in total time \(O((m+n_{0}\log\log n_{0})\log^{2}(n_{0}))\) by Lemma 56. Since we expect to repeat the algorithm at most \(O(1)\) times, the same bound applies to the expected running time.
## Appendix A Lazy Dijkstra
This section is devoted to a proof of the following lemma, stating that Dijkstra's algorithm can be adapted to work with negative edges in time depending on the \(\eta_{G}(v)\) values. Recall that \(\eta_{G}(v)\) denotes the minimum number of negative-weight edges in a shortest \(s\)-\(v\) path in \(G\).
[Dijkstra with Negative Weights, similar to [11, Lemma 3.3]] Let \(G\) be a directed graph with source vertex \(s\in V(G)\) that does not contain a negative cycle. There is an algorithm that computes a shortest path tree from \(s\) in time \(O(\sum_{v}(\deg(v)+\log\log n)\cdot\eta_{G}(v))\). (If \(G\) contains a negative cycle, the algorithm does not terminate.)
This lemma is basically [11, Lemma 3.3], but the statement differs slightly. We provide a self-contained proof that morally follows the one in [11, Appendix A].
We give the pseudocode for Lemma 25 in Algorithm 10. Throughout, let \(G=(V,E,w)\) be the given directed weighted graph with possibly negative edge weights. We write \(E^{\geq 0}\) for the subset of edges with nonnegative weight, and \(E^{<0}\) for the subset of edges with negative weight. In the pseudocode, we rely on Thorup's priority queue:
[Thorup's Priority Queue [57]] There is a priority queue implementation for storing \(n\) integer keys that supports the operations FindMin, Insert and DecreaseKey in constant time, and Delete in time \(O(\log\log n)\).
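For concreteness, the following compact Python rendering shows the alternation of the two phases of Algorithm 10; a binary heap replaces Thorup's queue (so the \(\log\log n\) term becomes \(\log n\)), and the representation is our own.

```python
import heapq
import math

def lazy_dijkstra(n, adj, s):
    """Sketch of Algorithm 10 with a binary heap in place of Thorup's queue.
    adj[u] is a list of (v, w) pairs; if the graph has a negative cycle the
    loop never terminates, otherwise d[v] = dist(s, v) on return."""
    d = [math.inf] * n
    d[s] = 0
    Q = [(0, s)]
    while Q:
        A = set()
        while Q:  # Dijkstra phase: settle vertices along nonnegative edges
            du, u = heapq.heappop(Q)
            if du > d[u]:
                continue          # stale queue entry
            A.add(u)
            for v, w in adj[u]:
                if w >= 0 and du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(Q, (d[v], v))
        for u in A:  # Bellman-Ford phase: one relaxation of negative edges
            for v, w in adj[u]:
                if w < 0 and d[u] + w < d[v]:
                    d[v] = d[u] + w
                    heapq.heappush(Q, (d[v], v))
    return d
```

On the example \(s\to a\) (weight \(2\)), \(a\to b\) (weight \(-5\)), \(b\to c\) (weight \(1\)), the first Dijkstra phase settles \(s\) and \(a\), the Bellman-Ford phase relaxes the negative edge, and a second Dijkstra phase settles \(b\) and \(c\).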
For the analysis of the algorithm, we define two central quantities. Let \(v\) be a vertex, then we define
\[\operatorname{dist}_{i}(v) =\min\{\,w(P):P\text{ is an $s$-$v$-path containing less than $i$ negative edges}\,\},\] \[\operatorname{dist}^{\prime}_{i}(v) =\min\left\{\operatorname{dist}_{i}(v),\min_{\begin{subarray}{ c}u\in V\\ w(u,v)<0\end{subarray}}\operatorname{dist}_{i}(u)+w(u,v)\right\}.\]
Note that \(\operatorname{dist}_{0}(v)=\operatorname{dist}^{\prime}_{0}(v)=\infty\). We start with some observations involving these quantities \(\operatorname{dist}_{i}\) and \(\operatorname{dist}^{\prime}_{i}\):
For all \(i\), \(\operatorname{dist}_{i}(v)\geq\operatorname{dist}^{\prime}_{i}(v)\geq \operatorname{dist}_{i+1}(v)\).
For all \(v\),
\[\operatorname{dist}_{i+1}(v)=\min\left\{\operatorname{dist}_{i}(v),\min_{ \begin{subarray}{c}u\in V\\ \operatorname{dist}_{i}(u)>\operatorname{dist}^{\prime}_{i}(u)\end{subarray}} \operatorname{dist}^{\prime}_{i}(u)+\operatorname{dist}_{G^{\geq 0}}(u,v) \right\}.\]
Proof.: The statement is clear if \(\operatorname{dist}_{i}(v)=\operatorname{dist}_{i+1}(v)\), so assume that \(\operatorname{dist}_{i+1}(v)<\operatorname{dist}_{i}(v)\). Let \(P\) be the path witnessing \(\operatorname{dist}_{i+1}(v)\), i.e., a shortest \(s\)-\(v\)-path containing less than \(i+1\) negative edges. Let \((x,u)\) denote the last negative-weight edge in \(P\), and partition the path \(P\) into subpaths \(P_{1}\,x\,u\,P_{2}\). Then the first segment \(P_{1}\,x\) is a path containing less than \(i\) negative-weight edges and the segment \(u\,P_{2}\) does not contain any negative-weight edges. Therefore,
\[\operatorname{dist}_{i+1}(v)=\operatorname{dist}_{i}(x)+w(x,u)+\operatorname{ dist}_{G^{\geq 0}}(u,v)\geq\operatorname{dist}^{\prime}_{i}(u)+\operatorname{ dist}_{G^{\geq 0}}(u,v).\]
Suppose, for the sake of contradiction, that \(\operatorname{dist}_{i}(u)=\operatorname{dist}^{\prime}_{i}(u)\). Then
\[\operatorname{dist}_{i+1}(v)\geq\operatorname{dist}_{i}(u)+\operatorname{ dist}_{G^{\geq 0}}(u,v)\geq\operatorname{dist}_{i}(v),\]
which contradicts our initial assumption.
For all \(v\),
\[\operatorname{dist}^{\prime}_{i}(v)=\min\left\{\operatorname{dist}_{i}(v), \min_{\begin{subarray}{c}u\in V\\ w(u,v)<0\\ \operatorname{dist}_{i-1}(u)>\operatorname{dist}_{i}(u)\end{subarray}} \operatorname{dist}_{i}(u)+w(u,v)\right\}\]
Proof.: The statement is clear if \(\operatorname{dist}_{i}(v)=\operatorname{dist}^{\prime}_{i}(v)\), so suppose that \(\operatorname{dist}^{\prime}_{i}(v)<\operatorname{dist}_{i}(v)\). Then there is some vertex \(u\in V\) with \(w(u,v)<0\) such that \(\operatorname{dist}^{\prime}_{i}(v)=\operatorname{dist}_{i}(u)+w(u,v)\). It suffices to prove that \(\operatorname{dist}_{i-1}(u)>\operatorname{dist}_{i}(u)\). Suppose for the sake of contradiction that \(\operatorname{dist}_{i-1}(u)=\operatorname{dist}_{i}(u)\). Then \(\operatorname{dist}^{\prime}_{i}(v)=\operatorname{dist}_{i-1}(u)+w(u,v)\geq \operatorname{dist}^{\prime}_{i-1}(v)\), which contradicts our initial assumption (by Observation 59).
**Lemma 62** (Invariants of Algorithm 10).: _Consider the \(i\)-th iteration of the loop in Algorithm 10 (starting at \(1\)). Then the following invariants hold:_
1. _After the Dijkstra phase (after Line_ 11_):_ 1. \(d[v]=\operatorname{dist}_{i}(v)\) _for all vertices_ \(v\)_, and_ 2. \(A=\{\,v:\operatorname{dist}_{i-1}(v)>\operatorname{dist}_{i}(v)\,\}\)_._
2. _After the Bellman-Ford phase (after Line_ 16_):_ 1. \(d[v]=\operatorname{dist}^{\prime}_{i}(v)\) _for all vertices_ \(v\)_, and_ 2. \(Q=\{\,v:\operatorname{dist}_{i}(v)>\operatorname{dist}^{\prime}_{i}(v)\,\}\)_._
Proof.: We prove the invariants by induction on \(i\).
**First Dijkstra Phase.** We start with the analysis of the first iteration, \(i=1\). The execution of the Dijkstra phase behaves exactly like the regular Dijkstra algorithm. It follows that \(d[v]=\operatorname{dist}_{G^{\geq 0}}(s,v)=\operatorname{dist}_{1}(v)\), as claimed in Invariant 1a. Moreover, we include in \(A\) exactly all vertices which were reachable from \(s\) in \(G^{\geq 0}\). Indeed, for these vertices \(v\) we have that \(\operatorname{dist}_{1}(v)=\operatorname{dist}_{G^{\geq 0}}(s,v)<\infty\) and \(\operatorname{dist}_{0}(v)=\infty\), and thus \(A=\{\,v:\operatorname{dist}_{0}(v)>\operatorname{dist}_{1}(v)\,\}\), which proves Invariant 1b.
**Later Dijkstra Phase.** Next, we analyze the Dijkstra phase for a later iteration, \(i>1\). Letting \(d^{\prime}\) denote the state of the array \(d\) after the Dijkstra phase, our goal is to prove that \(d^{\prime}[v]=\operatorname{dist}_{i}(v)\) for all vertices \(v\). So fix any vertex \(v\); we may assume that \(\operatorname{dist}_{i}(v)<\operatorname{dist}^{\prime}_{i-1}(v)\), as otherwise the statement is easy using that the algorithm never increases \(d[\cdot]\). A standard analysis of Dijkstra's algorithm reveals that
\[d^{\prime}[v]=\min_{u\in Q}(d[u]+\operatorname{dist}_{G^{\geq 0}}(u,v)),\]
where \(Q\) is the queue before the execution of Dijkstra. By plugging in the induction hypothesis and Observation 60, we obtain that indeed
\[d^{\prime}[v]=\min_{\begin{subarray}{c}u\in V\\ \operatorname{dist}_{i-1}(u)>\operatorname{dist}^{\prime}_{i-1}(u)\end{subarray} }d[u]+\operatorname{dist}_{G^{\geq 0}}(u,v)=\operatorname{dist}_{i}(v),\]
which proves Invariant 1a.
To analyze Invariant 1b and the set \(A\), first recall that we reset \(A\) to an empty set before executing the Dijkstra phase. Afterwards, we add to \(A\) exactly those vertices that are either (i) contained in the queue \(Q\) initially or (ii) for which \(d^{\prime}[v]<d[v]\). Note that these sets are exactly (i) \(\{\,v:\operatorname{dist}_{i-1}(v)>\operatorname{dist}^{\prime}_{i-1}(v)\,\}\) and (ii) \(\{\,v:\operatorname{dist}^{\prime}_{i-1}(v)>\operatorname{dist}_{i}(v)\,\}\), whose union is exactly \(\{\,v:\operatorname{dist}_{i-1}(v)>\operatorname{dist}_{i}(v)\,\}\) by Observation 59.
**Bellman-Ford Phase.** The analysis of the Bellman-Ford phase is simpler. Writing again \(d^{\prime}\) for the state of the array \(d\) after the execution of the Bellman-Ford phase, by Observation 61 we have that
\[d^{\prime}[v]=\min_{\begin{subarray}{c}u\in A\\ w(u,v)<0\end{subarray}}d[u]+w(u,v)=\min_{\begin{subarray}{c}u\in V\\ \operatorname{dist}_{i-1}(u)>\operatorname{dist}_{i}(u)\\ w(u,v)<0\end{subarray}}\operatorname{dist}_{i}(u)+w(u,v)=\operatorname{dist}^{ \prime}_{i}(v),\]
which proves Invariant 2a. Here again we have assumed that \(\operatorname{dist}^{\prime}_{i}(v)<\operatorname{dist}_{i}(v)\), as otherwise the statement is trivial since the algorithm never increases \(d[\cdot]\).
Moreover, after the Dijkstra phase has terminated, the queue \(Q\) was empty. Afterwards, in the current Bellman-Ford phase, we have inserted exactly those vertices \(v\) into the queue for which \(\operatorname{dist}_{i}(v)>\operatorname{dist}^{\prime}_{i}(v)\) and thus \(Q=\{\,v:\operatorname{dist}_{i}(v)>\operatorname{dist}^{\prime}_{i}(v)\,\}\), which proves Invariant 2b.
From these invariants (and the preceding observations), we can easily conclude the correctness of Algorithm 10:
[Correctness of Algorithm 10] If the given graph \(G\) contains a negative cycle, then Algorithm 10 does not terminate. Moreover, if Algorithm 10 terminates, then it has correctly computed \(d[v]=\operatorname{dist}_{G}(s,v)\).
Proof.: We show that after the algorithm has terminated, all edges \((u,v)\) are _relaxed_, meaning that \(d[v]\leq d[u]+w(u,v)\). Indeed, suppose there is an edge \((u,v)\) which is not relaxed, i.e., \(d[v]>d[u]+w(u,v)\). Let \(i\) denote the final iteration of the algorithm. By Invariant 2a we have that \(d[x]=\operatorname{dist}^{\prime}_{i}(x)\) and by Invariant 2b we have that \(\operatorname{dist}^{\prime}_{i}(x)=\operatorname{dist}_{i}(x)\) (using that \(Q=\emptyset\) upon termination), for all vertices \(x\). We distinguish two cases: If \(w(u,v)\geq 0\), then we have that \(\operatorname{dist}_{i}(v)>\operatorname{dist}_{i}(u)+w(u,v)\)--a contradiction. And if \(w(u,v)<0\), then we have that \(\operatorname{dist}^{\prime}_{i}(v)>\operatorname{dist}_{i}(u)+w(u,v)\), contradicting the definition of \(\operatorname{dist}^{\prime}_{i}(v)\).
So far we have proved that if the algorithm terminates, all edges are relaxed. It is easy to check that if \(G\) contains a negative cycle, then at least one edge in that cycle cannot be relaxed. It follows that the algorithm does not terminate whenever \(G\) contains a negative cycle.
Instead, assume that \(G\) does not contain a negative cycle. We claim that the algorithm has correctly computed all distances. First, recall that throughout we have \(d[v]\geq\operatorname{dist}_{G}(s,v)\). Consider any shortest \(s\)-\(v\)-path \(P\); we prove that \(d[v]=w(P)\) by induction on the length of \(P\). For \(|P|=0\), we have correctly set \(d[s]=0\) initially. (Note that \(\operatorname{dist}_{G}(s,s)\) cannot be negative as otherwise \(G\) would contain a negative cycle.) So assume that \(P\) is nonempty and that \(P\) can be written as \(P_{1}\,u\,v\). Then by induction \(d[u]=w(P_{1}\,u)=\operatorname{dist}_{G}(s,u)\). Since the edge \((u,v)\) is relaxed, we have that \(d[v]\leq d[u]+w(u,v)=w(P)=\operatorname{dist}_{G}(s,v)\). Recall that we also have \(d[v]\geq\operatorname{dist}_{G}(s,v)\) and therefore \(d[v]=\operatorname{dist}_{G}(s,v)\).
For us, the most relevant change in the proof is the running time analysis. Recall that \(\eta_{G}(v)\) denotes the minimum number of negative edges in a shortest \(s\)-\(v\)-path, and that \(\deg(v)\) denotes the out-degree of a vertex \(v\).
[Running Time of Algorithm 10] Assume that \(G\) does not contain a negative cycle. Then Algorithm 10 runs in time \(O(\sum_{v}(\deg(v)+\log\log n)\eta_{G}(v))\).
Proof.: Consider a single iteration of the algorithm. Letting \(A\) denote the state of the set \(A\) at the end of (Dijkstra's phase of) the iteration, the running time of the whole iteration can be bounded by:
\[O\left(\sum_{v\in A}(\deg(v)+\log\log n)\right).\]
Indeed, in the Dijkstra phase, in each iteration we spend time \(O(\log\log n)\) for deleting an element from the queue (by Thorup's priority queue), but for each such deletion in \(Q\) we add a new element to \(A\). Moreover, both in the Dijkstra phase and the Bellman-Ford phase we only enumerate edges starting from a vertex in \(A\), amounting to a total number of \(O(\sum_{v\in A}\deg(v))\) edges. The inner steps of the loops (in Lines 9 to 11 and Lines 14 to 16) run in constant time each (again by Thorup's priority queue).
Let us write \(A_{i}\) for the state of \(A\) in the \(i\)-th iteration. Then the total running time is
\[O\left(\sum_{i=1}^{\infty}\sum_{v\in A_{i}}(\deg(v)+\log\log n)\right)=O\left( \sum_{v\in V}\left|\left\{\,i:v\in A_{i}\,\right\}\right|\cdot(\deg(v)+\log\log n )\right).\]
To complete the proof, it suffices to show that \(\left|\left\{\,i:v\in A_{i}\,\right\}\right|\leq\eta_{G}(v)\). To see this, we first observe that \(\operatorname{dist}_{\eta_{G}(v)+1}(v)=\operatorname{dist}_{\eta_{G}(v)+2}(v)=\cdots=\operatorname{dist}_{G}(s,v)\). Since, by the invariants above, we know that \(A_{i}=\left\{\,v:\operatorname{dist}_{i-1}(v)>\operatorname{dist}_{i}(v)\,\right\}\), it follows that \(v\) can only be contained in the sets \(A_{1},\ldots,A_{\eta_{G}(v)}\).
In combination, Lemmas 63 and 64 complete the proof of Lemma 25.
|
2307.04437 | HORTENSIA, a program package for the simulation of nonadiabatic
autoionization dynamics in molecules | We present a program package for the simulation of ultrafast
vibration-induced autoionization dynamics in molecular anions in the manifold
of the adiabatic anionic states and the discretized ionization continuum. This
program, called HORTENSIA ($\underline{Ho}$pping $\underline{r}$eal-time
$\underline{t}$rajectories for $\underline{e}$lectron-ejection by
$\underline{n}$onadiabatic $\underline{s}$elf-$\underline{i}$onization in
$\underline{a}$nions), is based on the nonadiabatic surface-hopping
methodology, wherein nuclei are propagated as an ensemble along classical
trajectories in the quantum-mechanical potential created by the electronic
density of the molecular system. The electronic Schr\"odinger equation is
numerically integrated along the trajectory, providing the time evolution of
electronic state coefficients, from which switching probabilities into discrete
electronic states are determined. In the case of a discretized continuum state,
this hopping event is interpreted as the ejection of an electron. The derived
diabatic and nonadiabatic couplings in the time-dependent electronic
Schr\"odinger equation are calculated from anionic and neutral wavefunctions
obtained from quantum chemical calculations with commercially available program
packages interfaced with our program.
Based on this methodology, we demonstrate the simulation of autoionization
electron kinetic energy spectra that are both time- and angle-resolved. In
addition, the program yields data that can be interpreted easily with respect
to geometric characteristics such as bonding distances and angles, which
facilitates the detection of molecular configurations important for the
autoionization process.
Moreover, useful extensions are included, namely generation tools for initial
conditions and input files as well as for the evaluation of output files both
through console commands and a graphical user interface. | Kevin Issler, Roland Mitrić, Jens Petersen | 2023-07-10T09:27:01Z | http://arxiv.org/abs/2307.04437v2 | HORTENSIIA, a program package for the simulation of nonadiabatic autoionization dynamics in molecules
###### Abstract
We present a program package for the simulation of ultrafast vibration-induced autoionization dynamics in molecular anions in the manifold of the adiabatic anionic states and the discretized ionization continuum. This program, called HORTENSIA (_Ho_pping _r_eal-time _t_rajectories for _e_lectron-ejection by _n_onadiabatic _s_elf-_i_onization in _a_nions), is based on the nonadiabatic surface-hopping methodology, wherein nuclei are propagated as an ensemble along classical trajectories in the quantum-mechanical potential created by the electronic density of the molecular system. The electronic Schrödinger equation is numerically integrated along the trajectory, providing the time evolution of electronic state coefficients, from which switching probabilities into discrete electronic states are determined. In the case of a discretized continuum state, this hopping event is interpreted as the ejection of an electron. The derived diabatic and nonadiabatic couplings in the time-dependent electronic Schrödinger equation are calculated from anionic and neutral wavefunctions obtained from quantum chemical calculations with commercially available program packages interfaced with our program.
Based on this methodology, we demonstrate the simulation of autoionization electron kinetic energy spectra that are both time- and angle-resolved. In addition, the program yields data that can be interpreted easily with respect to geometric characteristics such as bonding distances and angles, which facilitates the detection of molecular configurations important for the autoionization process.
Moreover, useful extensions are included, namely generation tools for initial conditions and input files as well as for the evaluation of output files both through console commands and a graphical user interface.
## I Introduction
After generation of a temporary molecular anion through electron attachment, there are three possible competing relaxation mechanisms.[1] These are a) radiative deactivation, assuming that there is a lower-lying anion state that is stable with respect to ionization; b) dissociative electron attachment, in which the captured electron induces geometric change in the molecule, resulting in fragmentation into more stable products, a neutral and an anionic subsystem; and lastly c) autoionization, in which the metastable state decays via electron ejection after a finite period of time. The process of dissociative electron attachment is observed for example in DNA, where capture of low-energy electrons leads to single and double strand breaks [2; 3], or in a variety of substances in nanoscale thin films [4]. Prominent examples for autoionization include excited dipole- and quadrupole-bound anions with binding energies slightly below the ionization threshold [5; 6; 7; 8], intermolecular Coulombic decay at the FADH\({}^{-}\) cofactor involved in DNA-photolesion repair [9] and autoionization induced by vibrational excitation in organic molecules [10; 11; 12; 13; 14; 15]. Generally, the finite lifetime of a metastable state with respect to autoionization can vary strongly, from only a few femtoseconds [16; 17] up to milliseconds [18; 16]. Recently, several experiments have provided insights into the dynamics of such processes in dipole- and quadrupole-bound organic anions on a (sub-)picosecond timescale. [19; 20; 21; 11; 15; 22]
Although the process of autoionization is well known and experimentally observed by a multitude of methods, as can be seen in the references given above, the theoretical description of autoionizing systems is challenging [23], especially if one is interested in the mechanistic details of the intricate ultrafast relaxation dynamics. Autoionization processes can follow different general mechanisms, depending on how energy is redistributed among the system's degrees of freedom. Besides a purely electronic variant, where already the electronic energy of the system lies above the ionization threshold and electron ejection may proceed via tunneling, there is also the possibility of a nonadiabatic mechanism in which rotational or vibrational energy of the nuclei is transformed into the kinetic energy of the ejected electron.
In the following, we focus on the case of vibrational autoionization. This process can thus be viewed as a nonadiabatic transition between a vibrationally excited bound N-electron system and continuum electronic states consisting of an N-1 electron molecular core and a free electron. Early theoretical treatments have focused on the computation of ionization rates [24; 25; 26] as well as on establishing propensity rules for the ionization transitions [27]. While a full dynamical treatment of vibrational autoionization is highly desirable, an entirely quantum-dynamical approach is computationally prohibitive. As an alternative, a mixed quantum-classical ansatz can be considered, further motivated by the success of this type of methodology in the description of bound-state nonadiabatic processes and the simulation of time-resolved spectroscopic signals. [28; 29; 30; 31; 32] Although to date there have been several implementations of mixed quantum-classical dynamics simulations for bound-state problems made publicly available [33; 34; 35],
no program addressing the simulation of vibration-induced autoionization processes has been published so far.
Therefore, in this work we present the program package implementing our approach to describe vibrational autoionization through quantum-classical dynamics in the framework of the surface-hopping methodology in the manifold of bound and continuum electronic states as described recently [36]. Therein, nuclear motion is considered classically, while the electronic system is treated quantum-mechanically. Nonadiabatic transitions between electronic states accompanied by change of the classical vibrational energy of the molecule describe the energy exchange between the two subsystems. With this program package and the underlying methodology, one is able to gain insight into the geometric and electronic evolution in the course of the autoionization process as well as to calculate the time-, energy- and angle-distribution of the generated free electrons, which serve as experimental observables for monitoring autoionization dynamics.
We illustrate our program on the example of the 2-cyanopyrrolide anion, which bears a dipole-bound excited state slightly below the electron detachment threshold, while the vibrationally excited states are metastable and decay via autoionization. [8]

In the following section a brief theoretical description of the method is given. In section III an overview of the actual implementation is provided. The subsequent section IV details performance-related issues, namely the quality of approximations in the theory and runtime and memory optimization within the program, as well as a dynamics simulation example for the 2-cyanopyrrolide anion. Finally, in section V a conclusion and outlook are given.
## II Theory
Our methodological framework is based on the surface-hopping procedure as proposed by Tully [37], in which the coupled electron-nuclear dynamics of molecular systems is approached in a quantum-classical fashion. Specifically, the nuclei are propagated classically according to Newton's equations of motion,
\[M\ddot{\mathbf{R}}(t)=\mathbf{F}_{i}(\mathbf{R}[t])\equiv-\nabla_{\mathbf{R }}E_{i}(\mathbf{R}[t]), \tag{1}\]
where the force \(\mathbf{F}_{i}(\mathbf{R}[t])\) is obtained as the negative gradient of the electronic potential energy surface (PES) \(E_{i}(\mathbf{R}[t])\). In the above equation, \(M\) denotes a diagonal matrix containing the nuclear masses. For an ensemble of initial conditions, this leads to trajectories \(\mathbf{R}(t)\) moving on the given PES. Simultaneously, the electronic time-dependent Schrödinger equation
\[i\hbar\dot{\Psi}(\mathbf{r};\mathbf{R}[t])=\hat{H}_{el}\Psi(\mathbf{r}; \mathbf{R}[t]), \tag{2}\]
with the electronic Hamiltonian \(\hat{H}_{el}\) is solved. The electronic wavefunction can be expanded into a set of orthonormal basis states, which in the case of autoionization includes bound states \(\Phi_{m^{\prime}}\) (denoted with a primed index) as well as continuum states \(\tilde{\Phi}_{m^{\prime\prime}}\) (denoted with a double-primed index):

\[\Psi\big{(}\mathbf{r},\mathbf{R}[t],t\big{)}=\sum_{m^{\prime}}c_{m^{\prime}}(t)\Phi_{m^{\prime}}\big{(}\mathbf{r},\mathbf{R}[t]\big{)}+\sum_{m^{\prime\prime}}\int d^{3}\mathbf{k}\;\tilde{c}_{m^{\prime\prime}}(\mathbf{k},t)\tilde{\Phi}_{m^{\prime\prime}}(\mathbf{k},\mathbf{r},\mathbf{R}[t]), \tag{3}\]

where \(\mathbf{k}\) denotes the continuously varying wave vector of the free electron, while \(m^{\prime\prime}\) is the quantum number of the remaining neutral state. We assume the wavefunctions \(\Phi_{m^{\prime}}\) and \(\tilde{\Phi}_{m^{\prime\prime}}\) to be single Slater determinants (ground state) or an expansion of singly excited Slater determinants (excited state). In the frame of the presented methodology we discretize the continuum states, leading to

\[\int d^{3}\mathbf{k}\,\tilde{c}_{m^{\prime\prime}}(\mathbf{k},t)\tilde{\Phi}_{m^{\prime\prime}}(\mathbf{k},\mathbf{r},\mathbf{R}[t])\approx\sum_{i}(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\tilde{c}_{m^{\prime\prime}}(\mathbf{k}_{i},t)(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\tilde{\Phi}_{m^{\prime\prime}}(\mathbf{k}_{i},\mathbf{r},\mathbf{R}[t])\approx\sum_{i}c_{m^{\prime\prime}}(\mathbf{k}_{i},t)\Phi_{m^{\prime\prime}}(\mathbf{k}_{i},\mathbf{r},\mathbf{R}[t]), \tag{4}\]

where \(\Delta\mathcal{V}_{k}\) denotes the volume element in \(k\)-space and the discretized and continuum state expansion coefficients are related according to \(c_{m^{\prime\prime}}(\mathbf{k}_{i},t)=(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\tilde{c}_{m^{\prime\prime}}(\mathbf{k}_{i},t)\), and analogously for the wavefunctions. The actual determination of the wave vectors and the implementation of the discretization procedure are explained in detail in the next section.
Insertion of Eq. (3) into the time-dependent Schrödinger equation (2), multiplication from the left by an eigenstate \(\langle\Phi_{n}|\) and evaluation of the arising terms leads to a set of coupled differential equations for the electronic state coefficients \(c_{n}\):

\[\dot{c}_{n}(t)=\sum_{m}\left[-\frac{i}{\hbar}H_{nm}(\mathbf{R}[t])-D_{nm}(\mathbf{R}[t])\right]c_{m}(t), \tag{5}\]

with the matrix elements of the electronic Hamiltonian \(H_{nm}=\langle\Phi_{n}|\hat{H}_{el}|\Phi_{m}\rangle\) and the nonadiabatic couplings \(D_{nm}=\langle\Phi_{n}|\dot{\Phi}_{m}\rangle=\dot{\mathbf{R}}\cdot\langle\Phi_{n}|\nabla_{\mathbf{R}}|\Phi_{m}\rangle\). These can be divided into separate expressions for the bound and continuum states, resulting in the diabatic and nonadiabatic couplings between two bound anion states,
\[H_{n^{\prime}m^{\prime}} =\langle\Phi_{n^{\prime}}|\hat{H}|\Phi_{m^{\prime}}\rangle \tag{6}\] \[D_{n^{\prime}m^{\prime}} =\langle\Phi_{n^{\prime}}|\dot{\Phi}_{m^{\prime}}\rangle\,, \tag{7}\]
and between a bound and a discretized continuum state,
\[H_{n^{\prime\prime}m^{\prime}}(\mathbf{k}_{i})=(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\,\langle\tilde{\Phi}_{n^{\prime\prime}}(\mathbf{k}_{i})|\hat{H}|\Phi_{m^{\prime}}\rangle \tag{8}\] \[D_{n^{\prime\prime}m^{\prime}}(\mathbf{k}_{i})=\langle\Phi_{n^{\prime\prime}}(\mathbf{k}_{i})|\dot{\Phi}_{m^{\prime}}\rangle=(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\,\langle\tilde{\Phi}_{n^{\prime\prime}}(\mathbf{k}_{i})|\dot{\Phi}_{m^{\prime}}\rangle\,. \tag{9}\]
In the above equations, the approximation to neglect the coupling terms between the continuum states has been introduced. The discretized continuum states consist of an antisymmetrized product of a bound \(N-1\) electron neutral state and a molecular scattering state of the free electron
\[\Phi_{n^{\prime\prime}}(\mathbf{k}_{i})=\mathcal{A}\left(\Phi_{n^{\prime\prime}} ^{(\mathbf{n})}\cdot\psi(\mathbf{k}_{i})\right). \tag{10}\]
The simplest approximation to the free electron states in the presence of a neutral molecular core are plane waves
\[\psi(\mathbf{k}_{i})\approx\mathcal{N}\mathrm{e}^{i\mathbf{k}_{i}\cdot\mathbf{r}} \tag{11}\]
with a normalization constant \(\mathcal{N}=(2\pi)^{-3/2}\) to satisfy the orthonormality demanded in Eq. (3). Since this function would be completely independent of the electronic and nuclear configuration of the molecular core, which is a strong simplification, the plane waves are orthogonalized with respect to the anion's molecular orbitals (MOs) \(\phi_{m}\) to include (at least to a certain degree) dependence on the molecular structure according to
\[\tilde{\psi}(\mathbf{k}_{i}) =(2\pi)^{-3/2}\mathcal{N}_{ortho}\left(\mathrm{e}^{i\mathbf{k}_{i}\cdot\mathbf{r}}-\sum_{m}^{\mathrm{occ}}\left\langle\phi_{m}|\mathrm{e}^{i\mathbf{k}_{i}\cdot\mathbf{r}}\right\rangle\phi_{m}\right)=\mathcal{N}_{ortho}\left(\psi(\mathbf{k}_{i})-\sum_{m}^{\mathrm{occ}}\left\langle\phi_{m}|\psi(\mathbf{k}_{i})\right\rangle\phi_{m}\right), \tag{12}\]
with the normalization constant
\[\mathcal{N}_{ortho}=\left(1-\sum_{m}^{\mathrm{occ}}\left|\left\langle\phi_{m }|\psi(\mathbf{k}_{i})\right\rangle\right|^{2}\right)^{-1/2} \tag{13}\]
arising from the orthogonalization.
Notably, the summation over \(m\) includes the occupied MOs in all 'relevant' Slater determinants of all considered electronic states, that is, all determinants which are needed to sufficiently represent the ground state and the full CIS wavefunction of the excited state. Beginning from the highest contribution to a wavefunction, determinants are included until a specific percentage or a user-adjusted maximum number of configurations per electronic state is reached (95 % / 10 configurations in the case of vinylidene [36]). Considering for now the special case where only the anion's ground state is included, the used MOs are simply the energetically lowest ones up to the highest-occupied molecular orbital (HOMO).
The overlap integral between a plane wave and an MO present in Eq. (13), \(\left\langle\phi_{m}|\psi(\mathbf{k}_{i})\right\rangle\), can be computed analytically by expanding the MO into the Gaussian atomic orbital (AO) basis, with the integral involving a single AO \(|\nu\rangle\) given by
\[\langle\nu|\psi(\mathbf{k})\rangle =(2\pi)^{-3/2}\int d^{3}\mathbf{r}\,\mathrm{e}^{i\mathbf{k}\cdot\mathbf{r}}\phi_{\nu}(\mathbf{r})=(2\alpha_{\nu})^{-3/2}\exp\left(i\mathbf{k}\cdot\mathbf{A}_{\nu}-\frac{k^{2}}{4\alpha_{\nu}}\right)\times\prod_{j=x,y,z}(-i\sqrt{4\alpha_{\nu}})^{-n_{\nu,j}}H_{n_{\nu,j}}\left(\frac{k_{j}}{\sqrt{4\alpha_{\nu}}}\right), \tag{14}\]
where the \(H_{n_{\nu,j}}\) are the Hermite polynomials of order \(n_{\nu,j}\).
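To make Eq. (14) concrete, the following is a minimal Python sketch of this overlap for a single unnormalized primitive Cartesian GTO; the function name and the omission of contraction coefficients and AO normalization are illustrative assumptions, not the program's actual routine.

```
import numpy as np
from scipy.special import eval_hermite

def pw_gto_overlap(k, A, alpha, n):
    """<nu|psi(k)> of Eq. (14) for an unnormalized primitive Cartesian
    GTO with center A, exponent alpha and angular momenta n = (nx, ny, nz).
    Contraction and normalization are omitted for brevity."""
    k = np.asarray(k, dtype=float)
    A = np.asarray(A, dtype=float)
    val = (2.0 * alpha) ** -1.5 \
        * np.exp(1j * np.dot(k, A) - np.dot(k, k) / (4.0 * alpha))
    for kj, nj in zip(k, n):
        val *= (-1j * np.sqrt(4.0 * alpha)) ** (-nj) \
             * eval_hermite(nj, kj / np.sqrt(4.0 * alpha))
    return val
```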
### Electronic coupling terms
There are anionic systems, for example the vinylidene anion [36], that do not support a bound excited state, in which case the consideration of only the ground state and the continuum in the process of autoionization is sufficient. Besides that, for example in molecules exhibiting dipole-bound excited states [8, 38, 39], several bound anionic states and the interaction among them are relevant as well. Nonetheless, to keep the formalism concise, if not noted otherwise we discuss in the following the electronic coupling terms for the special case of both anion and neutral molecule being in their respective electronic ground states, which in turn are represented by a single Slater determinant. The generalization to excited states and/or multideterminantal wavefunctions is straightforward. [39] We denote the bound anionic ground state wavefunction by \(|\Phi_{0}\rangle\) and the continuum wavefunctions by \(|\Phi_{i}\rangle\), the latter being constructed as an antisymmetrized product of the neutral ground state and a free electron state function with wave vector \(\mathbf{k}_{i}\), similar to Eq. (10).
#### ii.1.1 Diabatic couplings
In the case of two adiabatic bound anion states, the coupling matrix elements \(H_{n^{\prime}m^{\prime}}\) given in Eq. (6) yield zero for all \(n^{\prime}\neq m^{\prime}\) since these states are orthonormal eigenstates of the electronic Hamiltonian.
On the other hand, since in our methodology the bound and continuum state wavefunctions are constructed using separate quantum-chemical calculations for the anion and neutral, and the free electron wavefunction is taken as a plane wave, the continuum state functions are crude approximations to the actual adiabatic eigenfunctions of the electronic Hamiltonian for the \(N\)-electron system and therefore, diabatic couplings between the bound and continuum electronic states arise.
As elaborated in detail in Ref. [36], according to Eq. (8) and defining \(V_{i0}^{\mathrm{dia}}(\mathbf{k}_{i})\) as

\[H_{i0}(\mathbf{k}_{i})\equiv\left\langle\Phi_{i}|\hat{H}|\Phi_{0}\right\rangle\equiv(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\,V_{i0}^{\mathrm{dia}}(\mathbf{k}_{i}), \tag{15}\]
the diabatic coupling between a bound and a continuum state can be written in terms of the AO basis as
\[V_{i0}^{\mathrm{dia}}(\mathbf{k}_{i})=\sum_{\lambda\mu\nu}\Bigg{[}A_{\lambda\mu\nu}\Big{(}\left\langle\mathbf{k}_{i}\lambda||\mu\nu\right\rangle-\sum_{\sigma}B_{\sigma}\left\langle\sigma\lambda||\mu\nu\right\rangle\Big{)}+\tilde{A}_{\lambda\mu\nu}\Big{(}\left\langle\mathbf{k}_{i}\lambda|\mu\nu\right\rangle-\sum_{\sigma}B_{\sigma}\left\langle\sigma\lambda|\mu\nu\right\rangle\Big{)}\Bigg{]}. \tag{16}\]
In this formula the Greek letters denote the AO basis functions, \(\left\langle\mathbf{k}_{i}\lambda|\mu\nu\right\rangle\) is an electron-electron repulsion integral and \(\left\langle\mathbf{k}_{i}\lambda||\mu\nu\right\rangle=\left\langle\mathbf{k}_{i}\lambda|\mu\nu\right\rangle-\left\langle\mathbf{k}_{i}\lambda|\nu\mu\right\rangle\) its antisymmetrized variant. The prefactors \(A_{\lambda\mu\nu}\), \(\tilde{A}_{\lambda\mu\nu}\) and \(B_{\sigma}\) comprise AO expansion coefficients and overlap integrals and are defined as follows (assuming that the extra electron of the anion has \(\alpha\) spin):

\[A_{\lambda\mu\nu} =\sum_{n}^{\mathrm{occ},\alpha}\sum_{p<q}^{\mathrm{occ},\alpha}(-1)^{n+p+q-1}\det\mathbf{S}_{in,pq}\times\left(c_{\lambda}^{(n)}-\sum_{u}^{\mathrm{occ},\alpha}c_{\lambda}^{(u)}S_{un}\right)c_{\mu}^{(p)}c_{\nu}^{(q)} \tag{17}\] \[\tilde{A}_{\lambda\mu\nu} =\sum_{\bar{n}}^{\mathrm{occ},\beta}\sum_{p}^{\mathrm{occ},\alpha}\sum_{\bar{q}}^{\mathrm{occ},\beta}(-1)^{\bar{n}+p+\bar{q}-1}\det\mathbf{S}_{i\bar{n},p\bar{q}}\times\left(c_{\lambda}^{(\bar{n})}-\sum_{\bar{u}}^{\mathrm{occ},\beta}c_{\lambda}^{(\bar{u})}S_{\bar{u}\bar{n}}\right)c_{\mu}^{(p)}c_{\nu}^{(\bar{q})}\] (18) \[B_{\sigma} =\sum_{r}^{\mathrm{occ},\alpha}\sum_{\rho}c_{\sigma}^{(r)}c_{\rho}^{(r)}\left\langle\mathbf{k}_{i}|\rho\right\rangle, \tag{19}\]
where the indices (including their variants with an overbar) \(p,q,r\) refer to anion MOs, \(n,u\) to neutral MOs, and \(\det\mathbf{S}_{in,pq}\) denotes the minor determinant of the overlap matrix between continuum and bound state orbitals in which the rows of the free electron orbital \(\tilde{\psi}(\mathbf{k}_{i})\) and neutral orbital \(\chi_{n}\) as well as the columns of anion orbitals \(\phi_{p}\) and \(\phi_{q}\) have been deleted. For the full derivation of these equations the reader is referred to Ref. [36].
#### ii.1.2 Nonadiabatic couplings
The nonadiabatic coupling terms as defined in Eqs. (7) and (9) are calculated using the finite-difference approximation for the time derivative, which leads to
\[D_{i0}(t) =\langle\Phi_{i}(t)|\frac{d}{dt}\Phi_{0}(t)\rangle \tag{20}\] \[\approx\frac{1}{2\Delta t}\left(\langle\Phi_{i}(t-\Delta t)|\Phi_{0}(t)\rangle-\langle\Phi_{i}(t)|\Phi_{0}(t-\Delta t)\rangle\right). \tag{21}\]
In the case of two anionic bound states, these terms are evaluated according to Refs. [40, 41, 42].
One can simplify the arising terms by integrating over all but one electron coordinate. For the first term of Eq. (21) this yields
\[\langle\Phi_{i}(t^{\prime})|\Phi_{0}(t)\rangle=N^{-1/2}\left\langle\tilde{\psi}(\mathbf{k}_{i},t^{\prime})|\psi^{D}(t^{\prime},t)\right\rangle, \tag{22}\]

where we have abbreviated \(t^{\prime}=t-\Delta t\) and have defined the one-electron function \(\psi^{D}(t^{\prime},t)\), which is an analog of a molecular Dyson orbital with the \(N\)- and \(N-1\)-electron wavefunctions taken at different time steps and geometries. Using Eqs. (12) and (22) the resulting nonadiabatic coupling terms read
\[D_{i0}(\mathbf{k}_{i},t) =\frac{(\Delta\mathcal{V}_{k})^{\frac{1}{2}}\mathcal{N}_{ortho}}{2\sqrt{N}\Delta t}\Big{[}\left\langle\psi(\mathbf{k}_{i})|\psi^{D}(t^{\prime},t)\right\rangle-\left\langle\psi(\mathbf{k}_{i})|\psi^{D}(t,t^{\prime})\right\rangle-\sum_{n}\left\langle\psi(\mathbf{k}_{i})|\phi_{n}(t)\right\rangle\langle\phi_{n}(t^{\prime})|\psi^{D}(t^{\prime},t)\rangle+\sum_{n}\left\langle\psi(\mathbf{k}_{i})|\phi_{n}(t)\right\rangle\langle\phi_{n}(t)|\psi^{D}(t,t^{\prime})\rangle\Big{]}. \tag{23}\]
### Adiabatic ionization and electronic decay
The main focus of the above presented methodology lies on describing the nonadiabatic process of vibrational autoionization. However, in the course of the molecule's dynamical evolution instances can occur where the occupied anionic state becomes unbound as the result of changes in nuclear geometry. In this case, ionization is possible as an exclusively _adiabatic_ electronic process without coupling to the nuclear motion. This process can be included approximately in our method by simulating the temporal spread of the ejected electron as a wavepacket evolving freely in space. As a quantitative measure, the electronic spatial extent, i.e., the expectation value of \(\hat{\mathbf{r}}^{2}\), is calculated as a function of time.
Specifically, once a time step is reached where the vertical detachment energy (VDE) has become negative, the highest-occupied orbital of the last bound geometry, \(\phi(\mathbf{r},t_{0})\), is used as the initial free electronic wavepacket. In the case where one only considers the anionic ground state, this corresponds to the HOMO. If also an excited state is involved, natural transition orbitals (NTOs) [43] are calculated and the highest-occupied and lowest-unoccupied NTO (HONTO and LUNTO) are used for the anionic ground and excited state, respectively. Such an electronic wavepacket is then propagated in time and its spatial extent is evaluated according to
\[\left\langle\hat{\mathbf{r}}^{2}\right\rangle(t) =\left\langle\phi(\mathbf{r},t)|\hat{\mathbf{r}}^{2}|\phi(\mathbf{r},t)\right\rangle=\sum_{\mu\nu}c_{\mu}^{*}c_{\nu}\left\langle\phi_{\mu}(\mathbf{r},t)|\hat{\mathbf{r}}^{2}|\phi_{\nu}(\mathbf{r},t)\right\rangle. \tag{24}\]
Here \(\phi_{\mu,\nu}\) denote the Gaussian atomic basis functions freely propagated in time:
\[\phi_{\mu}(\mathbf{r},t)=\int d^{3}\mathbf{r}^{\prime}\,K(\mathbf{r},\mathbf{ r}^{\prime},t,0)\phi_{\mu}(\mathbf{r}^{\prime},0) \tag{25}\]
with the free electron propagator

\[K(\mathbf{r},\mathbf{r}^{\prime},t,0)=\left\langle\mathbf{r}\left|\mathrm{e}^{-\frac{i}{\hbar}\hat{T}t}\right|\mathbf{r}^{\prime}\right\rangle=\left(\frac{m_{e}}{2\pi i\hbar t}\right)^{3/2}\exp\left(\frac{im_{e}|\mathbf{r}-\mathbf{r}^{\prime}|^{2}}{2\hbar t}\right). \tag{26}\]

Applied to each Gaussian basis function, this yields an analytical expression for the freely evolving electronic wavepacket:
\[\phi_{\mu}(\mathbf{r},t) = N_{l_{x},l_{y},l_{z}}\,\mathrm{e}^{-\frac{\alpha}{1+i\beta t}|\mathbf{r}-\mathbf{A}|^{2}}\left[-\Lambda\frac{i\beta t}{2\alpha}(1+i\beta t)^{-\frac{5}{2}}+(1+i\beta t)^{-\frac{3}{2}-\sum_{j}l_{j}}\prod_{j=x,y,z}(r_{j}-A_{j})^{l_{j}}\right], \tag{27}\]
where \(\mathbf{A}\) is the spatial center of the respective basis function, \(\alpha\) its Gaussian exponent with the abbreviation \(\beta=2\hbar\alpha/m_{e}\), \(l_{i}\) denotes the angular momentum quantum number for the \(i\)'th spatial direction and \(\Lambda\) is a constant that is unity if one of the \(l_{i}=2\) and zero if all \(l_{i}<2\). The AO integrals in Eq. (24) are calculated with an implementation of the McMurchie-Davidson scheme [44]. To relate the spatial extent in a simple way to the lifetime of the unbound state, an auxiliary spherically symmetric electron distribution is considered which within the initially determined radius \(r_{0}=\sqrt{\left\langle\mathbf{r}^{2}\right\rangle(t_{0})}\) contains a probability of 99%. Subsequently, with \(\left\langle\mathbf{r}^{2}\right\rangle\) increasing with time, the probability within \(r_{0}\) decreases, giving rise to a population decay curve which can be related to a time constant \(\tau\). The latter is incorporated into the propagation of the electronic wavefunction given by Eq. (5) by adding an imaginary component to the electronic state energy,
\[E^{\mathrm{(a)}}\to E^{\mathrm{(a)}}-\frac{i\hbar}{2\tau}, \tag{28}\]
which leads to an exponential population decay due to adiabatic ionization in regions where the VDE is negative for the given electronic state.
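As a hedged illustration of how the spreading wavepacket can be turned into such a decay curve, the sketch below assumes the auxiliary electron distribution to be an isotropic Gaussian, for which the probability inside a radius \(r_{0}\) is analytic; this is a simplifying assumption for illustration, not the exact density used by the program.

```
import numpy as np
from scipy.special import erf

def p_inside(r0, r2):
    """Probability within radius r0 for an isotropic 3D Gaussian density
    with <r^2> = r2 (per-axis variance s^2 = r2 / 3)."""
    s = np.sqrt(r2 / 3.0)
    x = r0 / (np.sqrt(2.0) * s)
    return erf(x) - np.sqrt(2.0 / np.pi) * x * np.exp(-x * x)

# decay curve: evaluate p_inside(r0, <r^2>(t)) along the propagation and
# fit it to exp(-t / tau) to obtain the time constant entering Eq. (28)
```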
### Surface-hopping procedure
Solution of the set of Eqs. (5) along a nuclear trajectory yields the time-dependent electronic state coefficients \(c_{n}(t)\). Within the surface-hopping methodology, a switch from the occupied bound electronic state \(n\) to any other state \(m\) is determined by the hopping probability depending on the electronic state populations \(\rho_{nn}=|c_{n}|^{2}\), which is

\[P_{n\to m}=-\frac{\dot{\rho}_{nn}}{\rho_{nn}}\frac{\dot{\rho}_{mm}}{\sum_{k}\dot{\rho}_{kk}}\Delta t \tag{29}\]

for \(\dot{\rho}_{nn}<0\) and \(\dot{\rho}_{mm}>0\) and zero in any other instance. In the above expression, the sum over \(k\) includes all states with \(\dot{\rho}_{kk}>0\). In case a surface hop occurs, to ensure energy conservation the nuclear velocities are rescaled such that for kinetic energies \(T\) and electronic potential energies \(E_{n}\) of anion (a) and neutral (n) the following conditions are fulfilled:
\[T^{\prime\mathrm{(a)}}=T^{\mathrm{(a)}}+E^{\mathrm{(a)}}_{n}-E^{\mathrm{(a)}}_{m} \tag{30}\]
for a hop between anionic bound states and
\[T^{\prime\mathrm{(n)}}=E^{\mathrm{(a)}}_{n}+T^{\mathrm{(a)}}-E^{\mathrm{(n)}} _{m}-E_{\mathrm{el}}(\mathbf{k}_{i}) \tag{31}\]
for a hop into the continuum (i.e. autoionization). For a more detailed description of the hopping procedure the reader is referred to Ref. [45].
## III Program implementation
In the following section a detailed account of how the theory is actually implemented in the program package will be provided. For an easier understanding, the program flow is displayed schematically in Fig. 1, with a color code indicating the module handling the respective task.
Starting from the generation of an ensemble of nuclear coordinates \(\mathbf{R}(t)\) and velocities \(\dot{\mathbf{R}}(t)\) at the time \(t=t_{initial}\) using the wignerEnsemble module in the wigner folder (red), a first quantum-chemical calculation is performed by an external quantum-chemistry program - to date these include Gaussian09/Gaussian16 and QChem (blue) - which yields the forces from which the accelerations \(\ddot{\mathbf{R}}(t)\) of the nuclei are computed. The nuclei are then propagated by integration of Newton's equations of motion for one nuclear time step using the nuclearDynamics module (orange). With the new nuclear coordinates \(\mathbf{R}(t+\Delta t)\), a new set of quantum-chemical calculations can be performed, yielding the new energy gradients necessary for the evaluation of the velocities \(\dot{\mathbf{R}}(t+\Delta t)\). With the quantum-chemical calculations at \(t\) and \(t+\Delta t\), one is now able to construct the electronic continuum states as well as the coupling matrices of the diabatic and nonadiabatic couplings using the populationDynamics module (green). From this point, the electronic state coefficients \(\mathbf{c}(t)\) are propagated in parallel to the nuclear dynamics by integrating the electronic Schrödinger equation, yielding \(\mathbf{c}(t+\Delta t)\). These are utilized to compute hopping probabilities from the occupied bound state to all other (bound and continuum) states. The switching between the states is induced stochastically according to the respective hopping probabilities given in Eq. (29). After writing the results into the various output files, time is shifted to \(t=t+\Delta t\), thereby completing one time step.
To make this initial overview more specific, in the following the underlying algorithms are explained in more detail.
### Electronic structure calculation
All electronic structure and energy gradient calculations can be performed by using any Kohn-Sham (TD-)DFT level of theory provided within the Gaussian09, Gaussian16 or QChem program packages. The AO basis set needs to be defined explicitly in a separate input file, thus also allowing for additional augmentation of basis sets, which is of utmost importance when describing molecular anions. The handlerG09 and handlerQChem modules provide an interface to the external programs by creating input files and calling the respective programs. The dysonG09 and dysonQChem modules contain classes that parse the external output files and organize the data into the form needed in the program.
### Generation of initial conditions
The initial nuclear coordinates and velocities are determined by stochastic sampling of an appropriate probability distribution function for the harmonic normal modes of the
system. These can be computed from the electronic Hessian matrix at an optimized geometry of the studied molecule. For molecules in the vibrational ground state as well as for a thermal ensemble of molecules, the Wigner function
\[\rho_{W}(\{Q_{i},P_{i}\})=\frac{1}{(\pi\hbar)^{N}}\prod_{i=1}^{N}\alpha_{i}(T) \exp\left(-\frac{\alpha_{i}(T)}{\hbar\omega_{i}}(P_{i}^{2}+\omega_{i}^{2}Q_{i}^ {2})\right) \tag{32}\]
with
\[\alpha_{i}(T)=\tanh\left(\frac{\hbar\omega_{i}}{2k_{B}T}\right) \tag{33}\]
is employed, where \(\{Q_{i},P_{i}\}\) denote the normal coordinates and momenta, \(\omega_{i}\) is the angular frequency of normal mode \(\nu_{i}\) and \(T\) the thermodynamic temperature.
Besides these cases, in experiments investigating vibration-induced autoionization another type of initial conditions is often important, in which one or more normal vibrations of the system are excited by laser irradiation. In principle, the respective initial conditions could also be generated by using a Wigner function. However, Wigner functions for excited vibrational states can assume negative values and thus cannot be directly identified with a probability distribution. A possible approach might be to regard the positive and negative parts of the Wigner function separately as probability distributions and to run a "positive" and a "negative" ensemble of initial conditions, the final properties of the system then being obtained by appropriate averaging. As a more efficient alternative, which requires only a single ensemble, we employ a positive definite probability distribution constructed from the excited-vibrational-state wavefunctions in position and momentum space,
\[\rho_{\upsilon}^{(i)}(Q_{i},P_{i})=|\chi_{\upsilon}^{(i)}(Q_{i})|^{2}|\tilde{ \chi}_{\upsilon}^{(i)}(P_{i})|^{2}, \tag{34}\]
where \(\chi_{\upsilon}^{(i)}(Q_{i})\) and \(\tilde{\chi}_{\upsilon}^{(i)}(P_{i})\) are the harmonic oscillator wavefunctions for quantum state \(\upsilon\) of normal mode \(\nu_{i}\) in position and momentum space, respectively.
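A minimal Python sketch of both sampling schemes (in atomic units; function names and the rejection-sampling strategy are illustrative assumptions, not the package's internal routines): the thermal Wigner function of Eq. (32) is a Gaussian and can be sampled directly, while the \(\upsilon=1\) position distribution of Eq. (34) is drawn here by simple rejection sampling.

```
import numpy as np

hbar = 1.0  # atomic units

def sample_wigner(omega, T, n, kB=3.166811563e-6, rng=np.random.default_rng()):
    """Draw (Q, P) pairs for one normal mode from the thermal Wigner
    function of Eq. (32); T = 0 gives the vibrational ground state."""
    a = np.tanh(hbar * omega / (2.0 * kB * T)) if T > 0.0 else 1.0
    Q = rng.normal(0.0, np.sqrt(hbar / (2.0 * omega * a)), n)
    P = rng.normal(0.0, np.sqrt(hbar * omega / (2.0 * a)), n)
    return Q, P

def sample_v1_positions(omega, n, rng=np.random.default_rng()):
    """Rejection-sample Q from |chi_1(Q)|^2 (Eq. (34)); momenta are
    drawn analogously from |chi~_1(P)|^2."""
    norm = np.sqrt(omega / (np.pi * hbar))
    fmax = 2.0 * norm / np.e          # maximum of the v = 1 density
    out, i = np.empty(n), 0
    while i < n:
        q = rng.uniform(-6.0, 6.0) * np.sqrt(hbar / omega)
        f = 2.0 * norm * (omega / hbar) * q * q * np.exp(-omega * q * q / hbar)
        if rng.uniform(0.0, fmax) < f:
            out[i] = q
            i += 1
    return out
```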
### Nuclear dynamics
Given Newton's equations of motion (1), the nuclei are propagated by numerical solution using the velocity Verlet algorithm [49] with a user-defined time step. Within this algorithm, the nuclear coordinates at \(t+\Delta t\) are obtained from a Taylor series expansion around the coordinates at \(t\):
\[\mathbf{R}(t+\Delta t)\approx\mathbf{R}(t)+\dot{\mathbf{R}}(t)\Delta t+\frac {1}{2}M^{-1}\mathbf{F}(t)\Delta t^{2}, \tag{35}\]
where in the last term the acceleration has been formulated using the force \(\mathbf{F}\) given by the electronic potential energy gradient (cf. Eq. (1)). With the new nuclear coordinates, the
Figure 1: Schematic of the dynamics procedure as implemented in the HORTENSIA program package. The box coloration matches the specific tasks to the program modules as follows: red: wigner/wignerEnsemble.py, blue: external quantum-chemistry program, orange: nuclearDynamics.py, green: populationDynamics.py
force at \(t+\Delta t\) can be evaluated, giving rise to the new nuclear velocities
\[\dot{\mathbf{R}}(t+\Delta t)=\dot{\mathbf{R}}(t)+\frac{\Delta t}{2}M^{-1}\left[ \mathbf{F}(t)+\mathbf{F}(t+\Delta t)\right]. \tag{36}\]
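In code, one nuclear propagation step of Eqs. (35) and (36) reduces to a few lines; in this minimal sketch, force_fn stands in for the external quantum-chemical gradient call and is a hypothetical placeholder.

```
import numpy as np

def velocity_verlet_step(R, V, F, masses, dt, force_fn):
    """One nuclear time step according to Eqs. (35)-(36).
    R, V, F: (n_atoms, 3) arrays; masses: (n_atoms,) array."""
    Minv = 1.0 / masses[:, None]
    R_new = R + V * dt + 0.5 * Minv * F * dt ** 2   # Eq. (35)
    F_new = force_fn(R_new)                          # gradients at t + dt
    V_new = V + 0.5 * dt * Minv * (F + F_new)        # Eq. (36)
    return R_new, V_new, F_new
```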
Due to the approximate nature of the algorithm above and the finite accuracy of the calculated energy gradients, it is possible that the velocities develop small overall translational or rotational components although the initial conditions were determined with these degrees of freedom set at rest. These numerical inaccuracies are detected, in the case of translational velocity by the shift of the center of mass away from the origin of the coordinate system, in the case of rotation by the calculation of the angular velocity according to
\[\boldsymbol{\omega}_{rot}=I^{-1}\mathbf{L} \tag{37}\]
with the moment of inertia \(I\) and the angular momentum \(\mathbf{L}\). The translational and rotational portions of the nuclear velocities are then subtracted from the total velocity and the remaining vibrational part is rescaled to ensure energy conservation.
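A sketch of the rotation removal via Eq. (37) in Python (the translational correction proceeds analogously from the center-of-mass velocity; the subsequent energy-conserving rescaling mentioned above is omitted here):

```
import numpy as np

def remove_rotation(R, V, masses):
    """Subtract the overall rotational component omega x r from the
    velocities, with omega = I^{-1} L as in Eq. (37)."""
    com = np.average(R, axis=0, weights=masses)
    r = R - com
    L = np.sum(masses[:, None] * np.cross(r, V), axis=0)  # angular momentum
    I = np.zeros((3, 3))
    for mi, ri in zip(masses, r):
        I += mi * (np.dot(ri, ri) * np.eye(3) - np.outer(ri, ri))
    omega = np.linalg.solve(I, L)                         # Eq. (37)
    return V - np.cross(omega, r)
```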
After each nuclear dynamics step, the new nuclear coordinates and velocities are written into separate output files, the coordinates in a format of consecutive xyz files which can be visualized easily by external software (for example with the VMD program package [50], which is warmly recommended).
### Electronic dynamics
Since the evaluation of the electronic coupling terms in Eq. (5) is, apart from the external quantum-chemistry calculations, the computationally most expensive step in the dynamics, several approximations need to be implemented, which will be discussed in the following.
#### iii.4.1 Calculation of coupling terms
Before calculating the coupling terms, the discretization procedure for the generation of wave vectors needed to construct the continuum state wavefunctions will be discussed. To uniformly discretize angular orientation and kinetic energy of ejected electrons, it is natural to discretize angular and energetic distribution separately. Since the kinetic energy of a plane wave is
\[E_{kin}(\mathbf{k}_{i})=\frac{\hbar^{2}|\mathbf{k}_{i}|^{2}}{2m_{e}} \tag{38}\]
and therefore proportional to the squared length of the wave vector, this length is discretized such that the desired energy range is covered evenly. For a given energy, the vector orientations are approximately evenly distributed according to the Fibonacci sphere algorithm [51]. The volume elements \(\Delta\mathcal{V}_{k}\) needed for calculating the bound-continuum couplings in Eqs. (8) and (9) are constructed as the difference of spherical caps around the corresponding wave vectors with the base diameter as an average over the six nearest points on the sphere surrounding the vector.
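A sketch of the wave-vector generation under these conventions (uniform energy spacing, Fibonacci-sphere orientations; atomic units, function names illustrative):

```
import numpy as np

def fibonacci_sphere(n_s):
    """n_s approximately uniformly distributed unit vectors."""
    i = np.arange(n_s)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - (2.0 * i + 1.0) / n_s               # uniform in cos(theta)
    rho = np.sqrt(1.0 - z * z)
    return np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)

def k_vectors(E_max, n_E, n_s):
    """Wave vectors covering kinetic energies evenly up to E_max (Hartree),
    cf. Eq. (38) with hbar = m_e = 1."""
    E = np.linspace(E_max / n_E, E_max, n_E)
    k_len = np.sqrt(2.0 * E)
    dirs = fibonacci_sphere(n_s)
    return (k_len[:, None, None] * dirs[None, :, :]).reshape(-1, 3)
```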
In the diabatic coupling terms in the AO basis (Eq. (16)) two types of four-center integrals are present: (i) those involving four Gaussian-type atomic orbitals (GTOs), \(\langle\sigma\lambda|\mu\nu\rangle\). These are evaluated by using the libcint library [52] within the PySCF program package [53; 54]. (ii) integrals involving a plane wave of wave vector \(\mathbf{k}_{i}\) and three GTOs, \(\langle\mathbf{k}_{i}\lambda|\mu\nu\rangle\). These terms can in principle be calculated analytically as outlined, e.g., in Ref. [55], but this is computationally unfeasible for the present purpose since an immense number of plane waves has to be included for a proper discretization of the ionization continuum. Instead, the plane waves are approximated by their Taylor expansion around the center of basis function \(|\mu\rangle\), \(\mathbf{R}_{\mu}\). As will be discussed in the Performance Section later on, for sufficient accuracy in the approximation it is necessary to include not only the zero'th order term (assuming the plane wave to be constant in the vicinity of the molecule), but also the first-order term, resulting in the approximation
\[\mathrm{e}^{i\mathbf{k}\cdot\mathbf{r}} =\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}_{\mu}}\mathrm{e}^{i\mathbf{k}\cdot(\mathbf{r}-\mathbf{R}_{\mu})}\approx\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}_{\mu}}\left[1+i\mathbf{k}\cdot(\mathbf{r}-\mathbf{R}_{\mu})\right]. \tag{39}\]
This leads to two terms for the two-electron integrals as follows:
\[\langle\mathbf{k}_{i}\lambda|\mu\nu\rangle\approx\mathrm{e}^{i\mathbf{k}_{i}\cdot\mathbf{R}_{\mu}}\left[\langle\lambda|\mu\nu\rangle+i\mathbf{k}_{i}\cdot\langle\lambda|\hat{\mu}\nu\rangle\right]. \tag{40}\]
In the above expression, \(|\hat{\mu}\rangle\) is an AO basis function with an angular momentum quantum number one higher than that of \(|\mu\rangle\) while having the same Gaussian exponent. This heavily reduces the number of two-electron integrals to be computed from \(n_{AO}^{3}n_{PW}\) to \(n_{AO}^{2}\left[n_{AO}+n_{AO}^{\prime}\right]\), with \(n_{AO}\) being the total number of AO basis functions, \(n_{AO}^{\prime}\) the total number of basis functions with increased quantum number and \(n_{PW}\) the total number of plane waves. For instance, in the case of vinylidene in Ref. [36], this amounts to a reduction by a factor of \(\sim\)30000. These terms are again evaluated using the PySCF module. The prefactors \(A\), \(\tilde{A}\) and \(B\) present in Eq. (16) are straightforwardly implemented in Python according to Eqs. (17), (18) and (19). Evaluation of the Dyson orbitals needed for the calculation of the nonadiabatic couplings is implemented as described before in Ref. [28] for arbitrary basis sets for the anion and the neutral molecule. After construction of the Dyson orbitals from all bound anionic states to the neutral ground state, the nonadiabatic coupling terms are calculated according to Eq. (23). To ensure that the wavefunctions of bound states do not switch their arbitrary signs (which can happen, since the external quantum-chemistry calculations are independent of each other), the overlaps of the electronic wavefunctions of all bound states are tracked throughout the trajectories and accounted for in all formulae involving the respective states.
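The quoted reduction in integral count is easy to verify with a small illustrative helper; the AO and plane-wave numbers below are taken from the vinylidene example in the text, while the value of \(n_{AO}^{\prime}\) is an assumed example, not a figure from the program.

```
def integral_counts(n_ao, n_ao_raised, n_pw):
    """Two-electron integral counts without and with the linear
    plane-wave approximation of Eq. (40)."""
    exact = n_ao ** 3 * n_pw
    approx = n_ao ** 2 * (n_ao + n_ao_raised)
    return exact, approx, exact / approx

# 146 AOs and 96000 plane waves as for vinylidene; with an assumed
# n_ao_raised of ~320 the reduction factor is of the order of 3 * 10^4
print(integral_counts(146, 320, 96000))
```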
#### iii.4.2 Calculation of electronic state coefficients
The electronic degrees of freedom are propagated by solving the time-dependent Schrödinger equation (5) in the manifold of all considered bound anion and continuum electronic states using Adams' method as implemented in the ode class of Python's scipy.integrate module [56] with a user-defined integration time step. For increased computational stability the equations are beforehand transformed into the interaction picture, introducing the new electronic state coefficients
\[a_{n}(t)=c_{n}(t)\;\mathrm{e}^{\frac{i}{\hbar}H_{nn}t}. \tag{41}\]
Inserting this into Eq. (5) leads to the actually implemented electronic equation of motion
\[\dot{a}_{n}(t)=\sum_{m}\left[-\frac{i}{\hbar}\tilde{H}_{nm}-D_{nm}\right]a_{m}(t)\,\mathrm{e}^{-\frac{i}{\hbar}(H_{mm}-H_{nn})t}, \tag{42}\]
where \(\tilde{H}_{nm}\) denotes the Hamiltonian matrix of the system with zeros on the diagonal.
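A stripped-down sketch of this propagation step, with the couplings held constant over the electronic time step and \(\hbar=1\) ('zvode' is scipy's complex-valued solver offering an Adams option; the function name is illustrative):

```
import numpy as np
from scipy.integrate import ode

def propagate_coefficients(a0, H_offdiag, D, E, t0, dt_el, n_steps):
    """Integrate Eq. (42): H_offdiag is the Hamiltonian with zeroed
    diagonal, D the nonadiabatic couplings, E the diagonal energies H_nn."""
    def rhs(t, a):
        phase = np.exp(-1j * (E[None, :] - E[:, None]) * t)
        return ((-1j * H_offdiag - D) * phase) @ a
    solver = ode(rhs).set_integrator('zvode', method='adams')
    solver.set_initial_value(a0, t0)
    for _ in range(n_steps):
        solver.integrate(solver.t + dt_el)
    return solver.y
```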
#### iii.4.3 Hopping procedure
Hopping probabilities are directly evaluated according to Eq. (29) from the state coefficients: a random number between 0 and 1 is generated using the random function in the numpy.random module and hopping is conducted accordingly. Once a trajectory hops into a continuum state, it could in principle be straightforwardly continued on the potential energy surface of the neutral molecule. Although this can be quite insightful if one is interested in the subsequent geometric changes of the ionized system, we follow a different approach and stop the trajectories after electron detachment, since our focus is set on the actual autoionization process. This allows us to implement a modification of the surface-hopping procedure that leads to a great improvement of the hopping statistics. The idea is to divide a single trajectory into 'sub-trajectories', i.e. to evaluate if a trajectory hops a number \(n_{subtraj}\) of times (see Fig. 1). Every time a sub-trajectory hops into the continuum, \(n_{subtraj}\) is reduced by one, and once it reaches zero the underlying nuclear dynamics is stopped. It has to be noted that this procedure is only followed for hops into the continuum, while for hops between bound anionic states only a single hopping event per trajectory and time step is possible due to the need to continue the nuclear dynamics on an unambiguously determined potential energy surface.
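The stochastic selection according to Eq. (29) can be sketched as follows; the populations rho and their time derivatives rho_dot are assumed given, and the sub-trajectory bookkeeping is omitted for brevity.

```
import numpy as np

def select_hop(n, rho, rho_dot, dt, rng=np.random.default_rng()):
    """Return the new state index, or n if no hop occurs (Eq. (29))."""
    gain = np.where(rho_dot > 0.0, rho_dot, 0.0)
    if rho_dot[n] >= 0.0 or gain.sum() == 0.0:
        return n
    P = -(rho_dot[n] / rho[n]) * gain / gain.sum() * dt
    P[n] = 0.0
    xi, acc = rng.uniform(), 0.0
    for m, p in enumerate(P):
        acc += p
        if xi < acc:
            return m
    return n
```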
### Graphical user interface
Our program package comes with a graphical user interface (GUI) for the input generation as well as an analysis tool for trajectories. An example of the former is displayed in Fig. 2. In the input generator, which is started with
```
$ hortensia --gui
```
in addition to all relevant settings for the actual simulation, the user may find options for the generation of a complete folder structure for the trajectories as well as bash submit scripts to be used with the Slurm Workload Manager[57]. Furthermore, the above mentioned Wigner ensemble scripts can be used and initial conditions can be generated. Therefore it is highly recommended to use the GUI feature.
Additionally, through the command
```
$ hortensia --analysis
```
one can open the analysis tool which is able to read output files and visualize them in a sub-window using the matplotlib program package[58].
### Installation
The most convenient way to install the program package is downloading or cloning the repository from our GitHub page [59]. In the main folder, execute
```
$ python setup.py build_ext --inplace
$ pip install .
```
to first compile the Cython modules and then install the program. The program package requires (and will automatically pip install)
* python >= 3.8
* cython - for faster summation of large arrays, mainly in the calculation of the two-electron integrals in Eqs. (16) and (40)
* scipy - mainly for the integration of the electronic Schrödinger equation as outlined in subsection III.4.2
* pyscf - for the calculation of the two-electron integrals in Eqs. (16) and (40)
* joblib - for the parallelization of the diabatic couplings
* matplotlib - for the plots in the sub-window of the analysis tool described before
and all dependencies thereof. Using the command

```
$ pip uninstall hortensia_latest
```

the program package can be removed again.
## IV Discussion
In this section we will quantify aspects of the program related to overall performance. This includes the quality of approximations within the methodology as well as the optimization of time consumption and computational resources. Moreover, the exemplary autoionization dynamics of the 2-cyanopyrrolide anion is discussed.
### Accuracy of k-vector discretization and integral approximations
The accuracy of the Fibonacci sphere algorithm for angular discretization in \(k\)-space is illustrated in Fig. 3 by the covered surface area of a unit sphere using a given number of distributed points. The total surface area (orange graph) is presented with the relative error \(|A_{\text{fib}}-A_{\text{sphere}}|/A_{\text{sphere}}\) (green graph) to the exact surface area \(4\pi\approx 12.566\) (blue line). The
approximated area rapidly converges to a value of \(\sim\)12.243, which corresponds to a relative error of \(\sim\)2.575 %. Since in the coverage of k-vector lengths no additional approximation is introduced and for their respective volume elements the k-space is divided energetically evenly (thus covered exactly with respect to vector length), the error in the surface area for specific vector lengths equates to the overall error of the volume elements. Therefore the sum of these volume elements results in a total volume that deviates by less than 3 % from the actual sphere for arbitrary numbers of vector orientations \(n_{s}\geq 30\) and lengths \(n_{E}\) (giving a total number of wave vectors \(n_{k}=n_{E}\cdot n_{s}\)).
The approximation of the plane wave by the first terms of its Taylor expansion as introduced in Eq. (40) relies on the assumption that the amplitude of the plane wave only changes marginally within the extent of the AOs. Fig. 4 shows a comparison between the approximation with linear terms, an even simpler constant-wave approximation where \(\mathrm{e}^{\mathrm{i}\mathbf{kr}}\approx\mathrm{e}^{\mathrm{i}\mathbf{k} \mathbf{R}}\mu\) and the exact integrals for selected plane wave vectors for 2-cyanopyrrolide, which serves as an example molecule illustrating the applicability of the program (see section IV.3 below). Two error measures are compared: a relative value of average deviations (\(\epsilon_{1}\)) in Fig. 4a) and an average value of relative deviations (\(\epsilon_{2}\)) in Fig. 4b), which differ insofar as in \(\epsilon_{1}\), the deviations between exact and approximate integrals are averaged first and then divided by the overall average value of the exact integrals, while in \(\epsilon_{2}\), first for each individual integral the relative error is computed, followed by averaging the results. The averages are reported for three illustrative plane wave energies and grouped according to the Gaussian exponent of the basis function sharing its electron coordinate with the plane wave as "core", "valence" and "diffuse" with decreasing size of the exponent (for details see Fig. 4). Overall, it becomes evident that for both error measures, the linear approximation of the plane wave is clearly superior to the
Figure 3: Comparison of the actual surface area of a unit sphere (\(A_{\mathrm{sphere}}=4\pi\), blue line) and the approximated surface area as described in subsection III.4.1 for up to \(10^{4}\) vector orientations (orange). The relative error is given in green.
Figure 2: Left: example page of the graphical input generation tool; right: output analysis GUI with the electronic state coefficients of a single, exemplary 2-cyanopyrrolide trajectory. The molecular structure is only an illustrative image, which was created using the VMD program, and its creation is not part of the presented program.
constant approximation. The values of \(\epsilon_{1}\) are always much smaller than those of \(\epsilon_{2}\), which is due to the fact that the relative errors of smaller integrals tend to be larger than those of larger ones, and the definition of \(\epsilon_{1}\) partially compensates for this fact. Errors larger than a few percent only occur for \(\epsilon_{2}\) calculated for diffuse basis functions at larger plane wave energies. Since in the actual computations, the approximate integrals are employed to calculate the diabatic couplings and for this, the sum over the entire basis set is taken (cf. Eq. (16)), especially the smallness of error \(\epsilon_{1}\) encourages the use of the linear approximation.
### Optimization of program performance
Where computationally advantageous, we separate the time-dependent and time-independent parts of the underlying equations and pre-calculate the time-independent terms at the beginning of the simulation. This results in a higher overall memory usage, albeit of only several hundred MB to a few GB (depending on the molecular system), but leads to significant time savings, a desirable trade-off when calculating on CPU clusters, although it may limit the use of the program on single desktop computers.
Furthermore, for increased performance the summation over the four-center integrals in terms 2 and 4 on the right side of Eq. (16) is implemented as follows: one first pre-calculates the terms \(A\), \(\tilde{A}\) and \(B\) given in Eqs. (17)-(19) for all AOs. Then the calculation of the four-center integrals using the PySCF program package is divided into \(n_{proc}\) smaller batches, \(n_{proc}\) being the user-defined number of processors, which are evaluated in parallel utilizing the joblib [60] library; the explicit summation over all AO combinations is implemented in a Cython [61] module. This reduces the memory usage by avoiding massive intermediate arrays while also improving the runtime of this bottleneck through parallelization.
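The pattern just described can be sketched as follows; shell_batches, prefactors and the integral routine eri_batch are illustrative placeholders for the program's internal quantities, not its actual API.

```
import numpy as np
from joblib import Parallel, delayed

def batch_contribution(prefactors, integrals):
    # explicit contraction of the precomputed prefactors with one
    # batch of two-electron integrals (cf. Eq. (16))
    return np.einsum('lmn,lmn->', prefactors, integrals)

def diabatic_coupling_sum(shell_batches, prefactors, eri_batch, n_proc):
    parts = Parallel(n_jobs=n_proc)(
        delayed(batch_contribution)(prefactors[b], eri_batch(b))
        for b in shell_batches)
    return sum(parts)
```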
Together with the calculation of coupling terms, the most time-consuming step of the simulation is the two external quantum-chemical calculations needed in each time step. There are a few options to improve the performance of these calculations, the easiest of which are to increase the number of utilized processors and to reduce convergence time by loading the results of the last time step as an initial guess for the new calculation. Another possibility is in the choice of basis sets. Finding a basis set for anions prone to autoionization can be challenging due to the small ionization energies and the diffusivity of the states that comes with it. Therefore one has to consider basis sets augmented with enough diffuse basis functions to reasonably describe the properties of the system [23]. Although popular basis sets such as doubly and triply augmented Dunning-style basis sets (daug-cc-pVDZ, taug-cc-pVDZ) are (generally speaking) a potentially good choice for the description of loosely-bound anions, the size of these basis sets is computationally prohibitive if one aims to run dynamics simulations and therefore thousands of consecutive quantum-chemical calculations. A good alternative can be the usage of smaller basis sets (such as 6-311++G**[62, 63]) augmented with additional diffuse functions generated by geometric progressions of the Gaussian exponents as outlined in Ref. [48].
Considering the overall time consumption, no real performance benchmarks exist with which to compare our program package, since the theory behind it is rather novel. Therefore we will briefly discuss the specific case of vinylidene from our work presented in Ref. [36] and the 2-cyanopyrrolide example discussed in detail in section IV.3 below.
The vinylidene dynamics was performed for a total time of 3 ps in 15000 nuclear dynamics time steps at the \(\omega\)B97XD [64]/d-aug-cc-pVDZ level of theory, which comprises 146 primitive Gaussian basis functions and 96000 plane waves, amounting to \(\sim\)460 million 2-electron integrals per time step to be solved (cf. Eqs. (16) and (40)). Using 6 Intel Xeon E5-2660 (v3) processors per trajectory, the average computation time was around 11 days and 14 hours with a peak memory usage of \(\sim\)9 GB. Of the total time, around 5 days (or 43 %) were needed for the external quantum-chemistry calculations with the Gaussian09 program package. It also has to be noted that of the remaining time \(\sim\)30 % can
Figure 4: Errors (in %) of hybrid Gaussian-plane wave electron repulsion integrals \(\langle\mathbf{k}_{i}\lambda|\mu\nu\rangle\) for 2-cyanopyrrolide employing the 6-311++G** +3s2p basis set. The molecular structure has been optimized in the dipole-bound excited state at the TDDFT/\(\omega\)B97XD level using the same basis set. Two types of error measures are reported: a) \(\epsilon_{1}=\langle|I_{ex}-I_{ap}|\rangle/\langle|I_{ex}|\rangle\) and b) \(\epsilon_{2}=\langle|I_{ex}-I_{ap}|/|I_{ex}|\rangle\), where \(I_{ex}\) denotes the exact integral and \(I_{ap}\) the approximate value either according to Eq. (40) (linear, red bars) or assuming the plane wave to be constant (orange bars). For \(\epsilon_{2}\) the average has been computed for all integrals with \(|I_{ex}|>10^{-16}E_{H}\). To compute the averages, the integrals are grouped according to the exponent \(\alpha\) of basis function \(\mu\) as core (\(\alpha>10\,a_{0}^{-2}\)), valence (\(0.1\,a_{0}^{-2}<\alpha<10\,a_{0}^{-2}\)), and diffuse (\(\alpha<0.1\,a_{0}^{-2}\)). For each plane wave energy (\(E_{1}=0.0015\) eV, \(E_{2}=0.1\) eV, \(E_{3}=0.2\) eV), the average has been taken over all distinct integrals provided by the basis set as well as over 24 different k-vectors corresponding to the direction vectors of the vertices of a snub cube. In a), the core and valence error bars are multiplied by a factor of 100 and 10, respectively, to enhance visibility.
be attributed to the calculation of the 2-electron integrals in the diabatic couplings running on a single processor, which has since been parallelized for improved performance.
In the simulation of the 2-cyanopyrrolide dynamics, the 6-311++G** +3s2p basis set comprises 297 primitive Gaussian basis functions, which results in \(\sim\)7.8 billion 2-electron integrals to be summed over per time step. The average computation time amounted to 13 days and 12 hours on 10 Intel Xeon E5-2660 (v3) processors per trajectory, for a total time of 200 fs in 1000 nuclear time steps. The inclusion of an excited state leads to a massive increase in time consumption in the quantum-chemical calculations (which account for \(\sim\)47 % or 6.4 days of the total computation time) as well as in the evaluation of the diabatic couplings, where the summation of all integral terms (cf. Eq. (16)) is now also conducted on 10 processors using the joblib module.
### Illustrative example: Autoionization of the 2-cyanopyrrolide anion
To illustrate the scope of our program, we simulated the vibration-induced autoionization dynamics of the example anion 2-cyanopyrrolide. Experimentally, this molecule was measured to have an adiabatic electron affinity of 3.0981 eV and possesses a Rydberg-s type dipole-bound state 29.8 meV below the ionization threshold.[8] As can be seen in Table 1, which compares several quantum-chemistry methods and basis sets with the data measured by Wang _et al._, the experimental data is reproduced quite well using the \(\omega\)B97XD functional and large, diffuse basis sets such as triply augmented pVDZ/pVTZ. Moreover, although the description of the molecule with standard Pople-type basis sets is fairly inaccurate, further augmentation with extra diffuse basis functions (see Ref. [48]), in this case placed on the nitrogen atoms, also leads to good agreement with the experimental values. At the same time this approach retains a significantly smaller total number of basis functions, therefore keeping computational effort manageable. Fig. 5 shows the HONTO and LUNTO, visualizing the spatial distribution of the excess electron in the ground and excited state at the optimized geometry of the dipole-bound first excited state employing the \(\omega\)B97XD functional and the 6-311++G** basis set augmented with three diffuse s- and two diffuse p-functions on each nitrogen atom (henceforth abbreviated as 6-311++G** + 3s2p). The shape of the excess electron's probability distribution in the dipole-bound state is of s-type, showing that employing additional higher polarization functions (d-/f-type) would lead to no further improvement in the description of the system. This is in complete agreement with a dipole moment of the neutral species of 5.02 D, well below the second critical dipole moment of \(\sim\)10 D needed for the binding of an electron in a p-type orbital,[38] consequently resulting in an s-type distribution centered around the positive end of the molecular dipole vector.
Using the 6-311++G** + 3s2p basis set with the \(\omega\)B97XD functional, we simulated the vibration-induced autoionization dynamics in the first excited state with the normal mode at 946 cm\({}^{-1}\) of A' symmetry (\(\nu_{11}\) when sorted by increasing mode energy irrespective of symmetry) excited by one vibrational quantum. The initial conditions were generated as described in subsection III.2. Mode \(\nu_{11}\) involves a symmetric stretching of the C-H bonds at carbon atoms 4 and 5 as well as a ring breathing motion affecting mostly the ring N and carbon 3. The numbering of atoms is provided in Fig. 6a), which illustrates the resulting set of initial conditions by the superposition of all initial structures. In Fig. 6b), the distance between the ring nitrogen and the carbon 3 is depicted, which exhibits a bimodal distribution typical for an excited vibrational state. The particular choice of vibrational excitation corresponds to the experimentally observed resonance 7 of the
Figure 5: HONTO and LUNTO of 2-cyanopyrrolide at the optimized geometry of the dipole-bound first excited state at the \(\omega\)B97XD/ 6-311++G** + 3s2p level of theory with an isovalue of 0.003.
Figure 6: a) Overlay of all initial molecular structures used in the dynamics simulation of 2-cyanopyrrolide, b) distribution for 10000 initial conditions as a function of distance (in Å) between the nitrogen (1) and carbon (3) atom as marked in a), showing a bimodal structure.
photodetachment spectrum in Ref. [8]. The simulation was carried out propagating an ensemble of 53 trajectories for a total of 200 fs (1000 nuclear time steps) with a discretized continuum of 400 plane wave energies evenly spaced from 0.0 eV to 0.138 eV and 96 orientations per energy. The maximum allowed kinetic energy of the plane wave is the sum of the vibrational excitation energy and the difference in zero-point energies of anion and neutral system, that is, the maximum excess energy available upon ionization.
Notice that due to the very low electron binding energy of the dipole-bound state and the approximative nature of the quantum chemically determined energies, it is challenging to precisely reproduce subtle binding energy differences on the meV scale along the trajectories. Thus, some instances of negative VDE occur in the dynamics. However, the experimental data from Ref. [8] only feature a peak attributed to vibrational autoionization. Therefore, we only include the latter in our simulation and neglect adiabatic ionization.
The nuclear dynamics following the vibrational excitation is characterized by relatively small amplitude motion. This is due to the overall low internal energy of the molecule and its rigidity as a cyclic system. In the course of the dynamics, the molecular dipole moment associated with the neutral core, which is responsible for electron binding in the excited state, exhibits slight oscillatory behavior while being approximately situated in the molecular plane. This leads to an anisotropic ejection of electrons predominantly in the molecular plane along the axis containing the cyano group, as can be inferred from the Mollweide projection of the angle-dependent distribution of **k**-vectors, summed over all k values shown in Fig. 7a). The resulting electron distribution is thus p-shaped, with maxima along the x-(cyano group) axis and minima in the yz-plane exhibiting only about 20% of the maximal intensity, as can be seen in Fig. 7b). This observation is in line with the qualitative considerations of nonadiabatic autoionization from dipole-bound states outlined in Ref. [70]. No transitions to the anionic ground state are observed in our simulation due to a large energy gap regardless of geometry, therefore the angular electron distribution is solely due to ionization from the s-type dipole-bound state.
Regarding the electron kinetic energies, the distribution displayed in Fig. 8a) is obtained, exhibiting a broad peak near the maximally possible energy of 0.138 eV. This can be attributed to a transition in which the vibrational energy of the excited mode is transferred completely to the outgoing electron, i.e., the vibrational energy of the molecule is reduced by one quantum in line with the propensity rules for vibrational autoionization established by Simons [27]. Further analysis of the peak shape should be taken with care, since for conceptual reasons vibrational resolution is not within the scope of quantum-classical dynamics.
Besides the spatial and energetic distribution of the ejected electrons, our simulation provides access to the timescale in which the ionization process takes place. Fig. 8b) shows the time-dependent population of the bound anionic states, which exhibits a rapid decay that can be fit to an exponential function with a time constant of 500 fs. This value corresponds to a spectral width of around 70 cm\({}^{-1}\), which is of comparable size to the observation made in Ref. [8].
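The time constant is obtained by a standard least-squares fit of the ensemble-averaged bound-state population; a minimal sketch with scipy (the function name and the initial guess are illustrative):

```
import numpy as np
from scipy.optimize import curve_fit

def fit_time_constant(t_fs, population):
    """Fit P(t) = exp(-t / tau) to the bound-state population decay."""
    model = lambda t, tau: np.exp(-t / tau)
    (tau,), _ = curve_fit(model, t_fs, population, p0=[500.0])
    return tau
```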
Overall this example calculation shows the applicability and scope of the method in the context of small to medium sized molecular anions, providing a means to gain molecular-level insight into the spatio-temporal dynamics of vibration-induced autoionization processes complementary to experimental measurements.
\begin{table}
\begin{tabular}{l c c c c c}
Method & \(\mathrm{AEA_{GS}}\) & \(\mathrm{AEA_{DBS}}\) & \(\mathrm{VDE_{GS}^{a}}\) & \(\mathrm{VAE_{GS}^{n}}\) & \(\mathrm{\Delta E_{DBS}^{a}}\) \\ \hline
\(\omega\)B97XD / aug-cc-pVDZ\({}^{\mathrm{b}}\) & 3.075 & -0.674 & 3.225 & 2.932 & 3.612 \\
\(\omega\)B97XD / d-aug-cc-pVDZ\({}^{\mathrm{c}}\) & 3.071 & -0.117* & 3.221 & 2.929 & 3.046 \\
\(\omega\)B97XD / t-aug-cc-pVDZ\({}^{\mathrm{c}}\) & 3.070 & 0.031* & 3.220 & 2.928 & 2.897 \\
\(\omega\)B97XD / t-aug-cc-pVTZ\({}^{\mathrm{c}}\) & 3.044 & 0.044* & 3.200 & 2.899 & 2.855 \\
\(\omega\)B97XD / 6-31++G**\({}^{\mathrm{d}}\) & 3.062 & -1.374 & 3.212 & 2.919 & 4.302 \\
\(\omega\)B97XD / 6-31++G** +3s2p & 3.064 & 0.060* & 3.217 & 2.921 & 2.861 \\
\(\omega\)B97XD / 6-311++G**\({}^{\mathrm{e}}\) & 3.095 & -0.759 & 3.249 & 2.949 & 3.716 \\
\(\omega\)B97XD / 6-311++G** +3s & 3.095 & 0.064 & 3.249 & 2.949 & 2.887 \\
\textbf{\(\omega\)B97XD / 6-311++G** +3s2p} & \textbf{3.094} & \textbf{0.063} & \textbf{3.248} & \textbf{2.948} & \textbf{2.887} \\
\(\omega\)B97XD / 6-311++G** +3s2p2d & 3.076 & 0.055* & 3.230 & 2.930 & 2.875 \\ \hline
EOM-CCSD\({}^{\mathrm{f}}\) / t-aug-cc-pVDZ & 2.844 & -0.102* & 3.016 & 2.686 & 2.788 \\
EOM-CCSD / 6-31++G** +3s2p & 2.679 & -0.346* & 2.855 & 2.518 & 2.864 \\
EOM-CCSD / 6-311++G** +3s2p & 2.748 & -0.148* & 2.929 & 2.584 & 2.732 \\ \hline
Experiment\({}^{\mathrm{g}}\) & 3.0981 & 0.0298 & & & \\
\end{tabular}
\end{table}
Table 1: Comparison of adiabatic electron affinities (AEA), vertical detachment energies to the neutral ground state (\(\mathrm{VDE_{GS}}\)), vertical attachment energies to the anionic ground state (\(\mathrm{VAE_{GS}}\)) and excitation energies to the dipole-bound state (\(\mathrm{\Delta E_{DBS}}\)) for the 2-cyanopyrrolide anion. The superscript on the column headers indicates at which optimized geometry (a = anion, n = neutral) the respective value is calculated. The method used in the dynamics simulation is indicated in bold font. The added basis functions +Xs etc. are generated according to Ref. [48] (with a factor for the geometric progression of 3.5) and centered on the nitrogen atoms. All energy values are given in eV. In some cases \(\mathrm{AEA_{DBS}}\) could not be obtained; instead the vertical attachment energy at the neutral equilibrium geometry is given as an approximation (denoted with *). \({}^{\mathrm{a}}\) Ref. [65], \({}^{\mathrm{b}}\) Refs. [65] and [66], \({}^{\mathrm{c}}\) Refs. [65]-[67], \({}^{\mathrm{d}}\) Ref. [62], \({}^{\mathrm{e}}\) Refs. [62] and [63], \({}^{\mathrm{f}}\) Refs. [68] and [69], \({}^{\mathrm{g}}\) Ref. [8]
## V Conclusion
We have presented the Python program package HORTENSIA (Hopping real-time trajectories for electron-ejection by nonadiabatic self-ionization in anions) for the simulation of vibration-induced autoionization processes in molecular anions. The program implements our recently introduced extended surface hopping approach for the quantum-classical description of nonadiabatic autoionization dynamics, where the electronic degrees of freedom are treated quantum-mechanically, while the nuclear motion is represented by classical trajectories. The electronic states included in the dynamics simulation comprise the bound adiabatic anionic states and discretized 'ionized system' states composed of a neutral core and a free electron wave function. Nonadiabatic transitions between these states are simulated stochastically, with hopping probabilities obtained from changes in the electronic state coefficients according to Tully's fewest-switches algorithm. The time-dependent state coefficients are calculated by solving the electronic Schrödinger equation containing the nonadiabatic as well as diabatic couplings between the considered electronic states according to our presented methodology.
As shown in the example of 2-cyanopyrrolide, time- and angle-resolved electron kinetic energy signals are obtained directly from the surface-hopping trajectories. Since no deactivation to the ground state is observed in our simulation, autoionization with a time constant of 500 fs is identified as the only available deactivation pathway in the dipole-bound state of 2-cyanopyrrolide on the simulated timescale, with an anisotropic, p-like ejection of electrons along the x-axis defined by the cyano group. Moreover, our program yields geometric data that allows for the structural analysis of molecules throughout the autoionization dynamics, providing easy access to geometric characteristics of the considered system, as demonstrated extensively for the vinylidene [36] and 1-nitropropane [39] anions.
Furthermore, we have discussed the implementation and internal structure of our program package, which also includes secondary functionalities such as an input generator and a routine for the creation of initial conditions for nuclear coordinates and velocities within an easy-to-operate graphical user interface (GUI). Moreover, the program package provides the user with an additional GUI for the analysis and graphical representation of the most important dynamics results.
In the future, useful extensions of the methodology could be the treatment of neutral molecules to be ionized, which requires the description of scattering states interacting with a cationic core, as well as the inclusion of laser field coupling (analogous to the FISH method [71]) to describe photoionization beyond the perturbative limit, thereby providing an extension of the approach developed in Ref. [28]. In addition, the treatment of electronically adiabatic autoionization could be combined with an ab initio computation of the electronic resonance lifetimes, e.g., along the lines presented in Ref. [72].
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Figure 8: a) Simulated electron kinetic energy distribution of all hopping events after excitation of mode \(\nu_{11}\) and propagation for 200 fs (orange histogram) and running average over 5 points/7.5 meV (red curve), b) time-dependent population of all bound anion states (dark green) and exponential fit with a time constant of \(\tau=500\) fs (light green).
Figure 7: a) Mollweide projection of the angular distribution of ejected electrons in the 2-cyanopyrrolide dynamics, summed over all energies. The x-axis (\(\varphi=0\), \(\theta=90\) degrees) is aligned with the cyano group and the molecule lies within the xy-plane (\(\theta=90\) degrees); b) Slices through the Mollweide projection at \(\varphi\) angles of 0 (positive x direction, blue), 180 (negative x direction, orange), 90 (positive y direction, green) and 270 (negative y direction, red) degrees.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
Kevin Issler: Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (equal); Software (lead); Visualization (lead); Writing - original draft (equal). Roland Mitric: Conceptualization (equal); Funding acquisition (lead); Methodology (supporting); Project administration (lead); Resources (lead); Supervision (equal); Writing - review & editing (equal). Jens Petersen: Conceptualization (equal); Formal analysis (supporting); Methodology (lead); Software (supporting); Supervision (equal); Visualization (supporting); Writing - original draft (equal), Writing - review & editing (equal).
|
2307.12255 | ResWCAE: Biometric Pattern Image Denoising Using Residual
Wavelet-Conditioned Autoencoder | The utilization of biometric authentication with pattern images is
increasingly popular in compact Internet of Things (IoT) devices. However, the
reliability of such systems can be compromised by image quality issues,
particularly in the presence of high levels of noise. While state-of-the-art
deep learning algorithms designed for generic image denoising have shown
promise, their large number of parameters and lack of optimization for unique
biometric pattern retrieval make them unsuitable for these devices and
scenarios. In response to these challenges, this paper proposes a lightweight
and robust deep learning architecture, the Residual Wavelet-Conditioned
Convolutional Autoencoder (Res-WCAE) with a Kullback-Leibler divergence (KLD)
regularization, designed specifically for fingerprint image denoising. Res-WCAE
comprises two encoders - an image encoder and a wavelet encoder - and one
decoder. Residual connections between the image encoder and decoder are
leveraged to preserve fine-grained spatial features, where the bottleneck layer
conditioned on the compressed representation of features obtained from the
wavelet encoder using approximation and detail subimages in the
wavelet-transform domain. The effectiveness of Res-WCAE is evaluated against
several state-of-the-art denoising methods, and the experimental results
demonstrate that Res-WCAE outperforms these methods, particularly for heavily
degraded fingerprint images in the presence of high levels of noise. Overall,
Res-WCAE shows promise as a solution to the challenges faced by biometric
authentication systems in compact IoT devices. | Youzhi Liang, Wen Liang | 2023-07-23T08:02:27Z | http://arxiv.org/abs/2307.12255v1 | # ResWCAE: Biometric Pattern Image Denoising Using Residual Wavelet-Conditioned Autoencoder
###### Abstract
The utilization of biometric authentication with pattern images is increasingly popular in compact Internet of Things (IoT) devices. However, the reliability of such systems can be compromised by image quality issues, particularly in the presence of high levels of noise. While state-of-the-art deep learning algorithms designed for generic image denoising have shown promise, their large number of parameters and lack of optimization for unique biometric pattern retrieval make them unsuitable for these devices and scenarios. In response to these challenges, this paper proposes a lightweight and robust deep learning architecture, the Residual Wavelet-Conditioned Convolutional Autoencoder (Res-WCAE) with a Kullback-Leibler divergence (KLD) regularization, designed specifically for fingerprint image denoising. Res-WCAE comprises two encoders - an image encoder and a wavelet encoder - and one decoder. Residual connections between the image encoder and decoder are leveraged to preserve fine-grained spatial features, where the bottleneck layer conditioned on the compressed representation of features obtained from the wavelet encoder using approximation and detail subimages in the wavelet-transform domain. The effectiveness of Res-WCAE is evaluated against several state-of-the-art denoising methods, and the experimental results demonstrate that Res-WCAE outperforms these methods, particularly for heavily degraded fingerprint images in the presence of high levels of noise. Overall, Res-WCAE shows promise as a solution to the challenges faced by biometric authentication systems in compact IoT devices.
## 1 Introduction
Biometric authentication has gained popularity with the recent advances in sensing systems and vision algorithms [1]. Biometric traits, including voice, gait, face, and fingerprints, are widely used to unlock devices such as phones, computers, door locks, and other Internet of Things (IoT) devices [2]. Fingerprint recognition, in particular, is an active research area due to the unique and stable ridge and minutiae features of fingerprints. However, fingerprint images are susceptible to image quality issues induced by various impression conditions, such as humidity, wetness, and dirt, as well as by user behavior, resulting in low-quality images that require noise reduction or inpainting of adjacent areas [3]. High levels of noise, caused by the failure or degradation of sensors and by excessively dry or wet fingers, can lead to repeated fingerprint recognition failures, necessitating the denoising of fingerprint images as a pre-processing step to facilitate subsequent operations such as fingerprint authentication and verification [4].
Denoising of fingerprint images is a subfield of image denoising, a well-established and actively researched area in low-level vision, as it is a crucial step in numerous practical applications, including medical imaging, surveillance and photography [5; 6]. The fundamental objective of image denoising is to retrieve a noise-free image from a noisy observation. Let \(\mathbf{I}(m,n)\) be a matrix of size \(M\times N\), representing a noise-free image, where \((m,n)\) are the coordinates of the pixel; let \(\mathbf{J}(m,n)\) be a matrix of the same size as \(\mathbf{I}(m,n)\), representing the corresponding noisy image. The noisy image \(\mathbf{J}(m,n)\), following an image degradation model, can be expressed as: \(\mathbf{J}(m,n)=\mathbf{I}(m,n)+\mathbf{\epsilon}(m,n)\), where \(\mathbf{\epsilon}(m,n)\sim\mathcal{N}(0,\sigma^{2})\). This additive white Gaussian noise (AWGN) model, which adds independent Gaussian noise to each pixel, is a widely used approach in image processing to simulate noise in images and assess the effectiveness of noise reduction algorithms. It is essential to note that the noise added to each pixel is independent of the noise added to other pixels in the image.
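A minimal NumPy sketch of this degradation model is given below (function and variable names are ours):

```python
import numpy as np

def add_awgn(image: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Degrade a noise-free image I(m, n) with additive white Gaussian noise,
    following the model J = I + eps, eps ~ N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
    return image + noise

# Example: a 103x96 grayscale image degraded at the high noise level
# sigma = 150 considered in this work.
clean = np.zeros((103, 96))
noisy = add_awgn(clean, sigma=150.0)
```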
State-of-the-art deep learning algorithms have achieved remarkable performance on generic image denoising tasks [7; 8]. Nevertheless, their large number of parameters, ranging from millions to billions, renders them unsuitable for deployment on compact Internet of Things (IoT) devices. Furthermore, these algorithms are designed to denoise images acquired using CMOS sensing with low levels of noise, i.e., \(\sigma\leq 50\), which may not be appropriate for fingerprint images acquired using capacitive sensing, particularly under the high noise levels, i.e., \(\sigma\in[100,200]\), that arise when fingers are too dry or wet. Fingerprint images are characterized by unique features such as ridges and valleys that must be preserved during the denoising process. Thus, specialized algorithms are required for fingerprint image denoising, specifically designed to handle these distinct features under highly noisy conditions. Such algorithms leverage fingerprint-specific information, such as ridge orientation or minutiae, to guide the denoising process and protect the critical features of the fingerprint. The use of a generic image denoising algorithm for fingerprint images may not be suitable, as it may result in the loss of vital fingerprint information, reducing the accuracy and reliability of fingerprint recognition systems. State-of-the-art fingerprint denoising models are also limited in the noise levels they can handle [9; 10; 11; 12].
In this paper, we propose and evaluate a deep learning architecture, the Residual Wavelet-Conditioned Convolutional Autoencoder (Res-WCAE), developed specifically to retrieve the intricate and unique features of fingerprints when denoising heavily degraded biometric pattern images acquired with capacitive sensing techniques in the presence of significantly high levels of noise. We evaluate model performance using two datasets consisting of AWGN-corrupted and synthetic images. The proposed approach is lightweight, accurate, and robust, making it a highly suitable candidate for deployment in practical scenarios, including small Internet of Things (IoT) devices.
## 2 Methods
Our research proposes a Residual Wavelet-Conditioned Convolutional Autoencoder (Res-WCAE) architecture for capturing fine-grained features in fingerprint pattern images obtained through capacitive sensing devices such as cell phones and other compact Internet of Things (IoT) devices. The Res-WCAE architecture comprises two encoders - an image encoder and a wavelet encoder - and one decoder. These encoders work in unison to construct the condition layer for the decoder by leveraging compressed features in both the spatial domain and the frequency domain, as illustrated in Fig. 1. Additionally, residual connections between the image encoder and decoder have been incorporated to enhance the spatial details of the fingerprint patterns. The Res-WCAE can handle a wide range of noise levels, with a standard deviation \(\sigma_{\epsilon}\) ranging from 0 to 200, and achieves state-of-the-art denoising performance, including, to the best of our knowledge, noise levels that were not covered in prior research.
### Image Encoder
The image encoder consists of four down-sampling convolutional layers, producing the condensed representation of image information as well as supplementing the decoder with fine-grained spatial details through residual connections. The input to the image encoder is a 2D gray-scale image that is passed through a sequence of down-sampling layers. These layers reduce the spatial dimensions of the input image while simultaneously increasing the number of channels. The first layer applies a 3x3 convolution with 32 filters and a stride of 2, followed by a Rectified Linear Unit (ReLU) activation function. The output of this layer is then passed to the second layer, which applies a similar
convolution with 64 filters and a stride of 2, also followed by a ReLU activation function. The same process is repeated in the third and fourth down-sampling layers, which have 128 and 256 filters, respectively. The output of each layer in the image encoder can be represented as follows:
\[\mathbf{y}_{\mathcal{E},img}^{[l]}=\mathcal{F}_{\mathcal{E},img}^{[l]}\left(\mathbf{y}_ {\mathcal{E},img}^{[l-1]};\mathbf{\Theta}_{\mathcal{E},img}^{[l]}\right),\]
where \(\mathbf{y}_{\mathcal{E},img}^{[l]}\) denotes the output of the image encoder at layer \(l\), \(\mathcal{F}_{\mathcal{E},img}^{[l]}(\cdot;\mathbf{\Theta}_{\mathcal{E},img}^{[l]})\) denotes the function of the convolutional neural network followed by a ReLU activation function at layer \(l\), with trainable parameters \(\mathbf{\Theta}_{\mathcal{E},img}^{[l]}\) of the image encoder. The input to the first layer of the image encoder is the noisy image, i.e. \(\mathbf{y}_{\mathcal{E},img}^{[0]}=\mathbf{J}(M,N)\).
The output, \(\mathbf{y}_{\mathcal{E},img}^{[l]}\), of each down-sampling layer in the image encoder not only serves as input for the next down-sampling layer but also partially serves as input for the corresponding up-sampling layers in the decoder. To preserve important spatial details of the image, the Res-WCAE employs residual connections between the image encoder and decoder blocks. These connections allow the network to propagate information from the image encoder to the decoder, while retaining fine-grained spatial details of the image. Specifically, the output of each down-sampling block is concatenated with the output of the corresponding up-sampling block, enabling the network to preserve the spatial information and recover intricate details, as elaborated in the decoder section.
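For concreteness, the encoder just described can be written compactly in Keras; the following is a minimal sketch, assuming "same" padding and an input zero-padded from \(103\times 96\) to \(104\times 96\) (the function and variable names are ours, not from the paper):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_image_encoder(input_shape=(104, 96, 1)):
    """Four down-sampling 3x3 convolutions (stride 2) with 32/64/128/256
    filters, each followed by a ReLU, as described above."""
    inp = keras.Input(shape=input_shape)
    outputs = []                                  # per-layer outputs y_{E,img}^[l]
    x = inp
    for n_filters in (32, 64, 128, 256):
        x = layers.Conv2D(n_filters, 3, strides=2, padding="same",
                          activation="relu")(x)
        outputs.append(x)                         # reused via residual connections
    # outputs[-1] is the compressed representation fed to the condition layer
    return keras.Model(inp, outputs, name="image_encoder")
```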
### Wavelet Encoder
The wavelet encoder employs the wavelet transform to extract features from the input image. The wavelet transform coefficients are subsequently passed through a sequence of three convolutional layers with 16, 32, and 64 filters, respectively, each followed by a rectified linear unit (ReLU) activation function, which reduce the number of channels while preserving the spatial dimensions.
Wavelet transform is a widely-used technique in signal and image processing due to its capability to capture both time and frequency domain information [13; 14]. In image processing, wavelet transform can extract both high-frequency details and low-frequency approximations, offering a multi-resolution analysis of the image by decomposing the image into several levels of detail, which is highly beneficial for capturing fine-grained features of an image while reducing noise and redundancy [15; 16]. The two-dimensional (2D) wavelet decomposition of a discrete image \(\mathbf{J}(M,N)\) into \(K\) octaves results in \(3K+1\) subimages that represent the image at different scales and orientations:
\[\mathbb{J}_{K}=\left[\mathbf{J}_{K},\bigcup_{k=1}^{K}\{\mathbf{j}_{k}^{1},\mathbf{j}_{k}^{2},\mathbf{j}_{k}^{3}\}\right],\]
where \(\mathbf{J}_{K}\) denotes a low-resolution approximation of the original image \(\mathbf{J}(M,N)\) and \(\{\mathbf{j}_{k}^{1},\mathbf{j}_{k}^{2},\mathbf{j}_{k}^{3}\}\) represents the wavelet subimages containing the image details at different scales (\(2^{k}\)) and orientations.
Figure 1: A schematic for the architecture of a Residual Wavelet-Conditioned Convolutional Autoencoder (ResWCAE), including a sample noise-free image, \(\mathbf{I}(103,96)\), and a denoised image, \(\hat{\mathbf{I}}(103,96)\).
Fingerprints exhibit quasi-periodic patterns with dominant frequencies typically located in the middle frequency channels of the wavelet decomposition, as noted in prior research [17; 18]. By taking into account ridge orientation and spatial frequency across different regions of the image, one can better capture the inherent nature of the fingerprint image [19; 20]. We employ a three-layer convolutional neural network (CNN) to extract the condensed feature representation in the wavelet-transform domain. The output of each layer in the wavelet encoder can be represented as follows:
\[\mathbf{y}_{\mathcal{E},well}^{[l]}=\mathcal{F}_{\mathcal{E},well}^{[l]}\left(\bm {y}_{\mathcal{E},well}^{[l-1]};\mathbf{\Theta}_{\mathcal{E},well}^{[l]}\right),\]
where \(\mathbf{y}_{\mathcal{E},well}^{[l]}\) denotes the output of the wavelet encoder at layer \(l\), \(\mathcal{F}_{\mathcal{E},well}^{[l]}(\cdot;\mathbf{\Theta}_{\mathcal{E},well}^{[l]})\) denotes the function of the convolutional neural network followed by a ReLU activation at layer \(l\) with trainable parameters \(\mathbf{\Theta}_{\mathcal{E},well}^{[l]}\). The input to the first wavelet encoder layer is the set of 2D wavelet decomposition subimages \(\mathbb{J}_{K}\). We use the subimages of wavelet coefficients \(\mathbb{J}_{K}\), with a decomposition level of three obtained using a Symlets wavelet, as input to a three-layer CNN, enabling us to extract a condensed representation of the wavelet-transform-domain features, as demonstrated in previous works [21; 22]. Our wavelet encoder is designed to construct an adaptive, trainable, and parametrized thresholding technique, in contrast to the soft and hard thresholding techniques widely used in prior literature [2; 23; 24].
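The level-three Symlets decomposition described above can be obtained, for instance, with the PyWavelets library; the sketch below is illustrative, and the particular family member ('sym4') is our assumption:

```python
import numpy as np
import pywt

# Level-3 2D wavelet decomposition, yielding the 3K+1 subimages of the
# equation above (K = 3 gives 10 subimages).
img = np.random.rand(103, 96)                  # stand-in for J(M, N)
coeffs = pywt.wavedec2(img, wavelet="sym4", level=3)

approx = coeffs[0]                             # J_K: low-resolution approximation
details = coeffs[1:]                           # K triplets (horizontal, vertical, diagonal)
print(len(details) * 3 + 1)                    # 3K + 1 = 10 subimages
```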
### Decoder
The decoder of the network consists of a sequence of up-sampling layers that progressively increase the spatial dimensions of the input while decreasing the number of channels. The up-sampling process initiates with a 3x3 transpose convolution that employs 128 filters and a stride of 2, followed by a rectified linear unit (ReLU) activation function. This process is repeated in the next two layers, with 64 and 32 filters respectively. Finally, the last layer applies a 3x3 transpose convolution with a single filter and a sigmoid activation function to produce the gray-scale image.
The condition layer incorporates the compressed representation of the image and concatenates it with the adaptive compressed representation of the wavelet-domain features. By integrating this conditional input, the decoder reconstructs data that is specific to the fingerprint scenario. The output of the decoder layer that takes the condition layer as input can be expressed as:
\[\mathbf{y}_{\mathcal{D}}^{[3]}=\mathcal{F}_{\mathcal{D}}^{[3]}\left(\left[\mathbf{y }_{\mathcal{E},img}^{[4]}\parallel\mathbf{y}_{\mathcal{E},well}^{[3]}\right];\mathbf{ \Theta}_{\mathcal{D}}^{[3]}\right),\]
where \(\mathbf{y}_{\mathcal{D}}^{[3]}\) denotes the output of the decoder at layer 3, \(\mathcal{F}_{\mathcal{D}}^{[3]}(\cdot;\mathbf{\Theta}_{\mathcal{D}}^{[3]})\) denotes the function of the convolutional neural network followed by a ReLU activation at layer 3 with trainable parameters \(\mathbf{\Theta}_{\mathcal{D}}^{[3]}\) and \([\cdot\parallel\cdot]\) denotes the concatenation operation. The output of the last decoder layer is the denoised image \(\hat{\mathbf{I}}(M,N)=\mathbf{y}_{\mathcal{D}}^{[L]}\).
In addition, the network utilizes residual connections between the image encoder and decoder blocks to enhance performance on intricate spatial details [25; 26]. These connections enable the network to transmit information from the image encoder to the decoder, while retaining crucial fingerprint image details. In particular, the output of each down-sampling block is concatenated with the output of the corresponding up-sampling block, which helps maintain spatial information and facilitates the network's ability to recover fine-grained details. The final decoder layer incorporates a bilinear interpolation technique to upsample the feature maps in the decoder blocks, thereby restoring the spatial resolution of the image. The output of each layer in the decoder can be represented as follows:
\[\mathbf{y}_{\mathcal{D}}^{[l]}=\mathcal{F}_{\mathcal{D}}^{[l]}\left(\left[\mathbf{y} _{\mathcal{D}}^{[l+1]}\parallel\mathbf{y}_{\mathcal{E},well}^{[l+1]}\right];\mathbf{ \Theta}_{\mathcal{D}}^{[l]}\right).\]
To enhance the generalizability of our model, we introduce a regularized cost function that incorporates Kullback-Leibler (KL) divergence regularization through the use of a prior distribution [27; 28]. The regularized cost function for Res-WCAE is formulated as the expected loss over the training set using the \(L^{2}\)-norm, along with the expected loss over the model parameters using KL divergence, expressed as follows:
\[\mathcal{L}\left(\mathbf{\Theta}\right)=\mathbb{E}_{\mathbf{I}}\left\|\mathbf{y}_{ \mathcal{D}}^{[L]}-\mathbf{I}\right\|^{2}+\lambda\mathbb{E}_{\mathbf{y}}D_{\text{KL}} \left(\mathbf{y}_{\mathcal{D}}^{[L]}\parallel\mathbf{I}\right)\]
where \(\mathbf{\Theta}\) denotes all the trainable parameters and \(D_{\text{KL}}\left(\mathbf{y}_{\mathcal{D}}^{[L]}\parallel\mathbf{I}\right)\) denotes the KL divergence of \(\mathbf{y}_{\mathcal{D}}^{[L]}\) from the prior distribution \(\mathbf{I}\), weighted by the regularization coefficient \(\lambda\). The inclusion of KL divergence regularization in our model aims to prevent overfitting to a single training instance, analogous to the adaptation of the target distribution performed by conventional backpropagation algorithms [29; 30].
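A minimal sketch of this regularized cost function in TensorFlow is given below; the regularization weight `lam` and the normalization of pixel intensities (required for the KL term to be well defined between images) are our assumptions:

```python
import tensorflow as tf

def reswcae_loss(y_true, y_pred, lam=1e-3, eps=1e-12):
    """L2 reconstruction error plus a KL-divergence penalty D_KL(y || I)."""
    mse = tf.reduce_mean(tf.square(y_pred - y_true))
    # normalize intensities to sum to one so the KL divergence is well defined
    p = y_pred / (tf.reduce_sum(y_pred) + eps)   # denoised output y_D^[L]
    q = y_true / (tf.reduce_sum(y_true) + eps)   # reference image I
    kld = tf.reduce_sum(p * tf.math.log((p + eps) / (q + eps)))
    return mse + lam * kld
```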
## 3 Experiments, Results and Discussion
The Sokoto Coventry Fingerprint Dataset (SOCOFing) was selected for constructing and evaluating the models in this study. SOCOFing is a biometric fingerprint database that has been specifically designed for academic research purposes, as documented in [31]. The dataset comprises a total of 6,000 fingerprint images collected from 600 African subjects [31]. Figure 2(b) provides representative samples from the dataset. During the preprocessing stage, the images in the dataset were converted into grayscale images with a resolution of \(103\times 96\) pixels, as per standard practice. All images were originally stored in the .BMP format.
To ensure a reliable evaluation of our models, we partitioned the dataset into training, holdout validation, and testing sets in a 70:15:15 ratio. We initialized the weights and trained all neural network architectures from scratch using a mini-batch size of 32. The learning rate was set to 0.001 and the models were trained for a maximum of 200 iterations. In total, we trained and evaluated four neural network architectures: a dense neural network, an autoencoder, a wavelet-feature-conditioned autoencoder, and Res-WCAE. We selected models based on their performance on the validation set. Our findings show that Res-WCAE outperformed all other models and achieved state-of-the-art performance at all levels of noise. Figure 2(a) depicts the loss versus epoch, which showed moderate fluctuations due to the mini-batch training. We also evaluated the improvement in Peak Signal-to-Noise Ratio (\(\Delta\)_PSNR_) relative to the PSNR of the noisy image (\(\mathbf{J}\left(m,n\right)\)). As illustrated in the inset of Figure 2(a), the average PSNR improvement was approximately 7.5 dB over a wide range of noise levels.
Figure 2: Samples of original figures, noisy figures and denoised figures for noise levels \(\sigma\) from 100 to 200. Model A: placeholder, Model B: placeholder, Model C: placeholder, Model D: Res-WCAE.
Figure 2(a) showcases the effectiveness of the ResWCAE denoising model in reconstructing the intricate features of fingerprint patterns. Despite high levels of noise, the denoised samples still exhibit discernible minutiae, which highlights the model's ability to capture fine details. Notably, we varied the noise level \(\sigma\) from 100 to 200, a wider range than that used in prior studies (0 to 100), thus demonstrating the robustness of our model.
We conducted a more rigorous assessment of the denoising models using three evaluation metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Mean Squared Error (MSE). The models under study were Res-WCAE and an autoencoder, with the noisy image serving as a baseline for comparison. The results of the study are summarized in Table 1.
The findings revealed that the ResWCAE model demonstrated superior denoising performance, as evidenced by the highest PSNR and SSIM values and the lowest MSE value. The autoencoder model achieved intermediate performance, with higher PSNR and SSIM values than the noisy-image baseline but lower values than ResWCAE. The ResWCAE model also outperforms the state-of-the-art models applied in the field of fingerprint pattern denoising, including the U-Finger model [10] and Fpd-m-net [11].
## 4 Conclusion
In conclusion, the increasing popularity of biometric authentication in compact Internet of Things (IoT) devices has raised concerns about the reliability of such systems due to image quality issues, especially when dealing with high levels of noise. This paper addresses these challenges by introducing a novel and robust deep learning architecture called Residual Wavelet-Conditioned Convolutional Autoencoder (Res-WCAE) with Kullback-Leibler divergence (KLD) regularization, specifically designed for fingerprint image denoising. By leveraging two encoders - an image encoder and a wavelet encoder - along with residual connections and a compressed representation of features from the wavelet domain, Res-WCAE effectively preserves fine-grained spatial features, outperforming several state-of-the-art denoising methods, especially in heavily degraded fingerprint images with significant noise. The proposed Res-WCAE offers promising solutions for enhancing the reliability of biometric authentication systems in compact IoT devices, presenting a potential breakthrough in the field of image denoising and biometric pattern retrieval.
|
2304.04167 | Neural network assisted quantum state and process tomography using
limited data sets | In this study we employ a feed-forward artificial neural network (FFNN)
architecture to perform tomography of quantum states and processes obtained
from noisy experimental data. To evaluate the performance of the FFNN, we use a
heavily reduced data set and show that the density and process matrices of
unknown quantum states and processes can be reconstructed with high fidelity.
We use the FFNN model to tomograph 100 two-qubit and 128 three-qubit states
which were experimentally generated on a nuclear magnetic resonance (NMR)
quantum processor. The FFNN model is further used to characterize different
quantum processes including two-qubit entangling gates, a shaped pulsed field
gradient, intrinsic decoherence processes present in an NMR system, and various
two-qubit noise channels (correlated bit flip, correlated phase flip and a
combined bit and phase flip). The results obtained via the FFNN model are
compared with standard quantum state and process tomography methods and the
computed fidelities demonstrates that for all cases, the FFNN model outperforms
the standard methods for tomography. | Akshay Gaikwad, Omkar Bihani, Arvind, Kavita Dorai | 2023-04-09T05:51:16Z | http://arxiv.org/abs/2304.04167v1 | # Neural network assisted quantum state and process tomography using limited data sets
###### Abstract
In this study we employ a feed-forward artificial neural network (FFNN) architecture to perform tomography of quantum states and processes obtained from noisy experimental data. To evaluate the performance of the FFNN, we use a heavily reduced data set and show that the density and process matrices of unknown quantum states and processes can be reconstructed with high fidelity. We use the FFNN model to tomograph 100 two-qubit and 128 three-qubit states which were experimentally generated on a nuclear magnetic resonance (NMR) quantum processor. The FFNN model is further used to characterize different quantum processes including two-qubit entangling gates, a shaped pulsed field gradient, intrinsic decoherence processes present in an NMR system, and various two-qubit noise channels (correlated bit flip, correlated phase flip and a combined bit and phase flip). The results obtained via the FFNN model are compared with standard quantum state and process tomography methods and the computed fidelities demonstrates that for all cases, the FFNN model outperforms the standard methods for tomography.
Footnote †: These authors contributed equally to this work
## I Introduction
Quantum state tomography (QST) and quantum process tomography (QPT) are essential techniques to characterize unknown quantum states and processes, respectively, and to evaluate the quality of quantum devices [1; 2; 3]. Numerous computationally and experimentally efficient QST and QPT algorithms have been designed, such as self-guided tomography [4], adaptive tomography [5], compressed sensing based QST and QPT protocols which use heavily reduced data sets [6; 7], selective QPT [8; 9; 10], and direct QST/QPT using weak measurements [11].
Recently, machine learning (ML) techniques have been used to improve the efficiency of tomography protocols [12; 13; 14]. QST was performed on entangled quantum states using a restricted Boltzmann machine based artificial neural network (ANN) model [15] and was experimentally implemented on an optical system [16]. ML-based adaptive QST was performed which adapts to experiments and suggests suitable further measurements [17]. QST using an attention-based generative network was realized experimentally on an IBMQ quantum computer [18]. ANN-enhanced QST was carried out after minimizing state preparation and measurement errors when reconstructing the state on a photonic quantum dataset [19]. A convolutional ANN model was employed to reconstruct quantum states from tomography measurements in the presence of simulated noise [20]. Local measurement-based QST via ANN was experimentally demonstrated on NMR [21]. ML was used to detect the experimental multipartite entanglement structure of NMR entangled states [22]. ANNs were used to perform QST while taking into account measurement imperfections [23] and were trained to uniquely reconstruct a quantum state without requiring any prior information about the state [24]. An ANN was used to reconstruct quantum states encoded in the spatial degrees of freedom of photons with high fidelity [25]. ML methods were used to directly estimate the fidelity of prepared quantum states [26]. ANNs were used to reconstruct quantum states in the presence of various types of noise [27]. Quantum state tomography in intermediate-scale quantum devices was performed using conditional generative adversarial networks [28].
In this study, we employed a Feed Forward Neural Network (FFNN) architecture to perform quantum state as well as process tomography. We trained and tested the model on states/processes generated computationally and then validated it on noisy experimental data generated on an NMR quantum processor. Furthermore, we tested the efficacy of the FFNN model on a heavily reduced data set, where a random fraction of the total data set was used. The FFNN model was able to reconstruct the true quantum states and quantum processes with high fidelity even with this heavily reduced data set.
This paper is organized as follows: Section II briefly describes the basic framework of the FFNN model in the context of QST and QPT; Section II.1 describes the FFNN architecture while Section II.2 details how to construct the FFNN training data set to perform QST and QPT. Sections III and IV contain the results of implementing the FFNN to perform QST and QPT of experimental NMR data, respectively. Section V contains a few concluding remarks.
## II FFNN Based QST and QPT
### The Basic FFNN Architecture
First, we describe the multilayer perceptron model, also referred to as a feed-forward neural network (FFNN), which we employ for the task of characterizing quantum states and processes. An ANN is a mathematical computing model motivated by the biological nervous system which consists of adaptive units called neurons that are connected to other neurons via weights. A neuron is activated when its value is greater than a 'threshold value' termed the bias. Figure 1 depicts a schematic of an ANN with \(n\) inputs \(x_{1},x_{2},\cdots,x_{n}\) which are connected to a neuron with weights \(w_{1},w_{2},\cdots,w_{n}\); the weighted sum of these inputs is compared with the bias \(b\) and is acted upon by the activation function \(f\), yielding the output \(\tilde{y}=f(\sum_{i=1}^{n}w_{i}x_{i}-b)\).
A multilayer FFNN architecture consists of three types of layers: the input layer, the hidden layers and the output layer. Data is fed into the input layer, passed on through the hidden layers, and finally arrives at the output layer from the last hidden layer. Figure 2 depicts a schematic of a prototypical FFNN model with one input layer, two hidden layers and one output layer, which has been employed (as an illustration) to perform QST of an experimental two-qubit NMR quantum state, using a heavily reduced data set.
The data is divided into two parts: a training dataset, which is used to train the model, a process in which the network parameters (weights and biases) are updated based on the outcomes, and a test dataset, which is used to evaluate the network performance. Consider \(p\) training elements \(\{(\vec{x}^{(1)},\vec{y}^{(1)}),(\vec{x}^{(2)},\vec{y}^{(2)}),\cdots,(\vec{x}^{(p)},\vec{y}^{(p)})\}\) where \(\vec{x}^{(i)}\) is the \(i\)th input and \(\vec{y}^{(i)}\) is the corresponding output. Feeding these inputs to the network produces the outputs \([\vec{\tilde{y}}^{(1)},\vec{\tilde{y}}^{(2)},...,\vec{\tilde{y}}^{(p)}]\). Since the network parameters are initialized randomly, the predicted output is initially not equal to the expected output. Training of the network is achieved by minimizing a mean-squared-error cost function with respect to the network parameters, using a stochastic gradient descent method and the backpropagation algorithm [29]:
\[w_{ij}\to w^{\prime}_{ij}= w_{ij}-\frac{\eta}{p^{\prime}}\sum_{i=1}^{p^{\prime}}\frac{ \partial}{\partial w_{ij}}\mathcal{L}(\vec{x}^{(i)}) \tag{1}\] \[b_{i}\to b^{\prime}_{i}= b_{i}-\frac{\eta}{p^{\prime}}\sum_{i=1}^{p^{\prime}}\frac{ \partial}{\partial b_{i}}\mathcal{L}(\vec{x}^{(i)}) \tag{2}\]
where \(\mathcal{L}(\vec{x}^{(i)})=||\vec{y}^{(i)}-\vec{\tilde{y}}^{(i)}||^{2}\) is the cost function of the randomly chosen \(p^{\prime}\) training inputs \(\vec{x}^{(i)}\), \(\eta\) is the learning rate, and \(w^{\prime}_{ij}\) and \(b^{\prime}_{i}\) are the updated weights and biases, respectively.
### FFNN Training Dataset for QST and QPT
An \(n\)-qubit density operator \(\rho\) can be expressed as a matrix in the product basis by:
\[\rho=\sum_{i=0}^{3}\sum_{j=0}^{3}...\sum_{n=0}^{3}a_{ij...n}\sigma_{i}\otimes \sigma_{j}\otimes...\sigma_{n} \tag{3}\]
where \(a_{00...0}=1/2^{n}\), \(\sigma_{0}\) denotes the \(2\times 2\) identity matrix and \(\sigma_{i},i=1,2,3\) are single-qubit Pauli matrices.
The aim of QST is to reconstruct \(\rho\) from a set of tomographic measurements. The standard procedure for QST involves solving a linear system of equations of the form [30]:
\[\mathcal{A}\mathcal{X}=\mathcal{B} \tag{4}\]
where \(\mathcal{A}\) is a fixed coefficient matrix and only depends on the chosen measurement settings, \(\mathcal{X}\) is a column matrix which contains elements of the density matrix which needs to be reconstructed, and the input vector \(\mathcal{B}\) contains the actual experimental data.
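As an illustration, this linear-inversion step amounts to a least-squares solve; in the two-qubit case discussed later, \(\mathcal{A}\) has 33 rows (one per readout) and \(\mathcal{X}\) has 16 entries. The matrices below are placeholders, not the actual measurement settings:

```python
import numpy as np

# Solve A X = B in the least-squares sense (Eq. (4)).
A = np.random.rand(33, 16)          # stand-in for the fixed coefficient matrix
B = A @ np.random.rand(16)          # stand-in for the experimental data vector
X, *_ = np.linalg.lstsq(A, B, rcond=None)   # vectorized density-matrix elements
```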
The FFNN model is trained on a dataset containing randomly generated pure and mixed states. To generate these ensembles, consider a normal distribution \(\mathcal{N}(\mu=0,\sigma^{2}=1)\) with zero mean and unit variance. An \(n\)-qubit random pure state in the computational basis is represented by a \(2^{n}\)-dimensional column vector \(C\) whose \(i\)th entry \(c_{i}\) is generated from the random distribution as follows:
\[c_{i}=\frac{1}{N}(\mathfrak{d}_{i}+i\,\mathfrak{e}_{i}) \tag{5}\]
where \(\mathfrak{d}_{i},\mathfrak{e}_{i}\) are randomly chosen from the distribution \(\mathcal{N}\) and \(N\) is a normalization factor ensuring that \(C\) represents a unit vector.
For mixed states:
\[R=\mathcal{D}+i\,\mathcal{E} \tag{6}\]
where \(R\) is a \(2^{n}\times 2^{n}\) matrix whose elements (the entries of the real matrices \(\mathcal{D}\) and \(\mathcal{E}\)) are randomly sampled from the normal distribution \(\mathcal{N}\). Using the \(R\) matrix, the corresponding mixed state density matrix \(\rho_{\text{mix}}\) is constructed as \(\rho_{\text{mix}}=\frac{RR^{\dagger}}{\text{Tr}(RR^{\dagger})}\).
Figure 1: (Color online) Basic unit of an ANN model, where the \(x_{i}\) are the inputs, \(w_{i}\) are the weights, \(b\) is the bias, \(\sum\) is the summation function, \(f\) is an activation function and \(\tilde{y}\) is the output of the ANN.
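A minimal NumPy sketch of this sampling procedure is given below (function names are ours):

```python
import numpy as np

rng = np.random.default_rng()

def random_pure_state(n_qubits: int) -> np.ndarray:
    """Random pure state with entries (d_i + i e_i)/N, d_i, e_i ~ N(0, 1)."""
    c = rng.standard_normal(2**n_qubits) + 1j * rng.standard_normal(2**n_qubits)
    c /= np.linalg.norm(c)
    return np.outer(c, c.conj())              # density matrix |C><C|

def random_mixed_state(n_qubits: int) -> np.ndarray:
    """Random mixed state rho = R R^dag / Tr(R R^dag), Eq. (6)."""
    d = 2**n_qubits
    R = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = R @ R.conj().T
    return rho / np.trace(rho)

# e.g. one element of the two-qubit training ensemble
rho = random_mixed_state(2)                   # Hermitian, positive, unit trace
```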
The FFNN is trained on both pure as well as mixed states and the appropriate density matrices are generated. After generating the density matrices \(\mathcal{X}_{i}\), the corresponding \(\mathcal{B}_{i}\) are computed using Eq.(4). The training elements \(\{\mathcal{B}_{i},\mathcal{X}_{i}\}\) are then used to train the FFNN model given in Figure 2, where \(\mathcal{B}_{i}\) are the inputs to the FFNN and \(\mathcal{X}_{i}\) are the corresponding labeled outputs.
QPT of a given quantum process is typically performed using the Kraus operator representation, wherein for a fixed operator basis set \(\{E_{i}\}\), a quantum map \(\Lambda\) acting on an input state \(\rho_{\text{in}}\) can be written as [31]:
\[\Lambda(\rho_{in})=\sum_{m,n}\chi_{mn}E_{m}\rho_{in}E_{n}^{\dagger} \tag{7}\]
where \(\chi_{mn}\) are the elements of the process matrix \(\chi\) characterizing the quantum map \(\Lambda\). The \(\chi\) matrix can be experimentally determined by preparing a complete set of linearly independent input states, estimating the output states after the action of the map, and finally computing the elements \(\chi_{mn}\) from these experimentally estimated output states via linear equations of the form [32]:
\[\beta\vec{\chi}=\vec{\lambda} \tag{8}\]
where \(\beta\) is a coefficient matrix, \(\vec{\chi}\) contains the elements \(\{\chi_{mn}\}\) which are to be determined and \(\vec{\lambda}\) is a vector representing the experimental data.
The training data set for using the FFNN model to perform QPT is constructed by randomly generating a set of unitary operators. The generated unitary operators act upon the input states \(\rho_{in}=\{|0\rangle,|1\rangle,\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle),\frac{1}{\sqrt{2}}(|0\rangle+i|1\rangle)\}^{\otimes n}\), yielding \(\rho_{out}=U\rho_{in}U^{\dagger}\). All the output states \(\rho_{out}\) are stacked to form \(\vec{\lambda}\). Finally, \(\vec{\chi}\) is computed using Eq. (8). The training elements \(\{\vec{\lambda}_{i},\vec{\chi}_{i}\}\) are then used to train the FFNN model, where \(\vec{\lambda}_{i}\) acts as the input to the FFNN and \(\vec{\chi}_{i}\) is the corresponding labeled output.
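A sketch of how one such training pair can be generated is shown below, using a Haar-random unitary from SciPy; the helper names are ours, and stacking the output states yields the \(\vec{\lambda}\) of Eq. (8):

```python
import numpy as np
from scipy.stats import unitary_group

# One QPT training sample: a random unitary acting on the tensor products
# of the four linearly independent single-qubit input states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
kets = [ket0, ket1, (ket0 + ket1) / np.sqrt(2), (ket0 + 1j * ket1) / np.sqrt(2)]

n = 2
U = unitary_group.rvs(2**n)                       # Haar-random two-qubit unitary
rho_in = [np.outer(np.kron(a, b), np.kron(a, b).conj())
          for a in kets for b in kets]            # 16 input states
rho_out = [U @ r @ U.conj().T for r in rho_in]
lam = np.concatenate([r.ravel() for r in rho_out])   # stacked output states
```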
To perform tomography of a given state or process, one has to perform a series of experiments; the input vector is then constructed from the outputs (readouts) of these experiments. In the case of standard QST or QPT, a tomographically complete set of experiments needs to be performed, and the input vector corresponding to this tomographically complete set is referred to as the full data set. To perform QST and
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.6944 & 0.8507 & 0.8716 \\
1000 & 0.8793 & 0.8994 & 0.9025 \\
5000 & 0.9231 & 0.9262 & 0.9285 \\
10000 & 0.9278 & 0.9321 & 0.9332 \\
20000 & 0.9333 & 0.9362 & 0.9393 \\
80000 & 0.9413 & 0.9433 & 0.9432 \\ \hline \end{tabular}
\end{table}
Table 2: Average state fidelities obtained after training the FFNN model to perform QST on 3000 test three-qubit states (\(M_{\text{data}}=120\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.8290 & 0.9176 & 0.9224 \\
1000 & 0.9244 & 0.9287 & 0.9298 \\
5000 & 0.9344 & 0.9379 & 0.9389 \\
10000 & 0.9378 & 0.9390 & 0.9400 \\
20000 & 0.9394 & 0.9414 & 0.9409 \\
80000 & 0.9426 & 0.9429 & 0.9422 \\ \hline \end{tabular}
\end{table}
Table 1: Average state fidelities obtained after training the FFNN model to perform QST on 3000 test two-qubit states (\(M_{\text{data}}=20\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
Figure 2: (Color online) Flowchart illustrating the FFNN model used to perform QST on two-qubit quantum states generated on an NMR quantum processor; on the left, \(\rho_{in}\) represents the state which is to be tomographed; IY denotes a tomographic operation, which is followed by signal detection, the set of depicted NMR spectra are those obtained after the tomographic measurement. The FFNN with two hidden layers is represented next, which then uses a reduced data set to reconstruct the final experimental tomographs represented on the right.
QPT via FFNN on a heavily reduced data set of size \(m\), a reduced-size input vector \(\vec{b}_{m}\) with fewer elements (and a correspondingly reduced \(\vec{\lambda}_{m}\)) is constructed by randomly selecting \(m\) elements from the full input vectors while the remaining elements are set to 0 (zero padding); these reduced input vectors, together with the corresponding labeled output vectors, are used to train the FFNN.
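A minimal sketch of this zero-padding procedure (the function name is ours):

```python
import numpy as np

def reduce_input(b_full: np.ndarray, m: int, rng=None) -> np.ndarray:
    """Keep m randomly chosen readouts of the full input vector and
    zero-pad the remaining entries."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.choice(b_full.size, size=m, replace=False)
    b_reduced = np.zeros_like(b_full)
    b_reduced[keep] = b_full[keep]
    return b_reduced
```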
The FFNN was trained and implemented using the Keras Python library [33] with the TensorFlow backend, on an Intel Xeon processor with 48 GB RAM and a CPU base speed of 3.90 GHz. To perform QST and QPT, the LeakyReLU (\(\alpha=0.5\)) activation function was used for both the input and the hidden layers of the FFNN:
\[\text{LeakyReLU}(x)=\begin{cases}x\,,&x>0\\ \alpha x\,,&x\leq 0\end{cases} \tag{9}\]
A linear activation function was used for the output layer. A cosine similarity loss function, \(\mathcal{L}=\arccos\left(\frac{\vec{y}\cdot\vec{\tilde{y}}}{\|\vec{y}\|\,\|\vec{\tilde{y}}\|}\right)\), was used for validation, and the _adagrad_ (\(\eta=0.5\)) optimizer, with learning rate \(\eta\), was used to train the network. The _adagrad_ optimizer adapts the learning rate relative to how frequently a parameter gets updated during training.
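For illustration, the two-qubit QST network described here can be assembled in Keras as follows; the output dimension (16 real parameters of the vectorized two-qubit density matrix) and the use of Keras' built-in CosineSimilarity loss (the negative cosine rather than the arccos form quoted above) are our assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two-qubit QST network: 33 readouts in, hidden layers of 100/100/50 neurons
# with LeakyReLU(alpha=0.5), linear output layer.
model = keras.Sequential([
    keras.Input(shape=(33,)),
    layers.Dense(100), layers.LeakyReLU(alpha=0.5),
    layers.Dense(100), layers.LeakyReLU(alpha=0.5),
    layers.Dense(50),  layers.LeakyReLU(alpha=0.5),
    layers.Dense(16, activation="linear"),
])
model.compile(optimizer=keras.optimizers.Adagrad(learning_rate=0.5),
              loss=keras.losses.CosineSimilarity())
```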
The FFNN was used to perform QST on 3000 two-qubit and three-qubit test quantum states and to perform QPT on 3000 two-qubit test quantum processes for training datasets of different sizes. The number of epochs was varied from 50 to 150 for each dataset, where an epoch refers to one iteration of the training dataset during the FFNN training process. The computed average fidelities for the 3000 two-qubit and three-qubit test quantum states and the 3000 two-qubit test quantum processes are shown in Tables 1, 2 and 3, respectively; \(M_{\text{data}}\) refers to the reduced size of the data set. After comparing the effects of the training data size, the value of \(M_{\text{data}}\) and the number of epochs, the maximum size of the training data set was chosen to be 80000 and the maximum number of epochs was set to 150 for performing QST and QPT. After 150 training epochs the validation loss function remained constant.
## III FFNN based QST on experimental data
We used an NMR quantum processor as the experimental testbed to generate data for the FFNN model. We applied the FFNN to perform QST of two-qubit and three-qubit quantum states using a heavily reduced set of noisy data generated on the NMR quantum processor. The performance of the FFNN was evaluated by computing the average state or average process fidelity. The state fidelity is given by [34]:
\[\mathcal{F}=\frac{\left|\operatorname{Tr}\left[\rho_{\text{FFNN}}\rho_{\text {STD}}^{\dagger}\right]\right|}{\sqrt{\operatorname{Tr}\left[\rho_{\text{FFNN }}^{\dagger}\rho_{\text{FFNN}}\right]\operatorname{Tr}\left[\rho_{\text{STD }}^{\dagger}\rho_{\text{STD}}\right]}} \tag{10}\]
where \(\rho_{\text{FFNN}}\) and \(\rho_{\text{STD}}\) are the density matrices obtained via the FFNN and the standard linear inversion method, respectively. The process fidelity can be computed by replacing the \(\rho\) in Eq. (10) by \(\chi\), where \(\chi_{\text{FFNN}}\) and \(\chi_{\text{STD}}\) are process matrices obtained via the FFNN and standard linear inversion method, respectively.
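Eq. (10) translates directly into a few lines of NumPy; the sketch below applies equally to density and process matrices (the function name is ours):

```python
import numpy as np

def fidelity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized overlap of Eq. (10) between two density (or process) matrices."""
    num = abs(np.trace(a @ b.conj().T))
    den = np.sqrt(np.trace(a.conj().T @ a).real * np.trace(b.conj().T @ b).real)
    return num / den
```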
QST of a two-qubit NMR system is typically performed using a set of four unitary rotations: \(\{II,IX,IY,XX\}\), where \(I\) denotes the identity operation and \(X(Y)\) denotes a \(90^{\circ}\)\(x(y)\) rotation on the specified qubit. The input vector \(\vec{b}\) (Eq. (4)) is constructed by applying the tomographic pulses followed by measurement, wherein the signal recorded in the time domain is Fourier transformed to obtain the NMR spectrum. For two qubits, there are four peaks in the NMR spectrum and each measurement yields eight elements of the vector \(\vec{b}\); the dimension of the input vector \(\vec{b}\) is \(33\times 1\) (32 from tomographic pulses and 1 from the unit trace condition). Similarly, QST of a three-qubit NMR system is typically performed using a set of seven unitary rotations: \(\{III,IIY,IYY,YII,XYX,XXY,XXX\}\). Each measurement produces 12 resonance peaks in the NMR spectrum (4 per qubit); the dimension of the input vector \(\vec{b}\) is \(169\times 1\). To evaluate the performance of the FFNN model in achieving full QST of two-qubit and three-qubit states, we experimentally prepared 100 two-qubit states and 128 three-qubit states using different preparation settings and calculated the average fidelity between the density matrix predicted via the FFNN model and that obtained using the standard linear inversion method for QST. We also performed full FFNN-based QST of maximally entangled two-qubit Bell states and three-qubit GHZ and biseparable states using a heavily reduced data set.
Figure 3: (Color online) Fidelity (\(\mathcal{\bar{F}}\)) between the FFNN model and the standard linear inversion method vs size of the heavily reduced dataset (\(M_{data}\)), for QST performed on (a) 100 two-qubit states and (b) 128 three-qubit states respectively. The states are numbered on the \(x\)-axis and the color coded bar on the right represents the value of the fidelity.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.4904 & 0.5421 & 0.5512 \\
2000 & 0.6872 & 0.7090 & 0.7202 \\
15000 & 0.7947 & 0.8128 & 0.8218 \\
20000 & 0.8047 & 0.8203 & 0.8295 \\
50000 & 0.8305 & 0.8482 & 0.8598 \\
80000 & 0.8441 & 0.8617 & 0.8691 \\ \hline \end{tabular}
\end{table}
Table 3: Average process fidelities obtained after training the FFNN model to perform QPT on 3000 two-qubit test quantum processes (\(M_{\text{data}}=200\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
The FFNN model was trained on 80,000 states to perform QST. To perform FFNN-based QST on two- and three-qubit states, we used three hidden layers containing 100, 100 and 50 neurons and 300, 200 and 100 neurons, respectively. The performance of the trained FFNN is shown in Figure 3. The fidelity between the density matrices obtained via the FFNN and the standard linear inversion method for 100 experimentally generated two-qubit states and for 128 experimentally generated three-qubit states is shown in Figures 3(a) and (b), respectively. The reduced input vector size \(M_{\text{data}}\) is plotted on the \(y\)-axis and the quantum states are numbered along the \(x\)-axis.
The performance of the FFNN for QST is evaluated in Figure 4 by computing the average state fidelity \(\bar{\mathcal{F}}\) calculated over a set of test/experimental states. The reduced size \(M_{data}\) of the input vector which was fed into the FFNN is plotted along the \(x\)-axis. The average fidelity \(\bar{\mathcal{F}}\) and the standard deviation \(\sigma\) in the average state fidelity \(\bar{\mathcal{F}}\) are plotted along the \(y\)-axis in (a) and (c) for two-qubit states and in (b) and (d) for three-qubit states, respectively. For a given value of \(M_{data}\), the average fidelity \(\bar{\mathcal{F}}_{i}=\frac{1}{50}\sum_{n=1}^{50}\mathcal{F}_{n}\) of a given quantum state \(\rho_{i}\) predicted via the FFNN is calculated by randomly selecting \(M_{data}\) elements from the corresponding full input vector \(\vec{b}\) 50 times. For test data sets (blue circles), the performance of the FFNN is evaluated by computing the average fidelity \(\bar{\mathcal{F}}=\frac{1}{3000}\sum_{n=1}^{3000}\bar{\mathcal{F}}_{n}\) over 3000 two-qubit and three-qubit states. For experimental data sets (red triangles), the performance of the FFNN is evaluated by computing the average fidelity \(\bar{\mathcal{F}}\) over 100 two-qubit and 128 three-qubit states, respectively. The standard deviation \(\sigma\) in the average state fidelity \(\bar{\mathcal{F}}\) is:
\[\sigma=\sqrt{\frac{\sum_{i=1}^{N}(\bar{\mathcal{F}}_{i}-\bar{\mathcal{F}})^{2 }}{N-1}} \tag{11}\]
As inferred from Figure 4, the FFNN model is able to predict an unknown two-qubit test state with average fidelity \(\bar{\mathcal{F}}\geq 0.8392\pm 0.084\) for a reduced data set of size \(M_{data}\geq 8\), and is able to predict an unknown three-qubit test state with average fidelity \(\bar{\mathcal{F}}\geq 0.8630\pm 0.0407\) for a reduced data set of size \(M_{data}\geq 60\). Similarly, for experimental quantum states, the FFNN model is able to predict two-qubit states with an average fidelity \(\bar{\mathcal{F}}\geq 0.8466\pm 0.1450\) for a reduced data set of size \(M_{data}\geq 12\), while for three-qubit experimental states, the FFNN is able to predict the unknown quantum state with average fidelity \(\bar{\mathcal{F}}\geq 0.8327\pm 0.0716\) for a reduced data set of size \(M_{data}\geq 60\). When the full input vector \(\vec{b}\) is considered, the average fidelity calculated over 3000 two- and three
Figure 4: (Color online) Average fidelity (\(\bar{\mathcal{F}}\)) and standard deviation \(\Delta\bar{\mathcal{F}}\) plotted as a function of the size of the heavily reduced dataset (\(M_{data}\)) computed for FFNN based QST on two qubits ((a) and (c)) and on three qubits ((b) and (d)), respectively. The average fidelity for the test dataset (blue dots) is calculated for 3000 states while for experimental data-set the average fidelity is calculated by randomly choosing the reduced dataset \(M_{data}\) elements from the full set for 100 2-qubit states and 128 3-qubit states, and then repeating the procedure 50 times.
Figure 5: (Color online) Fidelity (\(\bar{\mathcal{F}}\)) versus size of the heavily reduced dataset (\(M_{\text{data}}\)) computed for FFNN based QST of (a) two-qubit Bell states, where the different bars correspond to four different Bell states, and (b) three-qubit GHZ (black and red cross-hatched bars) and Biseparable states (gray and horizontal blue bars).
qubit test states turns out to be \(\bar{\mathcal{F}}=0.9993\) and \(\bar{\mathcal{F}}=0.9989\), respectively. The average fidelity calculated over 100 two-qubit and 128 three-qubit experimental states turns out to be \(\bar{\mathcal{F}}=0.9983\) and \(\bar{\mathcal{F}}=0.9833\), respectively, for the full input data set.
The FFNN model was applied to perform QST of two-qubit maximally entangled Bell states and three-qubit GHZ and biseparable states. Figure 5 depicts the experimental fidelities \(\mathcal{F}(\rho_{\text{FFNN}},\rho_{\text{STD}})\) of two-qubit Bell states and three-qubit GHZ and biseparable states calculated between the density matrices predicted via FFNN and those obtained via standard linear inversion QST for a reduced data set of size \(M_{\text{data}}\). The black, crosshatched red, gray and horizontal blue bars in Figure 5(a) correspond to the Bell states \(|B_{1}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\), \(|B_{2}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}\), \(|B_{3}\rangle=(|00\rangle-|11\rangle)/\sqrt{2}\) and \(|B_{4}\rangle=(|01\rangle+|10\rangle)/\sqrt{2}\), respectively. The black and red cross-hatched bars in Figure 5(b) correspond to three-qubit GHZ states \(|\psi_{1}\rangle=(|000\rangle+|111\rangle)/\sqrt{2}\) and \(|\psi_{2}\rangle=(|010\rangle+|101\rangle)/\sqrt{2}\) respectively, while the gray and horizontal blue bars correspond to three-qubit biseparable states \(|\psi_{3}\rangle=(|000\rangle+|001\rangle+|110\rangle+|111\rangle)/2\) and \(|\psi_{4}\rangle=(|000\rangle+|010\rangle+|101\rangle+|111\rangle)/2\), respectively. The bar plots in Figure 5 clearly demonstrate that the FFNN model is able to predict the two- and three-qubit entangled states with very high fidelity for a reduced data set.
We note here in passing that the size of the heavily reduced dataset \(M_{\text{data}}\) is equivalent to the number of experimental readouts which are used to perform QST (QPT), while the standard QST (QPT) methods based on linear inversion always use the full dataset. Hence, the highest value of \(M_{\text{data}}\) is the same as the size of the full dataset which is 32, 168 and 256 for two-qubit and three-qubit QST and for two-qubit QPT, respectively.
## IV FFNN based QPT on experimental data
We used the FFNN model to perform two-qubit QPT for three different experimental NMR data sets: (i) unitary quantum gates, (ii) non-unitary processes such as the natural NMR decoherence processes and a pulsed field gradient, and (iii) correlated bit flip, correlated phase flip and correlated bit+phase flip noise channels, experimentally simulated using the duality algorithm on an NMR quantum processor.
### FFNN Reconstruction of Two-Qubit Unitary and Non-Unitary Processes
The FFNN model was trained on 80,000 synthesized two-qubit quantum processes using a heavily reduced data set, with three hidden layers containing 600, 400 and 300 neurons, respectively. The performance of the trained FFNN was evaluated using 3000 test and 10 experimentally implemented quantum processes on NMR.
The FFNN results for QPT of various two-qubit experimental quantum processes are shown in Figure 6. The quality of the FFNN is evaluated by means of the average process fidelity \(\bar{\mathcal{F}}\), between the process matrix predicted by the FFNN (\(\chi_{\text{FFNN}}\)) using a reduced data set of size \(M_{data}\) and the process matrix obtained via the standard QPT method (\(\chi_{\text{STD}}\)) using a full data set.
Figure 6(a) depicts the performance of the FFNN evaluated on 3000 two-qubit test quantum processes (blue circles), where the \(y\)-axis denotes the average fidelity \(\bar{\mathcal{F}}=\frac{1}{3000}\sum_{n=1}^{3000}\bar{\mathcal{F}}_{n}\), where \(\bar{\mathcal{F}}_{n}=\frac{1}{300}\sum_{i=1}^{300}\mathcal{F}_{i}\) is the average fidelity of the \(n\)th test quantum process, calculated by randomly constructing an input vector of the given size and repeating the procedure 300 times. Similarly, the red triangles and pink stars correspond to four unitary and six non-unitary quantum processes respectively, obtained from experimental data. The plots given in Figure 6(a) clearly show that the FFNN model is able to predict unitary as well as non-unitary quantum processes from a noisy experimental reduced data set, with good accuracy. For instance, for \(M_{\text{data}}=160\), the FFNN is able to predict the test process with \(\bar{\mathcal{F}}=0.8411\pm 0.0284\), whereas the experimental unitary and non-unitary processes are obtained with \(\bar{\mathcal{F}}=0.8447\pm 0.038\) and \(0.8187\pm 0.0493\), respectively. Hence, the value of \(M_{\text{data}}\) can be chosen accordingly, depending on the desired accuracy and precision. The standard deviation in average fidelity \(\bar{\mathcal{F}}\) is calculated using Eq. (11) over 3000 quantum processes and is depicted in Figure 6(b). From Figure 6(b), it can be
Figure 6: (Color online) (a) Average process fidelity (\(\bar{\mathcal{F}}\)) and (b) Standard deviation \(\Delta\bar{\mathcal{F}}\) obtained for QPT of two-qubit processes using the FFNN model versus size of the dataset (\(M_{data}\)). For the test dataset (blue dots) the average fidelity is calculated for 3000 processes while for experimental unitary processes (red triangles) and non-unitary processes (magenta stars) the average fidelity is calculated by randomly choosing the reduced dataset \(M_{data}\) elements from the full set for four unitary quantum gates and six non-unitary processes, and then repeating the procedure 300 times.
observed that the FFNN model performs better for the QPT of unitary processes as compared to non-unitary processes, since the corresponding process matrices are more sparse.
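The Monte-Carlo averaging procedure described above can be summarized in a short sketch. This is our illustration rather than the authors' code: `ffnn_predict` and `process_fidelity` are hypothetical placeholders supplied by the caller, standing in for the trained network and the fidelity measure used in the paper.

```python
# A minimal sketch of the averaging over random reduced datasets
# (illustrative only, not the authors' analysis code).
import numpy as np

def average_fidelity(full_vector, chi_std, m_data, ffnn_predict,
                     process_fidelity, repeats=300, rng=None):
    """Mean and standard deviation of the fidelity over `repeats`
    randomly chosen reduced datasets of size `m_data`."""
    if rng is None:
        rng = np.random.default_rng()
    fids = []
    for _ in range(repeats):
        reduced = np.zeros_like(full_vector)
        idx = rng.choice(full_vector.size, size=m_data, replace=False)
        reduced[idx] = full_vector[idx]      # keep only m_data readouts
        chi_ffnn = ffnn_predict(reduced)     # FFNN reconstruction
        fids.append(process_fidelity(chi_ffnn, chi_std))
    return np.mean(fids), np.std(fids)
```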
The experimental fidelity obtained via FFNN of individual quantum processes is given in Figure 8, where the average fidelity is calculated for a set of quantum processes for a given value of the reduced dataset \(M_{\text{data}}\). For the test dataset, \(\bar{\mathcal{F}}\) is calculated over \(3000\) test processes, whereas for the experimental data set, \(\bar{\mathcal{F}}\) is computed over four unitary and six non-unitary processes. For the unitary quantum gates Identity, CX180, CNOT and CY90 (corresponding to a 'no operation' gate, a bit flip gate, a controlled rotation about the \(x\)-axis by \(180^{\circ}\), and a controlled rotation about the \(y\)-axis by \(90^{\circ}\), respectively), the FFNN is able to predict the corresponding process matrix with average fidelities of \(\bar{\mathcal{F}}=0.8767\pm 0.0356,0.8216\pm 0.0463,0.8314\pm 0.0387\) and \(0.8489\pm 0.0315\) respectively, using a reduced data set of size \(160\). The six non-unitary processes to be tomographed include free evolution processes for two different times: \(D1=0.05\) sec and \(D2=0.5\) sec, a magnetic field gradient pulse (MFGP), and three error channels, namely, a correlated bit flip (CBF) channel, a correlated phase flip (CPF) channel, and a correlated bit-phase flip (CBPF) channel. Several noise channels act simultaneously on all the qubits during the free evolution times \(D1\) and \(D2\), such as the phase damping channel (corresponding to the T\({}_{2}\) NMR relaxation process) and the amplitude damping channel (corresponding to the T\({}_{1}\) NMR relaxation process). The MFGP process is typically implemented using gradient coils in NMR hardware where the magnetic field gradient is along the \(z\)-axis. The MFGP process to be tomographed is a sine-shaped pulse of duration \(1000\,\mu\)s, discretized into \(100\) time intervals, with an applied gradient strength of \(15\%\). For the intrinsic non-unitary quantum processes D1, D2, and the MFGP, the FFNN is able to predict the corresponding process matrix with average fidelities of \(\bar{\mathcal{F}}=0.8373\pm 0.0381,0.7607\pm 0.0690\) and \(0.7858\pm 0.0703\) respectively, using a reduced data set of size \(160\). It is evident from the computed fidelity values that the FFNN performs better if the process matrix is sparse.
Although our main goal is to prove that the FFNN is able to reconstruct quantum states and processes with a high fidelity even for heavily reduced datasets, we also wanted to verify the efficacy of the network when applied to a complete data set. The values of process fidelity obtained via FFNN for the full data set are shown in Table 4, where it is clearly evident that the FFNN is able to predict the underlying quantum process with very high fidelity, and works accurately even for non-unitary quantum processes. The somewhat lower fidelity of the D2 process as compared to other quantum processes can be attributed to the corresponding process matrix being less sparse.
### FFNN Reconstruction of Correlated Noise Channels
The duality simulation algorithm (DSA) can be used to simulate fully correlated two-qubit noise channels, namely the CBF, CPF and CBPF channels [35]. The FFNN model is then employed to fully characterize these channels. DSA allows us to simulate arbitrary dynamics of an open quantum system in a single experiment, where the ancilla system has a dimension equal to the total number of Kraus operators characterizing the given quantum channel. An arbitrary quantum channel having \(d\) Kraus operators can be simulated via DSA using unitary operations \(V\), \(W\), and the control operation \(U_{c}=\sum_{i=0}^{d-1}|i\rangle\langle i|\otimes U_{i}\) such that the following condition is satisfied:
\[E_{k}=\sum_{i=0}^{d-1}W_{ki}V_{i0}U_{i}\quad(k=0,1,2,...,d-1) \tag{12}\]
where \(E_{k}\) is the Kraus operator, and \(V_{i0}\) and \(W_{ki}\) are the elements of \(V\) and \(W\), respectively. The quantum circuit for DSA is given in Reference [35], where the initial state of the system is encoded as \(|0\rangle_{a}\otimes|\psi\rangle_{s}\) which is then acted upon by \(V\otimes I\) followed by \(U_{c}\) and \(W\otimes I\), and finally a measurement is performed on the system qubits.
For this study, the two-qubit CBF, CPF and CBPF channels are characterized using two Kraus operators as:
\[\text{CBF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{x}^{\otimes 2}\] \[\text{CPF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{z}^{\otimes 2}\] \[\text{CBPF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{y}^{\otimes 2} \tag{13}\]
where \(p\) is the noise strength, which can also be interpreted as the probability with which the state of the system is affected by the given noise channel. For \(p=0\) the state of the system is unaffected, and for \(p=1\) the state of the system is maximally affected by the given noise channel. Since all three noise channels considered in this study have only two Kraus operators, they can be simulated using a single ancilla qubit. Hence, for all three noise
| Unitary Process | \(\mathcal{F}\) | Non-Unitary Process | \(\mathcal{F}\) |
| --- | --- | --- | --- |
| Test | 0.9997 | D1 | 0.9987 |
| Identity | 0.9943 | D2 | 0.9635 |
| CNOT | 0.9996 | Grad | 0.9917 |
| CX180 | 0.9996 | CBF | 0.9943 |
| CY90 | 0.9996 | CPF | 0.9996 |
| | | CBPF | 0.9996 |

Table 4: Experimental fidelities \(\mathcal{F}\) computed between \(\chi_{\text{FFNN}}\), the process matrix predicted via FFNN using a full data set, and \(\chi_{\text{STD}}\), the process matrix obtained via the standard QPT method.
channels, one can set \(V=\left(\begin{array}{cc}\sqrt{1-p}&-\sqrt{p}\\ \sqrt{p}&\sqrt{1-p}\end{array}\right)\), \(W=I\), and \(U_{0}=I\otimes I\). The different \(U_{1}\) for CBF, CPF and CBPF channels are set to \(\sigma_{x}\otimes\sigma_{x}\), \(\sigma_{z}\otimes\sigma_{z}\) and \(\sigma_{y}\otimes\sigma_{y}\) respectively, such that the condition given in Eq. (12) is satisfied. Note that \(V\) can be interpreted as a rotation about the \(y\)-axis by an angle \(\theta\) such that \(p=\sin^{2}{(\frac{\theta}{2})}\).
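As a hedged numerical sketch (assuming only NumPy; not the experimental code), the Kraus operators of Eq. (13) and the corresponding channel action \(\rho\mapsto\sum_k E_k\rho E_k^{\dagger}\) can be written as:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def correlated_kraus(p, sigma):
    """Two Kraus operators E0, E1 of a correlated two-qubit channel, Eq. (13)."""
    E0 = np.sqrt(1 - p) * np.kron(I2, I2)
    E1 = np.sqrt(p) * np.kron(sigma, sigma)
    return [E0, E1]

def apply_channel(kraus, rho):
    """Channel action: rho -> sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

# Example: correlated bit flip (CBF, sigma = sx) with noise strength p = 0.3
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                                   # |00><00|
rho_out = apply_channel(correlated_kraus(0.3, sx), rho)
assert np.isclose(np.trace(rho_out).real, 1.0)    # trace preserved
```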
The generalized quantum circuit using DSA to simulate all three error channels is given in Figure 7. For the CBF channel, \(U_{c}\) turns out to be a Control-NOT-NOT gate, where the value of \(\theta\) (Figure 7) is zero. For the CPF and the CBPF channels, the values of \(\theta,\phi\) (the angle and axis of rotation) are \((\frac{\pi}{2},y)\) and \((\frac{\pi}{2},z)\), respectively. The output from the tomographic measurements on the system qubits forms the column vector \(\overrightarrow{\lambda}\). For a given value of \(p\), the full vector \(\overrightarrow{\lambda}\) can be constructed by preparing the system qubits in a complete set of linearly independent input states.
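One can verify numerically (a small illustrative check, independent of the experiment) that this choice of \(V\), \(W\), and \(U_i\) reproduces the CBF Kraus operators through Eq. (12):

```python
import numpy as np

p = 0.3
theta = 2 * np.arcsin(np.sqrt(p))     # so that p = sin^2(theta / 2)
V = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
W = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
U = [np.eye(4, dtype=complex), np.kron(sx, sx)]   # U_0 = I(x)I, U_1 = sx(x)sx

# Eq. (12): E_k = sum_i W_{ki} V_{i0} U_i
E = [sum(W[k, i] * V[i, 0] * U[i] for i in range(2)) for k in range(2)]
assert np.allclose(E[0], np.sqrt(1 - p) * np.eye(4))      # E0 of the CBF channel
assert np.allclose(E[1], np.sqrt(p) * np.kron(sx, sx))    # E1 of the CBF channel
```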
As can be seen from Figure 8, the average fidelities are \(\bar{\mathcal{F}}=0.8738\pm 0.0366\), \(0.8272\pm 0.0403\) and \(0.8273\pm 0.0416\) for the experimentally simulated noise channels CBF, CPF and CBPF, respectively, using a reduced data set of size 160. Since all three correlated noise channels are characterized by only two Kraus operators, the corresponding process matrices turn out to be sufficiently sparse (with only two non-zero elements in the process matrix). The FFNN can hence be used to accurately tomograph such noise channels with arbitrary noise strength using a heavily reduced dataset.
## V Conclusions
Much recent research has focused on training artificial neural networks to perform several quantum information processing tasks including tomography, entanglement characterization and quantum gate optimization. We designed and applied a FFNN to perform QST and QPT on experimental NMR data, in order to reconstruct the density and process matrices which characterize the true quantum state and process, respectively. The FFNN is able to predict the true quantum state and process with very high fidelity and performs in an exemplary fashion even when the experimental data set is heavily reduced. Compressed sensing is another method which also uses reduced data sets to perform tomography of quantum states and processes. However, this method requires prior knowledge such as system noise and also requires that the basis in which the desired state (process) is to be tomographed should be sufficiently sparse. The FFNN, on the other hand, does not need any such prior knowledge and works well for all types of quantum states and processes. Moreover, working with a heavily reduced data set has the benefit of substantially reducing experimental complexity, since the number of experiments required for tomographic completeness grows exponentially with system size. One can perform very few experiments and feed this minimal experimental dataset as inputs to the FFNN, which can then reconstruct the true density or process matrix. Our results hence demonstrate that FFNN architectures are promising methods for performing QST and QPT of large qubit registers and are an attractive alternative to standard methods, since they require substantially fewer resources.
Figure 7: Quantum circuit to simulate the action of a correlated bit flip, a correlated phase flip and a correlated bit+phase flip noise channel. \(|\psi\rangle_{s}\) are a set of linearly independent two-qubit input states, \(|0\rangle_{a}\) denotes the state of the ancilla, \(V\) is a single-qubit rotation gate and \(U_{c}\) denotes a set of control operations with varying values of \((\theta,\phi)\), depending on the noise channel being simulated.
Figure 8: (Color online) Process fidelity (\(\bar{\mathcal{F}}\)) between FFNN model and the standard linear inversion method vs size of the heavily reduced dataset (\(M_{data}\)), for different unitary and non-unitary quantum processes. The various quantum processes are labeled on the \(x\)-axis and the color coded bar on the right represents the value of the fidelity.
###### Acknowledgements.
All experiments were performed on a Bruker Avance-III 600 MHz FT-NMR spectrometer at the NMR Research Facility at IISER Mohali. Arvind acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-1/2019/Q-68. K.D. acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-2/2019/Q-74.
|
2306.15613 | Incommensurate Magnetic Order in the $\mathbb{Z}_2$ Kagome Metal
GdV$_6$Sn$_6$ | We characterize the magnetic ground state of the topological kagome metal
GdV$_6$Sn$_6$ via resonant X-ray diffraction. Previous magnetoentropic studies
of GdV$_6$Sn$_6$ suggested the presence of a modulated magnetic order distinct
from the ferromagnetism that is easily polarized by the application of a
magnetic field. Diffraction data near the Gd-$L_2$ edge directly resolve a
$c$-axis modulated spin structure order on the Gd sublattice with an
incommensurate wave vector that evolves upon cooling toward a partial lock-in
transition. While equal moment (spiral) and amplitude (sine) modulated spin
states can not be unambiguously discerned from the scattering data, the overall
phenomenology suggests an amplitude modulated state with moments predominantly
oriented in the $ab$-plane. Comparisons to the ``double-flat" spiral state
observed in Mn-based $R$Mn$_6$Sn$_6$ kagome compounds of the same structure
type are discussed. | Zach Porter, Ganesh Pokharel, Jong-Woo Kim, Phillip J. Ryan, Stephen D. Wilson | 2023-06-27T16:54:43Z | http://arxiv.org/abs/2306.15613v1 | # Incommensurate Magnetic Order in the \(\mathbb{Z}_{2}\) Kagome Metal GdV\({}_{6}\)Sn\({}_{6}\)
###### Abstract
We characterize the magnetic ground state of the topological kagome metal GdV\({}_{6}\)Sn\({}_{6}\) via resonant X-ray diffraction. Previous magnetoentropic studies of GdV\({}_{6}\)Sn\({}_{6}\) suggested the presence of a modulated magnetic order distinct from the ferromagnetism that is easily polarized by the application of a magnetic field. Diffraction data near the Gd-\(L_{2}\) edge directly resolve a \(c\)-axis modulated spin structure order on the Gd sublattice with an incommensurate wave vector that evolves upon cooling toward a partial lock-in transition. While equal moment (spiral) and amplitude (sine) modulated spin states can not be unambiguously discerned from the scattering data, the overall phenomenology suggests an amplitude modulated state with moments predominantly oriented in the \(ab\)-plane. Comparisons to the "double-flat" spiral state observed in Mn-based \(R\)Mn\({}_{6}\)Sn\({}_{6}\) kagome compounds of the same structure type are discussed.
Footnote †: email: [email protected]
Resonant X-ray diffraction measurements were performed at the Advanced Photon Source at Argonne National Laboratory. The diffractometer endstation (Huber Psi-circle) was equipped with a Joule-Thomson stage displex cryostat capable of reaching a base temperature of 1.9 K. Measurements were performed near the Gd-\(L_{2}\) resonance at \(E_{i}=7.933\) keV using an area detector (Dectris Pilatus 100K) directly after the sample. Polarization analysis of the scattered beam was performed for select scans using a pyrolytic graphite (0,0,6) flat single crystal analyzer placed between the sample and a scintillation point detector. Magnetization was measured using a Quantum Design Magnetic Property Measurement System (MPMS3) with a single crystal attached to a quartz paddle with GE varnish.
Figure 2 shows thermodynamic measurements characterizing the onset of magnetic order in GdV\({}_{6}\)Sn\({}_{6}\). Prior magnetoentropic studies identified the ground state as possibly noncollinear or modulated, due to the presence of a low-field entropy barrier in the ordered state [15] and inflections in the temperature-dependent susceptibility that evolve with increasing field [23]. Illustrating these low-field inflections, dc susceptibility \(\chi\)(_T_) data are plotted in Figure 2(a) with 10 mT applied within the basal \(ab\)-plane. Two features appear: the first is a cusp in the susceptibility at \(T_{AF1}=5.2\) K, followed by a second, more subtle cusp near \(T_{AF2}=3.8\) K.
Heat capacity data also capture these two temperature scales as plotted in Figure 2(b). Previously published \(C_{P}(T)\) data for GdV\({}_{6}\)Sn\({}_{6}\)[15] were analyzed to extract the magnetic entropy via removal of the phonon and charge contributions through a scaled subtraction of the nonmagnetic reference YV\({}_{6}\)Sn\({}_{6}\)[26]. The resulting magnetic \(C_{P,mag}(T)/T\) data are plotted along with the integrated entropy \(S_{mag}(T)\). Both \(T_{AF1}\) and \(T_{AF2}\) can be identified in \(C_{P,mag}(T)/T\) via the peak and inflection point marked as dashed lines in Figure 2 (b). We note here that the Schottky anomaly expected due to the mean-field splitting of the \(J=7/2\) ground state multiplet is expected to occur at \(T_{AF1}/4\approx 1.3\) K and is likely hidden in the shoulder of the broad magnetic entropy peak. There is substantial entropy that extends to temperatures far above the initial ordering temperature, and the entropy integration up to 30 K nearly reaches the expected \(R\)ln(8); however, uncertainties in the lattice subtraction and an imperfect lattice standard begin to dominate at these temperatures. Notably, there is substantial entropy that continues to be released at temperatures far below \(T_{AF1}\), and as we will show next, scattering data reflect this via a continued staging of wave vectors as the low temperature limit is approached.
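For reference, the entropy curve in Figure 2(b) is obtained by integrating \(C_{P,mag}/T\); a minimal sketch of this step (ours, assuming arrays of measured temperatures and lattice-subtracted heat capacities) is:

```python
import numpy as np

def magnetic_entropy(T, C_mag):
    """Cumulative trapezoidal integral S_mag(T) = int (C_mag / T') dT'.
    T must be sorted and strictly above 0 K."""
    integrand = C_mag / T
    dS = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(dS)))

# Comparison against the full J = 7/2 multiplet limit R ln(8):
R = 8.314  # gas constant, J / (mol K)
# S = magnetic_entropy(T, C_mag); print(S[-1] / (R * np.log(8)))
```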
To explore the origins of these two features in \(\chi(T)\) and \(C_{P,mag}(T)/T\) data, resonant X-ray scattering data were collected near the Gd-\(L_{2}\) absorption edge. Magnetic superlattice reflections appear with a **q**=(0, 0, _l_) wave vector below the ordering temperatures \(T_{AF1}\) and \(T_{AF2}\) in a manner which mirrors the staged transition observed in \(\chi(T)\). To further illustrate this staging, _L_-scans were performed about the **Q**=(0, 0, 3.5) position in three
Figure 1: Incommensurate magnetism in GdV\({}_{6}\)Sn\({}_{6}\). Gd forms hexagonal planes with one site per unit cell. Two possible Gd orders are shown: spiral and amplitude-modulated. The propagation direction is along \(c\) but the moment orientation is uncertain.
Figure 2: Thermodynamics of GdV\({}_{6}\)Sn\({}_{6}\). Panel (a) shows the dc magnetic susceptibility \(\chi\) measured on (zero) field cooling, denoted (Z)FC. Panel (b) shows the magnetic contribution to the heat capacity \(C_{P,mag}/T\) (red dots) and entropy \(S_{mag}\) (black line). Vertical dashed lines mark two phase transitions.
temperature regimes: \(T>T_{AF1}\), \(T_{AF1}>T>T_{AF2}\), and \(T_{AF2}>T\). The resulting data are plotted in Figure 3(a).
In Figure 3(a), upon cooling below \(T_{AF1}\) an incommensurate set of reflections appear near **Q**=(0, 0, 3.47) and **Q**=(0, 0, 3.53) with \(\textbf{q}_{IC}\approx(0, 0, 0.47)\) below 5.2(1) K. Upon further cooling below \(T_{AF2}\), an additional, weaker reflection appears at the commensurate **Q**=(0, 0, 3.50) position with \(\textbf{q}_{C}\)=(0, 0, 0.50) below 3.8(2) K and coexists with the incommensurate satellites. Both sets of peaks are long-range ordered, with widths of 0.007 r.l.u. along \(L\) and 0.003 r.l.u. along \(K\) that mirror those of nearby structural Bragg peaks. This indicates minimum correlation lengths of 0.2 microns that are constrained by the crystallinity of the sample.
To further probe the origin of the incommensurate satellite peaks, polarization analysis was performed to separate the magnetic \(\sigma-\pi^{\prime}\) scattering channel from the nonmagnetic \(\sigma-\sigma^{\prime}\) channel. The results are plotted in Figure 3(b), where the low temperature incommensurate peaks appear only in the \(\sigma-\pi^{\prime}\) channel, confirming their magnetic origin. The weaker commensurate peak is not clearly distinguishable in either polarization channel below \(T_{AF2}\); however, the analyzer greatly diminishes the signal-to-noise ratio in these measurements, and the slight enhancement near **Q**=(0, 0, 3.50) in the \(\sigma-\pi^{\prime}\) channel may indicate that this peak contributes as well. As will be shown next, the commensurate peak's temperature dependence correlates with the lower temperature anomaly in \(\chi\)(\(T\)), implying it also has a magnetic origin.
The detailed temperature dependence of both the \(\textbf{q}_{IC}\) and \(\textbf{q}_{C}\) peaks is plotted in Figure 4(a). Peaks were parameterized via line scans at each temperature, and fit with Gaussian line shapes that quantify the peak area and position upon cooling toward base temperature. The resulting thermal evolution of both the commensurate and incommensurate order parameters, as well as the incommensurate wave vector \(\mathbf{q}_{IC}\), is shown in Figures 4(b,c).
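A generic sketch of this peak parameterization (illustrative only, not the actual beamline analysis code) fits each line scan with a Gaussian line shape on a constant background using SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(L, area, center, sigma, bkg):
    """Unit-normalized Gaussian scaled by `area`, on a flat background."""
    return area / (sigma * np.sqrt(2 * np.pi)) * \
        np.exp(-0.5 * ((L - center) / sigma) ** 2) + bkg

def fit_peak(L, intensity, guess):
    """Return best-fit (area, center, sigma, bkg) and 1-sigma uncertainties."""
    popt, pcov = curve_fit(gaussian, L, intensity, p0=guess)
    return popt, np.sqrt(np.diag(pcov))

# e.g. popt, perr = fit_peak(L_vals, counts, guess=(1.0, 3.47, 0.005, 0.0))
```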
Figure 3: Overview of the resonant X-ray diffraction. Both panels show scans along (0,0,\(L\)) at the Gd-\(L_{2}\) resonance. Panel (a) shows scans at several temperatures without an analyzer crystal. Panel (b) shows scans using an analyzer crystal in the \(\sigma-\pi^{\prime}\) and \(\sigma-\sigma^{\prime}\) polarization channels.
Figure 4: Temperature dependence of the resonant X-ray diffraction. Panel (a) is a false color map of the scattering near (0, 0, 3.5). Three features are visible: two incommensurate reflections, and one half-integer commensurate reflection. Panels (b,c) are Gaussian fit results to these data; \(\textbf{q}_{IC}\) results are the weighted mean of both peaks. Panel (b) shows the area (integrated intensity) of \(\textbf{q}_{IC}\) (orange dots) and \(\textbf{q}_{C}\) (purple diamonds). Lines in (b) are guides to the eye of the form \(\sqrt{1-T/T_{AF}}\). Panel (c) shows the wave vector \(|\textbf{q}_{IC}|\) as a function of temperature.
Upon entering the incommensurate phase below \(T_{AF1}\), \(\mathbf{q}_{IC}\) begins to decrease with continued cooling and eventually reaches a minimum near \(T_{AF2}\). Upon further cooling below \(T_{AF2}\), weak scattering weight appears at the commensurate \(\mathbf{q}_{C}\) position and the wavevector \(\mathbf{q}_{IC}\) of the incommensurate peaks begins to _increase_. This trend continues down to the lowest temperature probed (\(\approx\) 2 K). This interplay suggests that the discommensuration of the \(\mathbf{q}_{IC}\) order is tied to the emergence of the \(\mathbf{q}_{C}\) order parameter, consistent with a partial lock-in transition.
In the susceptibility \(\chi(\mathit{T})\), the slight easy-plane anisotropy, combined with the fact that both \(T_{AF1}\) and \(T_{AF2}\) appear only under an in-plane field, suggests that the moments are predominantly in-plane. Magnetic scattering surveyed in multiple zones reveals that the scattering is strongest at \(c\)-axis aligned \(\mathbf{Q}\)=(0, 0, \(L\)) type positions, also consistent with an in-plane component [27]. However, in order to unambiguously determine the moment orientation, future measurements such as azimuthal scans at multiple wave vectors will be required.
The incommensurate state observed in our REXS measurements is commonly indicative of either a flat spiral state or an amplitude-modulated state as illustrated by cartoons in Fig. 1. A flat spiral is reminiscent of the "double flat" spiral states reported in Mn-based \(R\)Mn\({}_{6}\)Sn\({}_{6}\) compounds [28, 29]. A key distinction, however, is that the magnetic order in compounds such as YMn\({}_{6}\)Sn\({}_{6}\) and LuMn\({}_{6}\)Sn\({}_{6}\) derives from their Mn-based bilayer kagome networks, which naively host competing interlayer exchange interactions. Meanwhile magnetic order in \(R\)V\({}_{6}\)Sn\({}_{6}\) compounds is derived from their triangular lattice lanthanide networks, where extended (beyond nearest neighbor) interlayer exchange interactions are much less likely. This suggests that RKKY interactions play a dominant role in the stabilization of the modulated magnetic state.
A further distinction is the development of a commensurate harmonic \(\mathbf{q}_{C}\) at _low temperature_ in GdV\({}_{6}\)Sn\({}_{6}\) versus the high temperature commensurate harmonic in (Y, Lu)Mn\({}_{6}\)Sn\({}_{6}\) that splits into incommensurate wavelengths upon cooling [24, 29]. The onset of the \(\mathbf{q}_{C}\) harmonic upon cooling below \(T_{AF2}\) can be envisioned as the local formation of phase-locked modulations of antiferromagnetic planes that propagate within the longer wavelength, incommensurate state. The fact that this commensurate harmonic strengthens upon cooling toward the ground state points to the formation of a partial lock-in transition [30]--one which may develop further at lower temperatures. This transition toward an equal moment state at low temperature is suggestive of an amplitude modulated character to the incommensurate order. The substantial entropy remaining far below the onset of incommensurate order at \(T_{AF1}\)[31, 32] and the associated minimization of entropy achieved as the \(\mathbf{q}_{C}\) harmonic develops are also consistent with an amplitude modulated nature to the transition.
While the incommensurate state that forms below \(T_{AF1}\) mimics those of (Y,Lu)Mn\({}_{6}\)Sn\({}_{6}\), its origin should be distinct from the physics proposed in those compounds. The evolution of \(\mathbf{q}_{IC}\) upon cooling below \(T_{AF1}\) likely arises from entropic factors associated with an amplitude modulated state rather than lattice effects associated with unit cell changes tuning the balance of interlayer exchange interactions in a flat spiral picture. The incommensurate modulation wavevector \(\mathbf{q}_{IC}\) is roughly consistent with nesting between the small Fermi surface pockets identified along the M-L line of the Brillouin zone [23], again evoking an RKKY mechanism for the modulated state. Though future measurements examining whether the ordered spin state is collinear or noncollinear will be required, our current data demonstrate a complex condensation of magnetic order in the topological kagome metal GdV\({}_{6}\)Sn\({}_{6}\). This provides an interesting avenue for engineering new magnetic states in proximity to the kagome planes in \(R\)V\({}_{6}\)Sn\({}_{6}\) compounds and for exploring their impact on the topological band structures.
###### Acknowledgements.
This work was supported by the National Science Foundation (NSF) through Enabling Quantum Leap: Convergent Accelerated Discovery Foundries for Quantum Materials Science, Engineering and Information (Q-AMASE-i): Quantum Foundry at UC Santa Barbara (Grant No. DMR-1906325). S.D.W. and Z.P. acknowledge support from NSF Grant No. DMR-1905801. This research made use of the shared facilities of the NSF Materials Research Science and Engineering Center at UC Santa Barbara, Grant No. DMR-1720256. Z.P. acknowledges additional support from the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-76SF00515. This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DEAC02-06CH11357.
|
2307.02046 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | 2023-07-05T06:03:40Z | http://arxiv.org/abs/2307.02046v6 | # Recommender Systems in the Era of
###### Abstract
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component in our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating their textual side information, these DNN-based methods still have some limitations, such as difficulties in effectively understanding users' interests and capturing textual side information, inabilities in generalizing to various seen/unseen recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, so as to provide researchers and practitioners in relevant fields with an in-depth understanding. Therefore, in this survey, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent advanced techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss the promising future directions in this emerging field.
Recommender Systems, Large Language Models (LLMs), Pre-training and Fine-tuning, In-context Learning, Prompting.
## 1 Introduction
Recommender Systems (RecSys) play a vital role in alleviating information overload and enriching users' online experience (_i.e._, users need to filter overwhelming information to locate the information they are interested in) [1, 2]. They offer personalized suggestions towards candidate items tailored to meet user preferences in various application domains, such as entertainment [3], e-commerce [4], and job matching [2]. For example, in movie recommendations (_e.g._, IMDB and Netflix), the latest movies are recommended to users based on the content of movies and the past interaction histories of users, helping users discover new movies that accord with their interests. The basic idea of recommender systems is to make use of the interactions between users and items and their associated side information, especially textual information (_e.g._, item titles or descriptions, user profiles, and user reviews for items), to predict the matching score between users and items (_i.e._, the probability that the user would like the item) [5]. More specifically, collaborative behaviors between users and items have been leveraged to design various recommendation models, which can be further used to learn the representations of users and items [6, 7]. In addition, textual side information about users and items contains rich knowledge that can assist in the calculation of the matching scores, providing great opportunities to understand user preferences for advancing recommender systems [8].
Due to the remarkable ability of representation learning in various fields, Deep Neural Networks (DNNs) have been widely adopted to advance recommender systems [9, 10]. DNNs demonstrate distinctive abilities in modeling user-item interactions with different architectures. For example, as particularly effective tools for sequential data, Recurrent Neural Networks (RNNs) have been adopted to capture high-order dependencies in user interaction sequences [11, 12]. Considering users' online behaviors (_e.g._, click, purchase, socializing) as graph-structured data, Graph Neural Networks (GNNs) have emerged as advanced representation learning techniques to learn user and item representations [1, 6, 13]. Meanwhile, DNNs have also demonstrated advantages in encoding side information. For instance, a BERT-based method is proposed to extract and utilize textual reviews from users [14].
Despite the aforementioned success, most existing advanced recommender systems still face some intrinsic limitations. _First_, due to the limitations on model scale and data size, previous DNN-based models (_e.g.,_ CNN and LSTM) and
pre-trained language models (_e.g._, BERT) for recommender systems cannot sufficiently capture textual knowledge about users and items, demonstrating their inferior natural language understanding capability, which leads to sub-optimal prediction performance in various recommendation scenarios. _Second_, most existing RecSys methods have been specifically designed for their own tasks and have inadequate generalization ability to their unseen recommendation tasks. For example, a recommendation algorithm is well-trained on a user-item rating matrix for predicting movies' rating scores, while it is challenging for this algorithm to perform top-\(k\) movie recommendations along with certain explanations. This is due to the fact that the design of these recommendation architectures highly depends on task-specific data and domain knowledge toward specific recommendation scenarios such as top-\(k\) recommendations, rating predictions, and explainable recommendations. _Third_, most existing DNN-based recommendation methods can achieve promising performance on recommendation tasks needing simple decisions (_e.g._, rating prediction, and top-\(k\) recommendations). However, they face difficulties in supporting complex and multi-step decisions that involve multiple reasoning steps. For instance, multi-step reasoning is crucial to trip planning recommendations, where RecSys should first consider popular tourist attractions based on the destination, then arrange a suitable itinerary corresponding to the tourist attractions, and finally recommend a journal plan according to specific user preferences (_e.g._, cost and time for travel).
Recently, as advanced natural language processing techniques, Large Language Models (LLMs) with billions of parameters have generated large impacts on various research fields such as Natural Language Processing (NLP) [15], Computer Vision [16], and Molecule Discovery [17]. Technically, most existing LLMs are transformer-based models pre-trained on a vast amount of textual data from diverse sources, such as articles, books, websites, and other publicly available written materials. As the parameter size of LLMs continues to scale up with a larger training corpus, recent studies indicated that LLMs can lead to the emergence of remarkable capabilities [18, 19]. More specifically, LLMs have demonstrated unprecedentedly powerful abilities in their fundamental responsibilities of language understanding and generation. These improvements enable LLMs to better comprehend human intentions and generate language responses that are more human-like in nature. Moreover, recent studies indicated that LLMs exhibit impressive generalization and reasoning capabilities, making LLMs better generalize to a variety of unseen tasks and domains. To be specific, instead of requiring extensive fine-tuning on each specific task, LLMs can apply their learned knowledge and reasoning skills to fit new tasks simply by providing appropriate instructions or a few task demonstrations. Advanced techniques such as in-context learning can further enhance such generalization performance of LLMs without being fine-tuned on specific downstream tasks [19]. In addition, empowered by prompting strategies such as chain-of-thought, LLMs can generate the outputs with step-by-step reasoning in complicated decision-making processes. Hence, given their powerful abilities, LLMs demonstrate great potential to revolutionize recommender systems.
Very recently, initial efforts have been made to explore
Figure 1: Examples of the applications of LLMs for various recommendation tasks in the scenario of movie recommendations. LLMs can leverage textual data (or even multimodal data like images) for recommendation tasks.
the potential of LLMs as a promising technique for the next-generation RecSys. For example, Chat-Rec [3] is proposed to enhance the recommendation accuracy and explainability by leveraging ChatGPT to interact with users through conversations and then refine the candidate sets generated by traditional RecSys for movie recommendations. Zhang et al. [20] employ T5 as an LLM-based RecSys, which enables users to deliver their explicit preferences and intents in natural language as RecSys inputs, demonstrating better recommendation performance than relying merely on user-item interactions. Figure 1 demonstrates some examples of applying LLMs for various movie recommendation tasks, including top-\(K\) recommendation, rating prediction, conversational recommendation, and explanation generation. Due to their rapid evolution, it is imperative to comprehensively review recent advances and challenges of LLM-empowered recommender systems.
Therefore, in this survey, we provide a comprehensive overview of LLMs for recommender systems from the paradigms in terms of _pre-training_, _fine-tuning_, and _prompting_. The remaining part of this survey is organized as follows. First, we review the related works on RecSys and LLMs, and their combinations in Section 2. Then, two types of LLM-empowered RecSys that take advantage of LLMs to learn the representation of users and items are illustrated in Section 3, which are ID-based RecSys and textual side information-enhanced RecSys. Subsequently, we summarize the techniques for adopting LLMs to RecSys in terms of the pre-training & fine-tuning paradigm and the prompting paradigm in Sections 4 and 5, respectively. Finally, some challenges and potential future directions for LLM-empowered RecSys are discussed in Section 6.
Concurrent to our survey, Liu _et al._[21] review the training strategies and learning objectives of the language modeling paradigm adaptations for recommender systems. Wu _et al._[22] summarize the LLMs for recommender systems from discriminative and generative perspectives. Lin _et al._[23] introduce two orthogonal perspectives: where and how to adapt LLMs in recommender systems.
## 2 Related Work
In this section, we briefly review some related work on recommender systems and LLMs techniques.
### _Recommender Systems (RecSys)_
To address the information overload problem, recommender systems have emerged as a crucial tool in various online applications by providing personalized content and services to individual users [24, 25]. Typically, most existing recommendation approaches fall into two main categories: Collaborative Filtering (CF) and Content-based recommendation. As the most common technique, CF-based recommendation methods aim to find similar behavior patterns of users to predict the likelihood of future interactions [12], which can be achieved by utilizing the historical interaction behaviors between users and items, such as purchase history or rating data. For example, as one of the most popular CF methods, Matrix Factorization (MF) is introduced to learn representations of users and items by using pure user-item interactions [26, 7]. In other words, unique identities of users and items (_i.e._, discrete IDs) are encoded into continuous embedding vectors so that the matching score can be calculated easily for recommendations [27, 28]. Content-based recommendation methods generally take advantage of additional knowledge about users or items, such as user demographics or item descriptions, to enhance user and item representations for improving recommendation performance [29]. Note that as textual information is one of the most available contents for users and items, we mainly focus on text as content in this survey.
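To make the ID-embedding idea concrete, the following is a minimal matrix factorization sketch in the standard textbook form (not taken from any cited paper): user and item IDs index rows of two embedding tables, and the matching score is the inner product of the corresponding vectors.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, dim=16, lr=0.01, reg=0.02, epochs=20):
    """SGD matrix factorization; `ratings` is a list of (user, item, rating)."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, dim))   # user embedding table
    Q = 0.1 * rng.standard_normal((n_items, dim))   # item embedding table
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]                     # prediction error
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

P, Q = train_mf([(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)], n_users=2, n_items=2)
score = P[0] @ Q[1]   # matching score between user 0 and item 1
```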
Due to the remarkable representation learning capabilities, deep learning techniques have been effectively applied to develop recommender systems [25, 5]. For instance, NeuMF is proposed to model non-linear interactions between users and items by replacing the general inner product with DNNs [30]. Considering that data in RecSys can be naturally represented as graph-structured data, GNN techniques are treated as the main deep learning approaches for learning meaningful representations of nodes (_i.e._, users and items) via message propagation strategies for recommender systems [1, 31, 32, 33]. In order to integrate textual knowledge about users and items, DeepCoNN is developed to use CNNs to encode users' reviews written for items with two parallel neural networks so as to contribute to rating predictions in recommender systems [8]. Meanwhile, a neural attention framework NARRE is introduced to simultaneously predict users' ratings towards items and generate review-level explanations for the predictions [34].
Recently, language models have been increasingly utilized in recommender systems due to their capacity to comprehend and produce human natural language. These models are designed to comprehend the semantics and syntax of human natural language, thereby enabling RecSys to provide more personalized recommendations, such as news recommendations [35, 36], and drug recommendations [37]. Specifically, a sequential recommendation method called BERT4Rec is proposed to adopt Bidirectional Encoder Representations from Transformers (_i.e._, BERT) to model the sequential nature of user behaviors [38]. Furthermore, to take advantage of Transformer's capability for language generation, Li _et al._[39] design a transformer-based framework to simultaneously make item recommendations and generate explanations in recommender systems.
### _Large Language Models (LLMs)_
As a type of advanced Artificial Intelligence (AI) techniques, LLMs are trained on a large amount of textural data with billions of parameters to understand the patterns and structures of natural language. There are several classical types of pre-trained language models available, such as BERT (Bidirectional Encoder Representations from Transformers) [40], GPT (Generative Pre-trained Transformer) [41], and T5 (Text-To-Text Transfer Transformer) [42]. Typically, these language models fall into three main categories: encoder-only models, decoder-only models, and encoder-decoder models.
BERT, GPT, and T5 are distinct models based on the Transformer architecture [43]. More specifically, BERT, an encoder-only model, uses bi-directional attention to process
token sequences, considering both the left and right context of each token. It is pre-trained on massive amounts of text data using tasks like masked language modeling and next-sentence prediction, thereby capturing the nuances of language and meaning in context. This process translates text into a vector space, facilitating nuanced and context-aware analyses. On the other hand, GPT, based on the transformer decoder architecture, uses a self-attention mechanism for one-directional word sequence processing from left to right. GPT is mainly adopted in language generation tasks, mapping embedding vectors back to text space and generating contextually relevant responses. Finally, T5, an encoder-decoder model, can handle any text-to-text task by converting every natural language processing problem into a text generation problem. For instance, it can re-frame a sentiment analysis task as a text sequence like _'sentiment: I love this movie.'_, where the prefix _'sentiment:'_ is added before the input _'I love this movie.'_, and the model then generates the answer _'positive'_. By doing so, T5 uses the same model, objective, and training procedure for all tasks, making it a versatile tool for various NLP tasks.
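As a hedged illustration of this text-to-text reframing, the sketch below uses the Hugging Face `transformers` interface to T5; the `t5-small` checkpoint and the `sst2 sentence:` task prefix follow the original T5 release, though the exact decoded answer may vary by checkpoint.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Sentiment analysis re-framed as text generation via a task prefix.
inputs = tokenizer("sst2 sentence: I love this movie.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "positive"
```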
Due to the increasing scale of models, LLMs have revolutionized the field of NLP by demonstrating unprecedented capabilities in understanding and generating human-like textual knowledge [18, 44]. These models (e.g., GPT-3 [15], LaMDA [45], PaLM [46], and Vicuna [47]), often based on transformer architectures, undergo training on extensive volumes of text data. This process enables them to capture complex patterns and nuances in human language. Recently, LLMs have demonstrated remarkable capabilities of In-Context Learning (ICL), a concept that is central to their design and functionality. ICL refers to the model's capacity to comprehend and provide answers based on the input context as opposed to merely relying on inside knowledge obtained through pre-training. Several works have explored the utilization of ICL in various tasks, such as SG-ICL [48] and EPR [49]. These works show that ICL allows LLMs to adapt their responses based on input context instead of generating generic responses. Another technique that can enhance the reasoning abilities of LLMs is chain-of-thought (CoT). This method involves supplying multiple demonstrations to describe the chain of thought as examples within the prompt, guiding the model's reasoning process [50]. An extension of CoT is the concept of self-consistency, which operates by implementing a majority voting mechanism on answers [51]. Current research continues to delve into the application of CoT in LLMs, such as STaR [52], THOR [53], and Tab-CoT [54]. By offering a set of prompts to direct the model's thought process, CoT enables the model to reason more effectively and deliver more accurate responses.
With the powerful abilities mentioned above, LLMs have shown remarkable potential in various fields, such as chemistry [17], education [55], and finance [56]. These models, such as ChatGPT, have also been instrumental in enhancing the functionality and user experience of RecSys. One of the key applications of LLMs in RecSys is the prediction of user ratings for items. This is achieved by analyzing historical user interactions and preferences, which in turn enhances the accuracy of the recommendations [57, 58]. LLMs have also been employed in sequential recommendations, which analyze the sequence of user interactions to predict their next preference, such as TALLRec [59], M6-Rec [60], PALR [61], and P5 [62]. Moreover, LLMs, particularly ChatGPT, have been utilized to generate explainable recommendations. One such example is Chat-Rec [3], which leverages ChatGPT to provide clear and comprehensible reasoning behind its suggestions, thereby fostering trust and user engagement. Furthermore, the interactive and conversational capabilities of LLMs have been harnessed to create a more dynamic recommendation experience. For instance, UniCRS [63] develops a knowledge-enhanced prompt learning framework to fulfill both conversation and recommendation subtasks based on a pre-trained language model. UniMIND [64] proposes a unified multi-task learning framework by using prompt-based learning strategies in conversational recommender systems. Furthermore, it is worth noting that to investigate the potential of LLMs in learning on graphs, Chen _et al._[18] introduce two possible pipelines: _LLMs-as-Enhancers_ (_e.g._, LLMs enhance the textual information of node attributes) and _LLMs-as-Predictors_ (_e.g._, LLMs serve as independent predictor in graph learning like link prediction problems), which provide guidance on the design of LLMs for graph-based recommendations.
## 3 Deep Representation Learning for LLM-based Recommender Systems
Users and items are atomic units of recommender systems. To denote items and users in recommender systems, the straightforward method assigns each item or user a unique index (_i.e._, discrete IDs). To capture users' preferences towards items, ID-based recommender systems are proposed to learn representations of users and items from user-item interactions. In addition, since textual side information about users and items provides rich knowledge to understand users' interests, textual side information-enhanced recommendation methods are developed to enhance user and item representation learning in an end-to-end training manner for recommender systems. In this section, we will introduce these two categories that take advantage of language models in recommender systems. These two kinds of recommender systems are illustrated in Figure 2.
### _ID-based Recommender Systems_
Recommender systems are commonly used to affect users' behaviors for making decisions from a range of candidate items. These user behaviors (_e.g._, click, like, and subscription) are generally represented as user-item interactions, where users and items are denoted as discrete IDs. Modern recommendation approaches are proposed to model these behaviors by learning embedding vectors of each ID representation. Generally, in LLM-based recommendation systems, an item or a user can be represented by a short phrase in the format of "\([prefix]\_[ID]\)", where the prefix denotes its type (_i.e._, item or user) and the ID number helps identify its uniqueness.
As the early exploration of LLM-based methods, a unified paradigm called P5 is proposed to facilitate the transfer of various recommendation data formats [62], such as user-item interactions, user profiles, item descriptions, and user reviews, into natural language sequences by mapping users and items into indexes. Note that the pre-trained T5 backbone
is used to train P5 with personalized prompts. Meanwhile, P5 wraps each index phrase in a pair of angle brackets (e.g., \(<item\_6637>\)) and treats it as a special token in the LLM vocabulary, which avoids tokenizing the phrase into separate sub-word tokens. Based on P5, Hua et al. put forward four straightforward but effective indexing solutions [65]: sequential indexing, collaborative indexing, semantic (content-based) indexing, and hybrid indexing, underscoring the significance of indexing methods. Different from P5, which randomly assigns numerical IDs to each user or item, Semantic IDs are proposed as unique identifiers, where each user or item is represented by a tuple of codewords that carry semantic meaning [66]. To generate these codewords, a hierarchical method called RQ-VAE is also proposed [66], so that recommendation data can still be effectively transformed into natural language sequences for transformer-based models.
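A minimal sketch of this special-token indexing (our illustration, not P5's training code) registers each ID phrase as an atomic token in a T5 vocabulary using the Hugging Face API; the token strings and counts below are arbitrary examples.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Register each "<item_ID>" / "<user_ID>" phrase as one atomic token so the
# tokenizer never splits it into sub-words.
new_tokens = [f"<item_{i}>" for i in range(100)] + \
             [f"<user_{u}>" for u in range(100)]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))    # grow the embedding table

ids = tokenizer("Will <user_42> enjoy <item_63> ?").input_ids
```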
### _Textual Side Information-enhanced Recommender Systems_
Despite the aforementioned success, ID-based methods suffer from intrinsic limitations. That is due to the fact that pure ID indexing of users and items is naturally discrete, which cannot provide sufficient semantic information to capture representations of users and items for recommendations. As a result, it is very challenging to perform relevance calculations based on index representations among users and items, especially when user-item interactions are severely sparse. Meanwhile, ID indexing usually requires modifying the vocabularies and altering the parameters of LLMs, which brings additional computation costs.
To address these limitations, a promising alternative solution is to leverage textual side information of users and items, which includes user profiles, user reviews for items, and item titles or descriptions. Specifically, given the textual side information of an item or a user, language models like BERT can serve as the text encoder to map the item or user into the semantic space, where we can group similar items or users and figure out their differences at a finer granularity. For instance, Li _et al._ have investigated the performance comparison between ID and modality-based recommender systems, showing that ID-based recommender systems might be challenged by recommender systems that can better utilize side information [67]. Meanwhile, Unisec [68] is one such approach that takes advantage of item descriptions to learn transferable representations across various recommendation scenarios. More specifically, Unisec introduces a lightweight item encoder to encode universal item representations by using parametric whitening and a mixture-of-experts (MoE) enhanced adaptor. Besides, text-based collaborative filtering (TCF) is explored by prompting LLMs like GPT-3 [69]. Compared to previous ID-based collaborative filtering, TCF methods demonstrate competitive performance, proving the potential of textual side information-enhanced recommender systems.
However, solely relying on language models to encode item descriptions might excessively emphasize text features. To mitigate this issue, VQ-Rec [70] proposes to learn vector-quantized item representations, which can map item text into a vector of discrete indices (_i.e._, item codes) and use them to retrieve item representations from a code embedding table in recommendations. Beyond text features, Fan _et al._[71] propose a novel method for the Zero-Shot Item-based Recommendation (ZSIR), focusing on introducing a Product Knowledge Graph (PKG) to LLMs to refine item features. More specifically, user and item embeddings are learned via multiple pre-training tasks upon the PKG. Moreover, ShopperBERT [72] investigates modeling user behaviors to denote user representations in e-commerce recommender systems, which pre-trains user embeddings through several pre-training tasks based on user purchase history. Furthermore, IDA-SR [72], an ID-Agnostic User Behavior Pre-training framework for Sequential Recommendation, directly derives representations from text information using pre-trained language models like BERT. Specifically, given an item \(i\) and its description with \(m\) tokens \(D_{i}=\{t_{1},t_{2},...,t_{m}\}\), an extra start-of-sequence token \([CLS]\) is added to the description \(D_{i}=\{[CLS],t_{1},t_{2},...,t_{m}\}\). Then, the description is fed as the input to LLMs. Finally, the embedding of the token \([CLS]\) could be used as the ID-agnostic item representation.
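As a minimal sketch of this [CLS]-based encoding step (the model choice and the item description below are our own illustrative assumptions, not taken from the original papers), the following snippet extracts an ID-agnostic item representation with an off-the-shelf BERT encoder:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

description = "Wireless noise-cancelling headphones with 30-hour battery life."
# The tokenizer automatically prepends the [CLS] start-of-sequence token.
inputs = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = encoder(**inputs)

# Final hidden state of [CLS] serves as the ID-agnostic item representation.
item_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, 768)
```

In practice, such embeddings can then replace learned ID embeddings in a downstream sequential recommender.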
## 4 Pre-training & Fine-tuning LLMs for Recommender Systems
In general, there are three key paradigms for developing and deploying LLMs in recommendation tasks, namely _pre-training_, _fine-tuning_, and _prompting_. In this section, we first introduce the pre-training and fine-tuning paradigms, which are shown in Figure 3 and Figure 4, respectively. More specifically, we will focus on the specific pre-training tasks applied in LLMs for recommender systems and fine-tuning strategies for better performance in downstream recommendation tasks. Note that the works mentioned below are summarized in Table I and Table II.

Figure 2: An illustration of two methods for representing users and items for LLM-based RecSys: _ID-based representation_ (left) which denotes user-item interactions with discrete identities, and _Textual side information-enhanced representation_ (right) which leverages textual side information of users and items, including user profiles, user reviews for items, item titles or descriptions.
### _Pre-training Paradigm for Recommender Systems_
Pre-training is an important step in developing LLMs. It involves training LLMs on a vast corpus of diverse and unlabeled data. This strategy enables LLMs to acquire a broad understanding of various linguistic aspects, including grammar, syntax, semantics, and even common sense reasoning. Through pre-training, LLMs can learn to recognize and generate coherent and contextually appropriate responses. In general, there are two main methods to pre-train LLMs in the natural language domain, depending on the adopted model structure. One is _Masked Language Modeling_ (MLM) for encoder-only or encoder-decoder Transformer structures, which randomly masks tokens or spans in the sequence and requires LLMs to generate the masked tokens or spans based on the remaining context [82]. The other is _Next Token Prediction_ (NTP) for decoder-only Transformer structures, which requires prediction of the next token based on the given context [41].
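The following is a minimal sketch of the two objectives in plain PyTorch (the function names are ours; a real pre-training loop would also handle batching, the masking strategy, and optimization):

```python
import torch.nn.functional as F

def ntp_loss(logits, input_ids):
    # Next Token Prediction: the logits at position t predict token t+1.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )

def mlm_loss(logits, labels):
    # Masked Language Modeling: labels contain the original token IDs at
    # masked positions and -100 elsewhere, which cross_entropy ignores.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,
    )
```

Here `logits` has shape (batch, length, vocabulary) and comes from the underlying Transformer.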
In the context of recommender systems, most of the existing works follow the two classical pre-training strategies. Next, we will introduce representative methods. PTUM [73] proposes two analogous pre-training tasks, Masked Behavior Prediction (MBP) and Next K Behavior Prediction (NBP), to model user behaviors in recommender systems. Unlike language tokens, user behaviors are more diverse and thus more difficult to predict. In this case, instead of masking a span of tokens, PTUM only masks a single user behavior with the goal of predicting the masked behavior based on the other behaviors in the interaction sequence of the target user. On the other hand, NBP models the relevance between past and future behaviors, which is crucial for user modeling. The goal of NBP is to predict the next \(k\) behaviors based on the user-item interaction history.
M6 [60] also adopts two pre-training objectives motivated by the two classical pre-training tasks, namely a text-infilling objective and an auto-regressive language generation objective, corresponding to the above two pre-training tasks, respectively. To be more specific, the text-infilling objective follows the pre-training task of BART [83], which randomly masks a span with several tokens in the text sequence and predicts these masked spans as the pre-training target, providing the capability to assess the plausibility of a text or an event in the recommendation scoring tasks. Meanwhile, the auto-regressive language generation objective follows the Next Token Prediction task in natural language pre-training, but it is slightly different as it predicts the unmasked sentence based on the masked sequence.
Additionally, P5 adopts multi-task modeling and mixes datasets of various recommendation tasks for pre-training. In this case, it can be generalized to various recommendation tasks and even unseen tasks with zero-shot generation ability [62]. Across different recommendation tasks, P5 applies a unified indexing method for representing users and items in language sequences, as stated in Section 3, so that the Masked Language Modeling task could be employed.
### _Fine-tuning Paradigm for Recommender Systems_
Fine-tuning is a crucial step in deploying pre-trained LLMs for specific downstream tasks. Especially for recommendation tasks, LLMs require fine-tuning to grasp more domain knowledge. Particularly, the fine-tuning paradigm involves training the pre-trained model based on task-specific
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Paradigms & Methods & Pre-training Tasks & Code Availability \\ \hline \multirow{3}{*}{Pre-training} & PTUM [73] & Masked Behavior Prediction & [https://github.com/watch15/PTUM](https://github.com/watch15/PTUM) \\ \cline{2-4} & M6 [60] & Auto-regressive Generation & Not available \\ \cline{1-1} \cline{2-4} & P5 [62] & Multi-task Modeling & [https://github.com/jeykigung/P5](https://github.com/jeykigung/P5) \\ \hline \hline \end{tabular}
\end{table}
Table I: Pre-training methods for LLM-empowered RecSys.
Figure 3: An illustration of two main pre-training methods of LLMs: _Masked Language Modeling_ (left) which randomly masks tokens or spans in the sequence and requires LLMs to generate the masked tokens or spans based on the remaining context, and _Next Token Prediction_ (right) which requires prediction for the next token based on the given context. In pre-training, LLMs are trained on a vast amount of corpus consisting of diverse and unlabeled data.
recommendation datasets that include user-item interaction behaviors (_e.g._, purchases, clicks, and ratings) and side knowledge about users and items (_e.g._, users' social relations and items' descriptions). This process allows the model to specialize its knowledge and parameters to improve performance in the recommendation domain. In general, fine-tuning strategies can be divided into two categories according to the proportion of model weights changed to fit the given task. One is _full-model fine-tuning_, which updates all model weights in the fine-tuning process. Considering the computation cost, the other is _parameter-efficient fine-tuning_, which aims to change only a small part of the weights or to develop trainable adapters to fit specific tasks.
#### 4.2.1 Full-model Fine-tuning
As a straightforward strategy in deploying pre-trained LLMs to fit specific downstream recommendation tasks, full-model fine-tuning involves changing the entire model weights. For example, RecLLM [74] is proposed to fine-tune LaMDA as a Conversational Recommender System (CRS) for YouTube video recommendation. Meanwhile, GIRL [78] leverages a supervised fine-tuning strategy for instructing LLMs in job recommendation. However, directly fine-tuning LLMs might bring unintended bias into recommender systems, producing serious harm towards specific groups or individuals based on sensitive attributes such as gender, race, and occupation. To mitigate such harmful effects, an LLM-driven recommendation framework (LMRec) [75] is developed to alleviate the observed biases through train-side masking and test-side neutralization of non-preferential entities, which achieves satisfactory results without significant performance drops. TransRec [76] studies pre-trained recommender systems in an end-to-end manner, by directly learning from the raw features of the mixture-of-modality items (_i.e._, texts and images). In this case, without relying on overlapped users or items, TransRec can be effectively transferred to different scenarios. Additionally, Carranza _et al._[77] propose privacy-preserving large-scale recommender systems by applying differentially private (DP) LLMs, which relieves certain challenges and limitations in DP training.
Contrastive learning has also emerged as a popular approach for fine-tuning LLMs in recommender systems. Several methods have been proposed in this direction. SBERT [79] introduces a triplet loss function, where an intent sentence serves as the anchor and corresponding products are used as positive and negative examples in the e-commerce domain. Additionally, UniTRec [80] proposes a unified framework that combines discriminative matching scores and candidate text perplexity as contrastive objectives to improve text-based recommendations.
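A minimal sketch of such a triplet objective is given below (the margin value and tensor names are our own illustration): the intent sentence acts as the anchor, with matching and non-matching products as the positive and negative examples.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor: embeddings of intent sentences;
    # positive/negative: embeddings of matching / non-matching products.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```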
#### 4.2.2 Parameter-efficient Fine-tuning
Full-model fine-tuning requires large computational resources as the size of LLMs scales up. Currently, it is infeasible for a single consumer-level GPU to fine-tune the most advanced LLMs, which usually have more than 10 billion parameters. In this case, Parameter-efficient Fine-tuning (PEFT) aims to fine-tune LLMs efficiently with lower requirements for computational resources. PEFT involves fine-tuning a small proportion of model weights or a few extra trainable weights while fixing most of the parameters in LLMs to achieve comparable performance with full-model fine-tuning.
Currently, the most popular PEFT methods introduce extra trainable weights as adapters. The adapter structure is designed to be embedded into the Transformer structure of LLMs [84]. For each Transformer layer, the adapter module is added twice: the first module is added after the projection following the multi-head attention, and the other is added after the two feed-forward layers. During fine-tuning, the original weights of pre-trained LLMs are fixed, while the adapters and layer normalization layers are fine-tuned to fit downstream tasks. Thus, adapters contribute to the expansion and generalization of LLMs, relieving the
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Paradigms & Methods & References \\ \hline \multirow{2}{*}{Fine-tuning} & Full-model Fine-tuning & [74], [75], [76], [77], [78], [79], and [80]1 \\ \cline{2-3} & Parameter-efficient Fine-tuning & [59]2, [81], and [60] \\ \hline \multicolumn{2}{l}{Code Availability: 1[https://github.com/veason-silverbullet/unitrec](https://github.com/veason-silverbullet/unitrec), 2[https://github.com/sai990323/tallrec](https://github.com/sai990323/tallrec)} \\ \hline \hline \end{tabular}
\end{table}
Table II: Fine-tuning methods applied in LLM-empowered RecSys.
Figure 4: An illustration of two main fine-tuning methods of LLMs: Full-model Fine-tuning (left) which involves changing the entire model weights, and Parameter-efficient Fine-tuning (right) which involves fine-tuning a small proportion of model weights or a few extra trainable weights while fixing most of the parameters in LLMs. In fine-tuning, LLMs are trained on a relatively small corpus of task-specific data (_i.e._, small compared to the corpus used for pre-training).
problem of full-model fine-tuning and catastrophic forgetting. Inspired by the idea of adapters and the low intrinsic rank of weight matrices in LLMs, Low-Rank Adaptation of LLMs (LoRA) [85] introduces low-rank decomposition to simulate the change of parameters. Basically, LoRA adds a new pathway to specific modules handling matrix multiplication in the original structure of the LLMs. In the pathway, two serial matrices first reduce the dimension to a pre-defined middle dimension and then project it back to the original dimension. In this case, the dimension of the middle layer could simulate the intrinsic rank.
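As a minimal sketch of this idea (a simplified LoRA-style wrapper of our own, not the reference implementation), the low-rank pathway can be written as follows:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Two serial low-rank matrices: down-project to rank r, then back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

Since `B` is initialized to zero, the wrapped layer initially behaves exactly like the frozen pre-trained layer, and only the low-rank matrices are updated during fine-tuning.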
In recommender systems, PEFT can greatly reduce the computational cost of fine-tuning LLMs for recommendation tasks, which requires fewer parameter updates and maintains most of the model's capabilities. TALLRec [59] introduces an efficient and effective tuning framework based on the LLaMA-7B model and LoRA for aligning LLMs with recommendation tasks, which can be executed on a single RTX 3090. GLRec [81] also takes advantage of LoRA for fine-tuning and adapting LLMs as a job recommender. Moreover, M6 [60] applies LoRA fine-tuning, making it feasible to deploy LLMs on mobile devices.
## 5 Prompting LLMs for Recommender Systems
Apart from the pre-training & fine-tuning paradigm, prompting serves as the latest paradigm for adapting LLMs to specific downstream tasks with the help of task-specific prompts. A prompt refers to a text template that can be applied to the input of LLMs. For example, a prompt _"The relation between ___ and ___ is ___"_ can be designed to deploy LLMs for relation extraction tasks. Prompting enables LLMs to unify different downstream tasks into language generation tasks, which are aligned with their objectives during pre-training [86].
To facilitate the performance of LLMs for RecSys, prompting techniques like In-context Learning (ICL) and Chain-of-Thought (CoT) are increasingly investigated to manually design prompts for various recommendation tasks. In addition, prompt tuning serves as an additive technique of prompting, by adding prompt tokens to LLMs and then updating them based on task-specific recommendation datasets. More recently, instruction tuning that combines the pre-training & fine-tuning paradigm with prompting [87] is explored to fine-tune LLMs over multiple recommendation tasks with instruction-based prompts, which enhances the _zero-shot_ performance of LLMs on unseen recommendation tasks. Figure 5 compares the representative methods corresponding to each of the aforementioned three prompting techniques of LLMs, in terms of the input formation and parameter update of LLMs (_i.e._, either tunable or frozen). In this section, we will discuss the prompting, prompt tuning, and instruction tuning techniques in detail, for improving the performance of LLMs on recommendation tasks. In summary, Table III categorizes the existing works according to the aforementioned three techniques, including the specific recommendation tasks and the LLM backbones considered in these works.
### _Prompting_
The key idea of prompting is to keep LLMs frozen (_i.e._, no parameter updates), and adapt LLMs to downstream tasks via task-specific prompts. To recap the development of prompting strategies for adapting LLMs to downstream tasks, early-stage conventional prompting methods mainly aim to unify downstream tasks into language generation formats, such as text summarization, relation extraction, and sentiment analysis. Later on, ICL [15] emerges as a powerful prompting strategy that allows LLMs to learn new tasks (_i.e._, tasks with knowledge-demanding objectives) based on contextual information. In addition, another up-to-date prompting strategy named CoT [50] serves as a particularly effective method for prompting LLMs to address downstream tasks with complex reasoning.
#### 5.1.1 Conventional Prompting
There are two major approaches for prompting pre-trained language models to improve the performance on specific downstream tasks. One approach is _prompt engineering_, which generates prompts by emulating text that language models encountered during pre-training (_e.g._, text in NLP tasks). This allows pre-trained language models to unify downstream tasks with unseen objectives into language generation tasks with known objectives. For instance, Liu _et al._[39] consider prompting ChatGPT to format the review summary task in recommendations into text summarization, with a prompt including _"Write a short sentence to summarize"_. Another approach is _few-shot prompting_, where a few input-output examples (_i.e._, shots) are provided to prompt and guide pre-trained language models to generate the desired output for specific downstream tasks.
Due to the huge gap between language generation tasks (_i.e._, the pre-training objectives of LLMs) and downstream recommendation tasks, these conventional prompting methods have only shown limited applications in specific recommendation tasks that have a similar nature to language generation tasks, such as the review summary of users [39] and the relation labeling between items [4].
#### 5.1.2 In-Context Learning (ICL)
Alongside the introduction of GPT-3 [15], ICL is proposed as an advanced prompting strategy, which significantly boosts the performance of LLMs on adapting to many downstream tasks. Gao _et al._[86] attribute the success of ICL in prompting LLMs for downstream tasks to two designs: prompt and in-context demonstrations. In other words, the key innovation of ICL is to elicit the in-context ability of LLMs for learning (new or unseen) downstream tasks from context during the inference stage. In particular, two settings proposed in ICL are prevalently leveraged for prompting LLMs for RecSys. One is the few-shot setting, in which a few demonstrations with contexts and desired completions of the specific downstream tasks are provided along with prompts. The other is the zero-shot setting, where no demonstrations will be given to LLMs but only natural language descriptions of the specific downstream tasks are appended to the prompt. As shown in Figure 6, two brief templates of few-shot ICL and zero-shot ICL for recommendation tasks are provided, respectively.
* _Prompting LLMs for RecSys via Few-shot ICL._ A straightforward approach for prompting LLMs to downstream recommendation tasks is to teach LLMs how to act as RecSys. For instance, Liu _et al._[39] employ ChatGPT and propose separate task descriptions tailored to different recommendation tasks, including top-K recommendation, rating prediction, and explanation generation, to perform few-shot ICL based on corresponding input-output examples of each recommendation task. In particular, the user rating history is given as an example for rating prediction tasks. Similarly, other existing works offer distinct insights into designing the in-context demonstrations for better recommendation performance. For example, a text description of role injection, such as _"You are a book rating expert."_, is proposed in [58] to augment the in-context demonstrations, which prevents LLMs from refusing to complete the recommendation tasks (_e.g._, LLMs sometimes respond with _"As a language model, I don't have the ability to recommend..."_ for recommendation tasks). Apart from teaching LLMs to directly act as RecSys, few-shot ICL is also leveraged to guide LLMs to call traditional RecSys or external domain tools for recommendations. For example, a framework named Chat-Rec [3] is proposed to bridge ChatGPT and traditional RecSys via few-shot ICL, where ChatGPT learns to receive candidate items from traditional RecSys and then refines the final recommendation results. Moreover, Zhang [101] designs a textual API call template for external graph reasoning tools and successfully teaches ChatGPT to use those templates through few-shot ICL to access the graph-based recommendation results generated by the external tools.
* _Prompting LLMs for RecSys via Zero-shot ICL._ Many existing works consider both few-shot ICL and zero-shot ICL settings at the same time to compare their performance under the same recommendation tasks. Typically, few-shot ICL can outperform zero-shot ICL since additional in-context demonstrations are provided to LLMs. Despite the reduction in performance, zero-shot ICL entirely relieves the requirement of task-specific recommendation datasets to form in-context demonstrations and can be suitable for certain tasks like conversational recommendations, where users are not likely to provide any demonstration to LLMs. For example, Wang _et al._[92] prompt ChatGPT for conversational recommendations with a zero-shot ICL template containing two parts: a text description of conversational recommendation tasks (_e.g._, _"Recommend items based on user queries in the dialogue."_), and a format guideline in natural languages, such as _"The output format should be \(\langle no.\rangle\) \(\langle\)item title\(\rangle\)."_, making the recommendation results easier to parse. A code sketch assembling both ICL templates is given after this list.
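The sketch below assembles the two ICL templates of Figure 6 in code (the task description, format guideline, and example data are illustrative assumptions of ours):

```python
def icl_prompt(task_desc, history, demonstrations=None, format_hint=None):
    parts = [f"Task: {task_desc}"]
    if demonstrations:  # few-shot ICL; omit for zero-shot ICL
        for example_input, example_output in demonstrations:
            parts.append(f"Input: {example_input}\nOutput: {example_output}")
    if format_hint:
        parts.append(f"Output format: {format_hint}")
    parts.append(f"Input: the user has watched {', '.join(history)}.\nOutput:")
    return "\n\n".join(parts)

few_shot = icl_prompt(
    "Recommend the next movie for the user.",
    ["Inception", "Interstellar"],
    demonstrations=[("the user has watched Alien, Aliens.", "1. Prometheus")],
    format_hint="<no.> <item title>",
)
zero_shot = icl_prompt(
    "Recommend the next movie for the user.",
    ["Inception", "Interstellar"],
    format_hint="<no.> <item title>",
)
```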
Figure 5: An illustration of three representative methods of prompting LLMs: _in-context learning_ (top) which requires no parameter update of LLMs, _prompt tuning_ (middle) which adds new prompt tokens to LLMs and optimizes the prompt along with minimal parameter updates at the input layer of LLMs, and _instruction tuning_ (bottom) which fine-tunes LLMs over multiple tasks-specific prompts, also known as instructions.
Figure 6: Brief templates of few-shot ICL and zero-shot ICL for recommendation tasks.
#### 5.1.3 Chain-of-Thought (CoT) Prompting
Although ICL has shown great effectiveness in prompting LLMs for downstream tasks with in-context demonstrations, recent studies indicate that LLMs still have limited performance in reasoning-heavy tasks [50]. More specifically, by prompting LLMs with in-context examples of input-output pairs, the answers directly generated by LLMs often suffer from missing one or a few intermediate reasoning steps in multi-step problems like mathematical equations, leading to a broken reasoning logic that causes errors in the subsequent reasoning steps (_i.e.,_ "one-step missing errors" [50]). Similar multi-step problems also exist in RecSys, such as the multi-step reasoning of user preferences based on the multi-turn dialogues in conversational recommendations. To address such limitations, CoT offers a special prompting strategy to enhance the reasoning ability of LLMs, by annotating intermediate reasoning steps in the prompt. This enables LLMs to break down complicated decision-making processes and generate the final output with step-by-step reasoning.
Considering the suitable prompting strategies for adapting LLMs to various downstream tasks with complex reasoning, Zhao _et al._[19] discuss the combination of ICL and CoT prompting under two major settings, Zero-shot CoT and Few-shot CoT, as illustrated below.
* _Zero-shot CoT._ By inserting trigger phrases such as _"Let's think step by step"_ and _"Therefore, the answer is"_ into the prompt, zero-shot CoT leads LLMs to generate task-specific reasoning steps independently, without providing any task-relevant instruction or grounding example.
* _Few-shot CoT._ Task-specific reasoning steps are manually designed for each demonstration in ICL, where the original input-output examples are augmented into an input-CoT-output format. Besides, CoT can also augment the task descriptions in ICL demonstrations, by adding interpretable descriptions of reasoning steps based on task-specific knowledge.
In practice, the design of appropriate CoT reasoning steps highly depends on the contexts and objectives of the specific recommendation tasks. For example, a simple CoT template _"Please infer the preference of the user and recommend suitable items."_ is proposed to guide LLMs to first infer the user's explicit preference and then generate final recommendations [20]. So far, there is still a notable lack of research addressing the general format of CoT prompting for recommendation tasks. Next, we present a preliminary idea of CoT prompting, through an example in the context of e-commerce recommendations below.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Paradigms & Methods & Recommendation Tasks & LLM Backbones & References \\ \hline \multirow{8}{*}{Prompting} & Conventional Prompting & Review Summary & ChatGPT & [39] \\ \cline{2-5} & \multirow{3}{*}{Conventional Prompting} & Relation Labeling & ChatGPT & [4] \\ \cline{3-5} & & \multirow{3}{*}{Top-K Recommendation} & ChatGPT & [3, [88], [89], [90]3, [91]4, [92]5, [93]6, [94] \\ \cline{3-5} & & & ChatGPT & [95]7, [96] \\ \cline{3-5} & & & GPT-3 & [95]7, [96] \\ \cline{3-5} & \multirow{3}{*}{In-Context Learning (ICL)} & & T5 & [97], [98]8 \\ \cline{3-5} & & & PaLM & [99], [100] \\ \cline{3-5} & \multirow{3}{*}{Rating Prediction} & ChatGPT & [3, [58], [88]1, [101]9 \\ \cline{3-5} & & ChatGLM & [102]10 \\ \cline{3-5} & \multirow{3}{*}{Conversational Recommendation} & ChatGPT & [3, [92]5, [93]5, [103]] \\ \cline{3-5} & & Explanation Generation & ChatGPT & [3, [39]] \\ \cline{2-5} & Chain-of-Thought (CoT) & Top-K Recommendation & T5 & [20] \\ \hline \multirow{8}{*}{Prompt Tuning} & Hard Prompt Tuning & \multicolumn{3}{c|}{(Refer to ICL above, see Section 5.2.1 for explanations)} \\ \cline{3-5} & \multirow{3}{*}{Soft Prompt Tuning} & \multirow{3}{*}{Top-K Recommendation} & T5 & [104] \\ \cline{3-5} & & & PaLM & [99] \\ \cline{3-5} & & & M6 & [60] \\ \hline \multirow{8}{*}{Instruction Tuning} & Full-model Tuning with Prompt & Top-K Recommendation & T5 & [20] \\ \cline{3-5} & \multirow{3}{*}{Training with Prompt} & & LLaMA & [61], [78] \\ \cline{3-5} & & Rating Prediction & T5 & [57] \\ \cline{3-5} & Parameter-efficient Model & Top-K Recommendation & LLaMA & [105]11 \\ \cline{3-5} & Tuning with Prompt & Rating Prediction & LLaMA & [99]12, [81] \\ \hline \hline \multicolumn{5}{l}{Code Availability: 12[https://github.com/rainjurymod/LLM4RS](https://github.com/rainjurymod/LLM4RS), 23[https://github.com/jizh-zhang/FaiRLLM](https://github.com/jizh-zhang/FaiRLLM),} \\ \multicolumn{5}{l}{3[https://github.com/Jyonn/Jyonn/GENRE-requests](https://github.com/Jyonn/Jyonn/GENRE-requests), 4[https://github.com/RUCAIBox/LLMRank](https://github.com/RUCAIBox/LLMRank),} \\ \multicolumn{5}{l}{5[https://github.com/RUCAIBox/iEvaLM-CRS](https://github.com/RUCAIBox/iEvaLM-CRS), 6[https://github.com/Linvyaha/GeneRec](https://github.com/Linvyaha/GeneRec),} \\ \multicolumn{5}{l}{7[https://github.com/AGI-Edgepartners/LLM-Next-Item-Rec](https://github.com/AGI-Edgepartners/LLM-Next-Item-Rec), 8[https://github.com/JacksonWuvs/PromptRec](https://github.com/JacksonWuvs/PromptRec),} \\ \multicolumn{5}{l}{9[https://github.com/jwzhanggy/Graph_Toolformer](https://github.com/jwzhanggy/Graph_Toolformer),} \\ \multicolumn{5}{l}{10will be available at [https://gitee.com/mindspore/models/tree/master/research/recommend/KAR](https://gitee.com/mindspore/models/tree/master/research/recommend/KAR),} \\ \multicolumn{5}{l}{11[https://github.com/rutgerswiselab/GenRec](https://github.com/rutgerswiselab/GenRec), 12[https://anonymous.4open.science/r/LLM4Rec-Recsys](https://anonymous.4open.science/r/LLM4Rec-Recsys).} \\ \hline \hline \end{tabular}
\end{table}
Table III: An organization of representative methods of prompting LLMs for RecSys in terms of three paradigms: prompting, prompt tuning, and instruction tuning. We subsequently categorize existing works corresponding to each paradigm, including the specific recommendation tasks and the LLM backbones considered in these works.
_[CoT Prompting] Based on the user purchase history, let's think step-by-step. First, please infer the user's high-level shopping intent. Second, what items are usually bought together with the purchased items? Finally, please select the most relevant items based on the shopping intent and recommend them to the user._
Despite the limited number of works on CoT prompting in the RecSys field, a recent study [106] has revealed the great effectiveness of adopting CoT prompting to facilitate the graph reasoning ability of LLMs (T5 in particular) by modeling the reasoning steps as nodes and connecting the reasoning paths as edges instead of a sequential chain. We believe that similar ideas can be potentially transferred to, and contribute to, CoT prompting for RecSys, based on the fact that recommendation tasks can be considered as a special case of link prediction problems in graph learning.
### _Prompt Tuning_
In contrast to manually prompting LLMs for downstream tasks (_e.g._, manually generating task-specific prompts in natural language), prompt tuning serves as an additive technique of prompting, which adds new prompt tokens to LLMs and optimizes the prompt based on the task-specific dataset. Generally, prompt tuning requires less task-specific knowledge and human effort than manually designing prompts for specific tasks and only involves minimal parameter updates of the tunable prompt and the input layer of LLMs. For example, AutoPrompt [107] decomposes the prompt into a set of vocabulary tokens and finds suitable tokens via gradient-based search with respect to the performance on specific tasks.
According to the definition, the prompts that guide LLMs to generate the expected output can be either discrete (_i.e._, hard) or continuous (_i.e._, soft) [108]. Thus, we categorize prompt tuning strategies for prompting LLMs for RecSys into hard prompt tuning and soft prompt tuning, as illustrated below.
#### 5.2.1 Hard Prompt Tuning
Hard prompt tuning is to generate and update discrete text templates of prompt (_e.g._, in natural language), for prompting LLMs to specific downstream tasks. Dong _et al._[108] argue that ICL can be considered as a subclass of hard prompt tuning and regard the in-context demonstrations in ICL as a part of the prompt. From this perspective, ICL performs hard prompt tuning for prompting LLMs to downstream recommendation tasks by refining prompts in natural language based on task-specific recommendation datasets. Despite the effectiveness and convenience of generating or refining natural language prompts for downstream recommendation tasks, hard prompt tuning inevitably faces the challenge of discrete optimization, which requires laborious trial and error to search the vast vocabulary space in order to find suitable prompts for specific recommendation tasks.
#### 5.2.2 Soft Prompt Tuning
In contrast to discrete prompt, soft prompt tuning employs continuous vectors as prompt (_e.g._, text embeddings), and optimizes the prompt based on task-specific datasets, such as using gradient methods to update the prompt with respect to a recommendation loss. In LLMs, soft prompt tokens are often concatenated to the original input tokens at the input layer (_e.g._, tokenizer). During soft prompt tuning, only the soft prompt and minimal parameters at the input layer of LLMs will be updated.
To improve the recommendation performance of LLMs, some existing works combine advanced feature extraction and representation learning methods to better capture and embed task-specific information in RecSys into soft prompts. For instance, Wu _et al._[109] apply contrastive learning to capture user representations and encode them into prompt tokens, and Wang _et al._[63] and Guo _et al._[110] share the similar idea of encoding mutual information in cross-domain recommendations into soft prompts. In addition to directly embedding task-specific information into the soft prompt, the soft prompt can also be learned from task-specific datasets. For example, randomly initialized soft prompts are adopted to guide T5 to generate desired recommendation results [104], where the soft prompt is optimized in an end-to-end manner with respect to a recommendation loss based on the T5 output. Compared to the hard prompt, the soft prompt is more feasible to tune in a continuous space, but at the cost of explainability [104]. In other words, compared to a task-specific hard prompt in natural language like _"Your task is to recommend..."_, the relationships between the specific downstream tasks and the soft prompt written in continuous vectors are not interpretable to humans.
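A minimal sketch of this mechanism (a toy wrapper of our own design around the input embedding layer of an LLM) is given below; during tuning, only `self.prompt` (and, if desired, the input layer) receives gradient updates:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, embedding: nn.Embedding, n_tokens: int = 10):
        super().__init__()
        self.embedding = embedding  # frozen input embedding of the LLM
        dim = embedding.embedding_dim
        # Continuous prompt vectors, optimized w.r.t. a recommendation loss.
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, input_ids):
        tokens = self.embedding(input_ids)                    # (B, L, d)
        prompt = self.prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompt, tokens], dim=1)             # (B, n+L, d)
```

The concatenated embedding sequence is then fed to the frozen LLM in place of the ordinary token embeddings.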
### _Instruction Tuning_
Although prompting LLMs has demonstrated remarkable few-shot performance on unseen downstream tasks, recent studies have demonstrated that prompting strategies achieve much poorer zero-shot performance [87]. To address this limitation, instruction tuning trains LLMs to follow prompts as task instructions, rather than to solve specific downstream tasks. More specifically, instruction tuning can be divided into two stages: "instruction" (_i.e._, prompt) generation and model "tuning", since the straightforward idea of instruction tuning is the combination of prompting and fine-tuning LLMs.
* _Instruction (Prompt) Generation Stage._ Formally, instruction tuning introduces a format of instruction-based prompt in natural language, which is composed of task-oriented input (_i.e._, task descriptions based on task-specific datasets) and desired target (_i.e._, corresponding output based on task-specific datasets) pairs. Considering the instruction tuning of LLMs for downstream recommendation tasks, Zhang _et al._[20] propose a recommendation-oriented instruction template, including user preferences, intentions, and task forms, which serves as a common template for generating instructions for various recommendation tasks. More directly, three-part instruction templates in the form of "task description-input-output" are used in [59, 61] to generate instructions based on task-specific recommendation datasets (a sketch of such a template is given after this list).
* _Model Tuning Stage._ The second stage is to fine-tune LLMs over multiple aforementioned instructions for downstream tasks, where we categorize the existing works on RecSys, as shown in Table III, according to the LLM fine-tuning manner: full-model tuning and parameter-efficient model tuning (see Section 4.2 for explanations), since basically the same principles of fine-tuning LLMs are adopted in this stage. For example, Bao _et al._[59] utilize LoRA to make the instruction tuning of LLaMA more lightweight for downstream recommendation tasks.
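As a minimal sketch of the three-part "task description-input-output" format (the field names and example content are our own illustration), an instruction example can be generated as follows:

```python
def build_instruction(task_desc, user_history, target_item):
    prompt = (
        f"### Task description: {task_desc}\n"
        f"### Input: the user recently interacted with "
        f"{', '.join(user_history)}.\n"
        f"### Output:"
    )
    return {"prompt": prompt, "completion": f" {target_item}"}

example = build_instruction(
    "Recommend the next item for the user.",
    ["item_102", "item_6637"],
    "item_871",
)
```

A collection of such examples, generated from a task-specific recommendation dataset, is then used to fine-tune the LLM in the model tuning stage.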
In addition to textual data in RecSys, instruction tuning is recently explored to enhance the graph understanding ability of LLMs for recommendation tasks. In particular, Wu _et al._[81] propose an LLM-based prompt constructor to encode the paths of nodes (_e.g._, candidate items) and edges (_e.g._, relationships between items) in behavior graphs into natural language descriptions, which are subsequently used for instruction tuning an LLM-based recommender on a task-specific dataset.
## 6 Future Directions
In this survey, we have comprehensively reviewed the recent advanced techniques for LLM-enhanced recommender systems. Since the adaptation of LLMs to recommender systems is still at an early stage, there are still many challenges and opportunities. In this section, we discuss some potential future directions in this field.
### _Hallucination Mitigation_
Although LLMs are used in various fields, a significant challenge is the phenomenon of _'hallucination'_, where language models generate outputs that are plausible-sounding but factually incorrect or not grounded in the input data [111, 112]. For instance, consider a scenario where a user seeks today's news events: the LLM may erroneously recommend or generate news that, in fact, does not exist. The causes of this problem are manifold, such as source-reference divergence in datasets and the training and modeling choices of neural network models [113]. Moreover, the hallucination issue poses severe threats to users and society, especially in high-stakes recommendation scenarios such as medical recommendations or legal advice, where the dissemination of incorrect information can have severe real-world consequences. To address such issues, employing factual knowledge graphs as supplementary factual knowledge during the training and inference stages of LLMs for RecSys is a promising way to mitigate the hallucination problem. In addition, the model's output stage can be scrutinized to verify the accuracy and factuality of the produced content.
### _Trustworthy Large Language Models for Recommender Systems_
The development of LLMs for RecSys has brought significant benefits to humans, including economic value creation, time and effort savings, and social benefits. However, these data-driven LLMs for RecSys might also pose serious threats to users and society [114, 5, 115], due to unreliable decision making, unequal treatment of various consumers or producers, a lack of transparency and explainability, and privacy issues stemming from the extensive use of personal data for customization, among other concerns. As a result, there is an increasing concern about the issue of trustworthiness in LLMs for RecSys to mitigate the negative impacts and enhance public trust in LLM-based RecSys techniques. Thus, it is desired to achieve trustworthiness in LLMs for RecSys from four of the most crucial dimensions, including _Safety&Robustness, Non-discrimination&Fairness_, _Explainability_, and _Privacy_.
#### 6.2.1 Safety&Robustness
LLMs have been proven to advance recommender systems in various aspects, but they are also highly vulnerable to adversarial perturbations (_i.e._, minor changes in the input) that can compromise the safety and robustness of their uses in safety-critical applications [114, 44]. Such adversarial perturbations are frequently crafted with malicious intent, such as to gain unlawful profits and manipulate markets for specific products [116, 117, 118, 119]. Therefore, it is crucial to ensure that the output of LLMs for recommender systems is stable given small changes in the LLMs' input. In order to enhance model safety and robustness, GPT-4 integrates safety-related prompts during reinforcement learning from human feedback (RLHF) [120]. However, the RLHF method requires a significant number of experts for manual labeling, which might not be feasible in practice. An alternative solution might involve the automatic pre-processing of prompts designed for recommender tasks before input to LLMs. This could include screening for malicious prompts or standardizing prompts with similar purposes to have the same final input, thus potentially improving safety and robustness. In addition, as one of the representative techniques, adversarial training [121] can be used to improve the robustness of LLM-based recommender systems.
#### 6.2.2 Non-discrimination&Fairness
LLMs, trained on vast datasets, often inadvertently learn and perpetuate biases and stereotypes in the human data that later reveal themselves in the recommendation results. This phenomenon can lead to a range of adverse outcomes, from the propagation of stereotypes to the unfair treatment of certain user groups [122, 2]. For instance, in the context of recommender systems, these biases can manifest as discriminatory recommendations, where certain items are unfairly promoted or demoted based on these learned biases. More recently, a few studies such as FaiRLLM [89] and UP5 [104] explore the fairness problem in recommender systems brought by LLMs, which only focus on the user side and the item generation task. Concurrently, Hou _et al._[91] guide LLMs with prompts to formalize the recommendation task as a conditional ranking task to improve item-side fairness. However, studies on non-discrimination and fairness in LLMs for RecSys are at a preliminary stage, and further research is still needed.
#### 6.2.3 Explainability
Owing to privacy and security considerations, certain companies and organizations choose not to open-source their advanced LLMs, such as ChatGPT and GPT-4, meaning that the architectures and parameters of these LLMs for RecSys are not publicly available, which prevents the public from understanding their complex internal working mechanisms. Consequently, LLMs for RecSys can be treated as a 'black box', complicating the process for users trying to comprehend why a specific output or recommendation was produced. Recently, Bills _et al._[124] try to use GPT-4 to generate natural language descriptions to explain the neuronal behavior in the GPT-2 model. While this study is foundational, it also introduces fresh perspectives for comprehending the workings of LLMs. Neurons exhibit intricate behaviors that may not be easily encapsulated through simple natural language. To this end, efforts should be made to understand how LLMs for RecSys function, so as to enhance the explainability of LLM-based recommender systems.
#### 6.2.4 Privacy
Privacy is a paramount concern when it comes to LLMs for RecSys. The reasons for this are multifold. On the one hand, the success of LLMs for recommender systems highly depends on large quantities of data that are collected from a variety of sources, such as social media and books. Users' sensitive information (_e.g._, email and gender) contained in the data is likely to be used to train modern LLMs for enhancing prediction performance and providing personalized experiences, leading to the risk of leaking users' private information. On the other hand, these systems often handle sensitive user data, including personal preferences, online behaviors, and other identifiable information. If not properly protected, this data could be exploited, leading to breaches of privacy. Therefore, ensuring the privacy and security of this data is crucial. Carlini _et al._[125] show that LLMs might reveal some users' real identities or private information when generating text. Recently, Li _et al._[126] introduce RAPT that allows users to customize LLMs with their private data based on prompt tuning. It provides a direction on how to protect user privacy in LLMs for RecSys.
### _Vertical Domain-Specific LLMs for Recommender Systems_
General LLMs, such as ChatGPT, have powerful generation and inference capabilities that make them universal tools in various areas. Vertical domain-specific LLMs are LLMs that have been trained and optimized for a specific domain or industry, such as health [127] and finance [56]. Compared to general LLMs for RecSys, vertical domain-specific LLM-empowered RecSys are more focused on the knowledge and skills of a particular domain and have a higher degree of domain expertise and practicality. Instead of sifting through irrelevant information, users can focus on content that is directly aligned with their work or personalized preferences. By providing tailored recommendations, vertical domain-specific LLMs for RecSys can save professionals a significant amount of time. More recently, existing works have presented vertical domain-specific LLMs that cover a wide range of areas, such as medical care [128, 129], law [130, 131], and finance [132]. Because they are trained on domain-specific corpora, these vertical domain-specific LLMs can better understand and process domain-specific knowledge, terminology, and context. Yet the requirement for vast amounts of domain-specific data to train these models poses significant challenges in data collection and annotation. As such, constructing high-quality domain datasets and using suitable tuning strategies for specific domains are necessary steps in the development of vertical domain-specific LLMs for RecSys. In particular, Jin _et al._[133] propose a multilingual dataset named Amazon-M2 as a new setting of session-based recommendations from Amazon (_i.e._, sessions containing the interacted items of users) and inspire opportunities to leverage LLMs as RecSys to learn on session graphs with multilingual and textual data, such as item (node) attributes including product titles, prices, and descriptions across session graphs of users from different locales (multilingual).
### _Users&Items Indexing_
Recent research suggests that LLMs may not perform well when dealing with long texts in RecSys, as it can be difficult to effectively capture user-item interaction information in long texts [91]. On the other hand, user-item interactions (_e.g._, click, like, and subscription) with unique identities (_i.e._, discrete IDs) in recommender systems contain rich collaborative knowledge and make great contributions to understanding and predicting user preferences, encompassing both explicit actions, such as ratings and reviews, and implicit behaviors, such as browsing history or purchase data. Several studies, including InstructRec [20], PALR [61], GPT4Rec [134], and UP5 [104], have attempted to utilize user-item history interaction information as text prompts input into LLMs (_e.g._, ChatGPT) in order to make recommendations. To address the long text problem, one possible solution is to perform user and item indexing for learning collaborative knowledge by incorporating user-item interactions. Therefore, rather than merely using text formats to represent users and items, advanced methods for indexing users&items are desired to build LLM-based recommender systems.
### _Fine-tuning Efficiency_
In the application of LLMs to RecSys, fine-tuning refers to the process of adapting a pre-trained LLM to a specific task or domain, such as recommending movies [61] or books [59]. This process allows the model to leverage the general language understanding capabilities learned during pre-training while specializing its knowledge to the task at hand. However, fine-tuning can be computationally expensive, particularly for very large models and large datasets in recommender systems. Therefore, improving the efficiency of fine-tuning is a key challenge. In this case, Fu _et al._[135] use adapter modules, which are small, plug-in neural networks that can be optimized separately from the main model, to achieve parameter-efficient transfer learning. However, the current adapter tuning techniques for RecSys fall slightly behind full-model fine-tuning when it comes to cross-platform image recommendation. The exploration of adapter tuning effects for multi-modal (_i.e._, both text and image) RecSys is a potential future direction. In addition,
given that most typical adapter tuning does not help to speed up the training process in practice, it is important to explore effective optimization techniques to reduce the computational cost and time for RecSys through end-to-end training.
### _Data Augmentation_
Most conventional studies in the recommender systems domain rely on real data-driven research, founded on the collection of user behavior data via user interaction in digital platforms or through the recruitment of annotators. Nonetheless, these approaches appear to be resource-intensive and may not be sustainable in the long term. The quality and variety of the input data directly influence the performance and versatility of the models. With the aim of overcoming the shortcomings of real data-centric studies, Wang _et al._[136] introduce RecAgent, a simulation paradigm for recommender systems based on LLMs, which includes a user module for browsing and communication on social media, and a recommender module for providing search or recommendation lists. Additionally, LLM-Rec [96] incorporates four prompting strategies to improve personalized content recommendations, which demonstrates through experiments that diverse prompts and input augmentation techniques can enhance recommendation performance. Therefore, rather than solely deploying LLMs as recommender systems, utilizing them for data augmentation to bolster recommendations emerges as a promising strategy in the future.
## 7 Conclusion
As one of the most advanced AI techniques, LLMs have achieved great success in various applications, such as molecule discovery and finance, owing to their remarkable abilities in language understanding and generation, powerful generalization and reasoning skills, and prompt adaptation to new tasks and diverse domains. Similarly, increasing efforts have been made to revolutionize recommender systems with LLMs, so as to provide high-quality and personalized suggestion services. Given the rapid evolution of this research topic in recommender systems, there is a pressing need for a systematic overview that comprehensively summarizes the existing LLM-empowered recommender systems. To fill the gap, in this survey, we have provided a comprehensive overview of LLM-empowered RecSys from the _pre-training & fine-tuning_ and _prompting_ paradigms, so as to provide researchers and practitioners in relevant fields with an in-depth understanding. Nevertheless, the current research on LLMs for RecSys is still in its early stage, which calls for more systematic and comprehensive studies of LLMs in this field. Therefore, we also discussed some potential future directions in this field.
|
 2305.03521 | A Construction of Permutation Polynomials Using Rédei Function in Even Characteristic | The Rédei function defined over a field of even characteristic has been introduced by Nöbauer in 1986 [31]. In this paper, inspired by the work of Fu et al. [14] in odd characteristic, employing the AGW criterion [1], we present a recursive construction of permutation polynomials in even characteristic using the Rédei function over a field of characteristic 2. | Daniel Panario, Nihal Uyar, Qiang Wang | 2023-05-05T13:26:09Z | http://arxiv.org/abs/2305.03521v3 | # A Construction of Permutation Polynomials Using Rédei Function in Even Characteristic
###### Abstract
The Rédei function defined over a field of even characteristic has been introduced by Nöbauer in 1986 [31]. In this paper, inspired by the work of Fu et al. [14] in odd characteristic, employing the AGW criterion [1], we present a recursive construction of permutation polynomials in even characteristic using the Rédei function over a field of characteristic 2.
**Keywords: Permutation polynomials, even characteristic, AGW criterion, Rédei function**
## 1 Introduction
Let \(q\) be a power of a prime number, \(\mathbb{F}_{q}\) be the finite field with \(q\) elements and \(\mathbb{F}_{q}[x]\) be the ring of polynomials over \(\mathbb{F}_{q}\). It is well known that every function over \(\mathbb{F}_{q}\) can be expressed as a polynomial over \(\mathbb{F}_{q}\), and an important class of polynomials is the one formed by bijections of \(\mathbb{F}_{q}\). A permutation polynomial \(f\in\mathbb{F}_{q}[x]\) is a polynomial that induces a bijection of \(\mathbb{F}_{q}\) onto itself. Studies on permutation polynomials started with Hermite and Dickson [9, 15], and the subject is still an active area of research with several applications in cryptography [11, 30], coding theory [19], combinatorial designs [10] and many other areas of mathematics and engineering. For more information on permutation polynomials over finite fields, [16, 24, 29] and the references therein provide an excellent survey.
The construction of permutation polynomials over finite fields of either odd or even characteristic is an active research topic. For some constructions of permutation polynomials over finite fields of odd characteristic, the reader is referred to [1, 2, 3, 13, 14, 20, 22, 35]. Some classes of permutation polynomials over fields of even characteristic are presented in [4, 6, 17, 21, 33]. More than a decade ago, Akbary, Ghioca and Wang introduced a useful criterion, now known as the AGW criterion, to study permutations of finite sets.
**Theorem 1**.: _[_1_]_ _Let \(A\), \(S\) and \(\bar{S}\) be finite sets with \(|S|=|\bar{S}|\). Let \(f,\bar{f},\lambda,\bar{\lambda}\) be maps on finite sets such that \(f:A\to A\), \(\bar{f}:S\to\bar{S}\), \(\lambda:A\to S\), \(\bar{\lambda}:A\to\bar{S}\) and \(\bar{\lambda}\circ f=\bar{f}\circ\lambda\) :_
\[\begin{array}{ccc}A&\stackrel{{f}}{{\longrightarrow}}&A\\ \lambda\downarrow&&\downarrow\bar{\lambda}\\ S&\stackrel{{\bar{f}}}{{\longrightarrow}}&\bar{S}\end{array}\]

_If both \(\lambda\) and \(\bar{\lambda}\) are surjective, then \(f\) is a bijection on \(A\) if and only if \(\bar{f}\) is a bijection from \(S\) to \(\bar{S}\) and \(f\) is injective on \(\lambda^{-1}(s)\) for each \(s\in S\)._
## 2 Rédei functions
In this section, we define the Rédei function (as well as the tangent-Chebyshev function) over \(\mathbb{F}_{q^{2}}\) where \(q\) is odd. Then we state the central result [14, Theorem 1] that we adapt to obtain permutation polynomials in a field of even characteristic.
Let \(q\) be an odd prime power, \(n\) be a positive integer and \(\alpha\in\mathbb{F}_{q^{2}}^{*}\). The Rédei function over \(\mathbb{F}_{q^{2}}\) is defined as follows:
\[R_{n}(x,\alpha)=\frac{G_{n}(x,\alpha)}{H_{n}(x,\alpha)}=\frac{\sum_{i=0}^{ \lfloor n/2\rfloor}{n\choose 2i}\alpha^{i}x^{n-2i}}{\sum_{i=0}^{\lfloor n/2 \rfloor}{n\choose 2i+1}\alpha^{i}x^{n-2i-1}}.\]
The degree-\(n\) tangent-Chebyshev rational function over \(\mathbb{F}_{q^{2}}\) with a non-square \(\alpha\in\mathbb{F}_{q^{2}}^{*}\) is defined as follows:
\[C_{n}(x,\alpha)=\frac{E_{n}(x,\alpha)}{F_{n}(x,\alpha)}=\frac{\sum_{i=0}^{ \lfloor n/2\rfloor}{n\choose 2i+1}\alpha^{i}x^{2i+1}}{\sum_{i=0}^{ \lfloor n/2\rfloor}{n\choose 2i}\alpha^{i}x^{2i}}.\]
Using the Rédei function in odd characteristic defined above, we have the following construction of permutation polynomials over \(\mathbb{F}_{q^{2}}\). Here \(\mu_{q+1}=\{x\in\mathbb{F}_{q^{2}}:x^{q+1}=1\}\) denotes the set of \((q+1)\)-st roots of unity in \(\mathbb{F}_{q^{2}}\).
**Theorem 3**.: _[_14_]_ _Let \(q\) be an odd prime power. Suppose \(n>0\) and \(m\) are two integers. Let \(\alpha\in\mathbb{F}_{q^{2}}\) satisfy \(\alpha^{q+1}=1\). Then, the polynomial_
\[P(x)=x^{n+m(q+1)}H_{n}(x^{q-1},\alpha)\]
_permutes \(\mathbb{F}_{q^{2}}\) if and only if any one of the following conditions holds:_
* \(\gcd(n(n+2m),q-1)=1\)_, when_ \(\sqrt{\alpha}\in\mu_{q+1}\)_;_
* \(\gcd(n+2m,q-1)=1\) _and_ \(\gcd(n,q+1)=1\)_, when_ \(\sqrt{\alpha}\notin\mu_{q+1}\)_._
We observe that the theorem above is also applicable to the polynomial \(G_{n}(x,\alpha)\). It is easy to show that we also have permutation polynomials in \(\mathbb{F}_{q^{2}}\) using the denominator and the numerator of the tangent-Chebyshev function by adapting the proof of [14, Theorem 1] with some simple algebraic manipulations as in [12] and use of the following fundamental equality:
\[C_{n}(x,\alpha)=\frac{1}{x}\circ R_{n}(x,\alpha)\circ\frac{1}{x}. \tag{1}\]
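For instance, for \(n=2\) we have \(R_{2}(x,\alpha)=\frac{x^{2}+\alpha}{2x}\) and \(C_{2}(x,\alpha)=\frac{2x}{1+\alpha x^{2}}\), and one checks (1) directly:

\[\frac{1}{x}\circ R_{2}(x,\alpha)\circ\frac{1}{x}=\frac{1}{R_{2}(1/x,\alpha)}=\frac{2/x}{(1/x)^{2}+\alpha}=\frac{2x}{1+\alpha x^{2}}=C_{2}(x,\alpha).\]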
Equation (1) implies that the polynomial \(F_{n}(x,\alpha)\), respectively \(E_{n}(x,\alpha)\), is the reciprocal of the polynomial \(G_{n}(x,\alpha)\), respectively \(H_{n}(x,\alpha)\), by the equality \(F_{n}(x,\alpha)=x^{n}G_{n}(\frac{1}{x},\alpha)\). Therefore, we have the following corollary from [14, Theorem 1].
**Corollary 4**.: _Let \(q\) be an odd prime power, \(\alpha\in\mathbb{F}_{q^{2}}\) with \(\alpha^{q+1}=1\) and \(n,m\) be positive integers. Then,_
\[P(x)=x^{n+m(q+1)}F_{n}(x^{q-1},\alpha)\]
_permutes \(\mathbb{F}_{q^{2}}\) if and only if one of the following conditions holds:_
* \(\gcd(n(n+2m),q-1)=1\)_, when_ \(\sqrt{\alpha}\in\mu_{q+1}\)_;_
* \(\gcd(n+2m,q-1)=1\) _and_ \(\gcd(n,q+1)=1\)_, when_ \(\sqrt{\alpha}\notin\mu_{q+1}\)_._
We note that this corollary is also applicable to the polynomial \(E_{n}(x,\alpha)\), as we have \(E_{n}(x,\alpha)=x^{n}H_{n}(\frac{1}{x},\alpha)\).
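Both Eq. (1) and the reversal identities above are easy to check symbolically. The following sympy sketch (our own illustration, not taken from [12] or [14]) verifies them as rational-function identities over \(\mathbb{Q}\) for small \(n\); since all coefficients are integers, the identities then also hold in odd characteristic.

```python
# Check C_n(x,a) = 1/R_n(1/x,a) (Eq. (1)) and F_n(x,a) = x^n G_n(1/x,a)
# for n = 1..6 as rational-function identities over Q.
import sympy as sp

x, a = sp.symbols('x a')

def G(n): return sum(sp.binomial(n, 2*i) * a**i * x**(n - 2*i) for i in range(n//2 + 1))
def H(n): return sum(sp.binomial(n, 2*i + 1) * a**i * x**(n - 2*i - 1) for i in range(n//2 + 1))
def E(n): return sum(sp.binomial(n, 2*i + 1) * a**i * x**(2*i + 1) for i in range(n//2 + 1))
def F(n): return sum(sp.binomial(n, 2*i) * a**i * x**(2*i) for i in range(n//2 + 1))

for n in range(1, 7):
    C_n = E(n) / F(n)                         # tangent-Chebyshev function
    R_n_inv = 1 / (G(n) / H(n)).subs(x, 1/x)  # (1/x) o R_n o (1/x)
    assert sp.simplify(C_n - R_n_inv) == 0
    assert sp.simplify(F(n) - x**n * G(n).subs(x, 1/x)) == 0
print("identities verified for n = 1..6")
```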
Next, we define Redei function in even characteristic. Let \(q\) be an even prime power and \(h(x)=x^{2}+x+\alpha\) be an irreducible polynomial over \(\mathbb{F}_{q}\) where \(\alpha\in\mathbb{F}_{q}\). Let \(\beta\) be a root of this polynomial in \(\mathbb{F}_{q^{2}}\). The other root of this polynomial is \(\beta+1\) and therefore \(\alpha=\beta^{2}+\beta\) which implies \(\beta^{2}+\beta\in\mathbb{F}_{q}\) where \(\beta,\beta+1\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\).
We can define the Redei function over any finite field, uniformly in the characteristic, in the following way. Here, for simplicity, \(\bar{\beta}=\beta+1\) denotes the root of \(h(x)\) different from \(\beta\), where \(h(x)\) is an irreducible polynomial over \(\mathbb{F}_{q}\) of the form \(x^{2}+x+\alpha\) and \(q\) is even. If \(q\) is odd, \(\bar{\beta}=-\beta\) denotes the root of \(x^{2}-\alpha\) different from \(\beta\).
**Definition 1**.: _[_12_]_ _The Redei function of degree \(n\) over \(\mathbb{F}_{q}\) where \(n\) is a positive integer and \(\alpha\in\mathbb{F}_{q}\) is defined by_
\[R_{n}(x,\alpha):=\rho^{-1}\circ x^{n}\circ\rho\]
_where \(\rho(x):=(x-\bar{\beta})/(x-\beta)\) and \(\rho^{-1}(x):=(\beta x-\bar{\beta})/(x-1)\) are the degree-one rational functions in \(\mathbb{F}_{q^{2}}(x)\) such that \((\rho^{-1}\circ\rho)(x)=x\)._
As we are interested in the Redei function in a field of even characteristic in this paper, the definition above yields the following:
\[R_{n}(x,\alpha)=\frac{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}{(x+\beta+1 )^{n}+(x+\beta)^{n}}. \tag{2}\]
We have the fact that \(\rho(x)\) induces a bijection from \(\mathbb{F}_{q}\cup\{\infty\}\) to the set \(\mu_{q+1}\) and it implies that \(R_{n}(x,\alpha)\) permutes \(\mathbb{F}_{q}\cup\{\infty\}\) if and only if \(x^{n}\) permutes \(\mu_{q+1}\), i.e. \(\gcd(n,q+1)=1\).
Let us consider the following notation:
\[M_{n}(x,\alpha)=(x+\beta+1)^{n}+(x+\beta)^{n}=\sum_{i=0}^{n}{(\beta^{i}+( \beta+1)^{i})\binom{n}{i}x^{n-i}}\]
and
\[N_{n}(x,\alpha)=\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}=\sum_{i=0}^{n}{(( \beta+1)\beta^{i}+\beta(\beta+1)^{i})\binom{n}{i}x^{n-i}}.\]
With this notation, the Redei function in a field of characteristic \(2\) is defined as
\[R_{n}(x,\alpha)=\frac{N_{n}(x,\alpha)}{M_{n}(x,\alpha)} =\frac{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}{(x+\beta+1)^{n }+(x+\beta)^{n}}\] \[=\frac{\sum_{i=0}^{n}{((\beta+1)\beta^{i}+\beta(\beta+1)^{i}) \binom{n}{i}x^{n-i}}}{\sum_{i=0}^{n}{(\beta^{i}+(\beta+1)^{i})\binom{n}{i}x^{n -i}}}.\]
We remark that we denote the numerator and denominator as \(N_{n}(x,\alpha)\) and \(M_{n}(x,\alpha)\), instead of \(G_{n}(x,\alpha)\) and \(H_{n}(x,\alpha)\), to emphasize that they are polynomials defined over a field of characteristic \(2\).
## 3 A Construction of Permutation Polynomials in Even Characteristic
In this section, we introduce a recursive construction of two classes of permutation polynomials in a field of characteristic \(2\). For this purpose, we prove a few lemmas and make some observations which are used in the proof of our main theorems.
**Proposition 5**.: _Let \(\beta\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) be such that \(\alpha=\beta^{2}+\beta=1\), where \(q=2^{t}\) for some positive odd integer \(t\). Then \(\beta^{i}+(\beta+1)^{i}\in\mathbb{F}_{2}\) and \((\beta+1)\beta^{i}+\beta(\beta+1)^{i}\in\mathbb{F}_{2}\) for every positive integer \(i\)._
Proof.: We observe that showing \(\beta^{i}+(\beta+1)^{i}\in\mathbb{F}_{2}\) is enough to conclude \((\beta+1)\beta^{i}+\beta(\beta+1)^{i}\in\mathbb{F}_{2}\) for every positive integer \(i\), because of the following relation, obtained using the equalities \(\beta=\beta^{2}+1=(\beta+1)^{2}\) and \(\beta^{2}=\beta+1\):
\[(\beta+1)\beta^{i}+\beta(\beta+1)^{i} =(\beta+1)(\beta+1)^{2i}+\beta\beta^{2i}\] \[=(\beta+1)^{2i+1}+\beta^{2i+1}.\]
We also have the following equalities.
\[\beta^{i}+(\beta+1)^{i} =\beta^{i}+\beta^{2i}\] \[=(\beta+1)^{2i}+\beta^{2i}\] \[=((\beta+1)^{i}+\beta^{i})^{2}.\]
That is, \(\beta^{i}+(\beta+1)^{i}\) equals its own square and hence lies in \(\mathbb{F}_{2}\) for every positive integer \(i\).
**Remark 1**.: _Let \(\beta\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) with \(\beta^{2}+\beta=1\), i.e., \(\alpha=1\). In other words, \(\beta\) belongs to the extension of \(\mathbb{F}_{q}\) defined by the minimal polynomial \(x^{2}+x+1\). Suppose that \(t\) is even, that is, \(q=2^{2k}\) for some positive integer \(k\). Then \(\mathbb{F}_{q}\) contains \(\mathbb{F}_{4}\) as a subfield, so \(x^{2}+x+1\) is reducible over \(\mathbb{F}_{q}\) and we cannot use the polynomial \(x^{2}+x+\alpha\) with \(\alpha=\beta^{2}+\beta=1\) to extend \(\mathbb{F}_{q}\) to \(\mathbb{F}_{q^{2}}\). Therefore, if \(\beta^{2}+\beta=1\), then \(t\) must be odd, where \(q=2^{t}\)._
From now on, we consider \(q=2^{t}\) where \(t\) is an odd integer and we fix \(\alpha=1\). We also simplify the notation and use \(M_{n}(x)\) and \(N_{n}(x)\) instead of \(M_{n}(x,1)\) and \(N_{n}(x,1)\).
Denoting \(a_{i}=\beta^{i}+(\beta+1)^{i}\) and \(b_{i}=(\beta+1)\beta^{i}+\beta(\beta+1)^{i}\), we have
\[\begin{split} b_{i}^{2}&=((\beta+1)\beta^{i}+\beta( \beta+1)^{i})^{2}=(\beta+1)^{2}\beta^{2i}+\beta^{2}(\beta+1)^{2i}\\ &=(\beta+1)^{2}(\beta+1)^{i}+\beta^{2}\beta^{i}=(\beta+1)^{i+2}+ \beta^{i+2}\\ &=a_{i+2}.\end{split}\]
Since \(b_{i}\in\mathbb{F}_{2}\) when \(\alpha=1\), this implies \((b_{i})^{2}=b_{i}\), and so we have the relation \(b_{i}=a_{i+2}\). Therefore, we have
\[M_{n}(x)=\sum_{i=0}^{n}a_{i}\binom{n}{i}x^{n-i}\]
and
\[N_{n}(x)=\sum_{i=0}^{n}a_{i+2}\binom{n}{i}x^{n-i}.\]
For some positive integer \(k\), we also have the following equalities:
\[\beta^{3k}=(\beta\beta^{2})^{k}=(\beta(\beta+1))^{k}=1,\]
and
\[(\beta+1)^{3k}=((\beta+1)(\beta+1)^{2})^{k}=((\beta+1)\beta)^{k}=1.\]
Therefore,
\[\begin{split} a_{3k}&=\beta^{3k}+(\beta+1)^{3k}=1+1 =0,\\ a_{3k+1}&=\beta^{3k+1}+(\beta+1)^{3k+1}=\beta+( \beta+1)=1,\\ a_{3k+2}&=\beta^{3k+2}+(\beta+1)^{3k+2}=\beta^{2}+( \beta+1)^{2}=1.\end{split}\]
We deduce that
\[a_{i}=\begin{cases}0&\text{if }3\mid i,\\ 1&\text{otherwise.}\end{cases}\]
Determining \(a_{i}\) and \(b_{i}=a_{i+2}\) helps us to determine \(M_{n}(x)\) and \(N_{n}(x)\) easily, as we only need to calculate the corresponding binomial coefficient \(\binom{n}{i}\pmod{2}\) to calculate \(M_{n}(x)\) and \(N_{n}(x)\).
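For instance, the following short Python sketch (our own illustration, not part of the construction itself) computes the coefficient lists of \(M_{n}(x)\) and \(N_{n}(x)\) using Lucas' theorem, by which \(\binom{n}{i}\) is odd exactly when the binary digits of \(i\) form a subset of those of \(n\):

```python
def binom_mod2(n, i):
    # Lucas' theorem in base 2: C(n, i) is odd iff (n AND i) == i.
    return 1 if (n & i) == i else 0

def a(i):
    # a_i = 0 when 3 | i, and a_i = 1 otherwise.
    return 0 if i % 3 == 0 else 1

def M_coeffs(n):
    # Entry j is the coefficient of x^{n-j} in M_n(x).
    return [a(j) * binom_mod2(n, j) for j in range(n + 1)]

def N_coeffs(n):
    # Entry j is the coefficient of x^{n-j} in N_n(x), using b_j = a_{j+2}.
    return [a(j + 2) * binom_mod2(n, j) for j in range(n + 1)]

# Example: M_5(x) = x^4 + x + 1 and N_5(x) = x^5 + 1.
print(M_coeffs(5), N_coeffs(5))
```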
In the following, we prove two lemmas required for the proof of our main theorems.
**Lemma 6**.: _We have the following results:_
1. \(M_{n}(x)\) _has no root in_ \(\mu_{q+1}\)_, when_ \(\gcd(n,q^{2}-1)=1\)_;_
2. \(N_{n}(x)\) _has no root in_ \(\mu_{q+1}\)_, when_ \(\gcd(n,q^{2}-1)=1\) _and_ \(n\equiv 1\pmod{3}\)_._
Proof.: The proofs of the two cases above are similar.
1. Suppose by contradiction that there is \(x\in\mu_{q+1}\) such that \(M_{n}(x)=0\), that is, \[(x+\beta+1)^{n}+(x+\beta)^{n}=0.\]
Therefore, we have \[\Big{(}\frac{x+\beta+1}{x+\beta}\Big{)}^{n}=1.\] As \(\gcd(n,q^{2}-1)=1\), we have \(\frac{x+\beta+1}{x+\beta}=1\). This leads to a contradiction as we would have \(x+\beta=x+\beta+1\). Hence, \(M_{n}(x)\) has no root in \(\mu_{q+1}\) when \(\gcd(n,q^{2}-1)=1\).
2. Similar to the previous case, suppose that there is \(x\in\mu_{q+1}\) such that \(N_{n}(x)=0\), that is, \[\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}=0.\] Then, we have \[\frac{\beta}{\beta+1}\Big{(}\frac{x+\beta+1}{x+\beta}\Big{)}^{n}=1.\] Equivalently, we have \[\Big{(}\frac{x+\beta+1}{x+\beta}\Big{)}^{n}=\frac{\beta+1}{\beta}.\] Since \(\beta^{2}+\beta=1\) as we fixed \(\alpha=1=\beta(\beta+1)\), we have \(\frac{\beta+1}{\beta}=\frac{\beta+1}{\frac{1}{\beta+1}}=(\beta+1)^{2}=\beta\). We know that \(\beta^{n}=\beta\) when \(n\equiv 1\pmod{3}\), as \(\beta^{3}=1\). Therefore, we have \[\Big{(}\frac{x+\beta+1}{x+\beta}\Big{)}^{n}=\beta^{n}.\] Since \(\gcd(n,q^{2}-1)=1\), we have \(x+\beta+1=\beta x+\beta+1\) which implies \((\beta+1)x=0\), a contradiction. Hence \(N_{n}(x)\) has no root in \(\mu_{q+1}\) when \(\gcd(n,q^{2}-1)=1\) and \(n\equiv 1\pmod{3}\).
**Remark 2**.: _The proof of Lemma 6 is also true when we consider \(M_{n}(x)\) and \(N_{n}(x)\) in \(\mathbb{F}_{q^{2}}^{*}\). That is, \(M_{n}(x)\) has no root in \(\mathbb{F}_{q^{2}}^{*}\) when \(\gcd(n,q^{2}-1)=1\) and \(N_{n}(x)\) has no root in \(\mathbb{F}_{q^{2}}^{*}\) when \(\gcd(n,q^{2}-1)=1\) with \(n\equiv 1\pmod{3}\)._
**Lemma 7**.: _We have the following results:_
1. \(x^{n+m(q+1)}M_{n}(x)^{q-1}=R_{n}(x)\)_, when_ \(n\equiv 1\pmod{3}\) _where_ \(x\in\mu_{q+1}\)_;_
2. \(x^{n+m(q+1)}M_{n}(x)^{q-1}=R_{n}(x)+1\)_, when_ \(n\equiv 2\pmod{3}\) _where_ \(x\in\mu_{q+1}\)_;_
3. \(x^{n+m(q+1)}N_{n}(x)^{q-1}=1+\frac{1}{R_{n}(x)}\)_, when_ \(n\equiv 0\pmod{3}\) _where_ \(x\in\mu_{q+1}\)_;_
4. \(x^{n+m(q+1)}N_{n}(x)^{q-1}=\frac{1}{R_{n}(x)}\)_, when_ \(n\equiv 1\pmod{3}\) _where_ \(x\in\mu_{q+1}\)_._
Proof.: Consider \(x^{n+m(q+1)}N_{n}(x)^{q-1}\) where \(x\in\mu_{q+1}\); note that \(x^{m(q+1)}=1\). We showed in Lemma 6 that \(M_{n}(x)\) and \(N_{n}(x)\) have no root in \(\mu_{q+1}\) under the relevant conditions, so the \((q-1)\)-th powers below are well defined. Since \(x^{q}=x^{-1}\), \(\beta^{q}=\beta^{-1}=\beta+1\) and \((\beta+1)^{q}=(\beta+1)^{-1}=\beta\) where \(\beta\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), we have
\[x^{n+m(q+1)}N_{n}(x)^{q-1} =x^{n}\frac{\left(\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}\right)^{q}}{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}\] \[=x^{n}\frac{(\beta+1)(x^{q}+\beta)^{n}+\beta(x^{q}+\beta+1)^{n}}{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}\]
\[=\frac{(\beta+1)(\beta x+1)^{n}+\beta((\beta+1)x+1)^{n}}{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}\] \[=\frac{(\beta+1)\beta^{n}(x+\beta+1)^{n}+\beta(\beta+1)^{n}(x+\beta)^{n}}{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}},\]
where we used \(\beta x+1=\beta(x+\beta+1)\) and \((\beta+1)x+1=(\beta+1)(x+\beta)\), which follow from \(\beta^{-1}=\beta+1\). When \(n\equiv 1\pmod{3}\), we have \(\beta^{n}=\beta\) and \((\beta+1)^{n}=\beta+1\), and using \(\beta(\beta+1)=\alpha=1\) we obtain
\[x^{n+m(q+1)}N_{n}(x)^{q-1}=\frac{(x+\beta+1)^{n}+(x+\beta)^{n}}{\beta(x+\beta+1)^{n}+(\beta+1)(x+\beta)^{n}}=\frac{M_{n}(x)}{N_{n}(x)}=\frac{1}{R_{n}(x)},\]
which proves _(4)_. When \(n\equiv 0\pmod{3}\), we have \(\beta^{n}=(\beta+1)^{n}=1\), so the numerator becomes \((\beta+1)(x+\beta+1)^{n}+\beta(x+\beta)^{n}=N_{n}(x)+M_{n}(x)\), which proves _(3)_. The analogous computation for \(M_{n}(x)\) gives
\[x^{n+m(q+1)}M_{n}(x)^{q-1}=\frac{\beta^{n}(x+\beta+1)^{n}+(\beta+1)^{n}(x+\beta)^{n}}{(x+\beta+1)^{n}+(x+\beta)^{n}},\]
which equals \(R_{n}(x)\) when \(n\equiv 1\pmod{3}\), and equals \(R_{n}(x)+1\) when \(n\equiv 2\pmod{3}\), since in the latter case \(\beta^{n}=\beta+1\) and \((\beta+1)^{n}=\beta\), so the numerator is \(N_{n}(x)+M_{n}(x)\). This proves _(1)_ and _(2)_.
Next, we state and prove one of our main theorems of the paper.
**Theorem 8**.: _Let \(\beta\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) such that \(\beta\) and \(\beta+1\) are roots of the polynomial \(x^{2}+x+1\) in \(\mathbb{F}_{q^{2}}\) where \(q=2^{t}\) with \(t\) odd and \(n\), \(m\) positive integers. Consider \(a_{i}=0\) if \(3\mid i\) and \(a_{i}=1\) otherwise. Then the polynomial_
\[x^{n+m(q+1)}M_{n}(x^{q-1})=x^{n+m(q+1)}\sum_{i=0}^{n}a_{i}\binom{n}{i}(x^{q-1}) ^{n-i}\]
_permutes \(\mathbb{F}_{q^{2}}\) if and only if \(\gcd(n+m(q+1),q-1)=1\), and \(\gcd(n,q^{2}-1)=1\)._
Proof.: We have the following diagram by Theorem 1 (the AGW criterion):
\[\begin{array}{ccc}\mathbb{F}_{q^{2}}^{*}&\xrightarrow{\ x^{n+m(q+1)}M_{n}(x^{q-1})\ }&\mathbb{F}_{q^{2}}^{*}\\ x^{q-1}\downarrow\ \ &&\ \ \downarrow x^{q-1}\\ \mu_{q+1}&\xrightarrow{\ x^{n+m(q+1)}M_{n}(x)^{q-1}\ }&\mu_{q+1}\end{array}\]
The kernel of \(x\mapsto x^{q-1}\) on \(\mathbb{F}_{q^{2}}^{*}\) is \(\mathbb{F}_{q}^{*}\), on which our polynomial acts as \(x\mapsto x^{n+m(q+1)}\); hence the injectivity-on-fibers condition of Theorem 1 is exactly \(\gcd(n+m(q+1),q-1)=1\). This entails that our result is equivalent to showing that \(x^{n+m(q+1)}M_{n}(x)^{q-1}\) permutes \(\mu_{q+1}\) when \(\gcd(n+m(q+1),q-1)=1\).
We examine two cases: \(n\equiv 1\pmod{3}\) and \(n\equiv 2\pmod{3}\). The case \(n\equiv 0\pmod{3}\) is excluded: since \(t\) is odd, \(3\) divides \(q+1=2^{t}+1\), and the assumption \(\gcd(n,q^{2}-1)=1\) implies \(\gcd(n,q+1)=1\), so \(3\nmid n\).
Case 1: Let \(n\equiv 1\pmod{3}\). We know by Lemma 7 that \(x^{n+m(q+1)}M_{n}(x)^{q-1}=R_{n}(x)\) when \(n\equiv 1\pmod{3}\), where \(x\in\mu_{q+1}\). Therefore, we need to show that \(R_{n}(x)\) permutes \(\mu_{q+1}\).
Consider \(\phi(x):\mathbb{F}_{q}\cup\{\infty\}\to\mu_{q+1}\) such that \(\phi(x)=\frac{x+\beta}{x+\beta+1}\) and \(\phi^{-1}(x):\mu_{q+1}\to\mathbb{F}_{q}\cup\{\infty\}\) such that \(\phi^{-1}(x)=\frac{(\beta+1)x+\beta}{x+1}\). We have the fact that \(\phi(x)\) induces a bijection from \(\mathbb{F}_{q}\cup\{\infty\}\) to \(\mu_{q+1}\) with \(\phi(\infty)=1\), and \(\phi^{-1}(x)\) induces a bijection from \(\mu_{q+1}\) to \(\mathbb{F}_{q}\cup\{\infty\}\) with \(\phi^{-1}(1)=\infty\), as we have \((\phi\circ\phi^{-1})(x)=x\). Therefore, \(R_{n}(x)\) is a bijection on \(\mu_{q+1}\) if and only if \(R_{n}(\phi(x)):\mathbb{F}_{q}\cup\{\infty\}\to\mu_{q+1}\) is a bijection, if and only if \(\phi^{-1}(R_{n}(\phi(x))):\mathbb{F}_{q}\cup\{\infty\}\to\mathbb{F}_{q}\cup\{\infty\}\) is a bijection. To illustrate this, we have the following diagram:
\[\begin{array}{ccc}\mu_{q+1}&\xrightarrow{\ R_{n}(x)\ }&\mu_{q+1}\\ \phi\uparrow\ \ &&\ \ \downarrow\phi^{-1}\\ \mathbb{F}_{q}\cup\{\infty\}&\xrightarrow{\ \phi^{-1}\circ R_{n}\circ\phi\ }&\mathbb{F}_{q}\cup\{\infty\}\end{array}\]
We know that \(R_{n}(x)\) maps \(1\) to \(1\), since we have
\[R_{n}(1)=\frac{\beta^{n+1}+(\beta+1)^{n+1}}{\beta^{n}+(\beta+1)^{n}}=\frac{a_{n+ 1}}{a_{n}}.\]
Since \(a_{n+1}=a_{n}=1\) when \(n\equiv 1\pmod{3}\), we have \(R_{n}(1)=1\). Therefore, the function \(\phi^{-1}(R_{n}(\phi(x)))\) on \(\mathbb{F}_{q}\cup\{\infty\}\) maps \(\infty\) to \(\infty\). Then, we need to show that it permutes \(\mathbb{F}_{q}\). One can compute \(\phi^{-1}(R_{n}(\phi(x)))\) as follows using the equalities \(\beta^{n}=\beta\) and \((\beta+1)^{n}=\beta+1\) when \(n\equiv 1\pmod{3}\)
\[\phi^{-1}(R_{n}(\phi(x))) =\frac{(\beta+1)R_{n}\Big{(}\frac{x+\beta}{x+\beta+1}\Big{)}+ \beta}{R_{n}\Big{(}\frac{x+\beta}{x+\beta+1}\Big{)}+1}\] \[=\frac{(\beta+1)(\beta x)^{n}}{(\beta+1)(\beta x)^{n}+\beta(( \beta+1)x+\beta+1)^{n}}\] \[=\frac{x^{n}}{x^{n}+(x+1)^{n}}.\]
We denote this function as \(g(x)\); next we show that \(g(x)\) permutes \(\mathbb{F}_{q}\). We observe that \(g(1)=\frac{1}{1}=1\) and \(g(0)=\frac{0}{1}=0\). We also have \((x+1)^{n}\neq x^{n}\) for any \(x\in\mathbb{F}_{q}\): otherwise, since \(\gcd(n,q^{2}-1)=1\), this would force \(\frac{x}{x+1}=1\), which is impossible. Therefore, the denominator of \(g\) cannot vanish for any \(x\in\mathbb{F}_{q}\).
Suppose that \(g(x)=g(y)\) with \(x\neq y\) where \(x,y\in\mathbb{F}_{q}\setminus\{0,1\}\). We have
\[\frac{x^{n}}{(x+1)^{n}+x^{n}}=\frac{y^{n}}{(y+1)^{n}+y^{n}}.\]
We have
\[x^{n}((y+1)^{n}+y^{n})=y^{n}((x+1)^{n}+x^{n}),\]
that is,
\[(x(y+1))^{n}=((x+1)y)^{n}.\]
We know that \(\gcd(n,q-1)=1\). As a consequence, we have \(xy+x=xy+y\) which implies \(x=y\), a contradiction. Therefore \(g(x)\) permutes \(\mathbb{F}_{q}\).
Thus, we showed that \(R_{n}(x)\) permutes \(\mu_{q+1}\setminus\{1\}\) when \(n\equiv 1\pmod{3}\) and \(R_{n}(1)=1\), and as a consequence \(R_{n}(x)\) permutes \(\mu_{q+1}\).
Case 2: Let \(n\equiv 2\pmod{3}\). We know by Lemma 7 that \(x^{n+m(q+1)}M_{n}(x)^{q-1}=R_{n}(x)+1\), when \(n\equiv 2\pmod{3}\) where \(x\in\mu_{q+1}\). Therefore, we need to show that \(R_{n}(x)+1\) permutes \(\mu_{q+1}\). Similar to the previous case, we need to show
\[\phi^{-1}\Big{(}\frac{(\beta+1)(\phi(x)+\beta+1)^{n}+\beta(\phi(x)+\beta)^{n} }{(\phi(x)+\beta)^{n}+(\phi(x)+\beta+1)^{n}}\Big{)}\]
permutes \(\mathbb{F}_{q}\cup\{\infty\}\). To illustrate this, we have the following diagram:
\[\begin{array}{ccc}\mathbb{F}_{q^{2}}^{*}&\xrightarrow{\ x^{n+m(q+1)}M_{n}(x^{q-1})\ }&\mathbb{F}_{q^{2}}^{*}\\ x^{q-1}\downarrow\ \ &&\ \ \downarrow x^{q-1}\\ \mu_{q+1}&\xrightarrow{\ R_{n}(x)+1\ }&\mu_{q+1}\\ \phi\uparrow\ \ &&\ \ \downarrow\phi^{-1}\\ \mathbb{F}_{q}\cup\{\infty\}&\xrightarrow{\ \phi^{-1}\circ(R_{n}+1)\circ\phi\ }&\mathbb{F}_{q}\cup\{\infty\}\end{array}\]
We observe that \(R_{n}(x)+1\) maps \(1\) to \(1.\) Indeed, since \(a_{n}=1\) and \(a_{n+1}=0\) when \(n\equiv 2\pmod{3},\) we obtain
\[R_{n}(1)+1=\frac{a_{n+1}}{a_{n}}+1=0+1=1.\]
Therefore, the function \(\phi^{-1}\circ(R_{n}(x)+1)\circ\phi\) on \(\mathbb{F}_{q}\cup\{\infty\}\) maps \(\infty\) to \(\infty.\) Then, we need to show that it permutes \(\mathbb{F}_{q}.\) One can compute \(\phi^{-1}\circ(R_{n}(x)+1)\circ\phi\) as follows using the equalities \(\beta^{n}=\beta^{2}=\beta+1\) and \((\beta+1)^{n}=(\beta+1)^{2}=\beta\) when \(n\equiv 2\pmod{3}\):
\[\phi^{-1}\circ(R_{n}(x)+1)\circ\phi =\phi^{-1}\Big{(}\frac{(\beta+1)(\phi(x)+\beta+1)^{n}+\beta(\phi( x)+\beta)^{n}}{(\phi(x)+\beta)^{n}+(\phi(x)+\beta+1)^{n}}\Big{)}\] \[=\frac{(\beta+1)((\beta+1)x+\beta+1)^{n}}{\beta(\beta x)^{n}+( \beta+1)((\beta+1)x+\beta+1)^{n}}\] \[=\frac{(x+1)^{n}}{x^{n}+(x+1)^{n}}.\]
Let us denote this function by \(g^{\prime}(x)\); we want to show that \(g^{\prime}(x)\) permutes \(\mathbb{F}_{q}\). We have \(g^{\prime}(0)=\frac{1}{1}=1\) and \(g^{\prime}(1)=\frac{0}{1}=0\). As in the previous case, the denominator of \(g^{\prime}(x)\) cannot vanish for any \(x\in\mathbb{F}_{q}\), since it is the same denominator as before.
Suppose that \(g^{\prime}(x)=g^{\prime}(y)\) with \(x\neq y\) where \(x,y\in\mathbb{F}_{q}\setminus\{0,1\}.\) We have
\[\frac{(x+1)^{n}}{(x+1)^{n}+x^{n}}=\frac{(y+1)^{n}}{(y+1)^{n}+y^{n}},\]
that is,
\[(x+1)^{n}((y+1)^{n}+y^{n})=(y+1)^{n}((x+1)^{n}+x^{n}).\]
This implies that
\[(xy+y)^{n}=(xy+x)^{n}.\]
Since \(\gcd(n,q-1)=1,\) we have \(xy+y=xy+x\) which implies \(x=y,\) a contradiction. Therefore \(R_{n}(x)+1\) is a permutation on \(\mu_{q+1}\setminus\{1\}.\) Hence, \(R_{n}(x)+1\) permutes \(\mu_{q+1}\) as \(R_{n}(1)+1=1.\)
This shows \(x^{n+m(q+1)}M_{n}(x)^{q-1}\) permutes \(\mu_{q+1}\) when \(\gcd(n,q^{2}-1)=1\) which finishes the proof.
Similarly, we can have permutation polynomials in a field of even characteristic by using the numerator \(N_{n}(x)\) of the Redei function in even characteristic.
**Theorem 9**.: _Let \(\beta\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) such that \(\beta\) and \(\beta+1\) are roots of the polynomial \(x^{2}+x+1\) in \(\mathbb{F}_{q^{2}}\) where \(q=2^{t}\) with \(t\) odd and \(n\), \(m\) positive integers. Consider \(a_{i+2}=0\) if \(3\mid(i+2)\) and \(a_{i+2}=1\) otherwise. Then the polynomial_
\[x^{n+m(q+1)}N_{n}(x^{q-1})=x^{n+m(q+1)}\sum_{i=0}^{n}a_{i+2}\binom{n}{i}(x^{q- 1})^{n-i}\]
_permutes \(\mathbb{F}_{q^{2}}\) if and only if \(\gcd(n+m(q+1),q-1)=1\) and \(\gcd(n,q^{2}-1)=1\), where \(n\equiv 1\pmod{3}\)._
Proof.: By Lemma 7 and the AGW criterion (as in the proof of Theorem 8), the map that \(x^{n+m(q+1)}N_{n}(x^{q-1})\) induces on \(\mu_{q+1}\) is \(x^{n+m(q+1)}N_{n}(x)^{q-1}=\frac{1}{R_{n}(x)}\). We showed in the proof of the previous theorem that \(R_{n}(x)\) permutes \(\mu_{q+1}\) when \(n\equiv 1\pmod{3}\). Therefore, \(\frac{1}{R_{n}(x)}\) permutes \(\mu_{q+1}\).
It is worth noticing that we can easily construct a permutation polynomial \(x^{n+m(q+1)}M_{n}(x^{q-1})\) or \(x^{n+m(q+1)}N_{n}(x^{q-1})\) over a field of even characteristic by only calculating the corresponding binomial coefficients \(\binom{n}{i}\pmod{2}\) as we have
\[M_{n}(x)=\sum_{i=0}^{n}a_{i}\binom{n}{i}x^{n-i}\]
and
\[N_{n}(x)=\sum_{i=0}^{n}a_{i+2}\binom{n}{i}x^{n-i}\]
where \(a_{i}=0\) if \(3\mid i\) and \(a_{i}=1\) otherwise.
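To make this concrete, the following Python sketch (ours, not from the paper) builds \(M_{n}\) in exactly this way and checks Theorem 8 by brute force over \(\mathbb{F}_{q^{2}}=\mathbb{F}_{2^{6}}\) with \(q=2^{3}\) (so \(t=3\) is odd); we assume the irreducible polynomial \(x^{6}+x+1\) to represent the field, and reuse `M_coeffs` from the earlier sketch.

```python
MOD = 0b1000011  # x^6 + x + 1, assumed irreducible polynomial for F_{2^6}

def gf_mul(a, b):
    # Carry-less multiplication in F_{2^6}, reducing mod x^6 + x + 1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:  # degree 6 appeared: reduce
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def horner(coeffs, x):
    # coeffs[j] is the coefficient of x^{n-j}; Horner evaluation.
    r = 0
    for c in coeffs:
        r = gf_mul(r, x) ^ c
    return r

def is_permutation(n, m, q=8):
    exp = n + m * (q + 1)
    coeffs = M_coeffs(n)
    images = {0}  # the polynomial has no constant term, so 0 maps to 0
    for x in range(1, q * q):
        images.add(gf_mul(gf_pow(x, exp), horner(coeffs, gf_pow(x, q - 1))))
    return len(images) == q * q

# n = 5, m = 2: gcd(5, 63) = 1 and gcd(5 + 18, 7) = 1, so Theorem 8
# predicts that x^23 * M_5(x^7) permutes F_{2^6}.
print(is_permutation(5, 2))
```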
Another useful property of the Redei function is that its numerator and denominator, \(N_{n}(x)\) and \(M_{n}(x)\) respectively, can be obtained recursively. Consider the Redei function in even characteristic, \(R_{n}(x)=\frac{N_{n}(x)}{M_{n}(x)}\). It is easy to see that we have the following equalities
\[(x+\beta)^{n}=N_{n}(x)+\beta M_{n}(x)\]
and
\[(x+\beta+1)^{n}=N_{n}(x)+(\beta+1)M_{n}(x).\]
This allows us to generate \(M_{n}(x)\) and \(N_{n}(x)\) recursively,
\[(x+\beta)^{n} =(x+\beta)(x+\beta)^{n-1}\] \[=(x+\beta)(N_{n-1}(x)+\beta M_{n-1}(x))\] \[=xN_{n-1}(x)+\beta xM_{n-1}(x)+\beta N_{n-1}(x)+(\beta+1)M_{n-1}(x)\] \[=\Big{(}xN_{n-1}(x)+M_{n-1}(x)\Big{)}+\beta\Big{(}(x+1)M_{n-1}(x) +N_{n-1}(x)\Big{)}.\]
Equivalently, we have
\[(x+\beta+1)^{n}=(x+\beta+1)(x+\beta+1)^{n-1}\] \[=xN_{n-1}(x)+(\beta+1)xM_{n-1}(x)+(\beta+1)N_{n-1}(x)+(\beta+1)M_{n -1}(x)+M_{n-1}(x)\] \[=\Big{(}xN_{n-1}(x)+M_{n-1}(x)\Big{)}+(\beta+1)\Big{(}(x+1)M_{n-1 }(x)+N_{n-1}(x)\Big{)}.\]
Therefore, we have the following recursive relation:
\[M_{0}(x) =0,\ \ \ N_{0}(x)=1,\] \[M_{n}(x) =(x+1)M_{n-1}(x)+N_{n-1}(x),\] \[N_{n}(x) =xN_{n-1}(x)+M_{n-1}(x).\]
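This recursion is immediate to implement. In the sketch below (our own conventions), a polynomial over \(\mathbb{F}_{2}\) is encoded as an integer bitmask with bit \(k\) holding the coefficient of \(x^{k}\), which is valid here because \(\alpha=1\) puts all coefficients of \(M_{n}\) and \(N_{n}\) in \(\mathbb{F}_{2}\):

```python
def redei_pair(n):
    # Returns (M_n, N_n) as F_2[x] bitmasks (bit k = coefficient of x^k).
    M, N = 0b0, 0b1  # M_0(x) = 0, N_0(x) = 1
    for _ in range(n):
        # M_k = (x+1) M_{k-1} + N_{k-1} and N_k = x N_{k-1} + M_{k-1};
        # over F_2, addition is XOR and multiplication by x is a shift.
        M, N = (M << 1) ^ M ^ N, (N << 1) ^ M
    return M, N

# Example: redei_pair(5) gives (0b10011, 0b100001), i.e.
# M_5(x) = x^4 + x + 1 and N_5(x) = x^5 + 1, matching the binomial formula.
print([bin(p) for p in redei_pair(5)])
```

One can check that the output agrees with the coefficient lists produced by `M_coeffs` and `N_coeffs` above.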
These properties, combined with Theorem 8, allow us to obtain permutation polynomials over \(\mathbb{F}_{q^{2}}\) where \(q\) is an even prime power. To obtain distinct permutation polynomials after reduction by \(x^{q^{2}}+x\) using Theorems 8 and 9, it suffices to consider \(n\leq 3(q-1)\) and \(m\leq q-1\), as the next theorem shows.
**Theorem 10**.: _Let \(n,m\) be positive integers such that \(n\leq 3(q-1)\) and \(m\leq q-1\). We have_
\[x^{n+3(q-1)+m(q+1)}M_{n+3(q-1)}(x^{q-1})=x^{n+m(q+1)}M_{n}(x^{q-1})\]
_and_
\[x^{n+m(q+1)}M_{n}(x^{q-1})\equiv x^{n+m^{\prime}(q+1)}M_{n}(x^{q-1})\ (\mathrm{mod}\ (x^{q^{2}}+x))\]
_where \(m\equiv m^{\prime}\pmod{(q-1)}\)._
Proof.: The left hand side of the first equality is
\[x^{n+3(q-1)+m(q+1)}M_{n+3(q-1)}(x^{q-1})\] \[=x^{n+3(q-1)+m(q+1)}((x^{q-1}+\beta)^{n+3(q-1)}+(x^{q-1}+\beta+1)^ {n+3(q-1)}).\]
Then, to show the first equality above, we need to show that
\[x^{3(q-1)}(x^{q-1}+\beta)^{3(q-1)}=x^{3(q-1)}(x^{q-1}+\beta+1)^{3(q-1)}=1.\]
We compute the following for \(x\in\mathbb{F}_{q^{2}}^{*}\setminus\{1\}\):
\[x^{3(q-1)}(x^{q-1}+\beta)^{3(q-1)} =(x^{q}+\beta x)^{3(q-1)}\] \[=(x^{3q}+\beta x^{2q+1}+(\beta+1)x^{q+2}+x^{3})^{q-1}\] \[=\frac{x^{3q^{2}}+(\beta+1)x^{(2q+1)q}+\beta x^{(q+2)q}+x^{3q}}{x^{3q}+\beta x^{2q+1}+(\beta+1)x^{q+2}+x^{3}}\] \[=\frac{x^{3}+(\beta+1)x^{q+2}+\beta x^{2q+1}+x^{3q}}{x^{3q}+\beta x^{2q+1}+(\beta+1)x^{q+2}+x^{3}}\] \[=1,\]
where we used \(\beta^{q}=\beta+1\) and \(x^{q^{2}}=x\) for \(x\in\mathbb{F}_{q^{2}}^{*}\). We note that \(x^{3(q-1)}(x^{q-1}+\beta)^{3(q-1)}=1\) also when \(x=1\), as \((\beta+1)^{3}=1\), and similarly \(x^{3(q-1)}(x^{q-1}+\beta+1)^{3(q-1)}=1\) for \(x\in\mathbb{F}_{q^{2}}^{*}\). (Strictly speaking, the computation assumes the base is nonzero, i.e., \(x^{q-1}\neq\beta\), respectively \(x^{q-1}\neq\beta+1\); at such points the corresponding factors \((x^{q-1}+\beta)^{n+3(q-1)}\) and \((x^{q-1}+\beta)^{n}\) both vanish, so the first equality of the theorem is unaffected.) Therefore, the desired equality holds.
For positive integers \(m<q-1\) and \(m^{\prime}=m+s(q-1)\), the exponents \(n+m(q+1)\) and \(n+m^{\prime}(q+1)\) differ by \(s(q^{2}-1)\), so \(x^{n+m^{\prime}(q+1)}\equiv x^{n+m(q+1)}\pmod{x^{q^{2}}+x}\). Thus, the second equality holds when \(m\equiv m^{\prime}\pmod{(q-1)}\).
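The exponent identity in the proof is also easy to confirm numerically; the sketch below (ours) reuses the \(\mathbb{F}_{2^{6}}\) helpers `gf_mul` and `gf_pow` from above, locating \(\beta\) as an element with \(\beta^{2}+\beta=1\) and checking the identity whenever the base is nonzero:

```python
q = 8  # F_{q^2} = F_{2^6}, represented as above

# Find beta with beta^2 + beta = 1 (equivalently beta^3 = 1, beta != 1).
beta = next(b for b in range(2, 64) if gf_mul(b, b) ^ b == 1)

for x in range(1, 64):
    y = gf_pow(x, q - 1)
    if y == beta:
        continue  # here x^{q-1} + beta = 0 and both sides of the
                  # theorem's first equality vanish term by term
    lhs = gf_mul(gf_pow(x, 3 * (q - 1)), gf_pow(y ^ beta, 3 * (q - 1)))
    assert lhs == 1
print("identity verified on F_{2^6}")
```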
The tables below show permutation polynomials obtained from Theorems 8 and 9.
\begin{table}
\begin{tabular}{|l|l|l|} \hline & \(m=1\) & \(x^{10}\) \\ & \(m=2\) & \(x^{19}\) \\ & \(m=3\) & — \\ & \(m=4\) & \(x^{37}\) \\ & \(m=5\) & \(x^{46}\) \\ & \(m=6\) & \(x^{55}\) \\ & \(m=7\) & \(x\) \\ \hline & \(m=1\) & \(x^{11}\) \\ & \(m=2\) & \(x^{20}\) \\ & \(m=3\) & \(x^{29}\) \\ & \(m=4\) & \(x^{38}\) \\ & \(m=5\) & \(x^{47}\) \\ & \(m=6\) & — \\ & \(m=7\) & \(x^{2}\) \\ \hline & \(m=1\) & \(x^{13}\) \\ & \(m=2\) & \(x^{22}\) \\ & \(m=3\) & \(x^{31}\) \\ & \(m=4\) & \(x^{40}\) \\ & \(m=5\) & — \\ & \(m=6\) & \(x^{58}\) \\ & \(m=7\) & \(x^{4}\) \\ \hline & \(m=1\) & \(x^{17}\) \\ & \(m=2\) & \(x^{26}\) \\ & \(m=3\) & — \\ & \(m=6\) & \(x^{53}\) \\ & \(m=6\) & \(x^{62}\) \\ & \(m=7\) & \(x^{8}\) \\ \hline & \(m=1\) & \(x^{33}+x^{19}+x^{12}\) \\ & \(m=2\) & — \\ & \(m=3\) & \(x^{51}+x^{37}+x^{30}\) \\ & \(m=4\) & \(x^{60}+x^{46}+x^{39}+x^{4}\) \\ & \(m=5\) & \(x^{55}+x^{48}+x^{6}\) \\ & \(m=6\) & \(x^{55}+x^{48}+x^{6}\) \\ & \(m=6\) & \(x^{31}+x^{24}+x^{3}\) \\ & \(m=7\) & \(x^{40}+x^{33}+x^{12}\) \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|} \hline & \(m=1\) & \(x^{41}\) \\ & \(m=2\) & \(x^{50}\) \\ & \(m=3\) & \(x^{59}\) \\ & \(m=4\) & \(x^{5}\) \\ & \(m=5\) & — \\ & \(m=6\) & \(x^{23}\) \\ & \(m=7\) & \(x^{32}\) \\ \hline & \(m=1\) & \(x^{57}+x^{44}+x^{15}\) \\ & \(m=2\) & \(x^{52}+x^{24}+x^{3}\) \\ & \(m=3\) & \(x^{62}+x^{33}+x^{12}\) \\ & \(m=4\) & — \\ & \(m=5\) & \(x^{60}+x^{39}+x^{25}\) \\ & \(m=7\) & \(x^{48}+x^{34}+x^{6}\) \\ \hline & \(m=1\) & \(x^{25}\) \\ & \(m=2\) & \(x^{34}\) \\ & \(m=3\) & \(x^{43}\) \\ & \(m=4\) & \(x^{52}\) \\ & \(m=5\) & \(x^{61}\) \\ & \(m=6\) & — \\ & \(m=7\) & \(x^{16}\) \\ \hline \(n=17\) & \(m=1\) & \(x^{33}+x^{20}+x^{12}\) \\ & \(m=2\) & — \\ & \(m=3\) & \(x^{51}+x^{44}+x^{30}\) \\ & \(m=4\) & \(x^{60}+x^{53}+x^{39}\) \\ & \(m=5\) & \(x^{62}+x^{48}+x^{6}\) \\ & \(m=6\) & \(x^{57}+x^{15}+x^{8}\) \\ & \(m=7\) & \(x^{24}+x^{17}+x^{3}\) \\ \hline \(n=19\) & \(m=1\) & \(x^{17}\) \\ & \(m=2\) & \(x^{26}\) \\ & \(m=3\) & — \\ & \(m=4\) & \(x^{44}\) \\ & \(m=5\) & \(x^{53}\) \\ & \(m=6\) & \(x^{62}\) \\ & \(m=7\) & \(x^{8}\) \\ \hline & \(m=1\) & \(x^{33}+x^{19}+x^{12}\) \\ & \(m=2\) & — \\ & \(m=3\) & \(x^{51}+x^{37}+x^{30}\) \\ & \(m=4\) & \(x^{60}+x^{46}+x^{39}\) \\ & \(m=5\) & \(x^{55}+x^{48}+x^{6}\) \\ & \(m=6\) & \(x^{57}+x^{15}+x\) \\ & \(m=7\) & \(x^{24}+x^{10}+x^{3}\) \\ \hline \end{tabular}
\begin{tabular}{|l|l|l|} \hline & \(m=1\) & \(x^{41}\) \\ & \(m=2\) & \(x^{50}\) \\ & \(m=3\) & \(x^{59}\) \\ & \(m=4\) & \(x^{5}\) \\ & \(m=5\) & — \\ & \(m=6\) & \(x^{23}\) \\ & \(m=7\) & \(x^{32}\) \\ \hline \(n=11\) & \(x^{57}+x^{44}+x^{15}\) \\ & \(m=2\) & \(x^{52}+x^{24}+x^{3}\) \\ & \(m=3\) & \(x^{62}+x^{33}+x^{12}\) \\ & \(m=4\) & — \\ & \(m=5\) & \(x^{51}+x^{30}+x^{16}\) \\ & \(m=6\) & \(x^{60}+x^{39}+x^{25}\) \\ & \(m=7\) & \(x^{48}+x^{34}+x^{6}\) \\ \hline \(n=7\) & \(x^{25}\) \\ & \(m=2\) & \(x^{34}\) \\ & \(m=3\) & \(x^{43}\) \\ & \(m=4\) & \(x^{52}\) \\ & \(m=5\) & \(x^{61}\) \\ & \(m=6\) & — \\ & \(m=7\) & \(x^{16}\) \\ & \(m=17\) & \(x^{33}+x^{20}+x^{12}\) \\ \hline \(n=5\) & \(m=1\) & \(x^{51}+x^{30}+x^{39}\) \\ & \(m=3\) & \(x^{60}+x^{39}+x^{4}\) \\ & \(m=4\) & \(x^{60}+x^{53}+x^{39}\) \\ & \(m=6\) & \(x^{57}+x^{15}+x\) \\ & \(m=6\) & \(x^{57}+x^{15}+x\) \\ & \(m=6\) & \(x^{57}+x^{15}+x\) \\ & \(m=7\) & \(x^{48}+x^{20}+x^{6}\) \\ \hline \end{tabular}
\end{table}
Table 1: Permutation polynomials of \(\mathbb{F}_{2^{6}}\) obtained by applying Theorem 8, and reduced by \(x^{2^{6}}+x\).
The first two tables, Table 1 and Table 2, contain permutation polynomials of \(\mathbb{F}_{2^{6}}\). According to Theorem 10 above, we consider \(n\leq 21\) and \(m\leq 7\) satisfying the conditions of Theorems 8 and 9, so these tables provide all permutation polynomials of \(\mathbb{F}_{2^{6}}\) that can be obtained from the construction we present.
On the other hand, Tables 3 and 4 present some permutation polynomials obtained by the same construction over \(\mathbb{F}_{2^{10}}\). Similarly, Tables 5 and 6 provide some permutation polynomials of \(\mathbb{F}_{2^{14}}\), according to Theorems 8 and 9.
\begin{table}
\begin{tabular}{|l|l|l|} \hline & \(m=1\) & \(x^{17}\) \\ & \(m=2\) & \(x^{26}\) \\ & \(m=3\) & — \\ & \(m=4\) & \(x^{44}\) \\ & \(m=5\) & \(x^{53}\) \\ & \(m=6\) & \(x^{62}\) \\ & \(m=7\) & \(x^{8}\) \\ \hline & \(m=1\) & \(x^{41}\) \\ & \(m=2\) & \(x^{50}\) \\ & \(m=3\) & \(x^{59}\) \\ & \(m=4\) & \(x^{5}\) \\ & \(m=5\) & — \\ & \(m=6\) & \(x^{23}\) \\ & \(m=7\) & \(x^{32}\) \\ \hline & \(m=1\) & \(x^{33}+x^{26}+x^{12}\) \\ & \(m=2\) & — \\ & \(m=3\) & \(x^{51}+x^{44}+x^{30}\) \\ & \(m=4\) & \(x^{60}+x^{53}+x^{39}\) \\ & \(m=5\) & \(x^{62}+x^{48}+x^{6}\) \\ & \(m=6\) & \(x^{57}+x^{15}+x^{8}\) \\ & \(m=7\) & \(x^{24}+x^{17}+x^{3}\) \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|} \hline & \(m=1\) & \(x^{57}+x^{29}+x^{15}\) \\ & \(m=2\) & \(x^{38}+x^{24}+x^{3}\) \\ & \(m=3\) & \(x^{47}+x^{33}+x^{12}\) \\ & \(m=4\) & — \\ & \(m=5\) & \(x^{51}+x^{30}+x^{2}\) \\ & \(m=6\) & \(x^{60}+x^{39}+x^{11}\) \\ & \(m=7\) & \(x^{48}+x^{20}+x^{6}\) \\ \hline & \(m=1\) & \(x^{11}\) \\ & \(m=2\) & \(x^{20}\) \\ & \(m=3\) & \(x^{29}\) \\ & \(m=4\) & \(x^{38}\) \\ & \(m=5\) & \(x^{47}\) \\ & \(m=6\) & — \\ & \(m=7\) & \(x^{2}\) \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|} \hline & \(m=1\) & \(x^{57}+x^{29}+x^{15}\) \\ & \(m=2\) & \(x^{38}+x^{24}+x^{3}\) \\ & \(m=3\) & \(x^{47}+x^{33}+x^{12}\) \\ & \(m=4\) & — \\ & \(m=5\) & \(x^{51}+x^{30}+x^{2}\) \\ & \(m=6\) & \(x^{60}+x^{39}+x^{11}\) \\ & \(m=7\) & \(x^{48}+x^{20}+x^{6}\) \\ \hline \end{tabular}
\begin{tabular}{|l|l|l|l|} \hline & \(m=1\) & \(x^{11}\) \\ & \(m=2\) & \(x^{20}\) \\ & \(m=3\) & \(x^{29}\) \\ & \(m=4\) & \(x^{38}\) \\ & \(m=5\) & \(x^{47}\) \\ & \(m=6\) & — \\ & \(m=7\) & \(x^{2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Permutation polynomials of \(\mathbb{F}_{2^{6}}\) obtained by applying Theorem 9, and reduced by \(x^{2^{6}}+x.\)
\begin{table}
\begin{tabular}{|l|l|l|} \hline & \(m=1\) & \(x^{226}+x^{195}+x^{133}+x^{102}+x^{40}\) \\ \(n=7\) & \(m=2\) & \(x^{259}+x^{228}+x^{166}+x^{135}+x^{73}\) \\ & \(m=3\) & \(x^{292}+x^{261}+x^{199}+x^{168}+x^{106}\) \\ \hline & \(m=1\) & \(x^{418}+x^{325}+x^{294}+x^{201}+x^{46}\) \\ \(n=13\) & \(m=2\) & \(x^{451}+x^{358}+x^{327}+x^{234}+x^{79}\) \\ & \(m=3\) & \(x^{484}+x^{391}+x^{360}+x^{267}+x^{112}\) \\ \hline & \(m=1\) & \(x^{129}+x^{67}+x^{36}\) \\ \(n=34\) & \(m=2\) & \(x^{162}+x^{100}+x^{69}\) \\ & \(m=3\) & \(x^{195}+x^{133}+x^{102}\) \\ \hline \end{tabular}
\end{table}
Table 3: Some permutation polynomials obtained from Theorem 8 over \(\mathbb{F}_{2^{10}},\) and reduced by \(x^{2^{10}}+x.\)
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multirow{2}{*}{\(n=7\)} & \(m=1\) & \(x^{1025}+x^{771}+x^{644}+x^{390}+x^{263}\) \\ & \(m=2\) & \(x^{1154}+x^{900}+x^{773}+x^{519}+x^{392}\) \\ & \(m=3\) & \(x^{1283}+x^{1029}+x^{902}+x^{648}+x^{521}\) \\ \hline \multirow{2}{*}{\(n=22\)} & \(m=1\) & \(x^{2945}+x^{2691}+x^{2183}+x^{659}+x^{405}\) \\ & \(m=2\) & \(x^{3074}+x^{2820}+x^{2312}+x^{788}+x^{534}\) \\ & \(m=3\) & \(x^{3203}+x^{2949}+x^{2441}+x^{917}+x^{663}\) \\ \hline \multirow{2}{*}{\(n=56\)} & \(m=1\) & \(x^{6281}+x^{6265}+x^{3233}+x^{2217}+x^{185}\) \\ & \(m=2\) & \(x^{6410}+x^{5394}+x^{3362}+x^{2346}+x^{314}\) \\ \hline \multirow{2}{*}{\(n=136\)} & \(m=1\) & \(x^{1281}+x^{1154}+x^{138}\) \\ & \(m=2\) & \(x^{1410}+x^{1283}+x^{267}\) \\ \cline{1-1} & \(m=3\) & \(x^{1539}+x^{1412}+x^{267}\) \\ \hline \end{tabular}
\end{table}
Table 6: Some permutation polynomials obtained from Theorem 9 over \(\mathbb{F}_{2^{14}}\), and reduced by \(x^{2^{14}}+x\).
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multirow{2}{*}{\(n=7\)} & \(m=1\) & \(x^{257}+x^{195}+x^{164}+x^{102}+x^{71}\) \\ & \(m=2\) & \(x^{290}+x^{228}+x^{197}+x^{135}+x^{104}\) \\ & \(m=3\) & \(x^{323}+x^{261}+x^{230}+x^{168}+x^{137}\) \\ \hline \multirow{2}{*}{\(n=13\)} & \(m=1\) & \(x^{449}+x^{294}+x^{201}+x^{170}+x^{77}\) \\ & \(m=2\) & \(x^{482}+x^{327}+x^{234}+x^{203}+x^{110}\) \\ & \(m=3\) & \(x^{515}+x^{360}+x^{267}+x^{236}+x^{143}\) \\ \hline \multirow{2}{*}{\(n=34\)} & \(m=1\) & \(x^{129}+x^{98}+x^{36}\) \\ & \(m=2\) & \(x^{162}+x^{131}+x^{69}\) \\ & \(m=3\) & \(x^{195}+x^{164}+x^{102}\) \\ \hline \end{tabular}
\end{table}
Table 4: Some permutation polynomials obtained from Theorem 9 over \(\mathbb{F}_{2^{10}}\), and reduced by \(x^{2^{10}}+x\).
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multirow{2}{*}{\(n=11\)} & \(m=1\) & \(x^{1410}+x^{1283}+x^{521}+x^{267}+x^{140}\) \\ & \(m=2\) & \(x^{1539}+x^{1412}+x^{650}+x^{396}+x^{269}\) \\ & \(m=3\) & \(x^{1668}+x^{1541}+x^{779}+x^{525}+x^{398}\) \\ \hline \multirow{2}{*}{\(n=20\)} & \(m=1\) & \(x^{2181}+x^{657}+x^{149}\) \\ & \(m=2\) & \(x^{2310}+x^{786}+x^{278}\) \\ & \(m=3\) & \(x^{2439}+x^{915}+x^{407}\) \\ \hline \multirow{2}{*}{\(n=56\)} & \(m=1\) & \(x^{6281}+x^{5265}+x^{3233}+x^{2217}+x^{185}\) \\ & \(m=1\) & \(x^{6281}+x^{5265}+x^{3233}+x^{2217}+x^{185}\) \\ & \(m=3\) & \(x^{1283}+x^{1029}+x^{902}+x^{648}+x^{521}\) \\ \hline \end{tabular}
\end{table}
Table 5: Some permutation polynomials obtained from Theorem 8 over \(\mathbb{F}_{2^{14}}\), and reduced by \(x^{2^{14}}+x\).
## 4 Conclusion
In this paper, we provide a recursive construction of permutation polynomials using the numerator and the denominator of the Redei function, in connection with the AGW criterion. This construction provides a convenient way to obtain permutation polynomials in a field of even characteristic, as it only requires computing binomial coefficients modulo \(2\); the polynomials can also be obtained recursively. We give examples of permutation polynomials obtained by applying Theorems 8 and 9 in different finite fields. In this article, we fix \(\alpha=\beta^{2}+\beta=1\), and the construction is given by considering the Redei function in even characteristic under this assumption. As future work, it would be interesting to generalize the construction given by Theorems 8 and 9 to an arbitrary \(\alpha\in\mathbb{F}_{q}^{*}\setminus\{1\}\).
|
2303.17631 | $d$-wave Charge-$4e$ Superconductivity From Fluctuating Pair Density
Waves | We present a theory for charge-$4e$ superconductivity as a leading
low-temperature instability with a nontrivial $d$-wave symmetry. We show that
in several microscopic models for the pair-density-wave (PDW) state, when the
PDW wave vectors connect special parts of the Fermi surface, the predominant
interaction is in the bosonic pairing channel mediated by exchanging low-energy
fermions. This bosonic pairing interaction is repulsive in the $s$-wave channel
but attractive in the $d$-wave one, leading to a $d$-wave charge-$4e$
superconductor. By analyzing the Ginzburg-Landau free energy including
higher-order fluctuation effects of PDW, we find that the charge-$4e$
superconductivity emerges as a vestigial order of PDW, and sets in via a
first-order transition. Both the gap amplitude and the transition temperature
decay monotonically with increasing superfluid stiffness of the PDW order. Our
work provides a microscopic mechanism of higher-charge condensates with
unconventional ordering symmetry in strongly-correlated materials. | Yi-Ming Wu, Yuxuan Wang | 2023-03-30T18:00:03Z | http://arxiv.org/abs/2303.17631v2 | # \(d\)-wave Charge-\(4e\) Superconductivity From Fluctuating Pair Density Waves
###### Abstract
In this work we show that in several microscopic models for the pair-density-wave (PDW) state, when the PDW wave vectors connect special parts of the Fermi surface, the predominant instability driven by PDW fluctuations is toward a pairing state of the PDW bosons themselves, realizing a long-sought-after charge-\(4e\) superconductor. Dual to the scenario for electron pairing, the effective attraction between the bosons is mediated by exchanging low-energy fermions. In particular we show that the pairing interaction for the charge-\(4e\) order is repulsive in the \(s\)-wave channel but attractive in the \(d\)-wave one. By analyzing the Ginzburg-Landau free energy including the fluctuation effects of PDW, within a large-\(M\) extension of the theory, we find that the charge-\(4e\) superconductivity emerges as a vestigial order of PDW, and sets in via a first-order transition.
## I Introduction
Charge-\(4e\) superconductivity (\(4e\)-SC) is an exotic order in which four fermions are bound together and condense [1; 2; 3; 4; 5]. Compared with the more common charge-\(2e\) superconductivity (\(2e\)-SC), it breaks the global \(U(1)\) symmetry down to a discrete \(\mathbb{Z}_{4}\) symmetry. Moreover, a vortex core in a \(4e\)-SC supports half flux quanta, which was recently claimed to have been observed in the Kagome metal CsV\({}_{3}\)Sb\({}_{5}\)[6]. On the theory side, recent progress has been made in understanding the properties of a mean-field \(4e\)-SC, which is in general an interacting system. It was found [7] that \(4e\)-SC is indeed a perfect conductor with nonzero superfluid density, but is in general a gapless system that shares many features with the Fermi liquid. However, the mechanism for \(4e\)-SC from interacting fermions remains a theoretical challenge. Unlike \(2e\)-SC, which follows from an arbitrarily weak attractive interaction, the "quadrupling" (as opposed to pairing) susceptibility for \(4e\)-SC lacks a logarithmic divergence. Therefore, a theoretical understanding of \(4e\)-SC, even at the mean-field level, requires a non-perturbative description [7; 8].
In recent years, a promising framework for understanding \(4e\)-SC has emerged from the perspective of intertwined orders [9], in which a plethora of orders breaking distinct symmetries can naturally emerge from the partial melting of certain primary electronic orders. For instance, it has been suggested that from the fluctuations of a charge-\(2e\) pair-density-wave (PDW) order [10; 11; 12; 13] or degenerate nematic charge-\(2e\) superconducting order [14; 15], the \(4e\)-SC can appear as a vestigial order, in the sense that it restores only part of the broken symmetry of the primary orders. However, the starting point of such analyses is typically a Ginzburg-Landau theory for the primary orders, and when the underlying fermionic models are taken into full account, the intrinsic instabilities toward \(4e\)-SC are in general subleading to those toward non-superconducting states, either nematic or ferromagnetic [14; 16; 17]. It remains a major theoretical challenge to search for microscopic mechanisms for \(4e\)-SC as a _leading_ instability.
In this work, we directly demonstrate that in a range of two-dimensional (2d) fermionic models with PDW instabilities, a \(4e\)-SC order naturally emerges as the leading instability once fluctuation effects are taken into account. Furthermore, we show that the resulting \(4e\)-SC order has a \(d\)-wave symmetry. As in previous works, the starting point of our theory, the PDW state, is by itself an exotic type of superconductivity with Cooper pairs carrying non-zero momenta [18; 19; 20; 21; 22; 23; 24; 25], which, despite being proposed to exist in many materials, most notably in the underdoped cuprates and in Kagome metals, in general requires unconventional mechanisms [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41] to emerge. Encouragingly, it has been revealed recently [27] that PDW instabilities may be the leading pairing tendency as long as i) the repulsion is locally suppressed and ii) the interaction strength exceeds some threshold value. We use this mechanism for PDW as motivation and construct a Ginzburg-Landau (GL) effective theory for the interplay between different components of the PDW order parameter.
Instead of keeping coefficients in the GL theory as phenomenological parameters, we determine them via integrating out fermionic degrees of freedom. The instability toward \(4e\)-SC can be understood as the pairing instability due to interactions among PDW bosons. While in unconventional \(2e\)-SC the effective interaction among
Figure 1: The effective pairing interaction for PDW bosons via exchanging fermions. In particular, \(\mathbf{Q}\) and \(\mathbf{P}\) in our work are related by a \(\pi/2\) rotation.
fermions stems from exchanging low-energy bosons, here the effective interactions among bosons result from exchanging low-energy fermions; see Fig. 1. Importantly, we note that the form of the effective bosonic interaction is _independent_ of the specific mechanism for PDW, and equally applies for other proposed mechanisms for PDW such as that in Ref. [35; 37]. From the effective theory, we show that \(4e\)-SC with \(d\)-wave symmetry is the dominant vestigial order. By integrating out the PDW bosons, we obtain a new effective theory for the \(4e\)-SC order, similar in spirit to the commonly used method for analyzing composite (vestigial) orders [42; 43; 44; 45; 9; 16; 46].
Notably, the primary PDW order parameter we consider here is multi-directional, with at least a \(C_{4}\) rotation symmetry (see Fig. 2), and the PDW momenta are related by rotation. In fact, fluctuations of these PDWs can lead to different composite CDW, nematic, and \(4e\)-SC orders. We first argue that in certain momentum-space configurations the \(4e\)-SC order is naturally favored energetically over the nematic and CDW orders. Specifically, for a \(C_{4}\) symmetric system, the key requirement is
\[\frac{|\mathbf{Q}|}{\sqrt{2}}=\big{|}\mathbf{k}_{F}\ \text{mod}\ (\pi,0)\big{|}, \tag{1}\]
where \(\mathbf{k}_{F}\) corresponds to one of the portions of the Fermi surface that are connected by the PDW order parameters. In the next section we show several models that either automatically satisfy this condition or support a tunable \(|\mathbf{Q}|/k_{F}\). Next we show that exchanging low-energy fermions leads to effective interactions between PDW bosons, and the special condition (1) allows us to single out a particular interaction process that is attractive for \(4e\)-SC in the \(d\)-wave pairing channel, while repulsive in the \(s\)-wave channel. Furthermore, we find that this phase transition into \(4e\)-SC is first-order. To obtain an effective theory for the \(4e\)-SC order, a common approach is to do a "second" Hubbard-Stratonovich (HS) transformation (the first one referring to the process of integrating out fermions and obtaining the effective theory for PDW) to integrate out the PDW order parameters. However, the HS method, which decouples quartic interactions into bilinear terms, is inapplicable due to the higher-order terms in the PDW free energy. To address this issue, we adopt and develop a new method involving a field corresponding to the composite order parameter and an ancillary field that behaves as a Lagrange multiplier. This method bilinearizes the primary order parameters in the free energy for interactions at all orders, which can thus be integrated out, leading to an effective GL free energy for the composite order only.
We find that the first-order phase transition occurs at \(T_{c}>T_{\text{PDW}}\), where \(T_{\text{PDW}}\) is the onset temperature of the primary PDW order within mean-field theory. An important quantity that determines \(T_{c}\) is the effective stiffness, which describes how difficult it is for the primary PDW fields to fluctuate at a given energy scale. We find that a smaller stiffness yields a higher \(T_{c}\). This can be understood from the fact that the composite orders arise from fluctuation effects. Our theoretical treatment for \(4e\)-SC is based on a large-\(M\) extension of the PDW effective theory, which assumes there are \(M\) flavors of PDW order parameters at each momentum. This extension justifies a mean-field theory for the \(4e\)-SC order. Going beyond this extension, in reality we expect the \(4e\)-SC to occur via a Kosterlitz-Thouless transition in 2d. Although the transition has a different nature for \(M\gg 1\) and \(M=1\), it is driven by precisely the same attractive interaction we identify in this work. Depending on the number of broken symmetries, in 2d the primary PDW order may also develop quasi-long-ranged order, but at a lower temperature.
This paper is organized as follows. In Sec. II we discuss the PDW model as the basis for constructing composite \(4e\)-SC orders, and show that imposing a special condition can enhance the \(4e\)-SC channel. In Secs. III.1 and III.2 we show how to explicitly expand the GL free energy to higher orders in terms of the PDW fields. In Sec. III.3 we construct the free energy for the \(4e\)-SC orders and in Sec. III.4 study their phase transitions. Some concluding remarks are left for Sec. IV.
Figure 2: (a) Continuum model for PDW with \(O(2)\) symmetry. The orange area is the filled Fermi sea, and the PDW momentum \(\mathbf{Q}\) takes values on the Bose surface represented by the red circle. When \(|\mathbf{Q}|\) equals \(\sqrt{2}k_{F}\), four symmetrically distributed fermions (black arrows) can first form four symmetrically distributed PDW orders with momenta \(\pm\mathbf{Q},\pm\mathbf{P}\), which then condense to give rise to \(4e\)-SC. (b) The PDW model realized on a square lattice when the fermions relevant for pairing are close to the van Hove points \((0,\pi)\) and \((\pi,0)\) and the PDW momentum is close to \((\pi,\pi)\). (c) PDW flucutations in the spin-fermion model. The four hot spots (black dots) near \((\pi,0)\) form a square, and the momenta for the potential PDW orders also form a square.
## II Attractive interaction for \(d\)-wave \(4e\)-SC from fluctuating PDW
### Microscopic models for PDW
As a starting point, we briefly review several microscopic models that host PDW instabilities or enhanced PDW fluctuations. One such model has been recently studied in Ref. [47]. In this model, electrons are subject to repulsive interactions, and it was found that when the local component of the interaction is suppressed and the overall strength of the interaction exceeds some threshold value, the metallic system is susceptible toward forming PDWs instead of a uniform \(2e\)-SC. The PDW ordering vectors \(\mathbf{Q}\) form a "Bose surface" in momentum space [red circle in Fig. 2(a)], whose radius \(|\mathbf{Q}|\) is mainly determined by the form of the pairing BCS interaction; the ratio \(|\mathbf{Q}|/k_{F}\) is thus tunable. Here we consider the special case in which
\[|\mathbf{Q}|\approx\sqrt{2}k_{F} \tag{2}\]
which turns out to significantly simplify the construction of the GL free energy, as well as showing a prominent attractive interaction toward \(4e\)-SC.
For the lattice version of the same model, one can consider a square lattice as shown in Fig. 2(b). In this scenario, the leading instability is still toward PDWs, while the PDW ordering vectors no longer form a surface but are discretized near \((\pi,\pi)\). Interestingly, since the Fermi surface plays a more important role in selecting the ordering vector due to the anisotropy of the density of states, the PDW ordering vector and \(k_{F}\) are locked together, and the condition \(|\mathbf{Q}|\approx\sqrt{2}k_{F}\), where \(k_{F}\) marks the position of the FS near the van Hove point, is naturally satisfied.
The relation \(|\mathbf{Q}|/k_{F}=\sqrt{2}\) is not specific to a particular PDW mechanism, and arises quite naturally on square lattices. For example, in the spin-fermion model for cuprates, while it has long been known that the leading instability is toward a \(d\)-wave \(2e\)-SC, PDW instabilities have also been found to exist [35; 37]. There the PDW instabilities are driven by low-energy fermions near the hotspots, and the resulting PDW ordering vectors are shown in Fig. 2(c) and satisfy \(|\mathbf{Q}|\approx\sqrt{2}|\mathbf{k}_{F}-(\pi,0)|\), where \(\mathbf{k}_{F}\) is the location of a hotspot near \((\pi,0)\). It will be straightforward to see that for our purposes, this condition is equivalent to that in Eq. (2), and for a generic \(C_{4}\) symmetric system, the condition can be compactly written as in Eq. (1).
For the remainder of our work, we take the PDW instabilities and the condition Eq. (1) as input from the microscopic model, without relying on any particular mechanism for PDW fluctuations. For what we are going to do, the detailed analyses of a lattice model and a continuum model are slightly different, but the essential physics is the same. For concreteness and relevance to condensed matter systems, we will focus on the lattice version. Up to quadratic order, the Ginzburg-Landau free energy can be written as, [47]
\[F=\int\frac{d^{2}\tilde{\mathbf{q}}}{4\pi^{2}}\left(\frac{1}{|u(\tilde{\mathbf{q}})|}- \Pi_{pp}(\tilde{\mathbf{q}},T)\right)|\Psi(\tilde{\mathbf{q}})|^{2}\ \text{for}\ \tilde{\mathbf{q}}\ \text{near}\ \mathbf{Q}_{i} \tag{3}\]
where \(u(\tilde{\mathbf{q}})\) is the momentum space interaction which should be attractive near the ordering momentum \(\mathbf{Q}_{i}=\{\pm\mathbf{P},\pm\mathbf{Q}\}\) and we approximate it by a constant for simplicity. The pairing susceptibility \(\Pi_{pp}(\tilde{\mathbf{q}},T)\) can be expanded near \(\mathbf{Q}_{i}\) as: \(\Pi_{pp}(\tilde{\mathbf{q}},T)\approx\nu\alpha_{0}(T)-\nu\alpha_{2}(T)\mathbf{q}^{2}/k _{F}^{2}+...\), where \(\mathbf{q}=\tilde{\mathbf{q}}-\mathbf{Q}_{i}\), \(\nu\) is the density of states, and \(\alpha_{0}(T)\) and \(\alpha_{2}(T)\) are two dimensionless but temperature dependent coefficients. For later convenience, we rewrite Eq. (3) as
\[F=\sum_{i=\{\pm\mathbf{P},\pm\mathbf{Q}\}}\int\frac{d^{2}\mathbf{q}}{4\pi^{2}}\left[\alpha (T)+\kappa\mathbf{q}^{2}\right]|\Psi_{i}(\mathbf{q})|^{2}+\mathcal{O}(\Psi^{4}). \tag{4}\]
where \(\kappa=\nu\alpha_{2}(T)/k_{F}^{2}\) is the effective stiffness for the PDW orders and since we are interested in the temperature near \(T_{\text{PDW}}\) we can approximate \(\kappa\) as \(T\)-independent. Below the mean-field transition temperature \(T<T_{\text{PDW}}\), \(\alpha(T)<0\). We assume that \(T\sim T_{\text{PDW}}\ll E_{F}\) where \(E_{F}\) is the Fermi energy. In general, the dispersion of the PDW bosons is anisotropic in \(\mathbf{q}\), but as can be directly checked, for \(4e\)-SC instabilities the anisotropy factor can be absorbed into a redefinition of \(q_{x}\) and \(q_{y}\) around each PDW ordering momenta. For PDW fluctuations intrinsically driven by finite-range electronic interactions, \(\kappa\) comes from both the particle-particle bubble and from the \(\mathbf{q}\) dependence of the interaction, and for weak interactions we assume that \(1/\kappa\ll E_{F}\). Further, as will be explained later, for analytical control of the theory, we are interested in the regime \(1/\kappa\ll T_{\text{PDW}}\ll E_{F}\). Indeed, in the microscopic theory of Ref. [47], \(\kappa T_{\text{PDW}}\) can be freely tuned by the interaction strength.
Below we go beyond mean-field theory for \(\Psi\) and analyze the low-temperature phases.
### Attractive interaction for \(d\)-wave \(4e\)-SC
The effective interaction between the PDW bosons \(\Psi\) arises from exchanging fermionic degrees of freedom. As is usually done for itinerant fermions, the processes involved are described by square diagrams; see Fig. 1 for an example. The key insight here is that, for dominant four-boson interactions, the internal fermions need to come from the vicinity of the Fermi surfaces. This consideration alone singles out three types of interactions
\[\beta_{1}|\Psi_{i}|^{4}+\beta_{2}|\Psi_{\pm\mathbf{P}}|^{2}|\Psi_{\pm\mathbf{Q}}|^{2}+ \beta\left(\Psi_{\mathbf{P}}\Psi_{-\mathbf{P}}\Psi_{\mathbf{Q}}^{*}\Psi_{-\mathbf{Q}}^{*}+h.c. \right), \tag{5}\]
where we have used the shorthands, e.g., \(|\Psi_{i}|^{4}\equiv\sum_{i}\int_{\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3}}\Psi_{i}( \mathbf{q}_{1})\Psi_{i}^{*}(\mathbf{q}_{2})\Psi_{i}(\mathbf{q}_{3})\Psi_{i}^{*}(\mathbf{q}_{1} -\mathbf{q}_{2}+\mathbf{q}_{3})\), and \(\int_{\mathbf{q}}\equiv\int\frac{d^{2}\mathbf{q}}{4\pi^{2}}\). Importantly, the last interaction with coefficient \(\beta\) is of appreciable strength when the condition
Eq. (1) is satisfied; otherwise at least two fermion propagators would come from regions far away from the Fermi surface.
The coefficients \(\beta_{1,2}\) and \(\beta\) can be readily obtained by integrating out low-energy fermions, and for \(T\ll E_{F}\), we linearize the fermionic dispersion and obtain [43]
\[\begin{split}\beta_{1}&=\frac{T}{4\pi^{2}v_{F}^{2}} \sum_{n}\int_{-E_{F}}^{E_{F}}\frac{dxdy}{(i\omega_{n}-x)^{2}(i\omega_{n}-y)^{2 }}\\ \beta_{2}&=\frac{T}{4\pi^{2}v_{F}^{2}}\sum_{n}\int_{- E_{F}}^{E_{F}}\frac{-dxdy}{(i\omega_{n}-x)^{2}(\omega_{n}^{2}+y^{2})}\\ \beta&=\frac{T}{4\pi^{2}v_{F}^{2}}\sum_{n}\int_{-E_{F} }^{E_{F}}\frac{dxdy}{(\omega_{n}^{2}+x^{2})(\omega_{n}^{2}+y^{2})}\end{split} \tag{6}\]
After the momentum integration and frequency summation we obtain,
\[\beta_{1}\sim\frac{1}{v_{F}^{2}E_{F}},\,\beta_{2}\sim\frac{1}{v_{F}^{2}E_{F}} \ln\frac{E_{F}}{T},\,\text{and}\,\,\beta=\frac{1}{16v_{F}^{2}T}. \tag{7}\]
Observe that \(\beta\gg\beta_{1,2}\) for \(T\ll E_{F}\); this can be understood from the fact that, when the dispersion is linearized and one formally takes \(E_{F}\to\infty\), both \(\beta_{1}\) and \(\beta_{2}\) involve integrands having higher-order poles with zero residue.
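For completeness, the evaluation of \(\beta\) proceeds as follows (a short derivation we supply here, extending the integration limits to infinity since the integrals converge for \(E_{F}\gg T\)):

\[\beta=\frac{T}{4\pi^{2}v_{F}^{2}}\sum_{n}\left(\int_{-\infty}^{\infty}\frac{dx}{\omega_{n}^{2}+x^{2}}\right)^{2}=\frac{T}{4v_{F}^{2}}\sum_{n}\frac{1}{\omega_{n}^{2}}=\frac{T}{4v_{F}^{2}}\,\frac{2}{\pi^{2}T^{2}}\sum_{k\geq 0}\frac{1}{(2k+1)^{2}}=\frac{1}{16v_{F}^{2}T},\]

where \(\omega_{n}=(2n+1)\pi T\) and we used \(\sum_{k\geq 0}(2k+1)^{-2}=\pi^{2}/8\).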
Since \(\beta>0\), the last term in Eq. (5) represents a repulsive interaction for the bosons corresponding to PDW fluctuations. As is familiar from \(2e\)-SC, repulsive interactions may have attractive components in pairing channels with higher angular momenta. To this end, we introduce bilinear operators \(\Phi_{s}\) and \(\Phi_{d}\) (_not_ independent new fields) for \(4e\)-SC with \(s\)-wave and \(d\)-wave components
\[\begin{split}\Phi_{s}(\mathbf{q})\equiv&\int_{\mathbf{p}} \Psi_{\mathbf{Q}}(\mathbf{p}+\mathbf{q})\Psi_{-\mathbf{Q}}(-\mathbf{p})+\Psi_{\mathbf{P}}(\mathbf{p}+\mathbf{q} )\Psi_{-\mathbf{P}}(-\mathbf{p})\\ \Phi_{d}(\mathbf{q})\equiv&\int_{\mathbf{p}}\Psi_{\mathbf{Q}}( \mathbf{p}+\mathbf{q})\Psi_{-\mathbf{Q}}(-\mathbf{p})-\Psi_{\mathbf{P}}(\mathbf{p}+\mathbf{q})\Psi_{-\mathbf{P }}(-\mathbf{p})\end{split} \tag{8}\]
such that the \(\beta\) interaction can be rewritten as
\[\frac{\beta}{2}\int\frac{d^{2}\mathbf{q}}{4\pi^{2}}|\Phi_{s}(\mathbf{q})|^{2}-\frac{ \beta}{2}\int\frac{d^{2}\mathbf{q}}{4\pi^{2}}|\Phi_{d}(\mathbf{q})|^{2}. \tag{9}\]
Just like their \(2e\)-SC counterparts, \(\Phi_{s,d}\) are distinguished by their transformation properties under the \(C_{4}\) rotation: \(\Phi_{s}\) is even and \(\Phi_{d}\) is odd. From this decomposition it is obvious that in \(d\)-wave channel the charge-\(4e\) pairing interaction is attractive, which can potentially lead to a \(d\)-wave \(4e\)-SC. For the \(s\)-wave channel, the interaction is repulsive. This is reminiscent of the situation in \(2e\)-SC, in that repulsive and momentum-dependent interactions can lead to Cooper pairing with higher angular momenta, most notably in the \(d\)-wave channel.
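Indeed, writing \(A(\mathbf{q})\) and \(B(\mathbf{q})\) for the two bilinears in Eq. (8), so that \(\Phi_{s}=A+B\) and \(\Phi_{d}=A-B\), a one-line check (which we include here for clarity) confirms the decomposition:

\[\frac{\beta}{2}\left(|\Phi_{s}|^{2}-|\Phi_{d}|^{2}\right)=\beta\left(AB^{*}+A^{*}B\right)=\beta\left(\Psi_{\mathbf{P}}\Psi_{-\mathbf{P}}\Psi_{\mathbf{Q}}^{*}\Psi_{-\mathbf{Q}}^{*}+h.c.\right),\]

which reproduces the \(\beta\) term of Eq. (5).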
Before we move on to construct an effective theory for \(d\)-wave \(4e\)-SC, we comment on the other interactions and other instabilities. The \(\beta_{1}\) term corresponds to a local repulsive interaction between the PDW bosons, which, if it were the only interaction, would stabilize a superfluid phase at zero temperature, i.e., the PDW phase. The \(\beta_{2}\) term is repulsive, but it can be written as, up to \(\mathbf{P}\to-\mathbf{P}\) and \(\mathbf{Q}\to-\mathbf{Q}\),
\[\frac{\beta_{2}}{4}\left(|\Psi_{\mathbf{P}}|^{2}+|\Psi_{\mathbf{Q}}|^{2}\right)^{2}- \frac{\beta_{2}}{4}\left(|\Psi_{\mathbf{P}}|^{2}-|\Psi_{\mathbf{Q}}|^{2}\right)^{2}, \tag{10}\]
revealing its tendency toward a nematic instability with the order parameter \(\mathcal{N}\sim|\Psi_{\mathbf{P}}|^{2}-|\Psi_{\mathbf{Q}}|^{2}\). In several recent works [17; 14], it was generally found that the nematic order is more favorable compared with \(4e\)-SC. However, our microscopic calculation has shown that in our PDW-based model, \(4e\)-SC is more favored energetically, because its corresponding attraction is parametrically larger.
We also note that the decomposition of the \(\beta\)-term interaction is not unique. Notably, it can also be decomposed such that there is an equally attractive interaction for a charge-density-wave (CDW) composite order, with e.g., \(\rho_{\mathbf{P}-\mathbf{Q}}=\Psi_{\mathbf{P}}\Psi_{\mathbf{Q}}^{*}-\Psi_{-\mathbf{Q}}\Psi_{-\mathbf{P}}^{*}\). The interplay between CDW and \(4e\)-SC has been systematically studied in a phenomenological model [10]. However, in our microscopic model the CDW instabilities are secondary to that of \(4e\)-SC. Qualitatively, the reason is similar to the situation in a fermionic theory -- in 2d and higher dimensions with weakly-coupled fermions, the CDW instability requires nesting of the Fermi surface, that is, fermionic dispersions at momenta separated by a fixed wave vector need to be the same, while the \(2e\)-SC instability is guaranteed by time-reversal or inversion symmetries. Here the same reasoning holds for interacting bosons, where the bosonic dispersion for \(\Psi_{\mathbf{P}}\) and \(\Psi_{\mathbf{Q}}\) is clearly not nested. For the continuum model in Fig. 2(a), the Bose surface is not nested either. For the lattice models in Fig. 2(b,c), the lack of nesting comes from the anisotropy of the bosonic dispersion at \(\pm\mathbf{Q}\) and \(\pm\mathbf{P}\) illustrated by the elongated red dots. Therefore, the CDW is suppressed compared with the \(d\)-wave \(4e\)-SC, even if the attractive interactions for these channels are equal.
With these considerations in mind, below we will focus on \(d\)-wave \(4e\)-SC as the sole vestigial order from the PDW fluctuations.
## III Mean-field theory for \(d\)-wave \(4e\)-SC
In this section, we obtain a mean-field description for the transition into a \(d\)-wave \(4e\)-SC, driven by fluctuations of the PDW bosons described by Eq. (4) and the attractive interaction in Eq. (9). In a similar spirit to the HS transformation for fermions, we integrate out the PDW bosons and obtain the free energy for a \(d\)-wave \(4e\)-SC order parameter. Within the mean-field theory, the phase transition is then identified by a nontrivial saddle point of the free energy. Formally, the mean-field theory can be justified by extending \(\Phi_{i}(\mathbf{q})\) at every momentum \(\mathbf{q}\) to an \(M\)-component field \(\Phi_{i}^{\alpha}(\mathbf{q})\) where \(\alpha\in[1,M]\). Of course, for \(M=1\), the phase transition in 2d is of Kosterlitz-Thouless nature, and the
mean-field transition represents a crossover above the actual transition.
Neglecting parametrically weaker interactions \(\beta_{1,2}\), and neglecting fluctuations toward \(s\)-wave \(4e\)-SC (due to the repulsive interaction they are subject to), we begin with the following free energy,
\[F= \sum_{i}\int_{\mathbf{q}}\left[\alpha(T)+\kappa\mathbf{q}^{2}\right]\left| \Psi_{i}(\mathbf{q})\right|^{2}-\frac{\beta}{2}\int_{\mathbf{q}}\left|\Phi_{d}(\mathbf{q}) \right|^{2}+\mathcal{O}(\Psi^{6}) \tag{11}\]
where \(\Phi_{d}\) is defined in Eq. (8). The negative quartic term indicates that in a mean-field theory for PDW, the transition is first-order. When the fluctuation effects for the PDW are included, it is thus reasonable to expect that a transition into \(4e\)-SC is also first-order in nature. This is strongly indicated by previous analyses [42; 43], although there the free energy for the primary bosonic fields was positive-definite up to quartic order, and was truncated to quartic order. There, by a HS transformation on the primary bosonic fields, it was found that the transition into the vestigial order is first-order if the corresponding attractive interaction exceeds a threshold value.
While it is tempting to apply the previous analyses for quartic free energies to Eq. (11), since \(F\) is unbounded at the quartic level, and since higher-order terms may strongly modify a first-order transition, one needs to include higher-order interaction terms. However, unlike the quartic interaction, higher-order interaction terms cannot be decoupled using a HS transformation, and a new approach is needed.
### Higher-order interaction terms
Just like the quartic interactions, higher-order interactions come microscopically from higher-order processes of exchanging low-energy fermions. Making use of the condition in Eq. (1), again only a subset of diagrams needs to be included. It is easy to implement an algorithm to enumerate all possible terms that satisfy the above conditions. Here we present the results, and leave the technical details to Appendix A. From the 6-leg and 8-leg diagrams, the leading contributions in terms of \(E_{F}/T\) at each order are
\[\gamma\left[\left(|\Psi_{\mathbf{Q}}|^{2}+|\Psi_{-\mathbf{Q}}|^{2}\right)|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2}+\left(|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2}\right)|\Psi_{\mathbf{Q}}|^{2}|\Psi_{-\mathbf{Q}}|^{2}+\left(\sum_{i=1}^{4}|\Psi_{\mathbf{Q}_{i}}|^{2}\right)\left(\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}+h.c.\right)\right]\] \[+ \frac{\zeta}{2}\left[\left(\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}\right)^{2}+h.c.\right]+4\zeta|\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}\Psi_{\mathbf{P}}\Psi_{-\mathbf{P}}|^{2}+\zeta\left(|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2}+|\Psi_{\mathbf{Q}}|^{2}|\Psi_{-\mathbf{Q}}|^{2}\right)\left(\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}+h.c.\right)\] \[+ \frac{\zeta}{4}\left(\sum_{i=1}^{4}|\Psi_{\mathbf{Q}_{i}}|^{4}\right)\left(\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}+h.c.\right)+\frac{\zeta}{2}\left(|\Psi_{\mathbf{Q}}|^{4}+|\Psi_{-\mathbf{Q}}|^{4}\right)|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2}+\frac{\zeta}{2}\left(|\Psi_{\mathbf{P}}|^{4}+|\Psi_{-\mathbf{P}}|^{4}\right)|\Psi_{\mathbf{Q}}|^{2}|\Psi_{-\mathbf{Q}}|^{2}\] \[+ \frac{\zeta}{2}\left(|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2}\right)\left(|\Psi_{\mathbf{Q}}|^{2}+|\Psi_{-\mathbf{Q}}|^{2}\right)\left[|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2}+|\Psi_{\mathbf{Q}}|^{2}|\Psi_{-\mathbf{Q}}|^{2}+2\left(\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}+h.c.\right)\right], \tag{12}\]
where the coefficients \(\gamma\) and \(\zeta\) are given by
\[\gamma=\frac{1}{768v_{F}^{2}}\frac{1}{T^{3}},\quad\zeta=\frac{1}{7680v_{F}^{2 }}\frac{1}{T^{5}}. \tag{13}\]
Like in Eq. (5), for compactness we have suppressed the momenta carried by each field and the integration over conserved momenta. It is straightforward to check that the terms in Eq. (12) have the correct units of energy.
Eq. (12) can be greatly simplified by using composite fields including those corresponding to \(4e\)-SC
\[\Psi_{\mathbf{P}}^{*}\Psi_{-\mathbf{P}}^{*}\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}+h.c. =\frac{1}{2}|\Phi_{s}|^{2}-\frac{1}{2}|\Phi_{d}|^{2},\] \[|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2} =\frac{1}{4}|\Phi_{s}-\Phi_{d}|^{2},\] \[|\Psi_{\mathbf{Q}}|^{2}|\Psi_{-\mathbf{Q}}|^{2} =\frac{1}{4}|\Phi_{s}+\Phi_{d}|^{2},\] \[\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}\Psi_{\mathbf{P}}\Psi_{-\mathbf{P}} =\frac{1}{4}\left(\Phi_{s}^{2}-\Phi_{d}^{2}\right),\] \[\left(|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2}\right)\left(|\Psi_{\mathbf{Q}}|^{2}+|\Psi_{-\mathbf{Q}}|^{2}\right) =\frac{1}{4}\left(\phi^{2}-\mathcal{N}^{2}\right), \tag{14}\]
Figure 3: A general diagrammatic representation for higher-order interactions of PDW bosons.
where in the last line \(\phi\equiv|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2}+|\Psi_{\mathbf{Q}}|^{2}+|\Psi_{-\bm {Q}}|^{2}\) is the Gaussian fluctuation and \(\mathcal{N}\equiv|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2}-|\Psi_{\mathbf{Q}}|^{2}-| \Psi_{-\mathbf{Q}}|^{2}\) is the nematic order, and on both sides, the momentum dependence of the fields and the momentum integrals have been suppressed for compactness.
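As a sanity check, the identities in Eq. (14) can be verified symbolically. The following sketch is our own illustrative script: it treats the zero-momentum PDW amplitudes as complex scalars and uses the \(M=1\) composites \(\Phi_{s,d}=\Psi_{\mathbf{Q}}\Psi_{-\mathbf{Q}}\pm\Psi_{\mathbf{P}}\Psi_{-\mathbf{P}}\) of Eq. (15).

```python
# Symbolic check of selected identities in Eq. (14); momentum structure suppressed.
import sympy as sp

PP, Pm, QP, Qm = sp.symbols('Psi_P Psi_mP Psi_Q Psi_mQ', complex=True)

def abs2(z):                      # |z|^2 = z * conj(z)
    return sp.expand(z * sp.conjugate(z))

Phi_s = QP*Qm + PP*Pm             # s-wave composite (M = 1)
Phi_d = QP*Qm - PP*Pm             # d-wave composite (M = 1)

cross = sp.conjugate(PP*Pm)*QP*Qm
# Psi_P* Psi_-P* Psi_Q Psi_-Q + h.c. = (|Phi_s|^2 - |Phi_d|^2)/2
assert sp.expand(cross + sp.conjugate(cross) - (abs2(Phi_s) - abs2(Phi_d))/2) == 0
# |Psi_P|^2 |Psi_-P|^2 = |Phi_s - Phi_d|^2 / 4
assert sp.expand(abs2(PP)*abs2(Pm) - abs2(Phi_s - Phi_d)/4) == 0
# Psi_Q Psi_-Q Psi_P Psi_-P = (Phi_s^2 - Phi_d^2)/4
assert sp.expand(QP*Qm*PP*Pm - (Phi_s**2 - Phi_d**2)/4) == 0
print("Eq. (14) identities verified")
```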
It is important to note that this re-expression of quartic combinations of \(\Psi\) in terms of bilinear operators is not unique. For example, \(|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2}\) can also be rewritten as \(\left[(|\Psi_{\mathbf{P}}|^{2}+|\Psi_{-\mathbf{P}}|^{2})^{2}-(|\Psi_{\mathbf{P}}|^{2}-|\Psi_{-\mathbf{P}}|^{2})^{2}\right]/4\), which does not explicitly depend on \(4e\)-SC fluctuations. However, the re-expression can be made unique after extending the \(\Psi\) field to an \(M\)-component \(\Psi^{\alpha}\), e.g.,
\[|\Psi_{\mathbf{P}}|^{2}|\Psi_{-\mathbf{P}}|^{2} \rightarrow\frac{1}{M}\sum_{\alpha,\beta=1}^{M}\Psi_{\mathbf{P}}^{\alpha*}\Psi_{-\mathbf{P}}^{\alpha*}\Psi_{\mathbf{P}}^{\beta}\Psi_{-\mathbf{P}}^{\beta}\] \[=\frac{M}{4}|\Phi_{s}-\Phi_{d}|^{2},\] \[\text{where }\Phi_{s,d} =\frac{1}{M}\sum_{\alpha}\left(\Psi_{\mathbf{Q}}^{\alpha}\Psi_{-\mathbf{Q}}^{\alpha}\pm\Psi_{\mathbf{P}}^{\alpha}\Psi_{-\mathbf{P}}^{\alpha}\right). \tag{15}\]
As we will see, such a large-\(M\) extension justifies the mean-field theory for \(4e\)-SC, and thus different extension schemes correspond to different mean-field ansatze. However, to avoid congestion of notations, we will not explicitly write the large-\(M\) version of the free energy.
To arrive at a mean-field theory for \(d\)-wave \(4e\)-SC, we first take the mean-field ansatz \(\mathcal{N}=\Phi_{s}=0\), that is, we neglect the nematic and \(s\)-wave \(4e\)-SC order parameters along with their fluctuations. However, note that the Gaussian fluctuation \(\phi\) is always nonzero. We then have, up to eighth order in \(\Psi\),
\[F= \sum_{i}\int_{\mathbf{q}}\left[\alpha(T)+\kappa\mathbf{q}^{2}\right]|\Psi_{i}(\mathbf{q})|^{2}-\frac{\beta}{2}\int_{\mathbf{q}}|\Phi_{d}(\mathbf{q})|^{2}\] \[-\frac{\gamma}{4}\int_{\mathbf{q},\mathbf{p}}\Phi_{d}(\mathbf{q})\Phi_{d}^{*}(\mathbf{q}+\mathbf{p})\phi(\mathbf{p})\] \[+\frac{\zeta}{16}\int_{\mathbf{q},\mathbf{k},\mathbf{p}}\Phi_{d}(\mathbf{q})\Phi_{d}^{*}(\mathbf{q}+\mathbf{p})\Phi_{d}(\mathbf{k})\Phi_{d}^{*}(\mathbf{k}-\mathbf{p})\] \[-\frac{\zeta}{16}\int_{\mathbf{q},\mathbf{k},\mathbf{p}}\Phi_{d}(\mathbf{q})\Phi_{d}^{*}(\mathbf{q}+\mathbf{p})\phi(\mathbf{k})\phi^{*}(\mathbf{k}-\mathbf{p})+\mathcal{O}(\Psi^{10}) \tag{16}\]
where, we remind,
\[\Phi_{d}(\mathbf{q})\equiv \int_{\mathbf{p}}\Psi_{\mathbf{Q}}(\mathbf{p}+\mathbf{q})\Psi_{-\mathbf{Q}}(-\mathbf{p})- \Psi_{\mathbf{P}}(\mathbf{p}+\mathbf{q})\Psi_{-\mathbf{P}}(-\mathbf{p})\] \[\phi(\mathbf{q})\equiv \sum_{i=\pm\mathbf{P},\pm\mathbf{Q}}\int_{\mathbf{p}}\Psi_{i}(\mathbf{p}+\mathbf{q}) \Psi_{i}^{*}(\mathbf{p}). \tag{17}\]
It is convenient at this point to measure all lengths against \(n^{-1/2}\), where \(n\) is electron density, and all energies against the Fermi energy, which, in lieu of a concrete fermionic model, is defined as \(E_{F}\equiv v_{F}\sqrt{n}\) (i.e., we assume \(n\sim k_{F}^{2}\) per Luttinger's theorem). With these units, all parameters are made dimensionless. We have
\[\alpha= a\frac{T-T_{\text{PDW}}}{T_{\text{PDW}}},\quad\kappa\gg\frac{1} {T_{\text{PDW}}}\sim\frac{1}{T}\gg 1,\] \[\beta= \frac{1}{16T},\ \ \gamma=\frac{1}{768T^{3}},\ \ \zeta=\frac{1}{7680T^{5}}. \tag{18}\]
### Decoupling the interaction
Here we show that the decoupling of the interaction terms at any order can be achieved by introducing _two_ ancillary fields for each bilinear, one of which is a Lagrange multiplier field. Using this method, we can replace the bilinear operators \(\phi\) and \(\Phi_{d}\) with local fields. For quartic interactions we show in Appendix B that the result is the same as that from the HS transformation.
We insert into the partition function the following \(\delta\)-function identities [48]
\[1\propto\int\mathcal{D}[\lambda_{d},\Delta_{d}^{*}]\exp\left[- \frac{1}{T}\int_{\mathbf{q}}\lambda_{d}(\mathbf{q})\left(\Delta_{d}^{*}(-\mathbf{q})-\Phi _{d}^{*}(\mathbf{q})\right)\right]\] \[1\propto\int\mathcal{D}[\varphi,\mu]\exp\left[-\frac{1}{T}\int_{ \mathbf{q}}\mu(\mathbf{q})\left(\varphi(-\mathbf{q})-\phi(-\mathbf{q})\right)\right] \tag{19}\]
and the saddle points for \(\lambda_{d}\) and \(\mu\) ensure that one can replace in Eq. (16) the bilinear fluctuations \(\Phi_{d}\) and \(\phi\) with the local fields \(\Delta_{d}\) and \(\varphi\). The resulting theory is
\[F[\Psi,\Delta_{d},\varphi,\lambda_{d},\mu]\] \[=\sum_{i}\int_{\mathbf{q}}\left[\alpha(T)+\kappa\mathbf{q}^{2}\right]|\Psi _{i}(\mathbf{q})|^{2}-\frac{\beta}{2}\int_{\mathbf{q}}|\Delta_{d}(\mathbf{q})|^{2}\] \[-\frac{\gamma}{4}\int_{\mathbf{q},\mathbf{p}}\Delta_{d}(\mathbf{q})\Delta_{d}^ {*}(\mathbf{q}+\mathbf{p})\phi(\mathbf{p})\] \[+\frac{\zeta}{16}\int_{\mathbf{q},\mathbf{k},\mathbf{p}}\Delta_{d}(\mathbf{q}) \Delta_{d}^{*}(\mathbf{q}+\mathbf{p})\Delta_{d}(\mathbf{k})\Delta_{d}^{*}(\mathbf{k}-\mathbf{p})\] \[-\frac{\zeta}{16}\int_{\mathbf{q},\mathbf{k},\mathbf{p}}\Delta_{d}(\mathbf{q}) \Delta_{d}^{*}(\mathbf{q}+\mathbf{p})\varphi(\mathbf{k})\varphi(\mathbf{p}-\mathbf{k})\] \[+\int_{\mathbf{q}}\lambda_{d}(\mathbf{q})\left(\Delta_{d}^{*}(\mathbf{q})- \Phi_{d}^{*}(\mathbf{q})\right)+h.c.\] \[+\int_{\mathbf{q}}\mu(\mathbf{q})\left(\varphi(-\mathbf{q})-\phi(-\mathbf{q}) \right)+\mathcal{O}(\Psi^{10}) \tag{20}\]
where in the third line we deliberately do not replace \(\phi(\mathbf{p})\) with \(\varphi(\mathbf{p})\).
As a mean-field ansatz, we consider saddle-point solutions of the composite fields that are spatially uniform, with \(\Delta_{d}(\mathbf{q})=\Delta_{d}\delta(\mathbf{q})\) and \(\varphi(\mathbf{q})=\varphi\delta(\mathbf{q})\). Using the regularization \(\delta(\mathbf{q}=0)=V\), where \(V\) is the volume (area) of the system, and defining the free energy density \(\mathcal{F}\equiv F/V\), Eq. (20) then becomes
\[\mathcal{F}[\Psi,\Delta_{d},\varphi,\lambda_{d},\mu]\] \[=\sum_{i,\mathbf{q}}\left[\alpha(T)+\kappa\mathbf{q}^{2}-\frac{\gamma}{4}|\Delta_{d}|^{2}-\mu\right]|\Psi_{i}(\mathbf{q})|^{2}\] \[-\frac{\beta}{2}|\Delta_{d}|^{2}+\frac{\zeta}{16}|\Delta_{d}|^{4}-\frac{\zeta}{16}|\Delta_{d}|^{2}\varphi^{2}\] \[+\lambda_{d}\left(\Delta_{d}^{*}-\Phi_{d}^{*}\right)+\lambda_{d}^{*}\left(\Delta_{d}-\Phi_{d}\right)+\mu\varphi+\mathcal{O}(\Psi^{10}), \tag{21}\]
where \(\lambda_{d}\equiv\lambda_{d}(\mathbf{q}=0)/V\), \(\mu\equiv\mu(\mathbf{q}=0)/V\), and we have used the fact that \(V\int_{\mathbf{q}}\equiv\sum_{\mathbf{q}}\) in the continuum limit.
### Free energy for \(d\)-wave \(4e\)-SC
To justify a mean-field theory, we formally extend Eq. (16) to a large-\(M\) version using the scheme described in Eq. (15). Together with rescaling \(\lambda_{d}\to M\lambda_{d}\) and \(\mu\to M\mu\), Eq. (21) becomes
\[\mathcal{F}[\Psi,\Delta_{d},\varphi,\lambda_{d},\mu] \tag{22}\] \[=\sum_{j=1}^{M}\sum_{i}\int_{\mathbf{q}}\left[\alpha(T)+\kappa\mathbf{q}^{2}-\frac{\gamma}{4}|\Delta_{d}|^{2}-\mu\right]\left|\Psi_{i}^{j}(\mathbf{q})\right|^{2}\] \[-\frac{M\beta}{2}|\Delta_{d}|^{2}+\frac{M\zeta}{16}|\Delta_{d}|^{4}-\frac{M\zeta}{16}|\Delta_{d}|^{2}\varphi^{2}\] \[+M\lambda_{d}\left(\Delta_{d}^{*}-\Phi_{d}^{*}\right)+M\lambda_{d}^{*}\left(\Delta_{d}-\Phi_{d}\right)+M\mu\varphi+\mathcal{O}(\Psi^{10}).\]
Since \(\mathcal{F}\) is quadratic in \(\Psi\), we can integrate it out. We get, up to higher-order terms
\[\mathcal{F}(\Delta_{d},\varphi,\lambda_{d},\mu)/M\] \[= 2T\int_{\mathbf{q}}\ln\left[(\alpha(T)+\kappa\mathbf{q}^{2}-\frac{\gamma }{4}|\Delta_{d}|^{2}-\mu)^{2}-|\lambda_{d}|^{2}\right]\] \[-\frac{\beta}{2}|\Delta_{d}|^{2}+\frac{\zeta}{16}|\Delta_{d}|^{4 }-\frac{\zeta}{16}|\Delta_{d}|^{2}\varphi^{2}\] \[+\lambda_{d}\Delta_{d}^{*}+\lambda_{d}^{*}\Delta_{d}+\mu\varphi, \tag{23}\]
where the upper cutoff is a proper combination of the high-energy scale \(E_{F}\) and the short-distance scale \(n^{-1/2}\), which in our units is \(1\). We have also used \(\frac{1}{V}\sum_{\mathbf{q}}\equiv\int_{\mathbf{q}}\).
In the \(M\to\infty\) limit, the partition function \(Z=e^{-\mathcal{F}V/T}\) is completely determined by the saddle points of \(\Delta_{d},\varphi,\lambda_{d}\) and \(\mu\). One can take the saddle-point equations to eliminate the Lagrange multiplier fields \(\lambda_{d}\) and \(\mu\). We get from the requirements \(\partial\mathcal{F}/\partial\varphi=0\), \(\partial\mathcal{F}/\partial\lambda_{d}^{*}=0\) and \(\partial\mathcal{F}/\partial\mu=0\) that
\[\mu= \frac{\zeta}{8}|\Delta_{d}|^{2}\varphi \tag{24}\] \[|\lambda_{d}|= r\tanh\frac{2\pi\kappa|\Delta_{d}|}{T}\] (25) \[\varphi= \frac{T}{\pi\kappa}\ln\left(\frac{\kappa}{r}\cosh\frac{2\pi \kappa|\Delta_{d}|}{T}\right),\] (26) \[\text{where }r\equiv \alpha(T)-\frac{\gamma}{4}|\Delta_{d}|^{2}-\frac{\zeta}{8}|\Delta_ {d}|^{2}\varphi. \tag{27}\]
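For orientation, the coupled relations (26)-(27) can be solved numerically at fixed \(|\Delta_{d}|\). The short sketch below is our own illustration, with parameter values of our own choosing; it converges by simple fixed-point iteration and confirms \(r>0\) in this regime.

```python
# Fixed-point solution of Eqs. (26)-(27) for (phi, r) at given |Delta_d|.
from math import pi, log, cosh

T, kappa, a_coef, T_pdw = 0.15, 10.0, 1e-3, 0.1   # illustrative values
alpha = a_coef * (T - T_pdw) / T_pdw               # alpha(T) from Eq. (18)
gamma, zeta = 1/(768*T**3), 1/(7680*T**5)          # Eq. (18)
Delta = 0.01                                       # trial 4e-SC amplitude

phi = (T/(pi*kappa)) * log(kappa/alpha)            # Gaussian seed, Eq. (29)
for _ in range(100):
    r = alpha - (gamma/4)*Delta**2 - (zeta/8)*Delta**2*phi          # Eq. (27)
    phi = (T/(pi*kappa)) * log((kappa/r)*cosh(2*pi*kappa*Delta/T))  # Eq. (26)
print(f"phi = {phi:.4f}, r = {r:.3e}  (r > 0 is required for consistency)")
```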
Note that in order to legitimize the procedure of integrating out the PDW orders, we need \(r\) to remain positive. This will be justified in our following calculation for \(\Delta_{d}\). Eqs. (24, 25) can be used to eliminate the Lagrange multiplier fields \(\lambda_{d}\) and \(\mu\) in Eq. (23), which yields
\[\frac{\mathcal{F}(\Delta_{d})}{M}=-\frac{\beta}{2}|\Delta_{d}|^{2} +\frac{\zeta}{16}|\Delta_{d}|^{4}+\frac{\zeta}{16}|\Delta_{d}|^{2}\varphi^{2}+ r\varphi+\frac{rT}{\pi\kappa}\] \[=\alpha(T)\varphi-\left(\frac{\beta}{2}+\frac{T\gamma}{4\pi\kappa} \right)|\Delta_{d}|^{2}-\left(\frac{\gamma}{4}+\frac{T\zeta}{8\pi\kappa} \right)|\Delta_{d}|^{2}\varphi\] \[-\frac{\zeta}{16}|\Delta_{d}|^{2}\varphi^{2}+\frac{\zeta}{16}| \Delta_{d}|^{4}+\text{const}, \tag{28}\]
where at the second equality we have used Eq. (27) to eliminate \(r\).
The \(\varphi\) dependence in Eq. (28) can be further eliminated using Eq. (26), which does not admit a closed-form solution but can be expressed as a Taylor expansion
\[\varphi=\frac{T}{\pi\kappa}\ln\frac{\kappa}{\alpha}+c_{2}|\Delta_{d}|^{2}+c_{4 }|\Delta_{d}|^{4}+c_{6}|\Delta_{d}|^{6}+... \tag{29}\]
where the first term is the familiar logarithmic divergence of Gaussian fluctuations in 2d, and the rest is the back-action from \(4e\)-SC fluctuations. Substituting this expansion into Eq. (26) and matching the coefficients on both sides, we find, omitting higher-order terms in \(c_{6}\) for brevity,
\[c_{2} =\frac{2\pi\kappa}{T}+\frac{\gamma T}{4\pi\alpha\kappa}+\frac{ \zeta T^{2}}{8\pi^{2}\alpha\kappa^{2}}\ln\frac{\kappa}{\alpha},\] \[c_{4} =-\frac{4\pi^{3}\kappa^{3}}{3T^{3}}+\frac{\zeta}{4\alpha}+\frac{ \gamma^{2}T}{32\pi\alpha^{2}\kappa}+\frac{\gamma\zeta T^{2}}{32\pi^{2}\alpha^{2} \kappa^{2}}\ln\frac{\kappa}{\alpha} \tag{30}\] \[\quad+\frac{\zeta^{2}T^{3}}{64\pi^{3}\alpha^{2}\kappa^{3}}\ln \frac{\kappa}{\alpha}+\frac{\zeta^{2}T^{3}}{128\pi^{3}\alpha^{2}\kappa^{3}}\ln^ {2}\frac{\kappa}{\alpha}\] \[c_{6} =\frac{64\pi^{5}\kappa^{5}}{45T^{5}}-\frac{\pi^{2}\zeta\kappa^{2}}{ 6\alpha T^{2}}+\cdots\]
As a result, we have, after dropping the constant terms
\[\frac{\mathcal{F}(\Delta_{d})}{M}=A(T)|\Delta_{d}|^{2}+B(T)|\Delta_{d}|^{4}+C(T)| \Delta_{d}|^{6}+\cdots \tag{31}\]
where the Ginzburg-Landau coefficients are
\[A(T) =\frac{2\pi\alpha\kappa}{T}-\frac{\beta}{2}-\frac{\gamma T}{4\pi\kappa}\ln\frac{\kappa}{\alpha}-\frac{\zeta T^{2}}{16\pi^{2}\kappa^{2}}\ln^{2}\frac{\kappa}{\alpha},\] \[B(T) =-\frac{4\pi^{3}\alpha\kappa^{3}}{3T^{3}}-\frac{\pi\gamma\kappa}{2T}+\frac{\zeta}{16}-\frac{\gamma^{2}T}{32\pi\alpha\kappa}\] \[\quad-\frac{\zeta}{4}\left(\frac{\gamma T^{2}}{8\pi^{2}\alpha\kappa^{2}}+1\right)\ln\frac{\kappa}{\alpha}-\frac{\zeta^{2}T^{3}}{128\pi^{3}\alpha\kappa^{3}}\ln^{2}\frac{\kappa}{\alpha},\] \[C(T) =\frac{64\pi^{5}\alpha\kappa^{5}}{45T^{5}}+\frac{\pi^{3}\gamma\kappa^{3}}{3T^{3}}+\frac{\pi^{2}\zeta\kappa^{2}}{6T^{2}}\left(\ln\frac{\kappa}{\alpha}-\frac{3}{2}\right)+\cdots \tag{32}\]
Using Eq. (18), we see that all terms in Eq. (32) are actually organized in increasing powers of \(1/\kappa T\ll 1\) or \(1/\kappa\alpha\). Assuming \(\kappa\alpha\gtrsim 1\), as we will justify below, the coefficients can be greatly simplified. Indeed, we have already omitted many terms in higher powers of \(1/\kappa T\) in \(C(T)\). More importantly, higher-order terms in the PDW field \(\Psi\) in Eq. (16) in fact enter all coefficients \(A\), \(B\), and \(C\), but it is straightforward to see from the pattern that they are further suppressed by \(1/\kappa T\). In this sense, the expressions in Eq. (32) only make sense in the \(\kappa T\gg 1\) limit. We have
\[A(T)\approx \frac{2\pi\alpha\kappa}{T}-\frac{\beta}{2}\] \[B(T)\approx -\frac{4\pi^{3}\alpha\kappa^{3}}{3T^{3}}-\frac{\pi\gamma\kappa}{ 2T}\] \[C(T)\approx \frac{64\pi^{5}\alpha\kappa^{5}}{45T^{5}}+\frac{\pi^{3}\gamma \kappa^{3}}{3T^{3}}. \tag{33}\]
In the opposite limit \(\kappa T\ll 1\), the effective theory for PDW bosons becomes strongly coupled, and a perturbative expansion in \(\Psi\) becomes inadequate.
### Mean-field transition into \(d\)-wave \(4e\)-SC
Driven by the temperature dependence of \(\alpha(T)\), the quadratic coefficient \(A(T)\) becomes negative upon lowering temperature when \(\alpha\kappa=\beta T/4\pi\approx 0.005\). At this point, we see that \(B(T)<0\), i.e., \(\Delta_{d}=0\) is no longer a local minimum of the free energy. Instead, the global minimum of the free energy is given by \(\Delta_{d}\neq 0\). Therefore, a first-order phase transition must have already occurred at a higher temperature. The transition temperature is determined by the condition \(B^{2}\sim AC\), and thus we find by using Eq. (18) that
\[\alpha_{c}\sim 1/\kappa,\quad T_{c}>T_{\rm PDW}. \tag{34}\]
and that at the onset,
\[|\Delta_{d}|^{2}\sim\frac{T_{c}^{2}}{\kappa^{2}}. \tag{35}\]
We see that a smaller stiffness term \(\kappa\) leads to a higher \(T_{c}\) and a larger onset for \(\Delta_{d}\). This is the main conclusion of this work.
One important consistency check for our theory is that \(r\) defined in Eq. (27) should be positive, at least until the phase transition occurs. Plugging Eq. (35) and Eq. (29) into Eq. (27) we see that at the transition \(r\approx\alpha\), which is indeed positive. Taking Eq. (31) at face value, the first-order phase transition occurs when \(B^{2}=4AC\). However, we caution that with a truncation at finite order, the free energy (31) is unreliable in quantitatively determining a first-order transition temperature, as higher-order terms are equally important. Nevertheless, the qualitative relations (34, 35) remain valid even in the presence of higher-order terms.
It is important here to note that within the large-\(M\) theory, the mean-field temperature \(T_{\rm PDW}\) should not be identified as the transition temperature into a PDW state. In fact, with \(M\) components, the \(\Psi_{i}^{\alpha}\) field can never condense at any finite temperature. Instead, within the large-\(M\) framework, a PDW state should be recognized as a state in which both \(U(1)\) and translational symmetries are broken, e.g., with both \(\langle\Phi_{d}\rangle\neq 0\) and \(\langle\rho_{\mathbf{P}-\mathbf{Q}}\rangle\neq 0\). As we discussed, however, the \(4e\)-SC order is by far the predominant one, and translational symmetry remains intact well into the \(4e\)-SC state. Therefore, the PDW order can only develop at much lower temperatures, if at all. Similarly, for \(M=1\), the \(4e\)-SC state has quasi-long-range order via a Kosterlitz-Thouless transition, and a PDW state may only develop via another Kosterlitz-Thouless transition at lower temperatures, associated with the breaking of translational symmetry.
Figure 4: Numerical results for the phase transition of \(\Delta_{d}\). (a) The GL coefficients \(A(T)\), \(B(T)\) and \(C(T)\) in Eq. (31), obtained by taking \(a=10^{-3}\), \(T_{\rm PDW}=0.1\) and \(\kappa=10\). The \(\Delta_{d}\neq 0\) phase transition temperature \(T_{c}\) is determined by the condition \(B^{2}-4AC=0\). (b) The \(4e\)-SC order parameter as a function of \(T\). For clarity we also show the free energy profile at different \(T\) as the insets. The pink colored region at \(T\) close to \(T_{\rm PDW}\) is where \(r<0\) and our perturbative analysis breaks down due to a large \(|\Delta_{d}|\). (c) and (d): \(T_{c}\) and \(|\Delta_{d}|\) at \(T_{c}\) as functions of the effective stiffness \(\kappa\).
Complementary to the analytical approach, we also solved numerically for saddle points using the full expressions in Eqs. (31, 32). In Fig. 4(a) we show the temperature dependence of the GL coefficients \(A\), \(B\) and \(C\), together with the quantity \(B^{2}-4AC\), for a typical set of parameters \(a=10^{-3}\), \(T_{\rm PDW}=0.1\) and \(\kappa=10\). We see clearly that above \(T_{\rm PDW}\), \(B\) remains negative while \(C\) remains positive. \(A\) changes sign at some particular \(T\), which is consistent with our above analysis and can also be seen from the leading approximation in Eq. (33). The quantity \(B^{2}-4AC\) vanishes as \(T\to T_{c}\), and thus can be used to obtain \(T_{c}\). For this particular set of parameters, we obtain \(T_{c}\approx 1.6T_{\rm PDW}\). In Fig. 4(b) we show the magnitude of the \(d\)-wave \(4e\)-SC order parameter \(|\Delta_{d}|\) as a function of \(T\), which is calculated via
\[|\Delta_{d}|=\left(\frac{-B+\sqrt{B^{2}-3AC}}{3C}\right)^{1/2} \tag{36}\]
since \(B<0\) and \(C>0\) are always satisfied near \(T_{c}\). We see that \(|\Delta_{d}|\) increases monotonically as \(T\) decreases below \(T_{c}\). From Eq. (27) we see that once \(|\Delta_{d}|\) becomes large enough, \(r\) will eventually become negative, invalidating our perturbative analysis. In practice, in the region of \(r<0\) one needs to keep expanding to higher orders of the PDW order parameters. We also show the free energy profile for \(T>T_{c}\), \(T=T_{c}\) and \(T<T_{c}\), from which the first-order nature of the phase transition is seen directly. In order to show the impact of the stiffness \(\kappa\) on various quantities for the \(4e\)-SC order, we show in Fig. 4(c) and (d) the plots of \(T_{c}\) and \(|\Delta_{d}(T_{c})|\) as functions of \(\kappa\). It is clear that a larger \(\kappa\) yields a smaller \(T_{c}\) and \(|\Delta_{d}(T_{c})|\). This is consistent with Eq. (35), although the latter was obtained in the limit \(\kappa T\gg 1\).
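The determination of \(T_{c}\) in Fig. 4(a) is straightforward to reproduce. Below is a minimal numerical sketch, our own implementation of Eqs. (18), (32) and (36) with the quoted parameters \(a=10^{-3}\), \(T_{\rm PDW}=0.1\), \(\kappa=10\); the grid scan and resolution are our own choices.

```python
# Evaluate the GL coefficients of Eq. (32) and locate T_c from B^2 - 4AC = 0.
import numpy as np

a_c, T_pdw, kap = 1e-3, 0.1, 10.0       # parameters quoted in the text

def ABC(T):
    al = a_c*(T - T_pdw)/T_pdw          # alpha(T), Eq. (18)
    be, ga, ze = 1/(16*T), 1/(768*T**3), 1/(7680*T**5)
    L = np.log(kap/al)
    A = 2*np.pi*al*kap/T - be/2 - ga*T*L/(4*np.pi*kap) \
        - ze*T**2*L**2/(16*np.pi**2*kap**2)
    B = -4*np.pi**3*al*kap**3/(3*T**3) - np.pi*ga*kap/(2*T) + ze/16 \
        - ga**2*T/(32*np.pi*al*kap) \
        - (ze/4)*(ga*T**2/(8*np.pi**2*al*kap**2) + 1)*L \
        - ze**2*T**3*L**2/(128*np.pi**3*al*kap**3)
    C = 64*np.pi**5*al*kap**5/(45*T**5) + np.pi**3*ga*kap**3/(3*T**3) \
        + (np.pi**2*ze*kap**2/(6*T**2))*(L - 1.5)
    return A, B, C

Ts = np.linspace(1.02*T_pdw, 5*T_pdw, 2000)
disc = np.array([B**2 - 4*A*C for A, B, C in map(ABC, Ts)])
Tc = Ts[np.argmin(np.abs(disc))]        # where B^2 - 4AC vanishes
A, B, C = ABC(Tc)
D2 = (-B + np.sqrt(max(B**2 - 3*A*C, 0)))/(3*C)   # Eq. (36)
print(f"T_c/T_PDW = {Tc/T_pdw:.2f}, |Delta_d(T_c)| = {np.sqrt(D2):.3f}")
```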
## IV Conclusion and outlook
In this work, we studied the phase transition driven by bidirectional fluctuations of PDWs. We showed that when the ratio of the PDW momentum to the Fermi momentum is \(\sqrt{2}\), the predominant interaction among PDW bosons is attractive in the \(d\)-wave pairing channel of the PDW bosons. In contrast to earlier works, such a \(4e\)-SC state is the leading vestigial instability of \(2e\)-SC fluctuations. The \(d\)-wave nature is reminiscent of \(d\)-wave \(2e\)-SC in fermionic pairing problems with repulsive interactions. Interestingly, we found that a similar pairing mechanism can be applied to the formation of \(4e\)-SC.
On the technical level, we developed a new formalism to analyze vestigial orders. Earlier works [42; 43] analyzed free energies of primary orders that are bounded up to quartic order, and used a HS transformation to decouple the four-boson interactions. In this work we have an unbounded free energy of PDW bosons, which calls for the inclusion of higher-order bosonic interactions. We introduced Lagrange multiplier fields, which enable us to decouple interactions at any order. As a result, we found that the transition into the \(d\)-wave \(4e\)-SC is weakly first-order, with a higher transition temperature than the mean-field \(T_{\rm PDW}\).
Our theory has interesting implications for unconventional superconductors. There exists strong evidence for PDW in underdoped cuprates, although it is likely unidirectional within each Cu-O plane and alternates between \(x\) and \(y\) ordering directions between neighboring planes [44]. Our work points to a possibility of \(4e\)-SC with a relative sign change between neighboring Cu-O planes with perpendicular PDW wavevectors, although microscopic details call for a separate analysis. In addition, both PDW and \(4e\)-SC (and \(6e\)-SC) have been proposed to exist in the Kagome metal CsV\({}_{3}\)Sb\({}_{5}\)[49; 50; 51]. It would be interesting to generalize our theory to hexagonal systems, which may be applied to CsV\({}_{3}\)Sb\({}_{5}\). Finally, the mean-field theory developed here for \(4e\)-SC can be directly tested using unbiased numerical methods such as quantum Monte Carlo simulations, which we leave as future work.
_Note added:_ During the preparation of the manuscript, we became aware of a recent work Ref. [17] by Hecker et al., in which \(d\)-wave \(4e\)-SC was considered as a vestigial order of a two-component superconductor. Different from our work, the \(d\)-wave \(4e\)-SC was found to be a subleading instability.
###### Acknowledgements.
The authors would like to thank Andrey Chubukov, Rafael Fernandes, Sri Raghu, Pavel Nosov, Hong Yao and Steven Kivelson for useful discussions. Y.-M. Wu acknowledges the Gordon and Betty Moore Foundation's EPiQS Initiative through GBMF8686 for support at Stanford University. Y. Wang is supported by NSF under award number DMR-2045781.
## Appendix A Details of finding leading contributions up to \(\mathcal{O}(|\Psi|^{8})\)
In obtaining the Ginzburg-Landau theory for the primary PDW orders, we integrate out the fermions and obtain
\[-\operatorname{tr}\ln\mathcal{G}^{-1} \tag{37}\]
in the action. The matrix \(\mathcal{G}\) is defined as \(\mathcal{G}^{-1}=G_{0}^{-1}+\hat{\Delta}\) and
\[G_{0}^{-1}=\begin{pmatrix}G_{p}^{-1}&0\\ 0&G_{h}^{-1}\end{pmatrix},\quad\hat{\Delta}=\begin{pmatrix}0&\Psi\\ \Psi^{\dagger}&0\end{pmatrix} \tag{38}\]
The particle and hole Green's functions \(G_{p}\) and \(G_{h}\) are diagonal in frequency-momentum space. For instance, \([G_{p}]_{k,k^{\prime}}=G_{p}(k)\delta_{k,k^{\prime}}\). In contrast, \(\Psi\) and \(\Psi^{\dagger}\) are not diagonal in momentum space. In particular, we have \([\Psi]_{k,k^{\prime}}=\sum_{i}\Psi_{\mathbf{Q}_{i}}\delta_{k,\mathbf{Q}_{i}-k^{\prime}}\) and \([\Psi^{\dagger}]_{k,k^{\prime}}=\sum_{i}\Psi^{\dagger}_{\mathbf{Q}_{i}}\delta_{k,-\mathbf{Q}_{i}-k^{\prime}}\) with
\(\mathbf{Q}_{i}\in\{\pm\mathbf{Q},\pm\mathbf{P}\}\). Expanding Eq. (37) in terms of \(\Psi_{\mathbf{Q}_{i}}\), we obtain
\[\begin{split}-\text{tr}\ln\mathcal{G}^{-1}&=-\text{ tr}\ln G_{0}^{-1}-\text{tr}\ln(1+G_{0}\hat{\Delta})\\ &=-\text{tr}\ln G_{0}^{-1}+\sum_{n=1}^{\infty}\frac{1}{2n}\text{ tr}(G_{0}\hat{\Delta})^{2n}.\end{split} \tag{17}\]
Given the matrix structure in Eq. (38), it is easy to see that \(G_{0}\) can be obtained by simply taking the inverse of the diagonal elements of \(G_{0}^{-1}\). The \(n\)-th element in the series is given by
\[\begin{split}\frac{1}{2n}\sum_{\{k_{i}\}}[G_{p}]_{k_{1}}[\Psi]_{ k_{1},k_{2}}[G_{h}]_{k_{2}}[\Psi^{*}]_{k_{2},k_{3}}...\\...[G_{p}]_{k_{2n-1}}[\Psi]_{k_{2n-1},k_{2n}}[G_{h}]_{k_{2n}}[ \Psi^{*}]_{k_{2n},k_{1}}\end{split} \tag{18}\]
In our special situation, all the Green's functions should be \(G_{i},i=1,...,4\), which are
\[\begin{split} G_{1}(k)&=\frac{1}{i\omega_{n}-v_{F} k_{x}},G_{2}(k)=\frac{-1}{i\omega_{n}-v_{F}k_{y}},\\ G_{3}(k)&=\frac{1}{i\omega_{n}+v_{F}k_{x}},G_{4}(k) =\frac{-1}{i\omega_{n}+v_{F}k_{y}}.\end{split} \tag{19}\]
Given a set of Green's functions, the external momenta can be determined by momentum conservation. Therefore, each diagram uniquely corresponds to one permutation of the Green's function set. We can develop a simple algorithm to enumerate all the possible combinations for any \(n\) based on Eq. (18). The algorithm proceeds as follows,
1. For a given \(n\), specify all possible sets \(S\) of \(2n\) Green's functions that satisfy the momentum conservation rule. For instance, for \(n=2\), one possible set is \(S=\{1,2,3,4\}\), meaning all the leading contributions correspond to all the permutations of this set.
2. Generate all possible permutations of the set \(S\), and keep only the nonequivalent ones.
3. Identify the external legs for each nonequivalent permutation based on momentum conservation.
4. Collect all the terms from different permutations for a given set, and repeat the procedure for all sets; a minimal sketch of the enumeration step is given below.
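The sketch below is our own illustrative code for step 2. Here "nonequivalent" is taken to mean inequivalent under cyclic rotations of the trace; the momentum-conservation bookkeeping of steps 1 and 3 is omitted.

```python
# Enumerate cyclically inequivalent orderings of a multiset of Green's-function labels.
from itertools import permutations

def canonical(loop):
    """Lexicographically smallest cyclic rotation, used as class representative."""
    return min(loop[i:] + loop[:i] for i in range(len(loop)))

def inequivalent_loops(labels):
    seen = set()
    for p in permutations(labels):
        c = canonical(p)
        if c not in seen:
            seen.add(c)
            yield c

# Example: the n = 3 set {1,2,3,4,1,2} discussed below.
loops = list(inequivalent_loops((1, 2, 3, 4, 1, 2)))
print(len(loops), "cyclically inequivalent orderings")
```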
Below we briefly discuss how to obtain (12). For \(n=3\), the internal Green's functions run over the set \(\{1,2,3,4,1,2\}\), where we denote \(G_{i}\) by \(i\) for brevity. Note that the set is equivalent to \(\{1,2,3,4,2,3\}\), \(\{1,2,3,4,3,4\}\) and \(\{1,2,3,4,4,1\}\) by \(D_{4}\) symmetry. Thus all of these sets should be considered, and they combine to give the \(\gamma\) terms in Eq. (12).
The results for \(n=4\) contain more terms. First of all, the fermion Green's functions can run over the set \(\{1,2,3,4,1,2,3,4\}\). The set is invariant under the \(D_{4}\) group operations, so this is a complete class. The momentum integral yields a factor of \(\zeta\) for this class, and this is presented in the second line of Eq. (12). The second class comes from the set \(\{1,2,3,4,1,1,2,2\}\) and its symmetry-related ones \(\{1,2,3,4,2,2,3,3\}\), \(\{1,2,3,4,3,3,4,4\}\) and \(\{1,2,3,4,4,4,1,1\}\). The one-loop integral for these diagrams leads to a factor of \(\zeta/4\), and this class corresponds to the third line of Eq. (12). The last class comes from the sets \(\{1,2,3,4,1,1,2,4\}\), \(\{1,2,3,4,2,2,1,3\}\), \(\{1,2,3,4,3,3,2,4\}\) and \(\{1,2,3,4,4,4,1,3\}\), for which the one-loop integral yields a factor of \(\zeta/2\). This corresponds to the last line of Eq. (12). It can be checked that all other sets do not satisfy momentum conservation, i.e., they lead to some external legs whose momentum is neither \(\pm\mathbf{Q}\) nor \(\pm\mathbf{P}\). Therefore, Eq. (12) contains all the leading contributions up to \(\mathcal{O}(|\Psi_{\mathbf{Q}_{i}}|^{8})\).
## Appendix B Equivalence between the HS transformation and the method of Lagrange multipliers
In the special case when the \(\gamma\) and \(\zeta\) terms are absent from the action, one can alternatively perform a Hubbard-Stratonovich (HS) transformation to analyze the charge-\(4e\) order. Below we show that the HS transformation yields exactly the same result as we obtain by using Lagrange multipliers. The action we consider is
\[\int\frac{d^{2}q}{4\pi^{2}}\left[(\kappa q^{2}+\alpha)\sum_{i=1}^{4}|\Psi_{ \mathbf{Q}_{i}}|^{2}\right]-\frac{\beta}{2}|\Phi_{d}|^{2} \tag{20}\]
with \(\beta>0\). After the HS transformation, the action becomes
\[\frac{\beta}{2}\left(|\Delta_{d}|^{2}-\Delta_{d}\Phi_{d}^{*}-\bar{\Delta}_{d}\Phi_{d}\right)+\int\frac{d^{2}q}{4\pi^{2}}\left[(\kappa q^{2}+\alpha)\sum_{i=1}^{4}|\Psi_{\mathbf{Q}_{i}}|^{2}\right] \tag{21}\]
Again one can integrate out the PDW order parameters using the basis \(\Psi=(\Psi_{\mathbf{Q}},\Psi_{\mathbf{P}},\Psi_{\mathbf{-Q}}^{\dagger},\Psi_{\mathbf{-P}}^{ \dagger})^{T}\). This leads to the following effective action for the charge-\(4e\) order
\[\frac{\beta}{2}\Delta_{d}^{2}+T\int\frac{d^{2}q}{4\pi^{2}}\text{tr}\ln A(q) \tag{22}\]
where
\[A(q)=\begin{pmatrix}\kappa q^{2}+\alpha&0&-\frac{\beta}{2}\Delta_{d}&0\\ 0&\kappa q^{2}+\alpha&0&\frac{\beta}{2}\Delta_{d}\\ -\frac{\beta}{2}\bar{\Delta}_{d}&0&\kappa q^{2}+\alpha&0\\ 0&\frac{\beta}{2}\bar{\Delta}_{d}&0&\kappa q^{2}+\alpha\end{pmatrix}. \tag{23}\]
Performing the \(\operatorname{tr}\ln\) operation, Eq. (22) becomes
\[\frac{\beta}{2}\Delta_{d}^{2}+T\int\frac{d^{2}q}{2\pi^{2}}\ln\left[(\kappa q^{2 }+\alpha)^{2}-\frac{\beta^{2}}{4}\Delta_{d}^{2}\right] \tag{24}\]
The saddle point of this action reads
\[\begin{split} 1&=\beta T\int\frac{d^{2}q}{4\pi^{2}}\frac{1}{(\kappa q^{2}+\alpha)^{2}-\frac{\beta^{2}}{4}\Delta_{d}^{2}}\\ &=\frac{T}{2\pi\kappa\Delta_{d}}\,\mathrm{arccoth}\frac{2\alpha}{\beta\Delta_{d}},\end{split} \tag{25}\]
or equivalently,
\[\Delta_{d}=\frac{T}{2\pi\kappa}\mathrm{arccoth}\frac{2\alpha}{\beta\Delta_{d}}\ \ \mathrm{or}\ \ \tanh(2\pi\kappa\Delta_{d}/T)=\frac{\beta\Delta_{d}}{2\alpha}. \tag{30}\]
In obtaining the above saddle-point equations, we have used the condition that \(\alpha>\frac{\beta}{2}\Delta_{d}\), as is required for the convergence of the Gaussian integral when integrating out the PDW fields. Eq. (30) is the result obtained via the HS transformation at the quartic level.
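As a concrete illustration, the transcendental gap equation in Eq. (30) is easily solved by bisection; the sketch below uses illustrative parameter values of our own choosing. Note that at any nontrivial root, \(\tanh(\cdot)<1\) forces \(\beta\Delta_{d}/2<\alpha\), so the convergence condition above is automatically satisfied.

```python
# Bisection for tanh(2*pi*kappa*D/T) = beta*D/(2*alpha), Eq. (30).
from math import tanh, pi

T, kappa, alpha, beta = 0.1, 10.0, 0.002, 1/(16*0.1)   # illustrative values

def g(D):
    return tanh(2*pi*kappa*D/T) - beta*D/(2*alpha)

# a nonzero root exists iff the slope at D = 0 exceeds beta/(2*alpha)
assert 2*pi*kappa/T > beta/(2*alpha)

lo, hi = 1e-9, 1.0
while g(hi) > 0:            # expand bracket until g changes sign
    hi *= 2
for _ in range(80):         # bisection with invariant g(lo) > 0 > g(hi)
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print("Delta_d =", 0.5*(lo + hi))
```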
To compare Eq. (30) with the one obtained using the method of Lagrange multipliers, we note from Eq. (26) that in the absence of \(\gamma\) and \(\zeta\), we have
\[\varphi=\frac{T}{\pi\kappa}\ln\left(\frac{\kappa}{\alpha}\cosh\frac{2\pi \kappa|\Delta_{d}|}{T}\right). \tag{31}\]
The free energy for \(\Delta_{d}\) then becomes
\[\mathcal{F}[\Delta_{d}]=-\frac{\beta}{2}|\Delta_{d}|^{2}+\frac{\alpha T}{\pi \kappa}\left[1+\ln\left(\frac{\kappa}{\alpha}\cosh\frac{2\pi\kappa|\Delta_{d} |}{T}\right)\right]. \tag{32}\]
Varying this free energy with respect to \(|\Delta_{d}|\) and setting it to zero, we immediately obtain the gap equation
\[-\frac{\beta}{2}\Delta_{d}+\alpha\tanh(2\pi\kappa\Delta_{d}/T)=0 \tag{33}\]
which is exactly the same as Eq. (30). Therefore, we have proved that at the level of \(\mathcal{O}(\Psi^{4})\), both the HS transformation and the method of Lagrange multipliers yield the same result. However, the HS transformation, which is built on Gaussian integrals, fails to work once higher-order terms of the primary order parameters are included, while the Lagrange multiplier method remains valid.
|
2310.17709 | Singularity formation along the line bundle mean curvature flow | The line bundle mean curvature flow is a complex analogue of the mean
curvature flow for Lagrangian graphs, with fixed points solving the deformed
Hermitian-Yang-Mills equation. In this paper we construct two distinct examples
of singularities along the flow. First, we find a finite time singularity,
ruling out long time existence of the flow in general. Next we show long time
existence of the flow with a Calabi symmetry assumption on the blowup of
$\mathbb P^n$, $n\geq 3$, if one assumes supercritical phase. Using this, we
find an example where a singularity occurs at infinite time along the
destabilizing subvariety in the semi-stable case. | Yu Hin Chan, Adam Jacob | 2023-10-26T18:08:48Z | http://arxiv.org/abs/2310.17709v1 | # Singularity formation along the line bundle mean curvature flow
###### Abstract.
The line bundle mean curvature flow is a complex analogue of the mean curvature flow for Lagrangian graphs, with fixed points solving the deformed Hermitian-Yang-Mills equation. In this paper we construct two distinct examples of singularities along the flow. First, we find a finite time singularity, ruling out long time existence of the flow in general. Next we show long time existence of the flow with a Calabi symmetry assumption on the blowup of \(\mathbb{P}^{n}\), \(n\geq 3\), if one assumes supercritical phase. Using this, we find an example where a singularity occurs at infinite time along the destabilizing subvariety in the semi-stable case.
*Supported in part by a Simons Collaboration Grant
## 1. Introduction
Let \((X,\omega)\) be a compact Kahler manifold, and \([\alpha]\in H^{1,1}(X,\mathbb{R})\) a real cohomology class. The _deformed Hermitian-Yang-Mills_ (dHYM) equation seeks a representative \(\alpha\in[\alpha]\) satisfying
\[\text{Im}(e^{-i\hat{\theta}}(\omega+i\alpha)^{n})=0 \tag{1.1}\]
for a fixed constant \(e^{i\hat{\theta}}\in S^{1}\). Recently this equation has garnered significant attention, and extensive work has centered around the relationship between existence of a solution and notions of geometric stability [2, 3, 4, 5, 8, 14, 20]. Although much of this work has been done with elliptic methods, substantial progress has been made following a parabolic approach as well [10, 12, 15, 29, 30].
In this paper we focus on one such parabolic method, known as the line bundle mean curvature flow. Fix a background form \(\alpha_{0}\in[\alpha]\), and define \(\alpha_{t}:=\alpha_{0}+i\partial\bar{\partial}\phi_{t}\). At any point \(p\in X\) one can choose coordinates so \(\omega^{-1}\alpha_{t}\) is diagonal with eigenvalues \(\{\lambda_{1},...,\lambda_{n}\}\). The line bundle mean curvature flow can be expressed as
\[\dot{\phi}_{t}=\sum_{k}\arctan(\lambda_{k})-\hat{\theta}, \tag{1.2}\]
where \(\hat{\theta}\) is some choice of a lift of \(e^{i\hat{\theta}}\) to \(\mathbb{R}\). This parabolic flow is the complex analogue of the Lagrangian mean curvature flow in the graphical setting, with the distinction being that the mean curvature flow is given by eigenvalues of
the real Hessian of a function, as opposed to the complex Hessian (we direct the reader to [21, 22, 23, 31, 32] for further background on the Lagrangian case). By the complex formulation of arctan, one sees \(\sum_{k}\arctan(\lambda_{k})\) is the argument of the top dimensional form \((\omega+i\alpha)^{n}\), and so solutions to (1.1) are fixed points of (1.2). We denote this argument by \(\Theta(\alpha_{t})=\sum_{k}\arctan(\lambda_{k})\).
Developed by the second author and S.-T. Yau, the flow (1.2) was used to prove existence of a solution to (1.1) under the assumption of hypercritical phase, defined by \(\Theta(\alpha_{0})>(n-1)\frac{\pi}{2}\), in addition to the assumption that \((X,\omega)\) has non-negative orthogonal bisectional curvature [15]. The phase assumption is useful for two reasons. First, it ensures convexity of the operator \(\Theta(\cdot)\). Second, it allows a natural choice of a lift of \(\hat{\theta}\), which is a priori defined up to a multiple of \(2\pi\). In fact, being able to choose such a lift is a major difficulty in the study of (1.1), and one would not expect the flow to converge without making the appropriate choice of a lift at the start.
Given the cohomological obstructions to the existence of solutions to (1.1) from [3], it is evident that the flow (1.2) can not converge if one chooses an initial, unstable class. However, it was previously not known if the flow exists for all time, or if a finite time singularity could occur. The first goal of our paper is to construct an explicit example of a finite time singularity, ruling out long time existence.
**Theorem 1.1**.: _Let \(X\) be the blowup of \(\mathbb{P}^{n}\) at a point. There exists a Kahler form \(\omega\), and cohomology class \([\alpha]\in H^{1,1}(X,\mathbb{R})\) admitting a representative \(\alpha_{0}\), for which the flow (1.2) achieves a finite-time singularity. Specifically, if \(\lambda_{Max}(p,t)\) denotes the largest eigenvalue of \(\omega^{-1}\alpha_{t}\) at a point \(p\in X\), then there exists a sequence of points \(\{x_{k}\}\subset X\) and times \(t_{k}\to T<\infty\) such that_
\[\lim_{k\to\infty}\lambda_{Max}(x_{k},t_{k})=\infty.\]
This example is constructed using a particular type of symmetry on \(X\), called Calabi Symmetry, which is described in Section 2. The symmetry allows the dHYM equation to be written as an ODE, and the flow (1.2) is reduced to a parabolic PDE with one spatial variable. Due to similarities with the curve shortening flow, we construct subsolutions which, along with a particular choice of an initial condition, force a singularity to happen. Our example can be constructed on classes that admit a solution to the dHYM equation, demonstrating that finite time singularities can not be ruled out by class conditions alone. In fact, we believe similar examples of finite time singularities can be constructed on any pair of classes \([\omega]\) and \([\alpha]\) on the blowup of \(\mathbb{P}^{n}\). Thus finite time singularities will remain an integral part of the study of (1.2), and will need to be ruled out by choosing appropriate initial conditions.
Our next goal is to demonstrate that the flow can also become singular at infinite time, and to find an example where we can predict exactly where this singularity will occur from the initial classes \([\alpha]\) and \([\omega]\). Using the
same Calabi Symmetry setup as above, we first show that if the initial form satisfies supercritical phase then the flow exists for all time:
**Theorem 1.2**.: _Let \((X,\omega)\) be the blowup of \(\mathbb{P}^{n}\) at a point, \(n\geq 3\), and consider a class \([\alpha]\in H^{1,1}(X,\mathbb{R})\). Assume \(\omega\), \(\alpha_{0}\in[\alpha]\) have Calabi-symmetry, and furthermore assume \(\alpha_{0}\) has supercritical phase, that is \(\Theta(\alpha_{0})>(n-2)\frac{\pi}{2}\). Then the flow (1.2) beginning at \(\alpha_{0}\) exists for all time._
Note that Takahashi proved long time existence for the line bundle mean curvature flow in the hypercritical phase case [29], which implies that \(\alpha_{t}\) stays a Kahler form and the operator \(\Theta(\cdot)\) is convex. For our result we use the weaker supercritical phase assumption, which does not imply convexity of the operator \(\Theta(\cdot)\). However, it does imply the level sets are convex [3], which is enough to apply Evans-Krylov.
To see where the long time singularity occurs, we turn to the conjectured relationship between solutions to the dHYM equation and stability. Following the work of Lejmi-Szekelyhidi on the \(J\)-equation [17], the second author, along with T.C. Collins and S.-T. Yau, integrated a certain positivity condition along subvarieties to develop a necessary class condition for existence, and conjectured it was a sufficient condition as well [3]. Specifically, for any irreducible analytic subvariety \(V\subseteq X\), define the complex number
\[Z_{[\alpha][\omega]}(V):=-\int_{V}e^{-i\omega+\alpha},\]
where by convention we only integrate the term in the expansion of order \(\dim(V)\). Under the supercritical phase assumption \(Z_{[\alpha][\omega]}(X)\) lies in the upper half plane \(\mathbb{H}\). The conjecture of Collins-J.-Yau posits that a solution to the dHYM equation exists if and only if
\[\pi>\operatorname{arg}\!Z_{[\alpha][\omega]}(V)>\operatorname{arg}\!Z_{[ \alpha][\omega]}(X). \tag{1.3}\]
Later, when \(n=3\), Collins-Xie-Yau demonstrated a necessary Chern number inequality [6] (which has since been extended to \(n=4\)[11]), which is also useful for defining the lifted angle \(\hat{\theta}\) algebraically. Collins-Yau further conjectured that such a Chern number inequality in higher dimension was needed [7]. Indeed, recently when \(n=3\) an example was found where the stability inequality (1.3) holds, but the Chern number inequality does not, and no solution to the dHYM equation exists [33]. We note that slightly weaker versions of the Collins-J.-Yau conjecture have been solved by Chen [2] (assuming uniform stability), and Chu-Lee-Takahashi [8] (in the projective case). These results all rest on the supercritical phase assumption.
Some of the few results without the supercritical phase assumption are due to the second author and Sheu [14] (and later [13]), who take advantage of the same Calabi-Symmetry used in this paper. They demonstrate that the inequalities (1.3) can be reinterpreted as stating whether two points in \(\mathbb{C}\) lie on the same level set of a harmonic polynomial, from which it follows that solutions to the dHYM equation exist. Since the stability conditions are necessary, and supercritical phase implies long time existence of the flow in this setting,
the unstable case ends up being the perfect setup to construct a singularity for \(\alpha_{t}\) as \(t\) approaches infinity. In particular we demonstrate:
**Theorem 1.3**.: _Let \((X,\omega)\) be the blowup of \(\mathbb{P}^{n}\) at a point, \(n\geq 3\). There exists classes \([\alpha]\) and \([\omega]\), which are semi-stable in the sense of (1.3), where the flow (1.2) starting at an initial representative \(\alpha_{0}\) exists for all time and becomes singular at time \(t=\infty\) along the destabilizing subvariety._
Similar to the proof of Theorem 1.1, we utilize explicit subsolutions of a modified curve shortening flow to force a singularity at infinite time.
Here we briefly discuss two other parabolic flows in the literature for which solutions to (1.1) are fixed points. The first is the tangent Lagrangian phase flow, introduced by Takahashi in [30]. Defined by \(\dot{\phi}_{t}=\tan(\Theta(\alpha_{t})-\hat{\theta})\), this flow is the gradient flow of the Kempf-Ness functional arising from the infinite dimensional GIT picture for stability and the dHYM equation, as developed by Collins-Yau [7]. As a result, this flow is well behaved with respect to many important functionals, and it could be useful when exploring if some type of limiting Harder-Narasimhan filtration exists in the unstable case. One downside of this flow is that it is only defined for "almost calibrated" potentials, when the angle \(\Theta(\alpha_{t})\) varies from the target angle by less than \(\frac{\pi}{2}\). The second flow was introduced by Fu-Yau-Zhang and is defined by the equation \(\dot{\phi}_{t}=\cot(n\frac{\pi}{2}-\Theta(\alpha_{t}))-\cot(n\frac{\pi}{2}-\hat{\theta})\)[10]. This flow has the advantage that \(\cot(n\frac{\pi}{2}-\Theta(\alpha_{t}))\) is concave under the supercritical phase assumption. Additionally the form of the flow allows for some useful estimates for subsolutions. However, this flow is only defined for supercritical phase, since otherwise one may end up taking the cotangent of zero. Note that in comparison to the above flows, the line bundle mean curvature flow is always defined for short time.
The singularity examples we construct point towards many new interesting problems to explore. One question is whether similar singularities can be constructed on more general Kahler manifolds, perhaps with some sort of gluing technique. We also wonder if there are any examples which can be related to singularities of the graphical Lagrangian mean curvature flow, given the formal similarities between the two flows. In the above examples, the highest eigenvalue of \(\omega^{-1}\alpha_{t}\) approaches infinity while the derivatives of the eigenvalues stay bounded. Thus the analogue of the graphical "Lagrangian" (given by one derivative of the potential) is tilting up to achieve vertical slope. It would be interesting if one could find examples with higher order singularities, which would allow for a richer blowup analysis. Finally, in our long time singularity example, the singularity occurs along the destabilizing subvariety, and one would expect this relationship between stability and singularity formation to hold in more general settings. We hope to explore these problems in future work.
The paper is organized as follows. In Section 2 we introduce the Calabi symmetry assumption, which is used in constructing our examples. In Section 3 we construct our finite time singularity. Section 4 contains our proof
of long time existence in the supercritical phase case. We conclude with our example of a long time singularity in Section 5.
**Acknowledgements.** This problem arose at the American Institute of Mathematics workshop "Stability in mirror symmetry," and the second author would like to thank the organizers and the institute for providing a productive research environment. In particular special thanks to Tristan C. Collins, Jason Lotay, and Felix Schulze for some helpful discussion. This work was funded in part by a Simons collaboration grant.
## 2. Calabi Symmetry
Throughout this paper we work on the blowup of \(\mathbb{P}^{n}\) at one point \(p\), which we denote by \(X\). This manifold admits \((1,1)\) forms that satisfy a symmetry ansatz, originally defined by Calabi in [1]. We include a short introduction to this ansatz here, and direct the reader to [1, 9, 14, 24, 25, 26, 27, 28] for further details.
Let \(E\) denote the exceptional divisor, and \(H\) the pullback of the hyperplane divisor from \(\mathbb{P}^{n}\). These two divisors span \(H^{1,1}(X,\mathbb{R})\), and any Kahler class will lie in \(a_{1}[H]-a_{2}[E]\) with \(a_{1}>a_{2}>0\). Normalizing, assume \(X\) admits a Kahler form \(\omega\) in the class
\[[\omega]=a[H]-[E],\]
with \(a>1\). Furthermore, assume our class \([\alpha]\) satisfies
\[[\alpha]=p[H]-q[E],\]
for a choice of \(p,q\in\mathbb{R}\).
On \(X\backslash(H\cup E)\cong\mathbb{C}^{n}\backslash\{0\}\), set \(\rho=\log(|z|^{2})\). If \(u(\rho)\in C^{\infty}(\mathbb{R})\) satisfies \(u^{\prime}(\rho)>0\), \(u^{\prime\prime}(\rho)>0\), then \(\omega=i\partial\bar{\partial}u\) defines a Kahler form on \(\mathbb{C}^{n}\backslash\{0\}\). For \(\omega\) to extend to a Kahler form on \(X\) in the class \(a[H]-[E]\), \(u\) must satisfy boundary asymptotics. Specifically, define \(U_{0},U_{\infty}:[0,\infty)\to\mathbb{R}\) via
\[U_{0}(r):=u(\text{log}r)-\text{log}r\qquad\text{and}\qquad U_{\infty}(r):=u(- \text{log}r)+a\text{log}r.\]
Assume \(U_{0}\) and \(U_{\infty}\) extend by continuity to smooth functions at \(r=0\), with both \(U_{0}^{\prime}(0)>0\) and \(U_{\infty}^{\prime}(0)>0\). This fixes the asymptotic behavior of \(u\)
\[\lim_{\rho\to-\infty}u^{\prime}(\rho)=1,\qquad\lim_{\rho\to\infty}u^{\prime}( \rho)=a,\]
and ensures \(\omega=i\partial\bar{\partial}u\) extends to a Kahler form on \(X\) in the correct class.
Next, given a function \(v(\rho)\in C^{\infty}(\mathbb{R})\), the Hessian \(i\partial\bar{\partial}v(\rho)\) defines a \((1,1)\) form \(\alpha\) on \(\mathbb{C}^{n}\backslash\{0\}\). In order for \(\alpha\) to extend to \(X\) in the class \([\alpha]=p[H]-q[E]\), we require similar asymptotics without the positivity assumptions, as \(\alpha\) need not be a Kahler form. Consider the functions \(V_{0},V_{\infty}:[0,\infty)\to\mathbb{R}\) defined via
\[V_{0}(r):=v(\text{log}r)-q\text{log}r\qquad\text{and}\qquad V_{\infty}(r):=v( -\text{log}r)+p\text{log}r.\]
Assume that \(V_{0}\) and \(V_{\infty}\) extend by continuity to smooth functions at \(r=0\), which implies \(v(\rho)\) satisfies:
\[\lim_{\rho\to-\infty}v^{\prime}(\rho)=q,\qquad\lim_{\rho\to\infty}v^{\prime}( \rho)=p.\]
Then \(i\partial\bar{\partial}v\) extends to a smooth (1,1) form on \(X\) in the class \([\alpha]\).
We refer to forms \(\omega\) and \(\alpha\) constructed in the above manner as having _Calabi Symmetry_. Restricting to \(\mathbb{C}^{n}\backslash\{0\}\), one can check that in this case the eigenvalues of \(\omega^{-1}\alpha\) are \(\frac{v^{\prime}}{u^{\prime}}\) with multiplicity \((n-1)\), and \(\frac{v^{\prime\prime}}{u^{\prime\prime}}\) with multiplicity one (for a proof of this see [9]). Furthermore, because \(u^{\prime\prime}>0\), the first derivative \(u^{\prime}\) is monotone increasing, allowing us to use Legendre transform coordinates and view \(u^{\prime}\) as a real variable, denoted by \(x\), which ranges from \(1\) to \(a\). One can then write \(v^{\prime}\) as a graph \(f\) over \(x\in(1,a)\), so we have \(f(x)=f(u^{\prime}(\rho))=v^{\prime}(\rho)\). Taking the derivative of both sides with respect to \(\rho\) gives
\[f^{\prime}(x)u^{\prime\prime}(\rho)=v^{\prime\prime}(\rho).\]
We allow the slight abuse of notation where \(f^{\prime}\) denotes the derivative of \(f\) with respect to the variable \(x\), and \(u^{\prime\prime}\) and \(v^{\prime\prime}\) denote the second derivatives with respect to the variable \(\rho\). By the above, the eigenvalues of \(\omega^{-1}\alpha\) are
\[\frac{v^{\prime}}{u^{\prime}}=\frac{f}{x}\left(\text{with multiplicity}\,n-1 \right)\qquad\text{and}\qquad\frac{v^{\prime\prime}}{u^{\prime\prime}}=f^{ \prime}.\]
As \(x\to 1\), we have \(\rho\to-\infty\), while \(x\to a\) implies \(\rho\to\infty\). Thus the asymptotics of \(v(\rho)\) imply
\[\lim_{x\to 1^{+}}f(x)=q,\qquad\lim_{x\to a^{-}}f(x)=p,\]
and we extend \(f(x)\) to the boundary \([1,a]\) by continuity.
In this form, the dHYM equation can be written as an ODE
\[\operatorname{Im}\left(e^{-i\hat{\theta}}\left(1+i\frac{f}{x}\right)^{n-1} \left(1+if^{\prime}\right)\right)=0 \tag{2.1}\]
subject to the boundary constraints \(f(1)=q\), \(f(a)=p\). Furthermore the Lagrangian angle given by the eigenvalues of \(\omega^{-1}\alpha\) can be expressed as
\[\Theta(x):=(n-1)\arctan\left(\frac{f}{x}\right)+\arctan\left(f^{\prime}\right).\]
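Numerically, the equivalence between (2.1) and the condition that \(\Theta(x)\) equals \(\hat{\theta}\) modulo \(\pi\) is immediate to check; the snippet below is our own illustration, using arbitrary sample values for \(n\), \(x\), \(f\) and \(f^{\prime}\) and taking \(\hat{\theta}=\Theta(x)\) itself.

```python
# Check that Im(e^{-i*Theta} (1 + i f/x)^{n-1} (1 + i f')) vanishes when
# Theta = (n-1) arctan(f/x) + arctan(f'), the angle form of (2.1).
import cmath, math

n, x, f, fp = 3, 1.5, 0.7, 0.2                       # arbitrary sample values
Theta = (n - 1)*math.atan(f/x) + math.atan(fp)
w = (1 + 1j*f/x)**(n - 1) * (1 + 1j*fp)
print(abs((cmath.exp(-1j*Theta) * w).imag) < 1e-12)  # True
```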
Because \(\alpha=i\partial\bar{\partial}v\), in our setting we can write the line bundle mean curvature flow as
\[\dot{v}=\Theta(x)-\hat{\theta}.\]
In order to arrive at an equation for \(f\) we take the derivative of both sides with respect to \(\rho\) and see
\[\frac{d\dot{v}}{d\rho}=\frac{d\Theta}{dx}\frac{dx}{d\rho}.\]
This now becomes
\[\dot{f}=L(f):=u^{\prime\prime}\left(\frac{f^{\prime\prime}}{1+f^{\prime 2}}+(n-1) \frac{xf^{\prime}-f}{x^{2}+f^{2}}\right)=u^{\prime\prime}\Theta^{\prime}. \tag{2.2}\]
We have now defined a second order parabolic equation for \(f\), to which a solution can be integrated in \(\rho\) to arrive at a solution of the line bundle mean curvature flow (1.2). Note that \(u^{\prime\prime}(1)=u^{\prime\prime}(a)=0\), so the flow fixes the boundary values of \(f\). One interesting consequence of taking an extra derivative to define the flow is that it is no longer necessary to take a lift \(\hat{\theta}\) of the average angle. In this way the flow is more analogous to how a graph evolves by the mean curvature vector rather than how a potential evolves by the Lagrangian angle.
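Although the analysis below is purely PDE-theoretic, the flow (2.2) is simple to simulate. The following sketch is an explicit finite-difference scheme with model data of our own choosing, including a model profile for \(u^{\prime\prime}\) that vanishes at the endpoints, so the boundary values of \(f\) stay fixed.

```python
# Explicit Euler scheme for f_t = u''(x) ( f''/(1+f'^2) + (n-1)(x f' - f)/(x^2+f^2) ).
import numpy as np

n_dim, a, q, p, k = 3, 2.0, 0.5, 1.5, 1.0          # model data, not from the paper
N = 201
x = np.linspace(1.0, a, N)
dx = x[1] - x[0]
f = q + (p - q)*(x - 1.0)/(a - 1.0)                # initial graph with f(1)=q, f(a)=p
upp = k*(x - 1.0)*(a - x)                          # model u''(x), vanishing at x=1, a

dt = 0.2*dx**2                                     # conservative parabolic step
for _ in range(5000):
    fp = np.gradient(f, dx)
    fpp = np.zeros_like(f)
    fpp[1:-1] = (f[2:] - 2.0*f[1:-1] + f[:-2])/dx**2
    rhs = upp*(fpp/(1.0 + fp**2) + (n_dim - 1)*(x*fp - f)/(x**2 + f**2))
    f[1:-1] += dt*rhs[1:-1]                        # u'' = 0 at endpoints: boundary fixed
print("max |f_t| at final step:", np.abs(rhs[1:-1]).max())
```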
The flow (2.2) is defined on graphs over \([1,a]\) with fixed boundary. However, we can generalize it to curves, which is useful in order to construct barriers. Consider the region \(D:=\{(x,y)\in\mathbb{R}^{2}\,|\,1\leq x\leq a\}\). Let \(\gamma_{t}(s):I\subseteq\mathbb{R}\to D\) be a family of smooth curves, and let \(s\) be the arc-length parameter. Let
\[\kappa=\frac{d}{ds}\arctan\gamma^{\prime}\]
denote the usual plane curvature, and let
\[\xi=\frac{d}{ds}\arctan\gamma\]
be an extrinsic quantity. Consider the flow
\[\dot{\gamma}=u^{\prime\prime}(\gamma)\left(\kappa+(n-1)\xi\right)\mathbf{N}, \tag{2.3}\]
where the normal vector \(\mathbf{N}\) is defined by \(e^{i\frac{\pi}{2}}\gamma^{\prime}\), and \(u^{\prime\prime}(\gamma)\) is defined to be the function \(u^{\prime\prime}\) applied to the \(x\)-coordinate of \(\gamma\). Notice the relationship between this flow and the curve shortening flow \(\dot{\gamma}=\kappa\mathbf{N}\).
In the case where \(\gamma(x)=(x,f(x))\) is a graph of a function, we have \(ds=\sqrt{1+f^{\prime 2}}dx\). Simple computations show
\[\langle\dot{\gamma},\mathbf{N}\rangle=\frac{\dot{f}}{\sqrt{1+f^{\prime 2}}}, \qquad\kappa=\frac{f^{\prime\prime}}{(1+f^{\prime 2})^{3/2}},\qquad\xi= \frac{1}{\sqrt{1+f^{\prime 2}}}\frac{xf^{\prime}-f}{x^{2}+f^{2}}.\]
Hence, (2.3) reduces to (2.2) in this case, and thus is the correct generalization to curves.
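This reduction is also easy to verify symbolically; the following sketch (our own check, with \(u^{\prime\prime}\) treated as an unspecified positive symbol) confirms that \(\langle\dot{\gamma},\mathbf{N}\rangle\sqrt{1+f^{\prime 2}}\) reproduces the right-hand side of (2.2).

```python
# Symbolic check that the curve flow (2.3) reduces to (2.2) for a graph (x, f(x)).
import sympy as sp

x, n, upp = sp.symbols('x n upp', positive=True)
f = sp.Function('f')(x)
fp, fpp = sp.diff(f, x), sp.diff(f, x, 2)

kappa = fpp/(1 + fp**2)**sp.Rational(3, 2)           # plane curvature of the graph
xi = (x*fp - f)/((x**2 + f**2)*sp.sqrt(1 + fp**2))   # the extrinsic quantity xi

fdot = upp*(kappa + (n - 1)*xi)*sp.sqrt(1 + fp**2)   # <gamma_t, N> * sqrt(1 + f'^2)
target = upp*(fpp/(1 + fp**2) + (n - 1)*(x*fp - f)/(x**2 + f**2))
assert sp.simplify(fdot - target) == 0
print("graph reduction of (2.3) to (2.2) verified")
```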
## 3. A finite time singularity
Consider a real number \(R>1\) (to be determined later), and set \(a=6R\). As above, consider a function \(u:\mathbb{R}\to\mathbb{R}\) so that \(\omega:=i\partial\bar{\partial}u\) extends from \(\mathbb{C}^{n}\backslash\{0\}\) to a Kahler form on \(X\) in the class \(a[H]-[E]\). Furthermore, assume that \(u^{\prime\prime}<R\), and that there exists a small constant \(k\) so that \(u^{\prime\prime}(x)\geq k(x-1)(a-x)\) on \([1,a]\) (which is possible since, by the Calabi symmetry assumptions, \(u^{\prime\prime}(x)\) vanishes linearly in \(x\) at the boundary). We remark that one should be able to construct a similar singularity example for any Kahler
form satisfying Calabi symmetry, however we include our extra assumptions on \(u\) and \(R\) for the ease of presentation.
The idea is as follows. Consider a class \([\alpha]=p[H]-q[E]\) and assume \(p\geq a\). Define a representative \(\alpha_{0}\) via the function \(f_{0}(x)\), which has a graph such as in Figure 1. We construct a family of shrinking circles, and a traveling family of hyperbolas, which are subsolutions to (2.3). If \(f_{t}\) is the evolution of \(f_{0}\) via the line bundle MCF (2.2), and \(f_{0}\) avoids the initial circle and hyperbola at time \(t=0\), then by the maximum principle \(f_{t}\) must avoid these families for all time. The hyperbolas push out past the center of the circles before they shrink to a point, forcing \(f_{t}\) to achieve vertical slope at some finite time.
We first construct our family of hyperbolas. Observe that both \(\kappa\) and \(\xi\) are invariant under orthogonal transformation. Hence, by interchanging the \(x\) and \(y\) coordinates, we have the following lemma.
**Lemma 3.1**.: _Suppose \(y=f_{t}(x)\) satisfies the flow (2.2). If the inverse \(x=f_{t}^{-1}(y)=:h_{t}(y)\) exists, then \(h_{t}(y)\) satisfies_
\[\dot{h}=u^{\prime\prime}(h(y))\left(\frac{h^{\prime\prime}}{1+h^{\prime 2}}+(n- 1)\frac{yh^{\prime}-h}{y^{2}+h^{2}}\right). \tag{3.1}\]
**Lemma 3.2**.: _Suppose \(b(t):[0,T)\to\mathbb{R}\) satisfies the initial value problem:_
\[\dot{b}=-\frac{kb_{\infty}(b_{\infty}-1)(a^{2}-b_{0}^{2})(b-b_{\infty})b^{3}}{ a(a^{2}-b_{\infty}^{2})(2a^{2}-b_{\infty}^{2})^{2}} \tag{3.2}\]
Figure 1. The graph of a function \(f_{0}\) which forms a singularity.
_where \(1<b_{\infty}<b_{0}<a\) are constants and \(b(0)=b_{0}\). Then_
\[g_{t}(y):=\sqrt{\frac{a^{2}-b^{2}}{a^{2}-b_{\infty}^{2}}y^{2}+b^{2}}\]
_is a sub-solution to equation (3.1) for \(y\in\left[-\sqrt{a^{2}-b_{\infty}^{2}},\sqrt{a^{2}-b_{\infty}^{2}}\right]\)._
Proof.: For simplicity, write
\[m=\frac{a^{2}-b^{2}}{a^{2}-b_{\infty}^{2}},\qquad 1-m=\frac{b^{2}-b_{\infty}^{2}} {a^{2}-b_{\infty}^{2}}.\]
We also write \(g=g_{t}\) for notational simplicity. Notice that \(b_{0}\geq b>b_{\infty}\) from the initial value problem, so \(m<1\). We compute
\[g^{\prime}=\frac{my}{\sqrt{my^{2}+b^{2}}}=\frac{my}{g},\]
which in turn gives
\[g^{\prime\prime}=\frac{m}{g}-\frac{myg^{\prime}}{g^{2}}=\frac{m}{g}-\frac{m^{ 2}y^{2}}{g^{3}}=\frac{mg^{2}-m^{2}y^{2}}{g^{3}}=\frac{mb^{2}}{g^{3}}.\]
Furthermore the two expressions from (3.1) can be written as
\[\frac{g^{\prime\prime}}{1+g^{\prime 2}}=\frac{mb^{2}}{g(g^{2}+m^{2}y^{2})},\]
and
\[\frac{yg^{\prime}-g}{y^{2}+g^{2}}=\frac{my^{2}-g^{2}}{g(y^{2}+g^{2})}=\frac{- b^{2}}{g(g^{2}+y^{2})}.\]
Thus
\[L(g) :=u^{\prime\prime}(g(y))\left(\frac{g^{\prime\prime}}{1+g^{\prime 2 }}+(n-1)\frac{yg^{\prime}-g}{y^{2}+g^{2}}\right)\] \[=u^{\prime\prime}(g(y))\left(\frac{mb^{2}}{g(g^{2}+m^{2}y^{2})}- (n-1)\frac{b^{2}}{g(g^{2}+y^{2})}\right)\] \[\leq u^{\prime\prime}(g(y))\left(\frac{mb^{2}}{g(g^{2}+m^{2}y^{2}) }-\frac{b^{2}}{g(g^{2}+y^{2})}\right)\] \[=u^{\prime\prime}(g(y))\frac{-(1-m)b^{4}}{g(g^{2}+m^{2}y^{2})(g^{ 2}+y^{2})}\] \[\leq k(g-1)(a-g)\frac{-(1-m)b^{4}}{g(g^{2}+m^{2}y^{2})(g^{2}+y^{2} )}\leq 0,\]
where the last inequality follows from our assumption \(u^{\prime\prime}(x)\geq k(x-1)(a-x)\).
Now, observe that
\[(a-g)(a+g)=a^{2}-my^{2}-b^{2}=m\left(\frac{a^{2}-b^{2}}{m}-y^{2}\right)=m(a^{2 }-b_{\infty}^{2}-y^{2}).\]
The right hand side is non-negative when \(y\in\left[-\sqrt{a^{2}-b_{\infty}^{2}},\sqrt{a^{2}-b_{\infty}^{2}}\right]\). As a result
\[L(g) \leq\frac{-k(g-1)m(a^{2}-b_{\infty}^{2}-y^{2})(1-m)b^{4}}{g(a+g)(g^ {2}+m^{2}y^{2})(g^{2}+y^{2})}\] \[=\frac{-k(a^{2}-b_{\infty}^{2}-y^{2})(g-1)(a^{2}-b^{2})(b+b_{ \infty})(b-b_{\infty})b^{4}}{g(a+g)(a^{2}-b_{\infty}^{2})^{2}(g^{2}+m^{2}y^{2}) (g^{2}+y^{2})},\]
where we plugged in the definition of \(m\) and \((1-m)\). Because the above expression is negative, the inequalities \(m<1\), \(b_{\infty}\leq g\leq a\), and \(b_{\infty}<b\leq b_{0}\), allow us to conclude
\[L(g)\leq\frac{-k(a^{2}-b_{\infty}^{2}-y^{2})(b_{\infty}-1)(a^{2}-b_{0}^{2})2b _{\infty}(b-b_{\infty})b^{4}}{g(2a)(a^{2}-b_{\infty}^{2})^{2}(2a^{2}-b_{\infty }^{2})^{2}}.\]
Next, we turn to the evolution of \(g\):
\[\dot{g}=\frac{\dot{m}y^{2}+2b\dot{b}}{2g}=\frac{\frac{-2b\dot{b}} {a^{2}-b_{\infty}^{2}}y^{2}+2b\dot{b}}{2g} =\frac{b\dot{b}}{g}\left(1-\frac{y^{2}}{a^{2}-b_{\infty}^{2}}\right)\] \[=\frac{b\dot{b}(a^{2}-b_{\infty}^{2}-y^{2})}{g(a^{2}-b_{\infty}^{ 2})}\leq 0.\]
Putting everything together we arrive at
\[\dot{g}-L(g)\geq\frac{b(a^{2}-b_{\infty}^{2}-y^{2})}{g(a^{2}-b_{\infty}^{2})}\left(\dot{b}+\frac{k(b_{\infty}-1)(a^{2}-b_{0}^{2})b_{\infty}(b-b_{\infty})b^{3}}{a(a^{2}-b_{\infty}^{2})(2a^{2}-b_{\infty}^{2})^{2}}\right).\]
The right hand side is zero by the initial value problem. Hence we have demonstrated \(\dot{g}-L(g)\geq 0\).
We now solve the initial value problem (3.2), and compute the time for which the hyperbola pushes out a specified distance. Set \(b_{\infty}=R\), and \(b_{0}=5R\). Recall \(a=6R\), so \(1<b_{\infty}<b_{0}<a\). Note there exists a constant \(M>0\) such that
\[C_{1}:=k\frac{b_{\infty}(b_{\infty}-1)(a^{2}-b_{0}^{2})}{a(a^{2}-b_{\infty}^{ 2})(2a^{2}-b_{\infty}^{2})^{2}}\geq\frac{1}{MR^{3}}.\]
The differential equation (3.2) is separable, yielding
\[-C_{1}dt=\frac{db}{(b-b_{\infty})b^{3}},\]
which has the solution
\[-C_{1}t+C_{0}=\frac{b_{\infty}^{2}+2b^{2}\log(b-b_{\infty})+2b_{\infty}b-2b^{ 2}\log(b)}{2b_{\infty}^{3}b^{2}}\]
where \(C_{0}\) is given by the initial value \(b(0)=b_{0}=5R\). Plugging in \(t=0\) we see directly that
\[C_{0}=\frac{11/50+\log(4/5)}{R^{3}}.\]
Let \(T\) be the time such that \(b(T)=2R\). Then, we have
\[T=\frac{1}{C_{1}}\left(\frac{11/50+\log(4/5)}{R^{3}}-\frac{5/8-\log(2)}{R^{3}} \right)\leq A \tag{3.3}\]
for some constant \(A\).
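Although not needed for the argument, the bound (3.3) can be checked numerically from the closed-form solution; in the sketch below the constant \(k\) from the lower bound on \(u^{\prime\prime}\) is an assumed placeholder. The printed values illustrate that \(T\) stays essentially constant while the circle lifetime \(R/4\) (used below) grows linearly in \(R\):

```python
import math

def blowup_times(R, k=1.0):
    # Constants from Section 3: b_inf = R, b_0 = 5R, a = 6R; k is an
    # assumed placeholder for the constant in u''(x) >= k(x-1)(a-x).
    b_inf, b0, a = R, 5 * R, 6 * R
    C1 = k * b_inf * (b_inf - 1) * (a**2 - b0**2) / (
        a * (a**2 - b_inf**2) * (2 * a**2 - b_inf**2) ** 2)
    C0 = (11 / 50 + math.log(4 / 5)) / R**3   # from b(0) = 5R
    CT = (5 / 8 - math.log(2)) / R**3         # solution value at b(T) = 2R
    T = (C0 - CT) / C1                        # time to push out to x = 2R
    return T, R / 4                           # compare with circle lifetime

for R in (1e2, 1e4, 1e6):
    T, circle = blowup_times(R)
    print(f"R = {R:8.0e}:  T = {T:10.2f},  R/4 = {circle:12.1f}")
```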
**Proposition 3.3**.: _Let \(\gamma(t)\) satisfy (2.3). If \(\gamma(0)\) does not intersect the hyperbola \(g_{0}(y)\), then \(\gamma(t)\) does not intersect \(g_{t}(y)\) for as long as the flow is defined._
Proof.: Suppose \(p=(x_{0},y_{0})\) is the first point of intersection of the two curves, occurring at time \(t_{0}\). Since the hyperbola \(g_{t}\) never achieves horizontal slope, we can assume near \(p\) that \(\gamma(t)\) is a graph of a function \(h_{t}(y)\) over the ball \(B_{\delta}(y_{0})\) in the \(y\)-axis solving (3.1). Without loss of generality, for \(0\leq t<t_{0}\) assume that \(g_{t}(y)>h_{t}(y)\) over \(B_{\delta}(y_{0})\). Then over the region \(B_{\delta}(y_{0})\times[0,t_{0})\), we see \((\frac{d}{dt}-L)(g-h)\geq 0\), yet \(g-h>0\) on the parabolic boundary. The result follows from the maximum principle.
Next we turn to the family of shrinking circles which act as a barrier. Since \(\xi\) is relatively small for a curve far away from the origin, (2.3) behaves similarly to the curve shortening flow in this case. The idea is to consider a family of circles far away from the origin which evolve slightly faster than curve shortening flow, in order to absorb the small \(\xi\) term.
**Proposition 3.4**.: _For \(R=a/6>1\) as above, assume the graph of \(f_{0}(x)\) does not intersect the ball \(B_{R}(3R,y_{0})\). Then, for \(y_{0}\) sufficiently negative, the family of shrinking balls \(B_{\sqrt{R^{2}-4Rt}}(3R,y_{0})\) does not intersect the family of graphs of \(f_{t}(x)\) evolving via (2.2), as long as the flow is defined._
Proof.: Locally, we can write \(\phi_{t}(x)=-\sqrt{r(t)^{2}-(x-3R)^{2}}+y_{0}\) as the equation representing the lower boundary of the shrinking balls, where \(r(t)=\sqrt{R^{2}-4Rt}\). Direct computation gives
\[u^{\prime\prime}\frac{\phi^{\prime\prime}}{1+\phi^{\prime 2}}-\dot{\phi}=\frac{u ^{\prime\prime}-2R}{\sqrt{r^{2}-(x-3R)^{2}}}<\frac{-R}{\sqrt{r^{2}-(x-3R)^{2} }},\]
since by assumption \(u^{\prime\prime}<R\). Suppose \(t=t_{0}\) is the first time the graph of \(\phi_{t}\) intersects \(f_{t}\) from above at a point \(x_{0}\). At this point of intersection we have \(f^{\prime}_{t_{0}}(x_{0})=\phi^{\prime}_{t_{0}}(x_{0})\), \(\dot{f}_{t_{0}}(x_{0})\geq\dot{\phi}_{t_{0}}(x_{0})\), and we can assume \(f_{t}(x)<\phi_{t}(x)\) for all \(t<t_{0}\), and so \(f^{\prime\prime}_{t_{0}}(x_{0})\leq\phi^{\prime\prime}_{t_{0}}(x_{0})\). Then at \(t=t_{0}\), \(x=x_{0}\), we have
\[\dot{f}-\dot{\phi} =-\dot{\phi}+u^{\prime\prime}\left(\frac{f^{\prime\prime}}{1+f^{ \prime 2}}+(n-1)\frac{x_{0}f^{\prime}-f}{x_{0}^{2}+f^{2}}\right)\] \[\leq-\dot{\phi}+u^{\prime\prime}\left(\frac{\phi^{\prime\prime}} {1+\phi^{\prime 2}}+(n-1)\frac{x_{0}\phi^{\prime}-\phi}{x_{0}^{2}+\phi^{2}}\right)\] \[<-\frac{R}{\sqrt{r^{2}-(x_{0}-3R)^{2}}}+u^{\prime\prime}(n-1) \frac{x_{0}\phi^{\prime}-\phi}{x_{0}^{2}+\phi^{2}}.\]
To achieve a contradiction we need to show that for \(y_{0}\) sufficiently negative the right hand side above is negative. To control the \(\phi^{\prime}\) term we can compute directly
\[-\frac{R}{\sqrt{r^{2}-(x_{0}-3R)^{2}}}+\frac{u^{\prime\prime}(n-1)x_{0}\phi^{ \prime}}{x_{0}^{2}+\phi^{2}}=\frac{-R(x_{0}^{2}+\phi^{2})+(n-1)u^{\prime\prime }(x_{0}-3R)}{(x_{0}^{2}+\phi^{2})\sqrt{r^{2}-(x_{0}-3R)^{2}}}.\]
Recall that by assumption \(u^{\prime\prime}<R\). Choose \(y_{0}\) sufficiently negative to ensure \(-(x_{0}^{2}+\phi^{2})+(n-1)(x_{0}-3R)\leq-\frac{1}{2}(x_{0}^{2}+\phi^{2})\). Then
\[-\frac{R}{\sqrt{r^{2}-(x_{0}-3R)^{2}}}+\frac{u^{\prime\prime}(n-1)x_{0}\phi^{ \prime}}{x_{0}^{2}+\phi^{2}}\leq\frac{-R}{2\sqrt{r^{2}-(x_{0}-3R)^{2}}}<-\frac {1}{2}\]
since \(r<R\). We have now demonstrated that
\[\dot{f}-\dot{\phi}<-\frac{1}{2}-\frac{u^{\prime\prime}(n-1)\phi}{x_{0}^{2}+ \phi^{2}}.\]
The function \(\phi\) is negative, so the second term on the right hand side above is positive. However, we can choose \(y_{0}\) sufficiently negative so that this term is less than \(\frac{1}{2}\), and the result follows.
We now demonstrate the existence of a singularity using our two subsolutions constructed above.
For \(R>1\), set \(b_{\infty}=R\), \(b_{0}=5R\), and \(a=6R.\) Consider the circle of radius \(R\) centered around \((3R,y_{0})\), with \(y_{0}\) sufficiently negative so that the hypothesis of Proposition 3.4 is satisfied. The right side of the circle lies on the line \(x=4R\). Note that the vertex of the hyperbola \(g_{0}(y)\) lies on the line \(x=b_{0}=5R\). Furthermore, the hyperbola intersects \(x=a\) at \(y=\pm\sqrt{35R^{2}}\). Since \(p>a=6R\), we see \((a,p)\) lies above the top of the hyperbola \(g_{0}(y)\). Thus, it is possible to choose a function \(f_{0}:[1,a]\to\mathbb{R}\) with \(f_{0}(1)=q\), \(f_{0}(a)=p\), such that \(f_{0}\) goes below \(B_{R}(3R,y_{0})\), then increases above the hyperbola \(g_{0}(y)\) before arriving at \((a,p)\).
Figure 2. The maximum principle forces \(f_{t}\) to achieve vertical slope.
Let \(f_{t}(x)\) be the solution of (2.2) starting at \(f_{0}\). By Proposition 3.3 and Proposition 3.4, \(f_{t}(x)\) can not intersect \(g_{t}(y)\) nor \(B_{\sqrt{R^{2}-4Rt}}(3R,y_{0})\) as long as the flow is defined. Note it takes time \(t=R/4\) for \(B_{\sqrt{R^{2}-4Rt}}(3R,y_{0})\) to shrink to a point. Also, if \(T\) is the time the hyperbola \(g_{T}(y)\) has pushed out to the line \(x=2R\), as we have seen by (3.3) there exists a constant \(A\) such that \(T\leq A\). Hence, choose \(R\) large enough to ensure \(A<R/4\) and thus \(T<R/4\), which implies the hyperbola will push past the center of the shrinking circles before they completely disappear. This forces \(f_{t}\) to first have a vertical tangency, as illustrated in Figure 2, demonstrating the existence of a finite time singularity and proving Theorem 1.1.
## 4. Long time existence
The example above shows that a finite-time singularity for the flow (2.2) can occur in the interior of the interval \((1,a)\). In particular, one can not always expect \(\sup_{(1,a)}|f_{t}^{\prime}(x)|\) to stay bounded on finite time intervals. The main goal of this section is to rule out a finite time singularity if one chooses a sufficiently nice initial function \(f_{0}(x)\), specifically where the corresponding \((1,1)\) form \(\alpha_{0}\) has supercritical phase. This is an important step towards the construction of a singularity at infinite time.
As a first step we show that along the flow, the first derivative \(|f_{t}^{\prime}(x)|\) stays bounded at the boundary points \(x=1\) and \(x=a\). In fact, our boundary estimate does not need the supercritical phase assumption.
**Proposition 4.1**.: _Suppose \(f_{t}(x)\) is defined on \((t,x)\in[0,T)\times[1,a]\). Then, there exist uniform constants \(A,B\) so that_
\[|f_{t}^{\prime}(1)|+|f_{t}^{\prime}(a)|\leq Ae^{Bt}.\]
Proof.: We will show \(|f^{\prime}(1)|<C(T)\), as the other boundary point is treated similarly. Consider \(g_{t}(x)=q+Ae^{B(n-1)t}(x-1)\). Choose \(A\gg 0\) sufficiently large to ensure both \(Ae^{B(n-1)t}\geq 2\max\{|q|,|q|^{-1}\}\) and \(f_{0}<g_{0}\) for all \(x\in(1,a]\). Choose \(B\gg 0\) so that \(u^{\prime\prime}<B(x-1)\). We claim that \(f_{t}<g_{t}\) for all time \(t\in[0,T)\).
Suppose not, and assume the curves touch for the first time at \(x=x_{0}>1\) and \(t=t_{0}\). Then, \(f_{t_{0}}(x_{0})=g_{t_{0}}(x_{0})\), \(f_{t_{0}}^{\prime}(x_{0})=g_{t_{0}}^{\prime}(x_{0})\), \(f_{t_{0}}^{\prime\prime}(x_{0})\leq g_{t_{0}}^{\prime\prime}(x_{0})\)
and \(\dot{f}_{t_{0}}(x_{0})\geq\dot{g}_{t_{0}}(x_{0})\). Thus, when \(x=x_{0}\), \(t=t_{0}\) we have
\[\dot{f} =u^{\prime\prime}\left(\frac{f^{\prime\prime}}{1+f^{\prime 2}}+(n-1)\frac{x_{0}f^{\prime}-f}{x_{0}^{2}+f^{2}}\right)\] \[\leq B(x_{0}-1)\left(\frac{g^{\prime\prime}}{1+g^{\prime 2}}+(n-1)\frac{x_{0}g^{\prime}-g}{x_{0}^{2}+g^{2}}\right)\] \[=B(x_{0}-1)(n-1)\frac{Ax_{0}e^{B(n-1)t_{0}}-q-Ae^{B(n-1)t_{0}}(x_{0}-1)}{x_{0}^{2}+(q+Ae^{B(n-1)t_{0}}(x_{0}-1))^{2}}\] \[<ABe^{B(n-1)t_{0}}(x_{0}-1)(n-1)\frac{1-A^{-1}e^{-B(n-1)t_{0}}q}{1+q^{2}},\]
since \(x_{0}^{2}+(q+Ae^{B(n-1)t_{0}}(x_{0}-1))^{2}>1+q^{2}\). Furthermore by assumption on \(A\) we have \(-A^{-1}e^{-B(n-1)t_{0}}q\leq\frac{1}{2}q^{2}\), and so
\[\frac{1-A^{-1}e^{-B(n-1)t_{0}}q}{1+q^{2}}\leq 1.\]
Hence,
\[\dot{f}<ABe^{B(n-1)t_{0}}(x_{0}-1)(n-1)=\dot{g},\]
a contradiction. Thus \(g_{t}\) serves as a barrier giving an upper bound for the derivative \(f^{\prime}(1)\leq Ae^{B(n-1)t}\). The lower bound is treated similarly.
We now turn to the case where we do have long time existence, namely when \(n\geq 3\) and \(\alpha_{0}\) has supercritical phase, i.e. \(\Theta(\alpha_{0})>(n-2)\frac{\pi}{2}\).
**Lemma 4.2**.: _The supercritical phase condition is preserved along the flow._
Proof.: On a general Kahler manifold \((X,\omega)\), set \(\omega=ig_{\bar{k}j}dz^{j}\wedge d\bar{z}^{k}\) and \(\alpha=i\alpha_{\bar{k}j}dz^{j}\wedge d\bar{z}^{k}\). Consider the metric \(\eta_{\bar{k}j}=g_{\bar{k}j}+\alpha_{\bar{k}p}g^{\bar{q}p}\alpha_{\bar{q}j}\). By equation (5.4) in [15], the angle \(\Theta(\alpha_{t})\) evolves via the heat equation
\[\dot{\Theta}(\alpha_{t})=\Delta_{\eta}\Theta(\alpha_{t}), \tag{4.1}\]
and thus the result follows from the maximum principle.
**Lemma 4.3**.: _If \(f_{0}(x)\) satisfies the supercritical phase assumption, \(f_{t}(x)>0\) for all \(t\geq 0\)._
Proof.: Suppose there exists a time \(t_{0}\) and a point \(x_{0}\) where \(f_{t_{0}}(x_{0})\leq 0\). This implies \(\arctan\Bigl{(}\frac{f}{x_{0}}\Bigr{)}\leq 0\). Yet \(\Theta(x_{0}):=(n-1)\arctan\left(\frac{f}{x_{0}}\right)+\arctan\left(f^{\prime}\right),\) and so the supercritical phase assumption implies
\[\arctan\left(f^{\prime}\right)>(n-2)\frac{\pi}{2},\]
which is impossible for \(n\geq 3\).
**Lemma 4.4**.: _Under the supercritical phase assumption there exists a uniform constant \(C\) so that \(f^{\prime}_{t}(x)>-C\) for all \(t\geq 0\)._
Proof.: By the supercritical phase condition
\[\arctan\left(f_{t}^{\prime}\right)>(n-2)\frac{\pi}{2}-(n-1)\arctan\left(\frac{f_ {t}}{x}\right).\]
Since \(x\geq 1\) and \(f_{t}\leq C\) by the maximum principle, there exists an \(\epsilon>0\) so that \(\arctan\left(\frac{f_{t}}{x}\right)<\frac{\pi}{2}-\epsilon\). Thus
\[\arctan\left(f_{t}^{\prime}\right)>-\frac{\pi}{2}+(n-1)\epsilon.\]
This gives a lower bound for \(f_{t}^{\prime}\).
**Proposition 4.5**.: _Under the supercritical phase assumption, a solution \(f_{t}(x)\) to (2.2) has bounded first derivative for all times \(T<\infty\). In particular, there exist uniform constants \(A,B\) so that_
\[\sup_{x\in[1,a]}|f_{t}^{\prime}(x)|\leq A(1+t)e^{Bt}.\]
Proof.: By the previous lemma we only need an upper bound for \(f_{t}^{\prime}\). By Proposition 4.1 we have
\[A^{-1}e^{-Bt}\left(|f_{t}^{\prime}(1)|+|f_{t}^{\prime}(a)|\right)\leq 1.\]
As a result if \(\sup_{x\in[1,a]}A^{-1}e^{-Bt}|f_{t}^{\prime}(x)|\) is large, this supremum must be achieved at an interior point. Let \(x_{0}\) be the interior max. At this point we have \(f_{t}^{\prime}(x_{0})>0\), \(f_{t}^{\prime\prime}(x_{0})=0\), and \(f_{t}^{\prime\prime\prime}(x_{0})\leq 0\). By direct computation at \(x_{0}\) it holds
\[\dot{f}^{\prime} =\frac{d}{dx}\left(u^{\prime\prime}\left(\frac{f^{\prime\prime}}{1+f^{\prime 2}}+(n-1)\frac{xf^{\prime}-f}{x^{2}+f^{2}}\right)\right)\] \[\leq\frac{du^{\prime\prime}}{dx}(n-1)\frac{x_{0}f^{\prime}-f}{x_{0}^{2}+f^{2}}+u^{\prime\prime}\frac{d}{dx}\left(\frac{f^{\prime\prime}}{1+f^{\prime 2}}+(n-1)\frac{xf^{\prime}-f}{x^{2}+f^{2}}\right)\] \[\leq Cf^{\prime}+u^{\prime\prime}\left(\frac{f^{\prime\prime\prime}}{1+f^{\prime 2}}-(n-1)\frac{2(x_{0}f^{\prime}-f)(x_{0}+ff^{\prime})}{(x_{0}^{2}+f^{2})^{2}}\right),\]
where we repeatedly plugged in that \(f^{\prime\prime}(x_{0})=0\). Since \(f\) is positive the term \(-2x_{0}f(f^{\prime})^{2}\) is negative, and thus
\[\dot{f}^{\prime}\leq Cf^{\prime}+u^{\prime\prime}2(n-1)\frac{fx_{0}+f^{2}f^{ \prime}-x_{0}^{2}f^{\prime}}{(x_{0}^{2}+f^{2})^{2}}\leq Cf^{\prime}+C\]
for some constant \(C\).
Now, consider the function \(A^{-1}e^{-Bt}f_{t}^{\prime}(x)-Ct\). By making \(B\) larger, if necessary, we can assume \(B\geq C\). At an interior maximum we see
\[\frac{d}{dt}\left(A^{-1}e^{-Bt}f^{\prime}-Ct\right)\leq 0,\]
from which the result follows.
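In slightly more detail: at an interior maximum with \(f^{\prime}>0\), the inequality \(\dot{f}^{\prime}\leq Cf^{\prime}+C\) gives, for \(B\geq C\) and \(A\geq 1\), \[\frac{d}{dt}\left(A^{-1}e^{-Bt}f^{\prime}-Ct\right)=A^{-1}e^{-Bt}\left(\dot{f}^{\prime}-Bf^{\prime}\right)-C\leq A^{-1}e^{-Bt}\left((C-B)f^{\prime}+C\right)-C\leq 0,\] so the maximum of \(A^{-1}e^{-Bt}f^{\prime}-Ct\) cannot increase; together with the boundary control of Proposition 4.1 this rearranges to the stated bound \(f_{t}^{\prime}\leq A(1+t)e^{Bt}\) after enlarging \(A\).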
We remark that the above proof fails when the function \(f\) is not positive, since then the term \(-2x_{0}f(f^{\prime})^{2}\) is positive. Thus the best inequality one can derive in this case is \(\dot{f}^{\prime}\leq Cf^{\prime 2}\), which is certainly not enough to prevent a finite time singularity, as we have demonstrated. We are now ready to prove our second main result.
Proof of Theorem 1.2.: Let \(\alpha_{t}:=\alpha_{0}+i\partial\bar{\partial}\phi_{t}\), be the solution to (1.2) starting at \(\alpha_{0}\), and assume the flow is defined for \(t\in[0,T)\) for some time \(T<\infty\). By Proposition 4.5, all the eigenvalues of \(\omega^{-1}\alpha_{t}\) are bounded uniformly by a constant \(C_{T}\). From here the result follows from the argument outlined in Proposition 5.2 in [15].
The idea is that once the eigenvalues are bounded, the operator \(\Delta_{\eta}\) is uniformly elliptic. Given \(\Theta(\alpha_{t})\) solves the heat equation (4.1), the parabolic estimates of Krylov-Safonov ([16] Theorem 11, Section 4.2) imply \(\Theta(\alpha_{t})\) is in \(C^{\alpha}\) in time, which gives that \(\phi_{t}\) is uniformly bounded in \(C^{1,\alpha}\) in time. Now, the uniform eigenvalue bounds also imply \(\phi_{t}\) has bounded \(C^{2}\) norm. The supercritical phase assumption implies the operator \(\Theta(\cdot)\) has convex level sets, which allows us to apply Evans-Krylov theory (see Section 6 of [3]). This gives uniform \(C^{2,\alpha}\) bounds for \(\phi_{t}\) which can be bootstrapped to higher order estimates. Thus we get smooth convergence \(\phi_{t}\to\phi_{T}\) to some limit, which allows us to continue the flow past the time \(T\).
## 5. Singular behavior at \(t=\infty\)
We now construct an example where the line bundle mean curvature flow develops a singularity at infinite time along a destabilizing subvariety. Recall from Section 2 that if one assumes Calabi-symmetry at an initial time, then (1.2) can be reformulated as a flow of curves (2.3). As a first step, we construct a family of subsolutions to (2.3) in polar coordinates that converges to a stationary solution \(\gamma_{\infty}\). By [14], we know such a solution must lie on a level set of the harmonic polynomial \(\operatorname{Im}(e^{-i\hat{\theta}}z^{n})\). Write \(\gamma_{\infty}(\theta)=(x_{\infty}(\theta),y_{\infty}(\theta))=(r_{\infty}( \theta)\cos\theta,r_{\infty}(\theta)\sin\theta)\), with \(\theta\in[\theta_{\min},\theta_{\max}]\). We also assume
\[1\leq x_{\infty}(\theta)\leq a\quad\text{and}\quad x_{\infty}(\theta_{\min})= x_{\infty}(\theta_{\max})=a. \tag{5.1}\]
This leads to the following result.
**Proposition 5.1**.: _Under the assumptions_
\[r^{\prime}_{\infty}\geq 0\quad\text{and}\quad\frac{r^{\prime}_{\infty}}{r_{ \infty}}\leq 2\tan\theta, \tag{5.2}\]
_there exists a subsolution \(\gamma_{t}(\theta)=(r_{t}(\theta)\cos\theta,r_{t}(\theta)\sin\theta)\) to (2.3) such that \(\gamma_{t}\to\gamma_{\infty}\) uniformly as \(t\to\infty\)._
Proof.: We first write down (2.3) in polar coordinates. Note that \(\dot{\gamma}=(\dot{r}\cos\theta,\dot{r}\sin\theta)\), with the normal vector to \(\gamma\) given by
\[\mathbf{N}=\frac{1}{(r^{\prime 2}+r^{2})^{1/2}}(-r^{\prime}\sin\theta-r\cos \theta,r^{\prime}\cos\theta-r\sin\theta).\]
Thus \(\langle\dot{\gamma},{\bf N}\rangle=-\frac{\dot{r}r}{(r^{\prime 2}+r^{2})^{1/2}}\). In this case the extrinsic quantity \(\xi\) is simply \(\xi=\frac{d}{ds}\theta=\frac{1}{(r^{\prime 2}+r^{2})^{1/2}}\). The curvature of a plane curve in polar coordinates is given by \(\kappa=\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{(r^{\prime 2}+r^{2})^{ \frac{3}{2}}}\). Hence taking the dot product of (2.3) with \({\bf N}\) we arrive at
\[\dot{r}r=-u^{\prime\prime}\left(\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^ {\prime 2}+r^{2}}+(n-1)\right).\]
Because \(\gamma_{\infty}\) is stationary, we see (2.3) is equivalent to
\[\frac{2r_{\infty}^{\prime 2}-r_{\infty}r_{\infty}^{\prime\prime}+r_{\infty}^{2 }}{r_{\infty}^{\prime 2}+r_{\infty}^{2}}+(n-1)=0. \tag{5.3}\]
Now, let \(b=b(t):[0,\infty)\to\mathbb{R}\) be an increasing function to be determined later. We use \(b(t)\) to define \(r_{t}(\theta)\) by
\[\frac{1}{r_{t}^{2}(\theta)}=\frac{1}{1+b}\left(\frac{b}{r_{\infty}^{2}(\theta )}+\frac{\cos^{2}\theta}{a^{2}}\right). \tag{5.4}\]
For an appropriate choice of \(b(t)\), we will show that the family of curves \(\gamma_{t}(\theta)=(r_{t}(\theta)\cos\theta,r_{t}(\theta)\sin\theta)\), which form an interpolation between \(\gamma_{0}\) and \(\gamma_{\infty}\), gives a subsolution to (2.3).
Differentiating (5.4) with respect to \(\theta\), and suppressing dependence on \(t\) and \(\theta\) from our notation for simplicity, we have
\[\frac{r^{\prime}}{r^{3}}=\frac{1}{1+b}\left(\frac{br_{\infty}^{\prime}}{r_{ \infty}^{3}}+\frac{\sin(2\theta)}{2a^{2}}\right)\]
as well as
\[\frac{r^{\prime\prime}}{r^{3}}-\frac{3r^{\prime 2}}{r^{4}}=\frac{1}{1+b} \left(\frac{br_{\infty}^{\prime\prime}}{r_{\infty}^{3}}-\frac{3br_{\infty}^{ \prime 2}}{r_{\infty}^{4}}+\frac{\cos(2\theta)}{a^{2}}\right).\]
So,
\[\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{4}} =-\left(\frac{r^{\prime\prime}}{r^{3}}-\frac{3r^{\prime 2}}{r^{4}} \right)+\frac{1}{r^{2}}-\left(\frac{r^{\prime}}{r^{3}}\right)^{2}r^{2}\] \[=\frac{1}{1+b}\left(-\frac{br^{\prime\prime}_{\infty}}{r_{\infty}^ {3}}+\frac{3br^{\prime 2}_{\infty}}{r_{\infty}^{4}}-\frac{\cos(2\theta)}{a^{2}}+ \frac{b}{r_{\infty}^{2}}+\frac{\cos^{2}\theta}{a^{2}}\right.\] \[\qquad\qquad\left.-\left(\frac{br^{\prime}_{\infty}}{r_{\infty}^ {3}}+\frac{\sin(2\theta)}{2a^{2}}\right)^{2}\left(\frac{b}{r_{\infty}^{2}}+ \frac{\cos^{2}\theta}{a^{2}}\right)^{-1}\right).\]
By (5.3),
\[-\frac{br^{\prime\prime}_{\infty}}{r_{\infty}^{3}}+\frac{3br^{\prime 2}_{ \infty}}{r_{\infty}^{4}}+\frac{b}{r_{\infty}^{2}}=\frac{-b}{r_{\infty}^{4}} \left((n-1)(r^{\prime 2}_{\infty}+r_{\infty}^{2})-r^{\prime 2}_{\infty} \right).\]
Now, for notational simplicity, set
\[A=\frac{br^{\prime}_{\infty}}{r_{\infty}^{3}}+\frac{\sin(2\theta)}{2a^{2}}, \quad B=\frac{b}{r_{\infty}^{2}}+\frac{\cos^{2}\theta}{a^{2}}.\]
Then returning to the above we see
\[\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{\prime 2}+r^{2}} =\frac{1}{r^{2}}\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{4}} \left(\left(\frac{r^{\prime}}{r^{3}}\right)^{2}+\left(\frac{1}{r^{2}}\right) ^{2}\right)^{-1}\] \[=\frac{B}{A^{2}+B^{2}}\left(\frac{-b}{r_{\infty}^{4}}\left((n-1)( r^{\prime 2}_{\infty}+r_{\infty}^{2})-r^{\prime 2}_{\infty}\right)+\frac{\sin^{2} \theta}{a^{2}}-\frac{A^{2}}{B}\right).\]
Hence
\[\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{\prime 2}+r^{2}}+(n-1) =\frac{1}{A^{2}+B^{2}}\left(-(n-1)\frac{Bb}{r_{\infty}^{2}}-(n-2 )\frac{Bbr^{\prime 2}_{\infty}}{r_{\infty}^{4}}\right.\] \[\qquad\qquad\qquad\left.+\frac{B\sin^{2}\theta}{a^{2}}+(n-2)A^{2 }+(n-1)B^{2}\right).\]
We now compute
\[-(n-1)\frac{Bb}{r_{\infty}^{2}}+(n-1)B^{2}=(n-1)B\left(B-\frac{b}{r_{\infty}^ {2}}\right)=(n-1)B\frac{\cos^{2}\theta}{a^{2}},\]
and
\[A^{2}-\frac{Bbr^{\prime 2}_{\infty}}{r_{\infty}^{4}}=\frac{br^{\prime}_{ \infty}\sin(2\theta)}{a^{2}r_{\infty}^{3}}+\frac{\sin^{2}(2\theta)}{4a^{4}}- \frac{br^{\prime 2}_{\infty}\cos^{2}\theta}{a^{2}r_{\infty}^{4}}.\]
Combining these, we have
\[\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{\prime 2}+r^{2}}+(n-1) =\frac{(n-1)B\cos^{2}\theta+B\sin^{2}\theta}{a^{2}(A^{2}+B^{2}) }+\frac{(n-2)\sin^{2}(2\theta)}{4a^{4}(A^{2}+B^{2})}\] \[+\frac{n-2}{A^{2}+B^{2}}\left(\frac{br^{\prime}_{\infty}\sin(2 \theta)}{a^{2}r_{\infty}^{3}}-\frac{br^{\prime 2}_{\infty}\cos^{2}\theta}{a^{2}r_{ \infty}^{4}}\right).\]
By assumption,
\[r^{\prime}_{\infty}\geq 0\quad\text{and}\quad\frac{r^{\prime}_{\infty}}{r_{ \infty}}\leq 2\tan\theta,\]
which implies
\[\frac{br_{\infty}^{\prime}\sin(2\theta)}{a^{2}r_{\infty}^{3}}-\frac{br_{\infty}^{ \prime 2}\cos^{2}\theta}{a^{2}r_{\infty}^{4}}\geq 0.\]
Additionally, \(r_{\infty}\), \(\sin\theta\), and \(\cos\theta\) are all bounded above and below away from zero. This implies there exists a constant \(C_{1}\) so that, for large \(b\),
\[\frac{2r^{\prime 2}-rr^{\prime\prime}+r^{2}}{r^{\prime 2}+r^{2}}+(n-1)\geq \frac{C_{1}}{b}.\]
Returning to (5.4), we take the derivative of both sides in \(t\)
\[-\frac{2\dot{r}}{r^{3}}=-\frac{\dot{b}}{(1+b)^{2}}\left(\frac{b}{r_{\infty}^{2 }}+\frac{\cos^{2}\theta}{a^{2}}-\frac{1+b}{r_{\infty}^{2}}\right)=-\frac{\dot {b}}{(1+b)^{2}}\left(\frac{\cos^{2}\theta}{a^{2}}-\frac{1}{r_{\infty}^{2}} \right).\]
Multiplying by \(-r^{4}\) and plugging in the square of (5.4) for \(r^{4}\) gives
\[2r\dot{r} =\left(\frac{\cos^{2}\theta}{a^{2}}-\frac{1}{r_{\infty}^{2}} \right)\left(\frac{b}{r_{\infty}^{2}}+\frac{\cos^{2}\theta}{a^{2}}\right)^{-2 }\dot{b}\] \[=(r_{\infty}x-ra)\left(\frac{r_{\infty}x+ra}{a^{2}r^{2}r_{\infty}^ {2}}\right)\left(\frac{b}{r_{\infty}^{2}}+\frac{\cos^{2}\theta}{a^{2}}\right)^ {-2}\dot{b}\] \[\geq(r_{\infty}x-ra)\frac{C_{2}}{b^{2}}\dot{b}\]
for some \(C_{2}>0\) whenever \(b\) is large. Note that the polar curves \(r(\theta)\) intersect the line \(x=a\) to the zeroth order, which implies there exists a constant \(C_{3}>0\) for which
\[0\geq\inf_{x\in[a-\epsilon,a]}\left(u^{\prime\prime-1}(r_{\infty}x-ra)\right) +\inf_{x\in[1,a]}(r_{\infty}x-ra)\geq-C_{3}. \tag{5.5}\]
Next, we use the same assumption on the background Kahler form as Section 3, namely, for \(x\in[1,a-\epsilon]\) we assume \(u^{\prime\prime}(x)\geq k(x-1).\) This implies
\[u^{\prime\prime} \geq k(r\cos\theta-1)\] \[=k\left(\sqrt{(1+b)\left(\frac{b}{r_{\infty}^{2}}+\frac{\cos^{2}\theta}{a^{2}}\right)^{-1}}\cos\theta-1\right)\] \[=k\left(\sqrt{(1+b)\left(\frac{b}{(r_{\infty}\cos\theta)^{2}}+\frac{1}{a^{2}}\right)^{-1}}-1\right)\] \[\geq k\left(\sqrt{(1+b)\left(b+\frac{1}{a^{2}}\right)^{-1}}-1\right).\]
For simplicity, write the right hand side above as \(C(b)\), which is a smooth positive function approaching \(0\) as \(b\to\infty\). Combining with (5.5) we arrive at,
\[\frac{2}{u^{\prime\prime}}r\dot{r}\geq-\frac{C_{2}C_{3}}{b^{2}}\left(1+\frac{ 1}{C(b)}\right)\dot{b}.\]
If \(b\) solves the initial value problem
\[\dot{b}=2\left(1+\frac{1}{C(b)}\right)^{-1}\frac{C_{1}}{C_{2}C_{3}}b;\qquad b_{0 }\gg 0,\]
then \(r_{t}(\theta)\) defines a subsolution:
\[\frac{1}{u^{\prime\prime}}r\dot{r}+\left(\frac{2r^{\prime 2}-rr^{\prime \prime}+r^{2}}{r^{\prime 2}+r^{2}}+(n-1)\right)\geq 0.\]
Notice that we require \(b_{0}\gg 0\). Thus the subsolution does not start at \(\gamma_{0}\) (given by the vertical line in Figure 3), but rather at a curve starting closer to \(\gamma_{\infty}\) in the interpolation. It then sweeps out to \(\gamma_{\infty}\) as \(t\to\infty\).
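To illustrate the behaviour of this subsolution, one can integrate the initial value problem for \(b(t)\) numerically; in the sketch below all constants are assumed placeholders, and by (5.4) the quantity \(1/(1+b)\) controls the distance between \(r_{t}\) and \(r_{\infty}\):

```python
import math

# Illustrative Euler integration of the ODE for b(t); k, a, C1, C2*C3
# are assumed placeholder constants, not values from the paper.
k, a, C1, C2C3 = 1.0, 2.0, 1.0, 1.0

def C(b):
    # C(b) = k(sqrt((1+b)/(b + 1/a^2)) - 1): positive, decreasing to 0
    return k * (math.sqrt((1 + b) / (b + 1 / a**2)) - 1)

b, dt = 10.0, 1e-3        # b_0 >> 0, as the proof requires
for step in range(1, 300001):
    b += dt * 2 * (C1 / C2C3) * b / (1 + 1 / C(b))
    if step % 100000 == 0:
        print(f"t = {step*dt:6.1f}: b = {b:10.2f}, 1/(1+b) = {1/(1+b):.2e}")
```

Since \(b(t)\) increases without bound, (5.4) forces \(r_{t}\to r_{\infty}\) uniformly, as claimed.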
We now show that the assumptions on \(r_{\infty}\) in Proposition 5.1 can be satisfied with an explicit example. As we have stated above, in [14] it was demonstrated that under the Calabi-Symmetry assumption, solutions to the dHYM equation correspond to functions \(f:[1,a]\to\mathbb{R}\), satisfying the boundary conditions \(f(1)=q\), \(f(a)=p\), so that the graph \((x,f(x))\) lies on a level curve of \(\mathrm{Im}(e^{-i\hat{\theta}}z^{n})\). Furthermore, the proof of Theorem 1 from [14] uses that if the level curve through \((1,q)\) has vertical slope, then the class \([\alpha]\) is semi-stable with respect to the stability condition (1.3), with the exceptional divisor \(E\) being the destabilizing subvariety. Thus in this case any graph \(f_{\infty}(x)\) lying on the level curve is singular with unbounded derivative at \((1,q)\), and by construction the corresponding representative of \([\alpha]\) will be singular precisely along \(E\). It is this singular graph that will be the limiting curve to the line bundle mean curvature flow.
**Lemma 5.2**.: _There exists a Kahler class \([\omega]\) and a semi-stable class \([\alpha]\) with a stationary solution \(\gamma_{\infty}\) which satisfies \(\gamma_{\infty}(\theta_{0})=(1,q)\), \(\gamma_{\infty}(\theta_{\max})=(a,p)\), and where the corresponding polar function \(r_{\infty}\) satisfies the assumptions of Proposition 5.1._
Proof.: Choose \(\gamma_{\infty}\) lying on a level curve of \(\mathrm{Im}(e^{-i\hat{\theta}}z^{n})\) so that \(\gamma_{\infty}(\theta_{0})=(1,q)\) and \(\gamma_{\infty}^{\prime}(\theta_{0})\) is vertical, for some \(\theta_{0}\). This guarantees we are working with a semi-stable class. The corresponding polar function \(r_{\infty}(\theta)\) will now satisfy (5.3). Define \(\beta\) by
\[\tan\beta:=\frac{y^{\prime}(\theta)}{x^{\prime}(\theta)}=\frac{r_{\infty}^{ \prime}\sin\theta+r_{\infty}\cos\theta}{r_{\infty}^{\prime}\cos\theta-r_{ \infty}\sin\theta}=\frac{r_{\infty}^{\prime}r_{\infty}^{-1}\tan\theta+1}{r_{ \infty}^{\prime}r_{\infty}^{-1}-\tan\theta}.\]
As a result
\[\frac{r_{\infty}^{\prime}}{r_{\infty}}=\cot(\beta-\theta)=\tan(\pi/2-\beta+ \theta).\]
Now, choose \(q\gg 0\). Because \(\gamma_{\infty}^{\prime}(\theta_{0})\) is vertical, we know \(\beta(\theta_{0})=\pi/2\). In particular, at this point
\[r_{\infty}^{\prime}(\theta_{0})>0\quad\text{and}\quad\frac{r_{\infty}^{\prime }(\theta_{0})}{r_{\infty}(\theta_{0})}=\tan(\theta_{0})<2\tan(\theta_{0}).\]
Thus, there exists a neighborhood of \(\theta_{0}\) where (5.2) holds.
We now check (5.1). At \(\theta=\theta_{0}\),
\[x^{\prime}_{\infty} =r_{\infty}\cos\theta\left(\frac{r^{\prime}_{\infty}}{r_{\infty}}- \tan\theta\right)=0\] \[x^{\prime\prime}_{\infty} =\frac{\cos\theta}{r_{\infty}}\left(-2r_{\infty}r^{\prime}_{ \infty}\tan\theta+r_{\infty}r^{\prime\prime}_{\infty}-r^{2}_{\infty}\right)\] \[=\frac{\cos\theta}{r_{\infty}}\left(-2r^{\prime 2}_{\infty}+r_{ \infty}r^{\prime\prime}_{\infty}-r^{2}_{\infty}\right)>0,\]
where the last inequality follows from (5.3). Hence, \(x_{\infty}\) achieves a local minimum at \(\theta=\theta_{0}\). We choose \(a\) slightly greater than \(1\) such that \(x_{\infty}(\theta_{\min})=x_{\infty}(\theta_{\max})=a\). This verifies the assumptions of Proposition 5.1.
We are now ready to complete the proof of Theorem 1.3. Consider the classes \([\omega]\) and \([\alpha]\) discussed in the above lemma. Let \(f_{\infty}(x)\) denote the graphical portion of \(\gamma_{\infty}\) that connects \((1,q)\) to \((a,p)\). Since the assumptions of Proposition 5.1 are satisfied, there exists a subsolution \(\gamma_{t}\) pushing out towards \(\gamma_{\infty}\). In the proof of Proposition 5.1 we saw the subsolution condition is not satisfied unless \(b\) is sufficiently large, and so the subsolution starts at some time \(t_{0}\), with \(\gamma_{t_{0}}\) already pushed out towards \(\gamma_{\infty}\).
Consider a function \(f_{t_{0}}\) satisfying \(f_{t_{0}}(1)=q\), and \(f_{t_{0}}(a)=p\), which lies above the curve \(\gamma_{t_{0}}\), but below \(\gamma_{\infty}\), as in Figure 4. This function defines an initial representative \(\alpha_{0}\in[\alpha]\), and its angle is given by
\[\Theta(\alpha_{0})=(n-1)\text{arctan}\left(\frac{f_{t_{0}}}{x}\right)+\text{ arctan}(f^{\prime}_{t_{0}}).\]
The supercritical phase assumption in Theorem 1.2 is satisfied if we choose \(q\) large enough so that \(\text{arctan}\left(\frac{f_{t_{0}}}{x}\right)\) is sufficiently close to \(\pi/2\). Thus if we consider a solution \(\alpha_{t}\) to the line bundle mean curvature flow starting at \(\alpha_{0}\), the flow exists for all time. Let \(f_{t}\) be the graph corresponding to \(\alpha_{t}\). By the maximum principle, \(f_{t}\) must stay below \(f_{\infty}\) and above \(\gamma_{t}\) for all time.
Because the subsolution \(\gamma_{t}\) sweeps out to \(\gamma_{\infty}\) as \(t\to\infty\), the solution to the flow \(f_{t}\) must converge to \(f_{\infty}\) in \(C^{0}\). In particular, it can not develop an infinite time singularity at any point other than \((1,q)\), where it will achieve vertical tangency. By construction, this point corresponds to the exceptional divisor \(E\), which is precisely the destabilizing subvariety. Thus, the corresponding forms \(\alpha_{t}\) along the line bundle mean curvature flow will blow up along \(E\) at infinite time.
|
2307.14817 | Models of reference production: How do they withstand the test of time? | In recent years, many NLP studies have focused solely on performance
improvement. In this work, we focus on the linguistic and scientific aspects of
NLP. We use the task of generating referring expressions in context
(REG-in-context) as a case study and start our analysis from GREC, a
comprehensive set of shared tasks in English that addressed this topic over a
decade ago. We ask what the performance of models would be if we assessed them
(1) on more realistic datasets, and (2) using more advanced methods. We test
the models using different evaluation metrics and feature selection
experiments. We conclude that GREC can no longer be regarded as offering a
reliable assessment of models' ability to mimic human reference production,
because the results are highly impacted by the choice of corpus and evaluation
metrics. Our results also suggest that pre-trained language models are less
dependent on the choice of corpus than classic Machine Learning models, and
therefore make more robust class predictions. | Fahime Same, Guanyi Chen, Kees van Deemter | 2023-07-27T12:46:38Z | http://arxiv.org/abs/2307.14817v1 | # Models of reference production: How do they withstand the test of time?
###### Abstract
In recent years, many NLP studies have focused solely on performance improvement. In this work, we focus on the linguistic and scientific aspects of NLP. We use the task of generating referring expressions in context (REG-in-context) as a case study and start our analysis from GREC, a comprehensive set of shared tasks in English that addressed this topic over a decade ago. We ask what the performance of models would be if we assessed them (1) on more realistic datasets, and (2) using more advanced methods. We test the models using different evaluation metrics and feature selection experiments. We conclude that GREC can no longer be regarded as offering a reliable assessment of models' ability to mimic human reference production, because the results are highly impacted by the choice of corpus and evaluation metrics. Our results also suggest that pre-trained language models are less dependent on the choice of corpus than classic Machine Learning models, and therefore make more robust class predictions.
## 1 Introduction
NLP research can have different aims. Some NLP research focuses on developing new algorithms or building practical NLP applications. Another line of NLP work constructs computational models that aim to explain human language and language use; this line of work has been dubbed _NLP-as-Science_van Deemter (2023). Among other things, NLP-as-Science demands that we ask ourselves to what extent NLP research findings generalise along a range of dimensions.
In addition to the practical applications of Referring Expression Generation (REG, Reiter, 2017), REG is also one of the typical tasks in NLP-as-Science, where REG algorithms are built to model and explain the reference production of human beings Krahmer and van Deemter (2012); van Deemter (2016). In the computational linguistics and cognitive science community, REG can be divided into two distinct tasks: _one-shot REG_, finding a referring expression (RE) to single out a referent from a set, and _REG-in-context_, generating an RE to refer to a referent at a given point in a discourse.
In a classic setup, REG-in-context is often approached in two steps: The first is to decide on the form of an RE at a given point in the discourse, and the second is to decide on its content. Many researchers have been interested in the first subtask, referential form selection: the task to decide which referential form (e.g., pronoun, proper name, description, etc.) an RE takes McCoy and Strube (1999); Henschel et al. (2000); Kibrik et al. (2016). Nearly 15 years ago, Belz et al. (2008) introduced the GREC shared tasks and a number of English REG corpora with two goals: (1) assessing the performance of computational models of reference production Belz et al. (2009), and (2) understanding the contribution of linguistically-inspired factors to the choice of referential form Greenbacker and McCoy (2009); Kibrik et al. (2016); Same and van Deemter (2020).
15 years have passed since the GREC challenge was organised, and many new models and corpora have been proposed in the meantime (e.g., Castro Ferreira et al. (2018); Cunha et al. (2020), and Same et al. (2022)). We, therefore, decided that it was time to ask, in the spirit of NLP-as-Science, how well the lessons that GREC once taught our research community hold up when scrutinised in light of all these developments. In other words, we will investigate to what extent the findings from GREC can be _generalised_ to other corpora and other models.
To this end, we pursue the following objectives: (1) We extend GREC by testing its REG algorithms not only on the GREC corpora but also on a corpus that was not originally considered and that has a different genre, namely the Wall Street Journal
(WSJ) portion of OntoNotes (Hovy et al., 2006; Weischedel et al., 2013); (2) We fine-tune pretrained language models on the task of REG-in-context and assess them in the GREC framework.
In Section 2, we detail the GREC shared tasks and introduce the corpora used in GREC. Section 3 spells out our research questions. In Section 4 and Section 5, we introduce the algorithms and corpora that we use. Section 6 reports the performance of each algorithm on each corpus, followed by analyses in Section 7. Section 8 will discuss our findings and draw some lessons.
## 2 The GREC Shared Tasks
In this section, we summarise the GREC task, the corpora used by GREC, and its conclusions.
### 2.1 The GREC Task and its Corpora
According to Belz et al., "_the GREC tasks are about how to generate appropriate references to an entity in the context of a piece of discourse longer than a sentence_" (2009, p. 297). The main task was to predict the referential form, namely whether to use a pronoun, proper name, description or an empty reference at a given point in discourse.
The GREC challenges use two corpora, both created from the introductory sections of Wikipedia articles: (1) GREC-2.0 (henceforth msr, as it was used in the GREC-MSR shared tasks of 2008 and 2009) consists of 1941 introductory sections of the articles across five domains (people, river, mountain, city, and country); and (2) GREC-People (henceforth neg as it was used in the GREC-NEG shared task in 2009) contains 1000 introductory sections from Wikipedia articles about composers, chefs, and inventors. Here is an example from neg:
1. **David Chang** (born 1977) is a noted American chef. **He** is chef/owner of Momofuku Noodle Bar, Momofuku Ko and Momofuku Ssam Bar in New York City. **Chang** attended Trinity College, where **he** majored in religious studies. In 2003, **Chang** opened **his** first restaurant, Momofuku Noodle Bar, in the East Village.
A key difference between msr and neg lies in their RE annotation practices. In msr, only those REs that refer to the main topic of the article are annotated, while in neg, mentions of all _human_ referents are annotated. For instance, in a document about David Chang, msr will only annotate REs referring to David Chang, while neg will include annotations for all human referents, including David Chang and others.
### 2.2 REG Algorithms Submitted to GREC
Various REG algorithms were submitted to the GREC challenges. These consist of feature-based ML algorithms: CNTS (Hendrickx et al., 2008), ICSI (Favre and Bohnet, 2009), IS-G (Bohnet, 2008), OSU (Jamison and Mehay, 2008) and UDel (Greenbacker and McCoy, 2009a), and an algorithm that mixes feature-based ML and rules: JUNLG (Gupta and Bandopadhyay, 2009). Table 1 presents the details of each model, including the ML method, and the original reported accuracy on msr (cf. Belz et al. (2009) for details).
### 2.3 Feature Selection
The GREC Tasks were designed to find out _what kind of information is useful for making choices between different kinds of referring expressions in context_ (Belz et al., 2009, p. 297). However, the original paper does not consider the factors that contributed to the RE choice in the systems submitted to GREC. In a follow-up study, Greenbacker and McCoy (2009b) conducted a feature selection study informed by psycholinguistics. They experimented with various feature subsets derived from their system, known as UDel, which had previously been submitted to the GREC. Additionally, they incorporated selected features from another
REG system, CNTS (Hendrickx et al., 2008), into their study. They show that features motivated by psycholinguistic studies and certain sentence construction features have a positive impact on the performance of REG models. Follow-up feature-selection studies including Kibrik et al. (2016) and Same and van Deemter (2020) also emphasise the contribution of factors such as recency and grammatical role to the choice of RE form.
\begin{table}
\begin{tabular}{l l l l} \hline Name & GREC ST & ALG & Acc \\ \hline UDel & msr ’09 & C5.0 & 77.71 \\ ICSI & msr ’09 & CRF & 75.16 \\ CNTS & msr ’08 & MBL & 72.61 \\ IS-G & msr ’08 & MLP & 70.78 \\ OSU & msr ’08 & MaxEnt & 69.82 \\ JUNLG & msr ’09 & Rule & 75.40 \\ \hline \end{tabular}
\end{table}
Table 1: An overview of the algorithms submitted to GREC. The first column contains the name of the respective algorithm. The column GREC ST presents the name of the msr shared task to which the algorithm was submitted. The third column, ALG, lists the algorithms used, where abbreviations from top to bottom are C5.0 decision tree, conditional random field, memory-based learning, multi-layer perceptron, maximum entropy, and frequency-based rules. The fourth column, Acc, reports the original accuracy of the algorithms, as reported in Belz et al. (2009). Note that UDel, ICSI, and JUNLG were submitted to both the msr ’08 and msr ’09 shared tasks, and we only present the newest results here.
## 3 Research Questions
15 years after the GREC shared tasks, we were curious to know to what extent the conclusions from GREC still "stand". We, therefore, came up with the following research questions.
In the first place, we are interested in _the impact of the choice of corpus on the performance of REG algorithms_ (\(\mathcal{R}_{1}\)). GREC uses only the introductory part of Wikipedia articles (see Section 2), which represents only one genre of human language use. Considering that a good REG algorithm needs to model the general use of reference, a better evaluation framework should include texts from multiple genres. Therefore, we also include the WSJ corpus in the study (see Section 5 for more details) and conduct a correlation analysis to quantify how the choice of corpus impacts the evaluation results.
Second, previous studies suggested that classic machine learning (ML) based REG algorithms perform on par with most recent neural methods (Same et al., 2022). However, their study has three limitations: (1) they did not incorporate pre-trained language models (PLMs); (2) they focused on the surface forms of REs, which partly depend on the performance of surface realisation; (3) they did not assess the models based on the intuition that a model with good explanatory power should be less influenced by the choice of corpus. Therefore, we adopt PLMs to the task of REG-in-context (see Section 4 for more details) and investigate _how good is the explanatory power of PLM-based REG models compared to classic ML-based models_ (\(\mathcal{R}_{2}\)) using the enhanced GREC framework.
Finally, as previously mentioned, one of the primary theoretical objectives of GREC was to computationally explore the contribution of factors that originate from linguistic studies to the choice of referential forms. It is reasonable to expect that such contributions may change depending on the choice of corpus. In this study, we conduct an importance analysis to investigate _whether the importance ranking of linguistic factors changes when we use different corpora_ (\(\mathcal{R}_{3}\)).
## 4 REG Algorithms
In what follows, we introduce the REG algorithms that are considered in this study.
### 4.1 ML-based REG
For this study, we have narrowed our focus to feature-based ML algorithms that predict the type of RE. Consequently, we reconstruct five ML-based REG algorithms, namely UDel, ICSI, CNTS, IS-G, and OSU, along with their respective feature sets, while excluding JUNLG. Note that we implement CNTS slightly differently from Hendrickx et al. (2008). Concretely, Hendrickx et al. (2008) have mentioned that they have used the TiMBL package (Daelemans et al., 2007) for implementing the Memory Based Learning algorithm. Instead, we implemented the k-Nearest Neighbors algorithm. According to Daelemans et al. (2007), Memory Based Learning is the direct descendant of k-Nearest Neighbors. More information on the implementation of these models can be found in Appendix B.
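As an illustration of this substitution, a reconstruction along these lines can be set up with scikit-learn; the feature columns and training rows below are invented placeholders rather than the actual CNTS feature set:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Minimal sketch of the CNTS-style reconstruction: k-NN stands in for
# TiMBL's memory-based learner.  Features here are placeholders.
X_train = [
    # [syntactic role, first vs. subsequent mention, sentence distance]
    ["subject", "first", "0"],
    ["subject", "subsequent", "1"],
    ["object", "subsequent", "3"],
]
y_train = ["name", "pronoun", "name"]

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X_train, y_train)
print(model.predict([["subject", "subsequent", "1"]]))  # -> ['pronoun']
```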
### 4.2 PLM-based REG
Deep learning approaches have been used in many previous works on REG (Castro Ferreira et al., 2019; Cao and Cheung, 2019; Cunha et al., 2020; Chen et al., 2021). Different from previous work1, we fine-tune PLMs on REG corpora in this study.
Figure 1: Illustration of the PLM-based REG Algorithm.
To fine-tune PLMs on REG corpora, we began by pre-processing each corpus using the same paradigm as described by Cunha et al. (2020). More precisely, each referent in a given document was replaced with its corresponding proper name. For example, all underlined REs in Example (1) were replaced by "David Chang". Subsequently, as depicted in Figure 1, we fed the data into a PLM, and, for each referent (e.g., "David Chang"), we extracted the representations of its first token and its last token and summed them. The final representations were then sent to a fully connected layer for predicting the RE forms. In this study, we use BERT and RoBERTa (see Section 6.1 for more details).
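A minimal sketch of this architecture is given below. The exact layout of the classification head is our assumption (Section 6.1 only fixes its size at 256), and the referent span indices are assumed to be pre-computed:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class PLMForREG(nn.Module):
    """Sketch of the PLM-based model described above: the hidden states
    of a referent's first and last sub-tokens are summed and passed to a
    fully connected layer that predicts the referential form."""

    def __init__(self, plm_name="bert-base-cased", n_forms=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(0.5)          # dropout rate from Section 6.1
        self.head = nn.Sequential(              # head layout is an assumption
            nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, n_forms))

    def forward(self, input_ids, attention_mask, first_idx, last_idx):
        # first_idx / last_idx: (batch,) positions of the referent's
        # first and last sub-tokens in the input sequence
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        rows = torch.arange(h.size(0))
        referent = h[rows, first_idx] + h[rows, last_idx]  # sum of the two states
        return self.head(self.dropout(referent))           # logits over RE forms
```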
## 5 REG Corpora
In the following, we explain the corpora used in this work. These corpora are English-language corpora.
### 5.1 The msr and neg Corpora
In the current study, we only use the articles from the training sets of these corpora (see the number of documents in Table 2). Following the same approach as Castro Ferreira et al. (2018), we created a version of the GREC corpora for the End-to-end (E2E) REG modelling. For the classic ML models, we reproduced the models using the feature sets from the studies mentioned in Section 2.2.
### 5.2 The wsj Corpus
As mentioned earlier, the WSJ portion of the OntoNotes corpus (Weischedel et al., 2013) is our third data source.2 We use the version of the corpus that Same et al. (2022) developed for E2E REG modeling.3 Since empty pronouns are not annotated in wsj, we decided to also exclude them from the two GREC corpora and focus on a 3-label classification task. The labels considered in this study are _pronoun_, _description_, and _proper name_. Table 2 presents a detailed overview of these corpora.
Footnote 2: We used Ontonotes 5.0 licensed by the Linguistic Data Consortium (LDC) [https://catalog.ldc.upenn.edu/LDC2013T19](https://catalog.ldc.upenn.edu/LDC2013T19).
**Data Splits.** We have made a document-wise split of the data. We split the wsj data in accordance with the CoNLL 2012 Shared Task (Pradhan et al., 2012). Our wsj training, development, and test sets contain 20275, 2831, and 2294 samples, respectively. We did an 85-5-10 split of the GREC datasets in accordance with Belz et al. (2009). After excluding empty pronouns, the msr training, development, and test sets contain 9413, 519, 1038 instances, and the neg training, development, and test sets contain 6681, 259, 896 instances.
**Proportion of Referring Expressions.** As shown in Table 2, pronouns and proper names make up 80% and 89.5% of the referential instances in msr and neg, respectively. This implies that the other two referential forms, namely descriptions and empty references, account for approximately 20% of the cases in msr and about 10% in neg. Given this imbalance in the frequency of different forms within the two corpora, we question its potential effect on algorithm performance. Specifically, we wonder whether forms with lower frequencies are accurately predicted by the algorithms.
## 6 Evaluation
In this section, we introduce the evaluation protocol and report the performance of the models.
### Implementation Details
For BERT and RoBERTa, we used _bert-base-cased_ and _roberta-base_, both from Hugging Face. For fine-tuning, we set the batch size to 16, the learning rate to 1e-3, the dropout rate to 0.5, and the size of the output layer to 256. We ran each model for 20 epochs and used the one that achieved the highest F1 score on the development set. The implementation details of the classic ML-based models can be found in Appendix B.
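The selection procedure can be sketched as follows; the data loaders are assumed to yield batches of encoded referents with gold form labels, and macro F1 is assumed as the development-set criterion:

```python
import copy
import torch
from sklearn.metrics import f1_score

def fine_tune(model, train_loader, dev_loader, epochs=20, lr=1e-3):
    """Fine-tuning loop sketched from the description above.  Loaders
    are assumed to yield (input_ids, attention_mask, first_idx,
    last_idx, labels); the checkpoint with the best dev F1 is kept."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_f1, best_state = -1.0, None
    for _ in range(epochs):
        model.train()
        for ids, mask, first, last, labels in train_loader:
            optimizer.zero_grad()
            loss_fn(model(ids, mask, first, last), labels).backward()
            optimizer.step()
        model.eval()                              # evaluate on the dev set
        preds, gold = [], []
        with torch.no_grad():
            for ids, mask, first, last, labels in dev_loader:
                preds += model(ids, mask, first, last).argmax(-1).tolist()
                gold += labels.tolist()
        f1 = f1_score(gold, preds, average="macro")
        if f1 > best_f1:                          # keep the best checkpoint
            best_f1, best_state = f1, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return best_f1
```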
### 6.2 Evaluation Protocol
\begin{table}
\begin{tabular}{l r r r} \hline \hline & msr & neg & wsj \\ \hline number of documents & 1655 & 808 & 582 \\ word/doc (mean) & 148 & 129 & 530 \\ sent/doc (mean) & 7.1 & 5.8 & 25 \\ par/doc (mean) & 2.3 & 2.2 & 10.8 \\ referent/doc (mean) & 1 & 2.6 & 15 \\ number of RE & 11705 & 8378 & 25400 \\ description \% & 13.84\% & 4\% & 38.29\% \\ proper name \% & 38.09\% & 40.79\% & 34.57\% \\ pronoun \% & 41.79\% & 48.75\% & 27.14\% \\ empty \% & 6.28\% & 6.47\% & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the msr, neg, and wsj corpora in terms of their length-related characteristics and distribution of REs. _Doc_, _sent_ and _par_ stand for _documents_, _sentences_ and _paragraphs_.
The main evaluation metric in the GREC-MSR shared tasks was accuracy. In addition to accuracy,
we also report macro-F1 and weighted-macro F1. We argue that different metrics evaluate algorithms from different perspectives and provide us with different meaningful insights. For pragmatic tasks like REG, it makes sense to ask how well an algorithm performs on naturally distributed data, which is often imbalanced. For these cases, reporting accuracy and weighted F1 is logical. Furthermore, analogous to other classification tasks, minority categories should not be overlooked. Take as an example the class _description_ in the neg corpus, which occurs in only 4% of cases. If a model fails to produce this class, the produced document might sound unnatural. Therefore, it is important to ensure that an algorithm is not over- or under-generating certain classes. Looking into accuracy and macro-F1 together provides insights into such cases.
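The divergence between these metrics can be seen on a toy example with made-up labels, where the minority class _description_ is never predicted:

```python
from sklearn.metrics import accuracy_score, f1_score

# Made-up gold/predicted RE forms: "description" is never predicted,
# so accuracy stays high while macro F1 drops sharply.
gold = ["pronoun", "pronoun", "pronoun", "name", "name", "description"]
pred = ["pronoun", "pronoun", "pronoun", "name", "name", "pronoun"]

print("accuracy:   ", accuracy_score(gold, pred))                # 0.83
print("macro F1:   ", f1_score(gold, pred, average="macro"))     # ~0.62
print("weighted F1:", f1_score(gold, pred, average="weighted"))  # ~0.76
```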
### 6.3 Performance of the Models
The overall accuracy of the models, their macro F1, and their weighted-macro F1 are presented in Table 3. We also present the ranking of the models based on these scores in Appendix A.
**PLM-based Models.** The best-performing models across all corpora and metrics are PLM-based models. In six out of nine rankings, BERT and RoBERTa are ranked as the top two models. The sole exception is neg, where BERT is the second worst model. The benefit of using PLMs is the largest on the wsj corpus. For example, RoBERTa improves the macro F1 score from 69.63 (i.e., the performance of the best ML-based model) to 82.70.
**ML-based Models.** In contrast to the robust performance of the PLM models, the performance of the classic ML models is more corpus-dependent. In the case of msr and neg, ICSI is the best-performing model, while in the case of wsj, it is at the bottom section of the rankings. Another interesting observation is the performance of the UDel models. In terms of accuracy, UDel has the highest performance in neg, while it has the lowest performance in both msr and wsj. In terms of macro-F1 rankings, the neg UDel model dropped from first to last place, whereas BERT improved from penultimate place to second place. In general, our ML models yielded lower scores than the original models used in the GREC study (Belz et al., 2009). This could be attributed to a variety of factors, including differences in feature engineering and model parameters.
**Comparing Different Metrics.** Upon comparing average scores across the three metrics, we observe that for msr and neg, PLMs are clear winners only when macro-F1 is the metric in question. However, for wsj, PLMs are winners on all three metrics. This may be because the distribution of categories in wsj is much more balanced than in the other two corpora.
## 7 Analysis
To further compare the different models and investigate the impact of the choice of corpus, we conduct (1) a Bayes Factor (BF) analysis to determine whether the accuracy rates reported in Section 6 come from similar or different distributions, (2) a per-class evaluation of predictions to assess the success of each model in predicting individual classes, (3) a correlation analysis to quantify how the evaluation results change with respect to the choice of a corpus, and (4) a feature selection study to check how the importance of each feature changes as a function of the choice of corpus.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{msr} & \multicolumn{3}{c}{neg} & \multicolumn{3}{c}{wsj} \\ & Acc. & F1 & wF1 & Acc. & F1 & wF1 & Acc. & F1 & wF1 \\ \cline{2-10} UDel & 66.86 & 56.76 & 64.3 & **80.80** & 55.45 & 77.9 & 63.74 & 64.23 & 63.2 \\ ICSI & 71.19 & 64.73 & 70.4 & 80.36 & 64.53 & 78.6 & 64.62 & 64.15 & 63.4 \\ CNTS & 68.59 & 61.39 & 67.2 & 78.68 & 61.62 & 76.8 & 64.31 & 64.59 & 64.4 \\ OSU & 68.02 & 60.28 & 66.6 & 79.24 & 57.04 & 76.5 & 69.20 & 69.63 & 68.9 \\ IS-G & 67.05 & 58.83 & 65.3 & 77.34 & 59.52 & 75.6 & 69.15 & 69.35 & 69.2 \\ \hline BERT & **71.68** & 66.70 & **71.4** & 77.79 & 72.87 & 77.7 & 80.95 & 80.93 & 80.9 \\ RoBERTa & 70.91 & **67.53** & 70.7 & **80.80** & **77.29** & **80.7** & **82.61** & **82.70** & **82.6** \\ \hline Average & 69.19 & 62.32 & 67.99 & 79.29 & 64.05 & 77.69 & 70.65 & 70.80 & 70.37 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Overall accuracy (Acc.), macro-averaged F1 (F1), and weighted-macro F1 (wF1) scores of the algorithms depicted in Section 4. For instance, msr-UDel refers to a C5.0 classifier trained on the msr corpus, using the feature set mentioned in Greenbacker and McCoy (2009a).
### 7.1 Bayes Factor Analysis
Given that the accuracy scores are provided for all GREC systems in Belz et al. (2009), we chose to focus our analysis on the raw distributions of these scores. Our aim is to determine if there are significant differences between the accuracies of our models by comparing these distributions. We conduct a Bayes Factor analysis with a beta distribution of 0.01 (henceforth: the threshold). This analysis aims to assess, for each pair of accuracies, how strong the evidence is that they come from a common distribution, or from different ones. A difference below the threshold indicates that accuracy rates come from similar distributions; whereas, a difference above the threshold indicates that they come from different distributions, thus signalling that they differ evidentially. We interpret the strength of the evidence in favour of/against similar/different distributions according to Kass and Raftery (1995). Therefore, based on this approach, we expect that the raw accuracy distributions of the best- and worst-performing models for each corpus differ evidentially.
For msr, the comparison between the best- and worst-performing models, namely BERT and UDel, provides no evidence that their accuracy rates are evidentially different from each other (BF = 1.4). The same holds for neg, where the best (UDel and RoBERTa) and worst (IS-G) models appear to have similar probability distributions; therefore, these models are not evidentially different from each other. Conversely, in the case of wsj, the BF analysis provides strong evidence that the accuracy distributions of the top-performing models, BERT and RoBERTa, are different from those of the classic ML models.
To summarise, we only observed significant differences in the wsj-based models; the GREC models show more or less the same accuracy distributions. A reason might be that the aggregated calculation of accuracy loses the specificity of the classes being calculated.
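The exact computation behind these Bayes Factors is not fully spelled out above; the following is a minimal sketch of one standard Beta-Binomial formulation, with a uniform Beta(1, 1) prior and illustrative counts derived from the Table 3 accuracies on the 1038-item msr test set:

```python
from math import exp
from scipy.special import betaln

def log_marginal(k, n, a=1.0, b=1.0):
    # log P(k successes out of n) under a Beta(a, b) prior on accuracy
    # (the binomial coefficient cancels in the ratio below)
    return betaln(k + a, n - k + b) - betaln(a, b)

def bayes_factor(k1, n1, k2, n2):
    """BF in favour of 'two different accuracies' over 'one shared
    accuracy'.  A standard Beta-Binomial sketch, not necessarily the
    exact setup used in the analysis above."""
    log_diff = log_marginal(k1, n1) + log_marginal(k2, n2)
    log_same = log_marginal(k1 + k2, n1 + n2)
    return exp(log_diff - log_same)

# BERT vs UDel on the msr test set (accuracies from Table 3)
print(bayes_factor(round(0.7168 * 1038), 1038, round(0.6686 * 1038), 1038))
```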
### 7.2 Per-class Evaluation
As mentioned earlier, the neg models demonstrate high accuracy (e.g. the highest average accuracy), but we observe a sharp decline in their macro-F1 values. In this analysis, we want to investigate whether the accuracy scores reported in Table 3 truly reflect the success of these algorithms or if they are merely the by-product of over-generating the dominant label or under-generating the less frequent label. Table 4 presents the _per-class_ preci
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{msr} & \multicolumn{3}{c}{neg} & \multicolumn{3}{c}{wsj} \\ \cline{3-11} Model & Category & P & R & F & P & R & F & P & R & F \\ \hline \multirow{3}{*}{Ude1} & description & 55.36 & 19.38 & 28.71 & 0.00 & 0.00 & 0.00 & 60.29 & 62.95 & 61.59 \\ & name & 72.39 & 62.21 & 66.92 & 76.65 & 80.32 & 78.44 & 60.42 & 49.44 & 54.38 \\ & pronoun & 64.53 & 88.51 & 74.64 & 84.06 & 92.14 & 87.91 & 71.00 & 83.44 & 76.72 \\ \hline \multirow{3}{*}{ICSI} & description & 51.69 & 38.12 & 43.88 & 100.00 & 17.74 & 30.13 & 81.92 & 40.53 & 54.22 \\ & name & 80.33 & 66.82 & 72.95 & 81.85 & 73.14 & 77.25 & 55.12 & 86.40 & 67.37 \\ & pronoun & 69.41 & 87.39 & 77.37 & 79.05 & 94.76 & 86.19 & 72.17 & 69.61 & 70.86 \\ \hline \multirow{3}{*}{CNTS} & description & 53.68 & 31.88 & 40.00 & 75.00 & 14.52 & 24.33 & 64.31 & 63.67 & 63.30 \\ & name & 76.79 & 61.75 & 68.45 & 77.84 & 72.87 & 75.27 & 60.34 & 66.75 & 63.38 \\ & pronoun & 66.16 & 88.51 & 75.72 & 79.32 & 92.14 & 85.25 & 71.90 & 62.54 & 66.89 \\ \hline \multirow{3}{*}{OSU} & description & 53.57 & 28.12 & 36.88 & 100.00 & 4.84 & 9.23 & 72.70 & 56.91 & 63.84 \\ & name & 69.39 & 68.43 & 68.91 & 79.01 & 72.07 & 75.38 & 63.56 & 73.30 & 68.08 \\ & pronoun & 69.20 & 81.98 & 75.05 & 79.27 & 95.20 & 86.51 & 73.43 & 80.87 & 76.97 \\ \hline \multirow{3}{*}{ISG} & description & 57.97 & 25.00 & 34.93 & 77.78 & 11.29 & 19.72 & 73.88 & 63.41 & 68.25 \\ & name & 71.46 & 65.21 & 68.19 & 71.77 & 79.79 & 75.57 & 62.19 & 76.64 & 68.66 \\ & pronoun & 65.10 & 84.01 & 73.36 & 82.30 & 84.28 & 83.28 & 75.36 & 67.36 & 71.14 \\ \hline \multirow{3}{*}{BERT} & description & 52.86 & 46.25 & 49.33 & 62.71 & 59.68 & 61.16 & 82.63 & 79.37 & 80.97 \\ & name & 74.35 & 72.81 & 73.57 & 77.32 & 75.27 & 76.28 & 79.64 & 82.69 & 81.14 \\ & pronoun & 74.84 & 79.73 & 77.21 & 80.04 & 82.31 & 81.16 & 80.48 & 80.87 & 80.67 \\ \hline \multirow{3}{*}{RoBERTa} & description & 56.33 & 55.62 & 55.97 & 76.47 & 62.90 & 69.02 & 86.19 & 77.40 & 81.56 \\ & name & 76.50 & 64.52 & 70.00 & 78.70 & 80.59 & 79.63 & 77.22 & 89.25 & 82.80 \\ \cline{1-1} & pronoun & 71.40 & 82.66 & 76.62 & 83.04 & 83.41 & 83.22 & 86.47 & 81.19 & 83.75 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Per-class precision, recall and F1 score of each label. The results report on training seven different algorithms on three corpora for predicting three labels, namely description, name, and pronoun.
Upon comparing the F1 scores for the class _description_ across the three corpora, we observe that the wsj models consistently achieve the highest scores, with all algorithms exceeding an F1 score of 50. In contrast, the F1 scores for both msr and neg are considerably lower than those of wsj. The F1 scores for neg are particularly low, with two notable instances, UDel and OSU, scoring 0 and below 10, respectively. The poor prediction of the class description by the classic ML neg models is likely due to an insufficient number of instances in the training dataset, thereby hindering the proper training of the algorithms. In contrast, the two PLM models demonstrate acceptable performance in predicting the class description (BERT = 61.16 & RoBERTa = 69.02). This could indicate that pre-trained language models are advantageous where there is a class imbalance.
Another interesting observation concerns the high recall of the "pronoun" prediction in the neg models. Four of the classic models have a recall of over 92. In the case of OSU, for example, the recall is 95, which means that of all the cases that are pronouns, 95% are labelled correctly. This is possibly an indication that pronouns have been over-generated in this system. In the PLM models, the recall is below 84.
In sum, the results of our per-class evaluation show the difficulties that the classic ML-based neg models had in predicting the class _description_. The msr models also had poor performance in predicting descriptions, yet they were more successful than neg. These results tentatively suggest that feature-based classification models need to be trained on an adequate and relatively balanced number of instances to reliably predict all classes. The results of this study suggest that the PLM models are less dependent on the choice of corpus, and therefore predict classes more robustly.
### Correlation Analysis
To quantify how the evaluation results change with respect to corpora, we compute the Spearman correlation coefficient between every pair of corpora, indicating how the rank of the models changes. Table 5 shows the computed coefficients along with the p-values of the tests. It is noteworthy that only the results evaluated by the macro-averaged F1 on msr and neg are significantly correlated (\(p<.001\)).
The lack of correlation between the results on msr/wsj and those on neg/wsj suggests that using a corpus of a different genre could greatly influence the ranking of the models and, therefore, make the conclusions difficult to generalise. Additionally, these results are in line with the fact that msr and neg are from the same source, both being the introductory part of Wikipedia articles, and a higher correlation is to be expected. Also, we may conclude that macro-averaged F1 is a more reliable evaluation metric (see the discussions in Section 6, Section 7.1, and Section 7.2).
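For concreteness, the pairwise rank correlations above can be computed with SciPy as in the minimal sketch below; the score lists are placeholders standing in for the seven models' per-corpus results (ordered identically across corpora), not the actual values behind Table 5.

```python
# Sketch: pairwise Spearman rank correlation between corpora.
# Each list holds one score per model, in the same model order for
# every corpus; the numbers below are placeholders.
from itertools import combinations
from scipy.stats import spearmanr

scores = {
    "msr": [74.9, 79.5, 76.7, 76.9, 75.0, 77.3, 77.1],
    "neg": [84.3, 83.3, 82.0, 82.6, 80.6, 78.9, 81.1],
    "wsj": [69.2, 66.2, 64.9, 71.3, 69.9, 80.9, 82.7],
}

for a, b in combinations(scores, 2):
    r_s, p = spearmanr(scores[a], scores[b])
    print(f"{a}/{b}: r_s = {r_s:.4f}, p = {p:.4f}")
```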
### Feature Selection Study
We performed a feature importance analysis to check whether the contribution of linguistic factors changes depending on the choice of the corpus. We used XGBoost from the family of Gradient Boosting trees (Chen and Guestrin, 2016) and then computed the permuted variable importance for each model. Data were analysed in two ways: firstly, we used the complete dataset, as outlined in Section 5; secondly, we excluded first-mention REs to concentrate only on subsequent mentions. Considering that the choice of a referent's first mention is less context-dependent, we only report on the latter dataset below.
As expected, the ranking of feature importance varies across different corpora. However, a substantial overlap is observed when considering the most important features across the three corpora. An example is the semantic category of the REs that is used in various msr and wsj REG models.4 In the case of msr, the REs belong to five semantic categories: human, city, country, river, and mountain. In the case of wsj, the REs are annotated for a wide
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & & acc & F1 & wF1 \\ \hline \multirow{2}{*}{msr/neg} & \(r_{s}\) & -0.1081 & 0.9643 & 0.4643 \\ & \(p\) & 0.8175 & 0.0005 & 0.2939 \\ \hline \multirow{2}{*}{msr/wsj} & \(r_{s}\) & 0.2857 & 0.5357 & 0.4643 \\ & \(p\) & 0.5345 & 0.2152 & 0.2939 \\ \hline \multirow{2}{*}{neg/wsj} & \(r_{s}\) & -0.1261 & 0.5000 & -0.0357 \\ & \(p\) & 0.7876 & 0.2532 & 0.9394 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Spearman correlation coefficient \(r_{s}\) and the p-value between every pair of corpora in terms of accuracy, macro-averaged F1, and weighted F1.
range of categories including human, city, country, organisation, objects, etc. Notably, in every model that employs semantic category information, this feature has either the highest or second-highest importance ranking. A plausible explanation could be that humans use different referencing strategies to refer to different categories of referents.
In addition to the semantic category, the grammatical role of the RE and the categorical sentential distance to the antecedent consistently have a high importance ranking. The grammatical role marks the distinction between subject, object, and determiner roles. The categorical distance in the number of sentences provides information on how far an RE is from its nearest coreferential antecedent, for instance, whether they are both in the same sentence or are separated by one or more sentences. Figure 2 illustrates the importance rankings of the OSU features in the three corpora. Other importance ranking graphs are available in Appendix C. For a comprehensive description of all features employed in the classic ML models and the feature importance analysis, refer to Same and van Deemter (2020).
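The sketch below illustrates how such permuted variable importances can be obtained; it assumes feature matrices prepared as in Section 5, and `load_features` is a hypothetical placeholder rather than part of the released pipeline.

```python
# Sketch: permutation importance for an XGBoost REG classifier.
# load_features() is a hypothetical loader returning the feature
# matrix, the labels, and the feature names for one corpus.
import numpy as np
from xgboost import XGBClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y, feature_names = load_features()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier().fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```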
## 8 Discussion
In this paper, we have conducted a series of reproductions, evaluations, and analyses to check whether the conclusions of GREC are still true after 15 years. Below, we summarise and discuss our findings in accordance with our three research questions in Section 3. We also report our post-hoc observations on the choice of evaluation metric.
Performance of REG Algorithms. To answer research question \(\mathcal{R}_{2}\), we extended the GREC by introducing a corpus of a different genre, wsj, and two pre-trained (PLM-based) REG models. We found that, on msr, PLM-based and ML-based models perform similarly, as confirmed by both the BF and per-class analyses. With regards to neg, PLM-based and ML-based models have similar accuracy scores, as confirmed by the BF analysis, but there are large differences when macro-F1 is used, as confirmed by the per-class evaluation (i.e., ML-based models have difficulty predicting descriptions). On wsj, PLM-based models are the clear winners.
These results suggest that, in terms of explanatory power, PLM-based models have good performance and good "direct support", i.e., a good ability to generalise to different contexts (see van Deemter (2023) for further discussion). Whether they have good "indirect support" (e.g., whether their predictions are in line with linguistic theories) needs to be investigated in further probing studies.
Impact of the Choice of Corpus. As our evaluations and analyses demonstrate, the choice of corpus plays a crucial role in assessing REG algorithms. This role is twofold. Firstly, the choice of corpus strongly influences the evaluation results, pertaining to the research question \(\mathcal{R}_{1}\). Secondly, in addition to the score differences discussed in Section 6, we found that: (1) the difference between PLM-based and ML-based models on wsj is larger (and evidentially different) than on msr and neg (as evidenced by the BF analysis); (2) the correlations of the evaluation results between wsj and both msr and neg are not significant.
For \(\mathcal{R}_{3}\), we conducted feature selection analyses across the three corpora, discovering that the importance of the features ranks differently for each corpus. This suggests that when investigating the "indirect support" for a model, one needs to aggregate findings from multiple corpora with different genres.
Figure 2: Different rankings of the features in msr, neg, and wsj OSU models.
The Use of Evaluation Metrics. As we discussed in Section 6.2, different metrics evaluate different aspects of a model. This was further ascertained by the inconsistency of the BF analysis and per-class analysis. One lesson we have learned is that it is not enough to report or do analyses on a single metric. Another lesson is that the evaluation results by macro-F1 are more reliable than other metrics because (1) they are consistent across corpora with similar genres (i.e., msr and neg; see the Correlation analysis results); (2) the differences identified by using macro-F1 can be confirmed by the per-class evaluation.
## 9 Conclusion
We are now in a position to address the question that we raised in the Introduction: Can the conclusions from the GREC shared tasks still be trusted? By examining a wider class of corpora, models, and evaluation metrics than before, we found that the answer to this question is essentially negative since the GREC conclusions are prone to drastic change once a different corpus or a different metric is employed.
Perhaps this should come as no surprise. According to a widely accepted view of scientific progress (e.g., Jaynes (2002); applied to NLP in van Deemter (2023)), theories should be updated again and again in light of new data (i.e., indirect Support), and when new models are proposed, the plausibility of existing models should be compared against the plausibility of these new models (as well as pre-existing ones). New metrics deserve a place in this story as well, even though they are often overlooked. In other words, what we have seen in the present study is nothing more than science in progress - something we are bound to see more of as the enterprise called NLP-as-Science matures.
Ethics Statement: Regarding potential biases, in addition to the biases present in text-based datasets, biases can also be introduced by the pre-trained language models (Bender et al., 2021) used in this work. In other words, the REG algorithms we developed in this study may make different predictions with respect to different genders, for instance. In the future, we plan to investigate this phenomenon and find ways to mitigate it.
Supplementary Materials Availability Statement: All associated data, source code, output files, scripts, documentation, and other relevant material to this paper are publicly available and can be accessed on our GitHub repository: [https://github.com/fsame/REG_GREC-WSJ](https://github.com/fsame/REG_GREC-WSJ), DOI: 10.5281/zenodo.8182689.
Acknowledgements: We thank the anonymous reviewers for their helpful comments. Fahime Same is supported by the German Research Foundation (DFG) - Project-ID 281511265 - SFB 1252 "Prominence in Language".
|
2308.02628 | Radiation reaction on an accelerating point charge | A point charge accelerating under the influence of an external force emits
electromagnetic radiation that reduces the increase in its mechanical energy.
This causes a reduction in the particle's acceleration. We derive the decrease
in acceleration due to radiation reaction for a particle accelerating parallel
to its velocity, and show that it has a negligible effect. | Jerrold Franklin | 2023-08-04T16:11:09Z | http://arxiv.org/abs/2308.02628v2 | # Radiation reaction on
###### Abstract
A point charge accelerating under the influence of an external force emits electromagnetic radiation that reduces the increase in its mechanical energy. This causes a reduction in the particle's acceleration. We derive the decrease in acceleration due to radiation reaction for a particle accelerating parallel to its velocity, and show that it has a negligible effect.
## 1 Introduction
The question of how the electromagnetic fields radiated by an accelerating charged particle produce radiation reaction that diminishes its acceleration has been of interest for a long time. The Abraham-Lorentz force1,
Footnote 1: We are using Gaussian units with \(c=1\).
\[{\bf F}_{\rm AL}=\frac{2}{3}q^{2}\dot{\bf a}, \tag{1}\]
was first proposed by Abraham [1] and Lorentz [2] over 100 years ago as a retarding force. A relativistic generalization of the force was formulated by Dirac in 1938 [3]. There have been a large number of other published and unpublished papers on the Abraham-Lorentz force for many years.
However, the Abraham-Lorentz force is known to lead to paradoxes when used in a differential equation for the motion of an accelerating charged particle. Because of this, various modifications of the Abraham-Lorentz differential equation have been proposed. Some of these paradoxes and modifications are discussed in [4] and [5]. See also [6], which includes a number of other references.
In this paper, we derive the radiation reaction on an accelerating charged particle by directly subtracting the energy radiated by the particle from its mechanical energy without introducing a radiation reaction force. We consider the case of a point charge accelerating parallel to its velocity, which is the case generally considered in attempted derivations2 and applications of the Abraham-Lorentz force.
Footnote 2: There is no generally accepted derivation of the Abraham-Lorentz force.
For the power radiated by an accelerating point charge, we use Larmor's formula [7], as extended to relativity by Lienard [8],
\[\frac{dW_{\rm rad}}{dt}=\frac{2}{3}q^{2}\gamma^{6}[a^{2}-({\bf v}\mathbf{ \times}{\bf a})^{2}]. \tag{2}\]
For acceleration parallel to the velocity, this reduces to
\[\frac{dW_{\rm rad}}{dt}=\frac{2}{3}q^{2}a^{2}\gamma^{6}=\frac{2}{3}q^{2}a^{ \prime 2}, \tag{3}\]
where \({\bf a}^{\prime}\) is the charged particle's acceleration in its instantaneous rest frame.
The usual derivations for Larmor's or Lienard's formula give the rate of emission of radiated energy in terms of the acceleration and velocity at a retarded time3, and could not be used in a differential equation for the velocity at the present time. In the next section we derive Lienard's formula for the radiated power in terms of the acceleration and velocity at the present time.
Footnote 3: See, for instance, Chapter 14 of [4] or Chapter 11 of [5].
## 2 Electromagnetic Power Emitted by an Accelerating Point Charge
The power radiated by an accelerating point charge is given by the rate at which the radiated energy passes through a spherical surface of radius \(R_{\rm rad}\),
\[P=\frac{dW_{\rm rad}}{dt} = \frac{1}{4\pi}\oint_{S}{\bf dS}\mathbf{\cdot}[{\bf E}({ \bf r},t)\mathbf{\times}{\bf B}({\bf r},t)]\]
\[= \frac{1}{4\pi}\int_{0}^{R_{\rm rad}}r^{2}dr\oint\mathbf{\hat{r}}\mathbf{\cdot}[\mathbf{E}(\mathbf{r},t)\boldsymbol{\times}\mathbf{B}(\mathbf{r},t)]d\Omega, \tag{4}\]
However, a major problem arises if Eq. (4) is used to calculate the electromagnetic power. The fields in the integrals are to be evaluated at the present time and the point \(\mathbf{r}\) at which the fields are observed, but the fields for an accelerating particle are given in terms of variables given at the retarded time by the Lienard-Wiechert field equations.
Thus the Lienard formula of Eq. (2) [and Eq. (3) for acceleration parallel to the velocity] would be given in terms of the acceleration and velocity at a retarded time, \(t_{r}\), and not at the present time, \(t\). We show below how a radius can be chosen to give the power given by Eq. (4) in terms of the acceleration and velocity at the present time.
The electric and magnetic fields appearing in Eq. (4) are given by the Lienard-Wiechert field equations,
\[\mathbf{E}(\mathbf{r},t) = \left\{\frac{q(\mathbf{\hat{r}}_{r}-\mathbf{v}_{r})}{r_{r}^{2} \gamma_{r}^{2}(1-\mathbf{\hat{r}}_{r}\boldsymbol{\cdot}\mathbf{v}_{r})^{3}} \right\}+\left\{\frac{\mathbf{\hat{r}}_{r}\boldsymbol{\times}[(\mathbf{\hat{r} }_{r}-\mathbf{v}_{r})\boldsymbol{\times}\mathbf{a}_{r}]}{r_{r}(1-\mathbf{\hat {r}}_{r}\boldsymbol{\cdot}\mathbf{v}_{r})^{3}}\right\}, \tag{5}\] \[\mathbf{B}(\mathbf{r},t) = \mathbf{\hat{r}}_{r}\boldsymbol{\times}\mathbf{E}(\mathbf{r},t), \tag{6}\]
where the variables, \(\mathbf{r}_{r},\mathbf{v}_{r},\gamma_{r}=1/\sqrt{1-v_{r}^{2}}\), and \(\mathbf{a}_{r}\) are all evaluated at the retarded time,
\[t_{r}=t-r_{r}. \tag{7}\]
The radius vector, \(\mathbf{r}_{r}\), is the distance from the charged particle's position at the retarded time to the point of observation of the electromagnetic fields at the present time.
To calculate the radiated power, we consider a point charge \(q\) at a position \(\mathbf{r}(t)\) with a velocity \(\mathbf{v}(t)\) and acceleration \(\mathbf{a}(t)\). We make a Lorentz transformation to the rest frame of the point charge where \(\mathbf{v}^{\prime}=\mathbf{0}\) and
\[\mathbf{a}^{\prime}_{\parallel} = \mathbf{a}_{\parallel}\gamma^{3}, \tag{8}\] \[\mathbf{a}^{\prime}_{\perp} = \mathbf{a}_{\perp}\gamma^{2}. \tag{9}\]
\(\mathbf{a}^{\prime}_{\parallel}\) is the rest frame acceleration parallel to \(\mathbf{v}\), and \(\mathbf{a}^{\prime}_{\perp}\) is the rest frame acceleration perpendicular to \(\mathbf{v}\).
We now evaluate the rest frame surface integral
\[\frac{dW^{\prime}_{\rm rad}}{dt^{\prime}} = \frac{1}{4\pi}\int_{0}^{R^{\prime}_{\rm rad}}r^{\prime 2}dr^{ \prime}\oint\mathbf{\hat{r}^{\prime}}\mathbf{\cdot}[\mathbf{E}^{\prime}( \mathbf{r}^{\prime},t^{\prime})\boldsymbol{\times}\mathbf{B}^{\prime}( \mathbf{r}^{\prime},t^{\prime})]d\Omega^{\prime}, \tag{10}\]
in the limit \(R^{\prime}_{\rm rad}\to 0\). In this limit, \(t^{\prime}_{r}=t^{\prime}\) and \({\bf a^{\prime}}_{r}={\bf a^{\prime}}\) so the electric field is given by
\[{\bf E^{\prime}}({\bf r^{\prime}},t^{\prime}) = \frac{q{\bf\hat{r}^{\prime}}}{r^{\prime 2}}+\frac{q[{\bf\hat{r}^{ \prime}}({\bf\hat{r}^{\prime}}{\bf\cdot}{\bf a^{\prime}})-{\bf a^{\prime}}]}{r^ {\prime}}. \tag{11}\]
Then, the surface integral in Eq. (10) for the radiated power reduces to
\[\frac{dW^{\prime}_{\rm rad}}{dt^{\prime}} = \frac{q^{2}}{4\pi}\oint{\bf\hat{r}^{\prime}}{\bf\cdot}[{\bf a^{ \prime}}{\bf\times}({\bf\hat{r}^{\prime}}{\bf\times}{\bf a^{\prime}})]d\Omega^ {\prime} \tag{12}\] \[= \frac{q^{2}}{4\pi}\oint[{\bf a^{\prime 2}}-({\bf\hat{r}}{\bf \cdot}{\bf a^{\prime}})^{2}]d\Omega^{\prime}\] \[= \frac{2}{3}q^{2}a^{\prime 2}.\]
The radiated power can be put back in terms of the original acceleration, using Eqs. (8) and (9) to give
\[\frac{dW^{\prime}_{\rm rad}}{dt^{\prime}} = \frac{2}{3}q^{2}(a^{2}_{\parallel}\gamma^{6}+a^{2}_{\perp}\gamma ^{4}) \tag{13}\] \[= \frac{2}{3}q^{2}\gamma^{6}[a^{2}-({\bf v\times}{\bf a})^{2}].\]
The variables in Eq. (13) are in the original moving frame, but the rate of energy emission on the left hand side of the equation is still given in terms of the rest frame variables. However, the right-hand side of Eq. (13) has been shown to be a Lorentz invariant4, so Eq. (13) can be Lorentz transformed to the moving frame, finally giving
Footnote 4: See, for instance, page 666 of [4].
\[\frac{dW_{\rm rad}}{dt} = \frac{2}{3}q^{2}\gamma^{6}[a^{2}-({\bf v\times}{\bf a})^{2}]. \tag{14}\]
This result has the same form as Lienard's relativistic extension of Larmor's formula, but is given here with all variables at the present time, and not an arbitrary retarded time.
## 3 Radiation Reaction on an Accelerating Point Charge
We want to relate \(\frac{dW_{\rm rad}}{dt}\) to its effect on the motion of an accelerating point charge. For an accelerating particle of mass \(m\), the rate of change of its
kinetic energy is given (for \({\bf a}\) parallel to \({\bf v}\)) by
\[\frac{dW_{\rm mat}}{dt} = \frac{d(m\gamma-m)}{dt}=m\gamma^{3}({\bf v}\mathbf{\cdot}{ \bf a})=mva^{\prime}. \tag{15}\]
If an external force acts on the charged particle, the electromagnetic power produced by the external force will increase the sum of the particle's kinetic energy and the electromagnetic energy at the rates
\[\frac{dW_{\rm ext}}{dt} = \frac{dW_{\rm mat}}{dt}+\frac{dW_{\rm rad}}{dt}. \tag{16}\]
This would reduce the particle's velocity and acceleration by
\[mva^{\prime} = m\bar{v}\bar{a}^{\prime}-\frac{2}{3}q^{2}a^{\prime 2}, \tag{17}\]
where \(\bar{v}\) and \(\bar{a}^{\prime}\) are the velocity and rest frame acceleration an uncharged particle would have.
Equation (17) is a quadratic equation for \(a^{\prime}\), with the solution
\[a^{\prime} = \frac{2m\bar{v}\bar{a}^{\prime}}{\left[mv+\sqrt{m^{2}v^{2}+(8/3)q ^{2}m\bar{v}\bar{a}^{\prime}}\right]}, \tag{18}\] \[va^{\prime} = \frac{2\bar{v}\bar{a}^{\prime}}{\left[1+\sqrt{1+\left(\frac{q^{2 }}{2m}\right)\left(\frac{16\bar{v}\bar{a}^{\prime}}{3v^{2}}\right)}\right]}. \tag{19}\]
Equations (17), (18), and (19) each show the decrease in the charged particle's acceleration due to the diversion of the applied energy into radiated electromagnetic energy.
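For completeness, Eq. (18) is obtained by applying the quadratic formula to Eq. (17),

\[\frac{2}{3}q^{2}a^{\prime 2}+mva^{\prime}-m\bar{v}\bar{a}^{\prime}=0\quad\Rightarrow\quad a^{\prime}=\frac{-mv+\sqrt{m^{2}v^{2}+(8/3)q^{2}m\bar{v}\bar{a}^{\prime}}}{(4/3)q^{2}},\]

taking the positive root so that \(va^{\prime}\to\bar{v}\bar{a}^{\prime}\) as \(q\to 0\); multiplying the numerator and denominator by \(mv+\sqrt{m^{2}v^{2}+(8/3)q^{2}m\bar{v}\bar{a}^{\prime}}\) then gives the quoted form.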
We note that the increase in electromagnetic energy does not produce an added force on the charged particle, but is just electromagnetic energy produced in space by the action of the external force. The energy going into the electromagnetic field reduces the increase of the mechanical energy of the particle.
The external force acts on two separate entities, the accelerating particle and the electromagnetic field. The energy put into the electromagnetic field should not be considered as a separate force on the accelerating particle5.
Footnote 5: A particle, even if accelerating, cannot exert force on itself.
To get an idea of the relative size of the radiation reaction, we consider the case of a constant external force acting on a particle starting from rest. An uncharged particle would have a uniform acceleration \(\bar{a}^{\prime}\), that is constant in the particle's instantaneous rest frame. Then we can write6
Footnote 6: This is equation (8) of [9].
\[\bar{a}^{\prime} = \frac{(\bar{\gamma}-1)}{x}=\frac{\bar{\gamma}^{2}\bar{v}^{2}}{( \bar{\gamma}+1)x}, \tag{20}\]
relating \(\bar{a}^{\prime}\) to \(x\), the distance traveled by the charged particle. Then,
\[va^{\prime} = \frac{2\bar{v}\bar{a}^{\prime}}{\left[1+\sqrt{1+\left(\frac{r_{c} }{x}\right)\left[\frac{16\bar{\gamma}^{2}\bar{v}^{3}}{3v^{2}(\bar{\gamma}+1)} \right]}\right]}. \tag{21}\]
We have taken the accelerating particle to be an electron, and have introduced the 'classical radius' of the electron, \(r_{c}=\frac{q^{2}}{2m}=2.82\) fm.
It is interesting to look at the non-relativistic and the extreme relativistic limits of Eq. (21). The non-relativistic limit is
\[\bar{v}<<1,\quad va^{\prime} \simeq \frac{2\bar{v}\bar{a}^{\prime}}{\left[1+\sqrt{1+\left(\frac{r_{c }}{x}\right)\left[\frac{8\bar{v}^{3}}{3v^{2}}\right]}\right]},\] \[a^{\prime} \simeq \frac{\bar{a}^{\prime}}{\left[1+\left(\frac{2\bar{v}^{3}r_{c}}{3 v^{2}x}\right)\right]}. \tag{22}\]
The relativistic limit is
\[\bar{\gamma}>>1,\quad a^{\prime} = \frac{\bar{a}^{\prime}}{\left[1+\left(\frac{4\bar{\gamma}r_{c}}{3 x}\right)\right]}. \tag{23}\]
We can see from each of Eqs. (21), (22), and (23) that radiation reaction on a charged particle, accelerating parallel to its velocity, has a negligible effect on the acceleration of the particle. The distance, \(r_{c}=2.82\) fm, is much less than any reasonable distance traveled by the particle. Even for a highly relativistic particle, the ratio \(\bar{\gamma}r_{c}/x\) is too small to give an observable effect7.
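As a rough numerical illustration (a sketch added here, not part of the original argument), the correction factor in Eq. (23) can be evaluated for an electron brought to 1 GeV over one metre, using the paper's quoted value of \(r_{c}\):

```python
# Sketch: size of the correction 4*gamma*r_c/(3x) in Eq. (23) for an
# electron accelerated to 1 GeV over 1 m; r_c is the paper's quoted
# value of 2.82 fm.
r_c = 2.82e-15      # m
m_e = 0.511e6       # electron rest energy, eV
E   = 1.0e9         # final energy, eV
x   = 1.0           # distance traveled, m

gamma_bar = E / m_e                          # ~2.0e3
correction = 4 * gamma_bar * r_c / (3 * x)   # ~7e-12
print(f"4*gamma*r_c/(3x) = {correction:.1e}")
```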
## 4 Conclusion
Our conclusion is that the power radiated by a point charge, accelerating parallel to its velocity, reduces the acceleration of the particle, as shown in Eqs. (21)-(23). However, the reduction of the particle's acceleration is negligible, even for highly relativistic particles.
|
2302.10277 | A Comparative Analysis of CNN-Based Pretrained Models for the Detection
and Prediction of Monkeypox | Monkeypox is a rare disease that raised concern among medical specialists
following the Covid-19 pandemic. It's concerning since monkeypox is difficult
to diagnose early on because of symptoms that are similar to chickenpox and
measles. Furthermore, because this is a rare condition, there is a knowledge
gap among healthcare professionals. As a result, there is an urgent need for a
novel technique to combat and anticipate the disease in the early phases of
individual virus infection. Multiple CNN-based pre-trained models, including
VGG-16, VGG-19, Restnet50, Inception-V3, Densnet, Xception, MobileNetV2,
Alexnet, Lenet, and majority Voting, were employed in classification in this
study. For this study, multiple data sets were combined, such as monkeypox vs
chickenpox, monkeypox versus measles, monkeypox versus normal, and monkeypox
versus all diseases. Majority voting performed 97% in monkeypox vs chickenpox,
Xception achieved 79% in monkeypox against measles, MobileNetV2 scored 96% in
monkeypox vs normal, and Lenet performed 80% in monkeypox versus all. | Sourav Saha, Trina Chakraborty, Rejwan Bin Sulaiman, Tithi Paul | 2023-01-20T18:11:43Z | http://arxiv.org/abs/2302.10277v1 | # A Comparative Analysis of CNN-Based Pretrained Models for the Detection and Prediction of Monkeypox
###### Abstract
Monkeypox is a rare disease that raised concern among medical specialists following the Covid-19 pandemic. It's concerning since monkeypox is difficult to diagnose early on because of symptoms that are similar to chickenpox and measles. Furthermore, because this is a rare condition, there is a knowledge gap among healthcare professionals. As a result, there is an urgent need for a novel technique to combat and anticipate the disease in the early phases of individual virus infection. Multiple CNN-based pre-trained models, including VGG-16, VGG-19, Resnet50, Inception-V3, Densenet, Xception, MobileNetV2, Alexnet, Lenet, and majority voting, were employed in classification in this study. For this study, multiple data sets were combined, such as monkeypox vs chickenpox, monkeypox versus measles, monkeypox versus normal, and monkeypox versus all diseases. Majority voting performed 97% in monkeypox vs chickenpox, Xception achieved 79% in monkeypox against measles, MobileNetV2 scored 96% in monkeypox vs normal, and Lenet performed 80% in monkeypox versus all.
## 1 Introduction
At a time when the globe was still struggling to recover from the devastation caused by COVID-19, the deadly monkeypox virus emerged. This virus transmits from animals to people. The disease presents itself with symptoms that are analogous to those of smallpox but are not as severe. In 1958, a Danish researcher working in a laboratory in Copenhagen, Denmark made the first discovery of the monkeypox virus. In 1970, the Democratic Republic of the Congo was the location where human contraction was discovered for the first time. It is possible for the virus to spread from one person to another through the exchange of bodily fluids, respiratory droplets, and infected items such as beddings, among other things.
Following the COVID-19 epidemic, the globe is now facing a new danger in the form of monkeypox. The World Health Organization (WHO) asserts that the current outbreak of monkeypox is not a pandemic but rather endemic. When a disease is present only in a certain location, geographic region, or environmental setting, we refer to that illness as endemic. As of right now, the WHO has identified the following countries as being endemic to monkeypox: Benin, Cameroon, the Central African Republic, the Democratic Republic of the Congo, Gabon, Ghana (identified only in animals), Cote d'Ivoire, Liberia, Nigeria, the Republic of the Congo, and Sierra Leone.
According to the WHO, among the endemic regions, the Democratic Republic of the Congo has
the largest number of deaths and suspected cases confirmed so far, 58 and 1284 respectively. The WHO has reason to believe that there will be other developments in the case in the coming days. According to the findings of our research, we have reason to think that the monkeypox endemic is in the beginning stages of its first wave of transmission, which is comparable to the beginning stages of the COVID-19 pandemic transmission. 780 cases of monkeypox have been reported throughout a total of 27 nations that are identified as non-endemic regions. The United Kingdom and Northern Ireland have the largest number of instances of monkeypox, totaling 207. This is followed by Spain and Portugal, which have 156 and 138 cases of the disease, respectively.
Although monkeypox is not as infectious as COVID-19, the incidence of the disease is still rising. In West and Central Africa, there were just fifty confirmed instances of the illness in the year 1990. However, by 2022, there were thousands of reported instances of the disease. In the past, it was thought that the disease had only ever appeared in Africa. But in the year 2022, those who were infected with the virus were tracked down and identified in a number of nations throughout both the United States and Europe (Ahsan et al., 2022a). People's anxiety and stress levels are rising as a direct result of the rising number of reported incidents. As a consequence of this, we are witnessing widespread expressions of panic on both social media and traditional media. What, then, is the current treatment?
Another factor that contributes to people's anxiety is the lack of any particular remedies that have been developed up until this point. At this time, there is no treatment available for those who have been infected with the virus that causes monkeypox. The Center for Disease Control and Prevention (CDC) reports that a number of medicinal treatments, sometimes known as countermeasures, are available for the treatment of this disease. These include medicines that were created specifically for the treatment of smallpox (CDC, 2022). Medicines such as Tecovirimat, Cidofovir, Vaccinia Immune Globulin Intravenous (VIGIV), and Brincidofovir are utilized in the treatment of monkeypox. These medicines are also widely used for the treatment of smallpox. An EAIND is now being developed by researchers in order to assist in the development of Brincidofovir as a therapy for monkeypox (CDC, 2022). Although many successful vaccines have been developed for illnesses similar to monkeypox, and researchers are currently employing such vaccinations as a cure for monkeypox, there are also some limitations.
What are the current limitations? One of the disease's most apparent downsides is that there is currently no known treatment for it. In addition to this disadvantage, another constraint is the difficulty in making an early diagnosis.
This infectious disease is said to be contagious
Figure 1: Monkeypox confirmed cases
until the scabs that have formed on the skin have peeled off, as stated by the Rare and Imported Pathogens Laboratory [12, 13]. Because the disease resembles smallpox in the image (a to e) above, a pathology diagnosis is required. As a potential solution to this problem, our team is exploring the use of machine learning as a method for diagnosing the disease during its early stages of progression. In order to take advantage of machine learning, we require a sufficient amount of data to train the model. In the instance of monkeypox, one of the limitations is that there is no dataset that is readily accessible to the public that can be applied to the modelling of monkeypox diagnosis. How have deep learning and machine learning models been used over the years in medical imaging?
Identifying medical conditions is just one of the many fields in which machine learning has been applied for a considerable amount of time; for instance, early diagnosis of bone disorders [10], newborn brain maturity [11], network abnormality detection [21], pneumonia detection [12], credit card fraud detection [13], and diabetes prediction [14], as well as many more applications. Researchers are working hard in spite of these restrictions to build machine learning algorithms that are able to recognize illnesses such as monkeypox. Imaging solutions that are safe, accurate, and rapid may be provided to medical professionals by machine learning, and these solutions have received general recognition as an important decision-making tool.
**Scope and Motivation:** The processing of images is naturally a challenging task. Working with a little dataset adds another layer of complexity to the issue at hand. Because monkeypox is an uncommon illness and the current epidemic is in its early stages, there is a compelling need for an innovative strategy to combat and anticipate monkeypox at the earliest stages of individual infection with the virus. As part of this research project, a variety of different machine learning algorithms, including DL, CNN, and ANN, will be taken into consideration to develop an innovative prediction model with increased precision. The following problem statements will be addressed in this paper:
* Does over-sampling help identify medical images with better accuracy/precision (and other benchmarks)?
* Do pre-trained CNN architectures predict better than the other models?
* What are the methods that can be used to tackle a limited dataset in the Monkeypox case to detect the virus at the early stage of its lifecycle?
## 2 Related Work
The development of AI models in a number of fields, including emotion analysis [23], fruit image analysis [15], and chest x-ray images, has led to the development of medical image analysis AI models for the diagnosis of various virus-related diseases. For instance, Sandeep et
Figure 2: Different stages of Monkeypox
al. [12] studied the use of deep learning (DL)-based techniques to identify a variety of skin conditions, including psoriasis, chicken pox, vitiligo, melanoma, ringworm, acne, lupus, and herpes. They used the VGG-16 pre-trained model to evaluate the classification of skin lesions into eight distinct illness categories against their own convolutional neural network (CNN) [13]. Their technique offered a 78% detection accuracy. Transfer learning (TL) is the process of applying a model that has been successfully applied to one machine learning application to another using a dataset that has already been studied. Transfer learning for computer vision is often utilized in several applications. Pre-trained models with the highest popularity and recognition are VGG, Resnet, Inceptionnet, and other well-known models [15]. By allowing models to be generated with very little data, the concept of employing pre-trained models creates a significant shift in the field of artificial intelligence. There are two key benefits when employing TL [16]. First of all, it excels on both large and small datasets. Second, when a pre-trained model is used, it is simple to decrease the overfitting of the model using a bigger dataset [16]. In order to assess the crucial variables for the control of a smallpox outbreak in a major city with a population of 2 million, a stochastic model has been created to simulate the progression of an epidemic managed by ring vaccination and case isolation [17]. Numerous organisms, including bacteria [18], fungi, protozoa, and viruses [19], are listed by the World Health Organization as having the potential to be used as biological weapons (World Health Organization, 2004). Secondary occurrences would be unlikely in the event of a chemical or toxic attack, or an attack by an agent like Bacillus anthracis, for which human-to-human transmission is unusual. However, serial human-to-human transmission is more probable in the event of an attack by infectious organisms like smallpox [16]. According to earlier research into predictive measures, there are a number of patterns that may be retrieved from medical tracings and medical imaging, including the diagnosis of diabetic retinopathy [15], malignant cells in dermatology [12], and brain tumors in MRI scans [1]. These are just a few examples. Classification algorithms that take into account earlier medical cases may hasten the prediction process and even identify possible illness onsets so that they can be treated before harmful symptoms develop [2]. Artificial neural networks (ANNs) have previously improved the performance of potentially out-of-date and ungeneralizable indices or heuristics still used in the healthcare industry [2] by helping doctors to make better-informed decisions about their diagnoses. Machine learning is therefore a crucial technique for completing the information gaps in these tests and improving the reliability and accuracy of the provided forecast. Successful implementation of such an ANN will also reduce the risk of the disease deteriorating and the related financial effects. The main result that such approaches ultimately achieve is overall patient satisfaction [2].
In the healthcare industry, especially when it comes to rare diseases or abnormalities, the challenge of class imbalance within the data collection (that is, classes being considerably over- or under-represented) commonly occurs. Think about a scenario where smallpox has reappeared and doctors need to quickly tell the difference between spots that are symptomatic of chickenpox and those of smallpox to hasten eradication.
## 3 Methodology
We divided the dataset into train, validation, and test sets to conduct the experiments. We then took the images of the different classes as input and got binary predictions as output as to whether or not the picture depicts a patient with Monkeypox. The framework is shown in Figure 4. We briefly describe the stages of the experiment below:
**Data Acquisition and Augmentation:** We adopted the monkeypox dataset containing both real and augmented images of monkeypox, chickenpox, measles, and normal class (Ahsan et al., 2022b).
The images were collected by surfing the internet with relevant search results. They augmented the images using the Keras ImageDataGenerator. Various augmentation techniques, such as rotation, width and height shifting, and flipping, were used to augment the images. The final composition of the dataset is presented in Table 2.
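A minimal sketch of such an augmentation pipeline is given below; the specific parameter values are illustrative assumptions, not the exact settings used by Ahsan et al. (2022b) to build the dataset.

```python
# Sketch: Keras augmentation with the transforms named above
# (rotation, width/height shifting, flipping); parameter values are
# illustrative only.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,       # random rotations
    width_shift_range=0.1,   # horizontal shifting
    height_shift_range=0.1,  # vertical shifting
    horizontal_flip=True,    # flipping
    vertical_flip=True,
)

# images: numpy array of shape (n, height, width, 3)
# for batch in datagen.flow(images, batch_size=32):
#     ...  # consume augmented batches
```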
**Transfer Learning with pre-trained models:** We built the first layers of the architecture upon transfer learning from the pre-trained models, a handy machine learning technique that improves performance by learning from a different task than the intended one. The models have been pre-trained on the ImageNet dataset (Deng et al., 2009). We removed the final fully connected dense layer and trained with the monkeypox dataset. The model for transfer learning is shown in Figure 4. We applied nine different pre-trained models and tested them in the transfer-learning layer to check which model provides better performance for the classification. The models are VGG-16, VGG-19, Resnet-50, InceptionV3, Densenet, Xception, MobilenetV2, Alexnet and Lenet. The pre-trained models have been trained on millions of images predicting 1,000 classes and allow leveraging features learned from the large pre-trained models on the small dataset.
* **VGG16:** VGG-16 is a 16-layer-deep convolutional neural network. The pre-trained version of the network is trained on over a million images from the ImageNet database. The pre-trained network can classify images into 1000 object categories. Therefore, the network has learned rich feature representations for a vast range of image objects. The network has an image input size of 224*224.
* **VGG19:** VGG-19 is a variant of VGG-16 with a 19-layer-deep convolutional network. It also has been trained on ImageNet with millions of images to provide good transfer learning results, with a 224*224 input size.
* **ResNet50:** In the Resnet paper the authors introduce a residual learning
Figure 3: Workflow of Detecting Monkeypox
framework that is easier to optimize and gains higher accuracy from increasingly higher depth. The Resnet50 is a variant of this residual network that is 50 layers deep and also pre-trained on the ImageNet dataset with input size 224*224*3.
* **InceptionV3:** In the Inception-V3 paper the authors formulate a way to scale up a network so as to facilitate the added computations as efficiently as possible, by suitably factorizing convolutions and applying aggressive regularization. The input image size for this model is 299*299. However, as it also works well on 224*224 images, 224*224 was used to keep the setup uniform.
* **Densenet:** The Densenet connects each layer to every other layer in a feed-forward manner. The distinct feature of the Densenet is that it provides several advantages, like strengthening feature propagation, encouraging feature reuse, significantly reducing the number of parameters, and alleviating the vanishing gradient problem. The default input size of the Densenet is 224*224.
* **Xception:** The Xception is a slight variation of the Inception module with the same number of parameters, but due to more efficient use of the model parameters it slightly outperforms the Inception model. Here the Inception modules have been replaced with depthwise separable convolutions. The default input size for Xception is 299*299.
* **MobilenetV2:** The MobileNetV2 architecture is designed around an inverted residual structure. Here the input and output of the residual block are thin bottleneck layers, unlike the expanded representations in the input of traditional residual models. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. These measures significantly increase performance. The default input size of MobileNetV2 is 224*224.
* **Alexnet:** Alexnet is a deep convolutional neural network with 60 million parameters and 650,000 neurons. It consists of five convolutional layers, some of which are followed by max-pooling layers. It also has three fully connected layers with a final 1000-way softmax. The input size of Alexnet is 256*256.
* **Lenet:** Lenet-5 is one of the earliest convolutional neural networks, consisting of 5 convolutional layers. The input size of the images is 32*32.
**Model Loading and Compiling:** We have imported all the necessary libraries which were needed for compiling the architectures and for importing the layers involved in building the network. We have loaded pre-trained CNN architectures trained on a large dataset. To avoid the problem of overfitting, we have avoided training the entire network. We have frozen some layers and trained only the classifier. We have flattened the lower-layer output and created a dense layer with an activation function. After that, we compiled the model by defining the optimizer, loss function, and metrics.
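These steps can be sketched in Keras as follows; VGG-16 stands in for any of the backbones, and the 128-unit head is an assumed size, not a reported hyperparameter.

```python
# Sketch: load a frozen pre-trained backbone, attach a small dense
# classifier, and compile.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False   # freeze the convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),                        # flatten lower-layer output
    layers.Dense(128, activation="relu"),    # dense layer (assumed size)
    layers.Dense(2, activation="softmax"),   # binary output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```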
**Image Preprocessing:** We have loaded our dataset from the respective paths. We have processed the dataset by resizing the images according to the respective models and appended these images to arrays. We have also applied a normalization process to the dataset.
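A minimal preprocessing sketch is shown below, assuming OpenCV for loading; the 224*224 target matches most of the backbones above.

```python
# Sketch: resize and normalise images before training.
import numpy as np
import cv2  # opencv-python

def preprocess(paths, size=(224, 224)):
    images = []
    for p in paths:
        img = cv2.imread(p)          # load as BGR uint8
        img = cv2.resize(img, size)  # resize to the model input
        images.append(img)
    return np.array(images, dtype="float32") / 255.0  # normalise to [0, 1]
```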
**Fit the Model:** Every pre-trained model has different numbers of convolution layers,
pooling layers, activation layers, dropout layers, etc. To build those architectures, we can use TensorFlow and the Keras library in Python. So we can import all the necessary Python libraries that we need to build the architecture of this neural network. Once the model is compiled, we can fit the model using the training and validation data, and a variable records the metrics. When fitting the model with data, it shows the accuracy and the loss for the given data.
**Predict Model:** With the recorded variable we can predict on the test data and evaluate the model using built-in functions in Python. It shows the accuracy for the test data and helps us to determine the performance of the model.
**Determine Confusion Matrix:** We can present the predicted values in a format called a confusion matrix, from which we can determine the precision, recall, and F1 score for the compiled model. A confusion matrix is a format used to determine the performance of a classification algorithm; it visualizes and summarizes that performance. So we are using it to look into our models' performance.
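With scikit-learn this amounts to the sketch below, where `model`, `x_test`, and `y_test` are assumed to come from the earlier steps.

```python
# Sketch: confusion matrix and per-class precision/recall/F1.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_prob = model.predict(x_test)        # softmax outputs
y_pred = np.argmax(y_prob, axis=1)    # predicted class indices

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred,
                            target_names=["monkeypox", "other"]))
```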
**Loss and accuracy plot:** Finally, we have plotted the loss and accuracy curves using the built-in functions of Python, which helps to visualize the comparison. By observing the plots, we can also gain a clear understanding of each model's performance.
## 4 Experimental Setup
**Train-Test-Val Set Selection:** To prevent the model from overfitting and to measure its performance without bias, we decided to make separate test and validation sets. We used a stratified split to divide the dataset into 70:15:15 (Train:Val:Test). We used the validation set to fix the early stopping criterion: the model stops training when it gets lower validation accuracy for two consecutive epochs. On the other hand, the test set was used to measure the result after the model had already finished training.
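The stratified 70:15:15 split can be realised with two successive scikit-learn calls, as in the sketch below; the seed is an arbitrary choice.

```python
# Sketch: stratified 70:15:15 train/validation/test split.
from sklearn.model_selection import train_test_split

x_train, x_tmp, y_train, y_tmp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)
# 70% train, 15% validation, 15% test
```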
**Model Selection:** As there is little to no study conducted in monkeypox detection, we decided to use standard pre-trained architectures for image classification. We selected the following models: VGG (VGG-16 & VGG-19) (Simonyan and Zisserman, 2014), Resnet50 (He et al., 2016), InceptionV3 (Szegedy et al., 2016), Densenet (Huang et al., 2017), Xception (Chollet, 2017), MobilenetV2 (Sandler et al., 2018), Alexnet (Krizhevsky et al., 2012), and Lenet (LeCun et al., 1998). The models were trained on the Imagenet dataset. As these models have different architectures, our goal was to find performance measurements in different kinds of situations.
**Training Setup:** As the images in the dataset were of different shapes and sizes, we resized all images into a*b*n (where a and b represent the image height and width decided by the model input, and n represents the number of channels) and converted all image types to .png to keep a symmetric aspect for the modeling. The pixel intensity values were normalized by dividing each image by 255. We used optimizer = Adam, batch size = 32 and loss function = 'sparse_categorical_crossentropy' for all the models. We ran all the models for 10 epochs and included an early stopping metric on the validation set to stop the overfitting of the models.
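Putting the stated settings together, the training step can be sketched as follows; `restore_best_weights` is an added convenience rather than a reported setting.

```python
# Sketch: fit with Adam, batch size 32, 10 epochs, and early stopping
# when validation accuracy drops for two consecutive epochs.
from tensorflow.keras.callbacks import EarlyStopping

stop = EarlyStopping(monitor="val_accuracy", patience=2,
                     restore_best_weights=True)
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=10, batch_size=32,
                    callbacks=[stop])
```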
**Evaluation Metric:** As the dataset is small in size and imbalanced, we decided to look at the Precision (P), Recall (R), and weighted F1 scores as our evaluation metrics to better reflect the composition of the dataset.
## 5 Results and Analysis
We report the results of our models in Table 1.
**Comparison among different disease clusters:** The results are significantly good despite the lack of original images. The binary classification shows how well the models can isolate the images of each class from monkeypox based on the features of the disease. We see that all the models could distinguish monkeypox from chickenpox with relative ease, both for colored and grayscale images. The Densenet even reached up to a near-perfect F1 score of 99 percent. The measles detection is relatively poor due to the lack of original measles data to train on. A significant observation is that the models also show an excellent monkeypox classification result for persons who don't have any diseases, with VGG-16 yielding the best result of a 95 F1 score. We also report the result with monkeypox versus all other classes, which yielded a highest result of a 78 F1 score.
**Majority Voting:** As most of the models showed similar performance while detecting monkeypox, we looked at whether the majority voting of the models could improve the result or reveal any major characteristics. We got a mixed result. While Monkeypox vs Chickenpox (MvC) and Monkeypox vs Normal (MvN) got almost top-notch results, Monkeypox vs Measles (MvM) and Monkeypox vs All (MvA) seem to provide an average result.
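Hard majority voting over the models' label predictions can be sketched as below; `trained_models` is a placeholder for the fitted classifiers.

```python
# Sketch: hard majority voting across the per-model predictions.
import numpy as np
from scipy.stats import mode

all_preds = np.stack([np.argmax(m.predict(x_test), axis=1)
                      for m in trained_models])  # (n_models, n_images)
voted = np.asarray(mode(all_preds, axis=0).mode).ravel()  # per-image winner
```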
## 6 Conclusion and Future Works
In this paper, we presented a comparative analysis of CNN-based pre-trained models for the detection of monkeypox. Majority voting performed at 97% in monkeypox vs chickenpox, Xception achieved 79% in monkeypox against measles, MobileNetV2 scored 96% in monkeypox vs normal, and Lenet performed at 80% in monkeypox versus all. Our models offer a competitive prediction of monkeypox detection by the pre-trained models even with a small dataset. We also see that monkeypox detection is more accurate when the number of images is slightly larger, and that the models fail to differentiate otherwise, as was the case with monkeypox and measles detection. Also, we see that an imbalanced dataset causes the models to perform poorly, as was the
\begin{table}
\begin{tabular}{l c c c} \hline
**Class Type** & **Curated Images** & **Augmented Images** & **Total Images** \\ \hline Monkeypox & 43 & 587 & 630 \\ Chickenpox & 47 & 329 & 376 \\ Measles & 17 & 286 & 303 \\ Normal & 54 & 552 & 606 \\ \hline
**Total** & **161** & **1754** & **1915** \\ \hline \end{tabular}
\end{table}
Table 2: Numerical description of the dataset in terms of number of images, number of augmented images, and total images
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c c c c c c c c c c c c} \hline \multirow{3}{*}{Models} & \multicolumn{3}{c|}{Monkeypox vs Chickenpox} & \multicolumn{3}{c|}{Monkeypox vs Measles} & \multicolumn{3}{c|}{Monkeypox vs Normal} & \multicolumn{3}{c}{Monkeypox vs All} \\ \cline{2-19} & \multicolumn{3}{c|}{**Color**} & \multicolumn{3}{c|}{**GrayScale**} & \multicolumn{3}{c|}{**Color**} & \multicolumn{3}{c|}{**Grayscale**} & \multicolumn{3}{c|}{**Color**} & \multicolumn{3}{c|}{**GrayScale**} & \multicolumn{3}{c|}{**Color**} & \multicolumn{3}{c|}{**GrayScale**} & \multicolumn{3}{c|}{**Color**} & \multicolumn{3}{c}{**GrayScale**} \\ \cline{2-19} & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline VGG-16 & 95 & 94 & 94 & 94 & 93 & 94 & 61 & 62 & 61 & 83 & 71 & 75 & 97 & 97 & 97 & 97 & 93 & 93 & 56 & 49 & 52 & 36 & 35 & 33 \\ VGG-19 & 94 & 93 & 93 & 94 & 93 & 93 & 66 & 66 & 73 & 63 & 67 & 97 & 97 & 97 & 98 & 88 & 43 & 44 & 43 & 95 & 65 & 76 \\ Resnet50 & 96 & 67 & 76 & 76 & 74 & 73 & 57 & 54 & 53 & 87 & 50 & 55 & 77 & 68 & 70 & 77 & 65 & 67 & 50 & 43 & 42 & **99** & **67** & **80** \\ Inception-V3 & 94 & 94 & 94 & 94 & 94 & 93 & 93 & 68 & 64 & 66 & 60 & 61 & 60 & 95 & 95 & 95 & 95 & 95 & 95 & 95 & 52 & 48 & 50 & 49 & 47 & 48 \\ Densenet & 95 & 41 & 54 & 78 & 59 & 61 & 100 & 66 & 80 & 83 & 63 & 71 & 93 & 93 & 93 & 84 & 82 & 83 & 41 & 43 & 42 & 95 & 66 & 78 \\ Xception & 80 & 64 & 65 & 90 & 90 & 90 & **98** & **69** & **79** & **100** & **66** & **80** & 87 & 70 & 73 & 88 & 67 & 71 & 44 & 43 & 44 & 100 & 67 & 81 \\ MobilenetV2 & 96 & 96 & 96 & 96 & 96 & 96 & 66 & 66 & 66 & 70 & 67 & 68 & 96 & 96 & 96 & 96 & 96 & 96 & 51 & 47 & 49 & 36 & 38 & 37 \\ Alexnet & 99 & 39 & 55 & 66 & 66 & 66 & 74 & 64 & 68 & 63 & 58 & 60 & 100 & 51 & 67 & 60 & 58 & 58 & 75 & 62 & 67 & 100 & 33 & 49 \\ Lenet & 70 & 70 & 70 & 74 & 68 & 68 & 84 & 34 & 45 & 92 & 63 & 75 & 100 & 51 & 67 & 76 & 73 & 74 & **99** & **67** & **80** & 100 & 33 & 49 \\ Majority Voting & **97** & **97** & **97** & **96** & **96** & 96 & 65 & 66 & 65 & 63 & 66 & 61 & 95 & 95 & 95 & 94 & 94 & 94 & 45 & 48 & 47 & 44 & 59 & 50 \\ \hline \end{tabular}
\end{table}
Table 1: The table depicts the results of binary classification of monkeypox detection with different pre-trained models. Here, P, R, and F1 account for Precision, Recall, and Weighted F1 scores.
case of monkeypox vs all others combined. We get a somewhat stable result when we take the majority voting of the model results. Our contributions give valuable insights into the primary screening of monkeypox detection.
|
2308.05147 | From Dirac to Majorana: the Cosmic Neutrino Background capture rate in
the minimally extended Standard Model | We investigate the capture rate of the cosmic neutrino background on tritium
within the Standard Model, extended to incorporate three right-handed singlet
neutrinos with explicit lepton-number violation. We consider a scenario where
the $6 \times 6$ neutrino mixing matrix factorizes into three independent $2
\times 2$ pairs and analyze the states produced from weak interactions just
before neutrino decoupling. Taking into account the unrestricted Majorana mass
scale associated with lepton number violation, spanning from the Grand
Unification scale to Planck-suppressed values, we observe a gradual transition
in the capture rate from a purely Majorana neutrino to a purely (pseudo) Dirac
neutrino. We demonstrate that the capture rate is modified if the lightest
active neutrino is relativistic, and this can be used to constrain the tiniest
value of mass-squared difference $\sim 10^{-35}\,{\rm eV}^2$, between the
active-sterile pair, probed so far. Consequently, the cosmic neutrino capture
rate could become a promising probe for discerning the underlying mechanism
responsible for generating neutrino masses. | Yuber F. Perez-Gonzalez, Manibrata Sen | 2023-08-09T18:00:00Z | http://arxiv.org/abs/2308.05147v1 | From Dirac to Majorana: the Cosmic Neutrino Background capture rate in the minimally extended Standard Model
###### Abstract
We investigate the capture rate of the cosmic neutrino background on tritium within the Standard Model, extended to incorporate three right-handed singlet neutrinos with explicit lepton-number violation. We consider a scenario where the \(6\times 6\) neutrino mixing matrix factorizes into three independent \(2\times 2\) pairs and analyze the states produced from weak interactions just before neutrino decoupling. Taking into account the unrestricted Majorana mass scale associated with lepton number violation, spanning from the Grand Unification scale to Planck-suppressed values, we observe a gradual transition in the capture rate from a purely Majorana neutrino to a purely (pseudo) Dirac neutrino. We demonstrate that the capture rate is modified if the lightest active neutrino is relativistic, and this can be used to constrain the tiniest value of mass-squared difference \(\sim 10^{-35}\,\mathrm{eV}^{2}\), between the active-sterile pair, probed so far. Consequently, the cosmic neutrino capture rate could become a promising probe for discerning the underlying mechanism responsible for generating neutrino masses.
+
Footnote †: preprint: IPPP/23/37
## I Introduction
Standard cosmology predicts that the present Universe is awash with a sea of neutrinos, produced approximately a second after the Big Bang. This Cosmic Neutrino Background (C\(\nu\)B) is a sea of relic neutrinos, much like the Cosmic Microwave Background (CMB) is a sea of relic photons left after photon decoupling around 380,000 years after the Big Bang [1]. Since the C\(\nu\)B is much older than the CMB, a careful study of the C\(\nu\)B is crucial for a better understanding of the early Universe.
The neutrinos composing the C\(\nu\)B are expected to follow a Fermi-Dirac distribution1, with a temperature today of around 1.95 K, which is \((4/11)^{1/3}\) the temperature of the CMB photons today. This happened due to the temperature of the photons increasing during electron-positron annihilation at around 0.5 MeV. The neutrinos, on the other hand, decoupled from the plasma at around 1 MeV. For the present day CMB temperature \(T_{\gamma 0}=0.23\,\mathrm{meV}\), the present day neutrino temperature is \(T_{\nu 0}=0.17\,\mathrm{meV}\). Thus, following a Fermi-Dirac distribution, the current neutrino number density today is \(\sim 112\,\mathrm{cm}^{-3}\) per flavor. The helicity distribution of this neutrino number density depends on the neutrino nature. For Dirac neutrinos, we expect that only left-helical neutrino and right-helical antineutrino states are populated, while for Majorana, both left- and right-helical states should be present in the C\(\nu\)B. Furthermore, from the bounds on neutrino masses from neutrino oscillation experiments, \(m_{\nu_{2}}\geq\sqrt{\Delta m_{\rm sol}^{2}}=8.7\,\mathrm{meV}\), \(m_{\nu_{3}}\geq\sqrt{\Delta m_{\rm atm}^{2}}=48\,\mathrm{meV}\) in normal mass ordering, and \(m_{\nu_{2}}\geq\sqrt{\Delta m_{\rm atm}^{2}}=48\,\mathrm{meV}\), \(m_{\nu_{1}}\geq\sqrt{\Delta m_{\rm atm}^{2}-\Delta m_{\rm sol}^{2}}=47\,\mathrm{meV}\) in inverted mass ordering, we have that at least two of the neutrinos will be non-relativistic today [4].
Footnote 1: This is true in the absence of neutrino clustering [2; 3], an assumption we make in this work
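As a quick numerical cross-check of these figures (a sketch added here, not part of the original text), the quoted temperature and per-flavor density follow from \(n_{\nu}=2\times\frac{3\zeta(3)}{4\pi^{2}}T_{\nu 0}^{3}\) in natural units:

```python
# Sketch: check T_nu0 = (4/11)^(1/3) T_gamma0 and the ~112 cm^-3
# per-flavor density (nu + nubar, one helicity state each).
import numpy as np
from scipy.special import zeta

k_B   = 8.617e-5   # eV/K
hbarc = 1.973e-7   # eV*m

T_nu = (4/11)**(1/3) * 2.725   # K, ~1.95 K
T_eV = k_B * T_nu              # ~1.7e-4 eV

n = 2 * (3*zeta(3)/(4*np.pi**2)) * (T_eV/hbarc)**3  # m^-3
print(f"T_nu = {T_nu:.2f} K, n = {n*1e-6:.0f} cm^-3")  # ~112 cm^-3
```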
An experimental detection of the C\(\nu\)B will not only present us with a validation of our understanding of the early Universe but also present the first-ever detection of non-relativistic neutrinos. As a result, a lot of theoretical as well as experimental efforts are underway to detect the C\(\nu\)B. Currently, the most popular and feasible idea is that of neutrino capture on beta-decaying nuclei, postulated first by Weinberg [5]. The PTOLEMY experiment [6] aims at detecting the C\(\nu\)B through neutrino capture on tritium: \(\nu+{{}^{3}\mathrm{H}}\to{{}^{3}\mathrm{He}}^{+}+\mathrm{e}^{-}\). The signal at PTOLEMY will be an electron emitted with kinetic energy equalling \(2\,m_{\nu}\) above the beta decay endpoint. Nevertheless, there are a number of experimental and theoretical challenges, in particular, with attaining an energy resolution as low as 0.1 eV with current technology. This is currently an open issue and a lot of experimental and technological efforts are underway to overcome this barrier [7; 8; 9; 10]. Apart from this, a number of other ideas have been proposed to detect the C\(\nu\)B [11; 12; 13; 14; 15; 16; 17]. However, these are futuristic and cannot be achieved in the near foreseeable future. The capture rate also depends quite sensitively on whether the C\(\nu\)B clusters or not [2; 3; 18]. A comprehensive discussion of the different constraints on neutrino clustering is given in [19].
A direct detection of the C\(\nu\)B will be crucial to testing fundamental properties associated with neutrinos such as their lifetime, whether they cluster or not [3; 20; 21], additional interactions of neutrinos [22; 23; 24; 25; 26] and so on. These neutrinos, being non-relativistic, will allow us to probe kinematical regions which are otherwise inaccessible in terrestrial laboratories. For example, detecting the C\(\nu\)B can be used to differentiate between the Dirac and Majorana nature of neutrinos [27; 28]. If the neutrinos are Majorana particles, then the capture rate will be two times larger than that for Dirac neutrinos when all three mass eigenstates are non-relativistic today (see text for more details). This can act as a direct test for lepton number violation in the Standard Model (SM).
However, it is possible that lepton number is violated _softly_ in the SM. The extent of lepton number violation (LNV) can be quantified through the smallness of the Majorana mass term, in comparison to the Dirac mass term for neutrinos. In such a scenario, neutrinos are pseudo-Dirac (or quasi-Dirac) [29; 30; 31; 32; 33; 34; 35]. The softness of LNV guarantees that although neutrinos are Majorana in nature, they behave as Dirac neutrinos for all practical purposes. Active-sterile neutrino oscillations are usually driven by a tiny mass-squared difference (\(\delta m^{2}\)) between the mass-eigenstates and could be accessible only over astronomically large baselines. Strong constraints on the smallness of the mass-squared difference arise from high-energy neutrinos, \(10^{-18}\,\mathrm{eV}^{2}\lesssim\delta m^{2}\lesssim 10^{-12}\ \mathrm{eV}^{2}\)[36; 37], supernova neutrinos \(\delta m^{2}\lesssim 10^{-20}\ \mathrm{eV}^{2}\)[38; 39; 40] as well as solar neutrinos \(\delta m^{2}\lesssim 10^{-11}\ \mathrm{eV}^{2}\)[34; 41; 42]. Weaker constraints also exist from neutrino oscillation experiments [43; 44; 45] as well as atmospheric neutrinos, \(\delta m^{2}\lesssim 10^{-4}\ \mathrm{eV}^{2}\)[46].
If neutrinos are pseudo-Dirac, it would also affect the cosmic neutrino capture rate. One would expect there to be a gradual transition from the capture rate in the Dirac case to that in the Majorana case, and this transition should be a function of the extent of LNV, given by \(\delta m^{2}\). Therefore, when \(\delta m^{2}\) is tiny, we expect the capture rate to behave like that for Dirac neutrinos. On the other hand, for large \(\delta m^{2}\), we should recover the Majorana capture rate. Furthermore, the rate is also modified if the lightest neutrino is relativistic at the time of capture, thereby allowing a probe of the smallness of \(\delta m^{2}\). These differences in capture rate would clearly show up in an experiment like PTOLEMY, thereby allowing a complementary probe of LNV through the C\(\nu\)B. We show that PTOLEMY will be sensitive to \(\delta m^{2}\sim 10^{-35}\,\mathrm{eV}^{2}\), easily surpassing the sensitivity from all other sources of LNV, and therefore setting the strongest constraints on the smallness of \(\delta m^{2}\). This is demonstrated in Fig. 1, which shows the sensitivity of different neutrino sources to \(\delta m^{2}\) in the \(E_{\nu}-L\) plane. Clearly, a positive detection of the C\(\nu\)B can be used to constrain the tiniest value of \(\delta m^{2}\) probed so far.
The paper is organised as follows. In Sec. II, we discuss the minimally extended Standard Model, by adding 3 singlet neutrinos and explore the mass-squared differences between the active-sterile neutrinos. In Sec. III, we discuss the capture rate of the cosmic neutrino background in the case of soft violation of lepton number. In Sec. IV, we demonstrate the event rates in an upcoming experiment like PTOLEMY. Finally, we conclude in Sec. V. We consider natural units where \(\hbar=c=k_{\mathrm{B}}=1\) throughout this manuscript.
## II A minimal standard model extension
The gauge symmetries of the Standard Model (SM) allow for the existence of singlets with zero hypercharges, which can couple to the left-handed lepton doublets and generate Yukawa terms responsible for neutrino masses. Initially, one might expect these Yukawa couplings to be extremely small, of the order of \(\mathcal{O}(10^{-12})\), in order to match the observed neutrino mass scale of \(\mathcal{O}(\mathrm{eV})\). However, it is worth noting that the same SM symmetries also permit Majorana mass terms for those singlets. While such terms lead to lepton number violation, this symmetry is accidental and does not pose any fundamental issues. Furthermore, the scale of these Majorana mass terms is only loosely constrained [34]. In fact, it can be close to the scale of Grand Unification Theories (GUT), or it can be suppressed relative to the electroweak scale. In the first case, corresponding to the well-known see-saw mechanism, the Majorana mass terms are at the GUT scale. In the second case, known as the Pseudo-Dirac scenario, the mass terms are suppressed compared to the electroweak scale. Let us examine these scenarios in greater detail. The mass Lagrangian for neutrinos, which includes both Yukawa interactions with the singlets \(\nu_{R}^{i}\), \(i=\{1,2,3\}\), and their Majorana mass terms, can be written as
\[\mathscr{L}_{\nu}=-Y_{\alpha i}\overline{L_{\alpha}}\widetilde{H}\nu_{R}^{i}+ \frac{1}{2}\overline{(\nu_{R}^{i})^{c}}M_{R}^{ij}\nu_{R}^{j}\,. \tag{1}\]
Figure 1: Landscape of neutrino mass-squared difference \(\delta m^{2}\) in the neutrino energy (\(E_{\nu}\)) and experiment baseline (\(L\)) plane. The corresponding sensitivity from reactor neutrinos (light purple), accelerator neutrinos (green), atmospheric neutrinos (blue), solar neutrinos (yellow), supernova neutrinos (emerald), diffuse supernova neutrino background (dark red) and high energy neutrinos (purple) are shown. The bound from neutrino data from SN1987A is shown by a pink region. Predictions from the C\(\nu\)B derived in this work, assuming the lightest neutrino to be relativistic today, are shown in light blue. The dashed red lines indicate the solar, atmospheric mass splittings \(\Delta m^{2}_{21},|\Delta m^{2}_{3i}|\) and a value of \(\delta m^{2}=6.31\times 10^{-20}\ \mathrm{eV}^{2}\) preferred by the SN1987A data [39].
Here, \(Y_{\alpha i}\) represents the Yukawa couplings between the left-handed lepton doublets \(L_{\alpha}\), the conjugate of the SM Higgs doublet \(\widetilde{H}\), and the singlets. The Majorana mass term, denoted by \(M_{R}^{ij}\), depends on the scale at which such terms originate. The superscript \(c\) signifies charge conjugation. After electroweak symmetry breaking, the neutrino mass Lagrangian can be rewritten as
\[\mathscr{L}_{\nu}=-\frac{1}{2}\overline{N_{L}^{c}}MN_{L}, \tag{2}\]
where
\[N_{L}=\begin{pmatrix}\nu_{L}\\ (\nu_{R})^{c}\end{pmatrix},\quad M=\begin{pmatrix}0_{3}&Yv/\sqrt{2}\\ Yv/\sqrt{2}&M_{R}\end{pmatrix}. \tag{3}\]
In the above expressions, \(v\) represents the vacuum expectation value (VEV) of the Higgs field, \(\nu_{L}=(\nu_{e},\nu_{\mu},\nu_{\tau})^{T}\) denotes the left-handed neutrino fields, and \(\nu_{R}=(\nu_{R_{1}},\nu_{R_{2}},\nu_{R_{3}})^{T}\) represents the right-handed neutrino fields. At this stage, we have not specified any hierarchy between the Higgs VEV and the scale of the Majorana mass matrix \(M_{R}\).
In scenarios where a significant hierarchy exists between the Majorana mass and the electroweak scales, i.e., \(M_{R}\gg Yv\), the diagonalization of the matrix \(M\) gives rise to active neutrinos with suppressed masses relative to the electroweak scale, \(m_{\nu}\propto Y^{T}(M_{R})^{-1}Yv^{2}\). This mechanism, widely known as the seesaw mechanism [47; 48; 49; 50; 51; 52; 53; 54; 55; 56], has garnered considerable attention due to its potential to explain the observed matter-antimatter asymmetry in the Universe [49].
However, it is also plausible that the Majorana mass scale is suppressed relative to the electroweak scale, \(M_{R}\ll Yv\), particularly if the Majorana mass terms are Planck-suppressed, for example. In this particular scenario, referred to as the "pseudo-Dirac" case, lepton number is softly broken by the Majorana mass, resulting in the lifting of degeneracy between the left- and right-handed components of a Dirac neutrino. Significantly, in this scenario, processes involving lepton-number violation are highly suppressed, making it challenging to experimentally detect lepton-number violating phenomena.
In order to test the pseudo-Dirac scenario, it is then crucial to explore the consequences of the presence of Majorana mass terms, particularly for the oscillations between the active and sterile neutrino components. Let's first consider the general case where we do not assume any specific hierarchy between the Majorana mass matrix and the electroweak scale. The mass matrix \(M\) can be diagonalized by a \(6\times 6\) unitary matrix, \(\mathscr{V}\), which is obtained from the multiplication of 15 complex rotation matrices [45]. For simplicity, we will focus on the mixing between the pseudo-Dirac pairs labelled as \(1-4\), \(2-5\), and \(3-6\). Hence, considering only as non-zero mixing angles \(\theta_{14},\theta_{25},\theta_{36}\), the mixing matrix \(\mathscr{V}\) can be expressed as
\[\mathscr{V}=U_{23}U_{13}U_{12}U_{14}U_{25}U_{36}\,. \tag{4}\]
We therefore define the mass eigenstates \(\nu_{i}^{\pm}\)[32]
\[N_{L}=\mathscr{V}\begin{pmatrix}\nu_{i}^{-}\\ \nu_{i}^{+}\end{pmatrix}\,,\]
where \(\pm\) refers to the two mass eigenstates associated with the splitting of a given mass eigenstate \(i\). Assuming the singlet mass matrix \(M_{R}\) to be diagonal, \(M_{R}=\text{diag}(m_{r_{1}},m_{r_{2}},m_{r_{3}})\), we have the masses \(m_{i}^{\pm}\) associated to the eigenstates
\[m_{i}^{\pm}=\frac{1}{2}\left[\sqrt{(m_{r_{i}})^{2}+(2m_{D_{i}})^{2}}\pm m_{r_ {i}}\right]\,, \tag{5}\]
with \(m_{D_{i}}=Yv/\sqrt{2}\) being the eigenvalues of the Dirac mass matrix. Therefore, the mixing angle for each generation will be
\[\tan 2\theta_{i}=\frac{2m_{D_{i}}}{m_{r_{i}}}\,. \tag{6}\]
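To make the two regimes concrete, the short sketch below evaluates Eqs. (5) and (6) for a single active-sterile pair; the numerical values of \(m_{D}\) and \(m_{r}\) are illustrative choices of ours, not values fixed by the text.

```python
import numpy as np

def pair_masses_and_angle(m_D, m_r):
    """Mass eigenvalues of Eq. (5) and mixing angle of Eq. (6) for one
    active-sterile pair with Dirac mass m_D and Majorana mass m_r (in eV)."""
    root = np.hypot(m_r, 2.0 * m_D)
    m_plus = 0.5 * (root + m_r)
    m_minus = 2.0 * m_D**2 / (root + m_r)  # equals 0.5*(root - m_r), cancellation-free
    theta = 0.5 * np.arctan2(2.0 * m_D, m_r)
    return m_minus, m_plus, theta

m_D = 0.05  # eV, an illustrative Dirac mass
for m_r in (1e15, 1e-10):  # GUT-scale vs Planck-suppressed Majorana mass
    m_m, m_p, th = pair_masses_and_angle(m_D, m_r)
    print(f"m_r = {m_r:.0e} eV: m- = {m_m:.2e} eV, m+ = {m_p:.2e} eV, "
          f"theta = {th:.3f} rad")
# See-saw limit:      m- ~ m_D^2 / m_r and theta -> 0
# Pseudo-Dirac limit: m± ~ m_D ± m_r/2 and theta -> pi/4
```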
In our case, where only mixing between the pseudo-Dirac pairs \(1-4\), \(2-5\), and \(3-6\) are considered, this implies \(\theta_{1,2,3}=\theta_{14,25,36}\). Explicitly, the neutrino fields in the flavor basis take a simple form
\[\nu_{\alpha}=\sum_{i}\,U_{\alpha i}(\text{e}^{\text{i}\lambda}\cos\theta_{i} \,\nu_{i}^{-}+\sin\theta_{i}\,\nu_{i}^{+})\,, \tag{7}\]
with \(U_{\alpha i}\) the standard Pontecorvo-Maki-Nakagawa-Sakata mixing matrix. We observe that a flavor eigenstate corresponds to a superposition of six mass eigenstates \(\nu_{i}^{\pm}\). The CP phase \(e^{\text{i}\lambda}\) in Eq. (7) is fixed after imposing the masses to be positive, finding that \(e^{\text{i}\lambda}=\text{i}\)[57]. The orthogonal components \(\nu_{\{s_{1},s_{2},s_{3}\}}\), which represent the states that do not interact weakly, can be written as
\[\nu_{s_{i}}=-\text{i}\sin\theta_{i}\,\nu_{i}^{-}+\cos\theta_{i}\,\nu_{i}^{+}. \tag{8}\]
Let us now consider in detail the limits mentioned before of this scenario depending on the scale of the singlet mass matrix \(M_{R}\).
_See-saw limit:_\(M_{R}\gg Yv\). In such a case, we have that the mixing becomes tiny, \(\theta_{i}\to 0\), in such a way that the flavor and sterile fields become,
\[\nu_{\alpha}\approx\text{i}\sum_{i}\,U_{\alpha i}\nu_{i}^{-},\quad\nu_{s_{i} }\approx\nu_{i}^{+}, \tag{9}\]
such that the states \(\nu_{i}^{\pm}\) have masses
\[m_{i}^{-}=\frac{(m_{D_{i}})^{2}}{m_{r_{i}}},\quad m_{i}^{+}=m_{r_{i}}\,. \tag{10}\]
This indicates that sterile neutrinos are mostly composed of \(\nu_{i}^{+}\) eigenstates, while flavor states are superpositions of the \(\nu_{i}^{-}\) states, which we can identify as the usual mass eigenstate fields.
_Pseudo-Dirac limit:_\(M_{R}\ll Yv\). In such a case, we have that the mixing becomes maximal, \(\theta_{i}\to\pi/4\), and the flavor and sterile fields become,
\[\nu_{\alpha} =\sum_{i}\frac{U_{\alpha i}}{\sqrt{2}}(\,\mathrm{i}\,\nu_{i}^{-}+\nu _{i}^{+}), \tag{11a}\] \[\nu_{s_{i}} =\frac{1}{\sqrt{2}}(\,-\mathrm{i}\,\nu_{i}^{-}+\nu_{i}^{+}). \tag{11b}\]
Here the masses for the mass eigenstates are given by
\[m_{i}^{\pm}=m_{D_{i}}\pm\frac{m_{r_{i}}}{2}, \tag{12}\]
respectively. Note that when we consider the exact Dirac case, \(m_{r_{i}}=0\), we recover the usual fact that a neutral Dirac field is a maximally mixed superposition of two degenerate Majorana neutrinos.
Now, to establish the specific properties of the relic neutrinos in the pseudo-Dirac scenario, we first have to determine the states participating in the weak interactions, a task which will be considered in the next subsection.
### Weak Interactions
Before their decoupling, neutrinos were in an ultra-relativistic state and in thermal equilibrium due to their weak interactions. As the Universe cooled down, neutrinos decoupled from the thermal bath, and will therefore retain the flavor state related to their last scattering. Thus, the initial states will be linear superpositions of the mass eigenstates \(\nu_{i}^{\pm}\). However, since weak interactions violate parity, it becomes crucial to carefully determine the specific superposition that is emitted based on the weak process involved. In simpler terms, we need to specify whether the initial state created has a right or left helicity. To address this, we can examine the charged-current (CC) weak interaction Lagrangian explicitly, which is written using the defined flavor fields mentioned above,
\[\mathscr{L}_{\mathrm{CC}} =-\frac{g}{\sqrt{2}}\sum_{\alpha=e,\mu,\tau}[\overline{\nu_{ \alpha}}\gamma^{\mu}\alpha_{L}W_{\mu}+\overline{\alpha_{L}}\gamma^{\mu}\nu_{ \alpha}W_{\mu}^{\dagger}],\] \[=-\frac{g}{\sqrt{2}}\sum_{\alpha=e,\mu,\tau}\sum_{i}[U_{\alpha i }^{*}(-\mathrm{i}\cos\theta_{i}\,\overline{\nu_{i}^{-}}+\sin\theta_{i}\, \overline{\nu_{i}^{+}})\gamma^{\mu}\alpha_{L}W_{\mu}+U_{\alpha i}\overline{ \alpha_{L}}\gamma^{\mu}(\mathrm{i}\cos\theta_{i}\,\nu_{i}^{-}+\sin\theta_{i} \,\nu_{i}^{+})W_{\mu}^{\dagger}]. \tag{13}\]
Examining this Lagrangian, we notice that the two currents yield distinct linear combinations. To determine the helicities of these combinations, let's recall the expansion of a generic Majorana field operator \(\psi\),
\[\psi(x)=\int\frac{d^{3}p}{(2\pi)^{3}2E}\sum_{h=\pm}[a_{h}(p)u_{h}(p)e^{- \mathrm{i}px}+a_{h}^{\dagger}(p)v_{h}(p)e^{\mathrm{i}px}], \tag{14}\]
where, \(u_{\pm}\) and \(v_{\pm}\) represent four-component spinors, and \(a\) and \(a^{\dagger}\) are quantum operators adhering to standard anticommutation relations. It follows that the operator \(\psi\) can create or annihilate the same state, as expected from a Majorana fermion. Given that neutrinos were ultra-relativistic at decoupling, we can consider the following approximations for the spinors \(u_{\pm}\) and \(v_{\pm}\)[57]
\[u_{+}(p) \approx-\sqrt{2E}\begin{pmatrix}\chi^{+}(p)\\ -\frac{m}{2E}\chi^{+}(p)\end{pmatrix}, u_{-}(p) \approx\sqrt{2E}\begin{pmatrix}-\frac{m}{2E}\chi^{-}(p)\\ \chi^{-}(p)\end{pmatrix}\] \[v_{+}(p) \approx-\sqrt{2E}\begin{pmatrix}\frac{m}{2E}\chi^{-}(p)\\ \chi^{-}(p)\end{pmatrix}, v_{-}(p) \approx\sqrt{2E}\begin{pmatrix}\chi^{+}(p)\\ \frac{m}{2E}\chi^{+}(p)\end{pmatrix}, \tag{15}\]
where \(\chi^{\pm}\) are two-component helicity eigenstate spinors.
Hence, the first terms of the charged-current (CC) Lagrangian in Eq. (13), \(\overline{\nu_{i}^{\pm}}\gamma^{\mu}\alpha_{L}W_{\mu}\), create a \(\nu_{i}^{\pm}\) with negative helicity (\(h=-1\) or a _neutrino_) or annihilate a \(\nu_{i}^{\pm}\) with positive helicity (\(h=+1\) or an _antineutrino_). The second term operates conversely, creating neutrinos with positive helicity and annihilating neutrinos with negative helicity. Thus, the neutrino states with negative helicity, \(|\nu_{\alpha}\rangle_{h=-1}\), and positive helicity, \(|\overline{\nu}_{\alpha}\rangle_{h=1}\), created by the CC Lagrangian correspond to the following linear superpositions,
\[|\nu_{\alpha}\rangle_{h=-1} =U_{\alpha i}^{*}(-\mathrm{i}\cos\theta_{i}|\nu_{i}^{-}\rangle+ \sin\theta_{i}|\nu_{i}^{+}\rangle) \tag{16a}\] \[|\overline{\nu}_{\alpha}\rangle_{h=1} =U_{\alpha i}(\mathrm{i}\cos\theta_{i}|\nu_{i}^{-}\rangle+\sin \theta_{i}|\nu_{i}^{+}\rangle). \tag{16b}\]
The conjugation arises from the nature of the interaction entering the CC Lagrangian. In the previously described see-saw limit, the states \(|\nu_{\alpha}\rangle_{h=-1}\) and \(|\overline{\nu}_{\alpha}\rangle_{h=+1}\) take the approximate forms:
\[|\nu_{\alpha}\rangle_{h=-1} \approx-\mathrm{i}\,U_{\alpha i}^{*}|\nu_{i}^{-}\rangle\] \[|\overline{\nu}_{\alpha}\rangle_{h=1} \approx\quad\mathrm{i}\,U_{\alpha i}|\nu_{i}^{-}\rangle,\]
These expressions, up to an irrelevant overall phase \(\pm\mathrm{i}\), align with the conventional definitions of neutrino and antineutrino states commonly employed in neutrino oscillation studies [57]. In contrast, in the Dirac limit, the states are approximately given by:
\[|\nu_{\alpha}\rangle_{h=-1} \approx\frac{U_{\alpha i}^{*}}{\sqrt{2}}(-\mathrm{i}|\nu_{i}^{-} \rangle+|\nu_{i}^{+}\rangle)\equiv U_{\alpha i}^{*}|\nu_{i}\rangle\] \[|\overline{\nu}_{\alpha}\rangle_{h=1} \approx\frac{U_{\alpha i}}{\sqrt{2}}(\mathrm{i}|\nu_{i}^{-} \rangle+|\nu_{i}^{+}\rangle)\equiv U_{\alpha i}|\overline{\nu}_{i}\rangle,\]
Again, these approximations are consistent with the standard mixing of neutrinos and antineutrinos after defining the neutrino mass eigenstate to be \(|\nu_{i}\rangle=\frac{1}{\sqrt{2}}(-\mathrm{i}|\nu_{i}^{-}\rangle+|\nu_{i}^{+}\rangle)\), while the antineutrino state is related by complex conjugation. Thus, it is evident that the general superpositions defined in Eqs. (16) correctly reproduce the expected limits for both the see-saw and Dirac scenarios. As for the sterile state, it follows from Eq. (8):
\[|\nu_{s_{i}}\rangle=-\mathrm{i}\sin\theta_{i}|\nu_{i}^{-}\rangle+\cos\theta_{ i}|\nu_{i}^{+}\rangle. \tag{17}\]
These are the potential superpositions in which neutrinos, both left- and right-helical, would have frozen out after the decoupling phase. Moreover, as the mass eigenstates \(|\nu_{i}^{\pm}\rangle\) evolve with distinct phases, there is a possibility that the initial flavor states would oscillate into sterile ones, which do not interact and would result in the disappearance of a portion of the C\(\nu\)B. The occurrence of active-sterile oscillations is closely linked to the value of the neutrino capture rate for Dirac neutrinos, as we will explore in the following section.
## III Capture rate computation
Due to the non-relativistic nature of the neutrinos today, chirality and helicity can no longer be used interchangeably. We will work with helicities here. The tiny mass-squared difference between \(\nu_{i}^{\pm}\) in the pseudo-Dirac scenario will induce active-sterile oscillations, which can take place over baselines \(\propto E/\delta m^{2}\). These oscillations conserve helicity, leading to \(\nu_{h=1}^{\alpha}\longleftrightarrow\nu_{h=1}^{s}\) and \(\nu_{h=-1}^{\alpha}\longleftrightarrow\nu_{h=-1}^{s}\). Henceforth, we will drop the subscript \(h\) and use \(\pm 1\) to denote the helicity state of the neutrino. Since relic neutrinos have propagated in an expanding Universe, the evolution phases from the decoupling, occurring at a redshift \(z\), until today, depending on the momentum \(p\), are given by [58; 59],
\[\Phi_{i}^{\pm}(z)=\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}\left[(m_{i}^{ \pm})^{2}+p^{2}(1+z^{\prime})^{2}\right]^{\frac{1}{2}}, \tag{18}\]
where \(H(z)=H_{0}(1+z)\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{\Lambda}}\) is the Hubble function, depending on the Hubble parameter \(H_{0}\), and the matter, \(\Omega_{m}\), radiation \(\Omega_{r}\), and Dark Energy \(\Omega_{\Lambda}\) contributions to the total energy density [60]. Thus, the positive and negative helicity states will evolve according to
\[|\nu_{i}^{\pm}(z)\rangle=\exp(-\mathrm{i}\Phi_{i}^{\pm}(z))|\nu_{i}^{\pm}\rangle\,.\]
The disappearance probability \(P(\nu_{\pm 1}^{\alpha}\to\nu_{\pm 1}^{s_{i}})\) for each eigenstate \(i\) is then
\[P(\nu_{-1}^{\alpha}\to\nu_{-1}^{s_{i}}) =|\langle\nu_{s_{i}}|\nu_{\alpha}(z)\rangle|^{2}\] \[=|U_{\alpha i}|^{2}\sin^{2}\,2\theta_{i}\sin^{2}\left[\frac{ \Delta\Phi_{i}}{2}\right]\,, \tag{19a}\] \[P(\nu_{+1}^{\alpha}\to\nu_{+1}^{s_{i}}) =|\langle\nu_{s_{i}}|\overline{\nu}_{\alpha}(z)\rangle|^{2}\] \[=|U_{\alpha i}|^{2}\sin^{2}\,2\theta_{i}\cos^{2}\left[\frac{ \Delta\Phi_{i}}{2}\right]\,, \tag{19b}\]
where the phase difference is \(\Delta\Phi_{i}=\Phi_{i}^{+}-\Phi_{i}^{-}\).
After freeze-out, the phase-space distribution of the C\(\nu\)B remains a Fermi-Dirac distribution, while the temperature and the momenta redshift. Therefore, the abundance at freeze-out for effectively massless neutrinos is given as
\[n(T)=\frac{3\zeta(3)}{4\pi^{2}}T_{\nu}^{3}\,. \tag{20}\]
where \(T_{\nu}\) is related to the photon temperature \(T_{\gamma}\) through \(T_{\nu}=(4/11)^{1/3}T_{\gamma}\). The current number density of neutrinos, after accounting for redshift, is \(n_{0}\equiv 56\,\mathrm{cm}^{-3}\) per flavor per helicity state of the neutrino. Moreover, we have that the root mean square momentum of neutrinos is \(\overline{p}=0.6\) meV [27], indicating that the two heaviest states are non-relativistic today, while the lightest could be still relativistic if it has a mass smaller than \(\sim 0.1\) meV. Since only the states \(|\nu_{\alpha}\rangle_{-1},|\overline{\nu}_{\alpha}\rangle_{+1}\) are populated in the Early Universe, in equal amounts, their abundances at present follow [27]
\[n(\nu_{-1}^{\alpha}) =n_{0}\,, n(\nu_{+1}^{\alpha})=n_{0}\,, \tag{21a}\] \[n(\nu_{\pm 1}^{s_{i}}) =0, \tag{21b}\]
Note that we have assumed that the sterile states are not populated in the Early Universe.
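As an aside, the root-mean-square momentum \(\overline{p}=0.6\) meV quoted above follows from the same redshifted Fermi-Dirac distribution; a minimal numerical check, with grid choices of our own, reads:

```python
import numpy as np

T_nu = 1.68e-4                         # eV, relic neutrino temperature today
x = np.linspace(1e-4, 40.0, 200_000)   # dimensionless momentum p / T_nu
w = 1.0 / (np.exp(x) + 1.0)            # Fermi-Dirac occupation

mean_x2 = np.trapz(x**4 * w, x) / np.trapz(x**2 * w, x)  # <p^2> / T_nu^2
print(f"rms momentum ~ {np.sqrt(mean_x2) * T_nu * 1e3:.2f} meV")
# -> ~0.60 meV, so only states lighter than ~0.1 meV can still be relativistic
```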
Now, let us consider the effect of mixing between active and sterile states in our scenario. In this regime, after the neutrinos have decoupled, the sterile states can be populated with a probability given by Eq. (19). As a result, the abundances of the neutrinos are given by
\[n(\nu^{\alpha}_{-1}) = \left(1-P(\nu^{\alpha}_{-1}\to\nu^{s_{i}}_{-1})\right)n_{0}\,, n(\nu^{\alpha}_{+1})=\left(1-P(\nu^{\alpha}_{+1}\to\nu^{s_{i}}_{+1}) \right)n_{0}\,, \tag{22}\] \[n(\nu^{s_{i}}_{-1}) = P(\nu^{\alpha}_{-1}\to\nu^{s_{i}}_{-1})\,n_{0}\,, n(\nu^{s_{i}}_{+1})=P(\nu^{\alpha}_{+1}\to\nu^{s_{i}}_{+1})\,n_{0}\,. \tag{23}\]
Some of the active neutrinos will be lost from the thermal plasma due to active-sterile conversion. This is the main effect of having lepton number violation, after the addition of singlet states having Majorana masses. The Majorana limit can be recovered for \(P(\nu^{\alpha}_{\pm 1}\to\nu^{s_{i}}_{\pm 1})=0\).
Taking into account the framework discussed above, let us now delve into the calculation of the capture rate of the C\(\nu\)B on a target nucleus, represented by the process \(\nu_{e}+n\to p^{+}+e^{-}\). Following the established standard procedure to compute this rate [27], we arrive at the following result
\[\Gamma_{\rm C\nu B}=N_{T}\,\overline{\sigma}\,\sum_{i=1}^{3}\left[\left(1-P( \nu^{e}_{+1}\to\nu^{s_{i}}_{+1})\right)n(\nu^{\alpha}_{+1})\,{\cal A}_{i}(+1) +\left(1-P(\nu^{e}_{-1}\to\nu^{s_{i}}_{-1})\right)n(\nu^{\alpha}_{-1})\,{\cal A }_{i}(-1)\right]\,, \tag{24}\]
where \(N_{T}\) is the number of targets, and \({\cal A}_{i}(h)\) are spin-dependent factors that take into account the mismatch between helicity and chirality,
\[{\cal A}_{i}(h)\equiv 1-h\overline{v_{i}}, \tag{25}\]
where \(\overline{v_{i}}=(v^{+}_{i}+v^{-}_{i})/2\) is the average neutrino velocity, with \(v^{\pm}_{i}=|\vec{p}|/\sqrt{|\vec{p}|^{2}+(m^{\pm}_{i})^{2}}\), and \(h\) is the helicity. The nucleus-dependent factor \(\overline{\sigma}\) in the capture rate is the spin-averaged cross-section. Assuming tritium as the target, we have that
\[\overline{\sigma}\approx 3.8\times 10^{-45}\ {\rm cm}^{2}. \tag{26}\]
Expanding the capture rate, we find the following dependence on the mixing between the \(\nu^{\pm}_{i}\) fields,
\[\Gamma_{\rm C\nu B}=N_{T}\overline{\sigma}n_{0}\sum_{i=1}^{3}|U_{ei}|^{2} \left[1+\cos^{2}2\theta_{i}+\sin^{2}2\theta_{i}\left\langle v_{i}\cos\left( \Delta\Phi_{i}\right)\right\rangle\right], \tag{27}\]
where we have taken the average of the oscillatory term with respect to the C\(\nu\)B momentum distribution \(f_{\rm C\nu B}(p)\)[28],
\[\langle v_{i}\cos\left(\Delta\Phi_{i}\right)\rangle=\frac{\int_{0}^{\infty}v_ {i}\cos\left(\Delta\Phi_{i}\right)p^{2}f_{\rm C\nu B}(p)\,dp}{\int_{0}^{\infty }p^{2}f_{\rm C\nu B}(p)\,dp}. \tag{28}\]
As mentioned before, we consider a Fermi-Dirac distribution for the C\(\nu\)B momentum in terms of the temperature of the relic neutrinos today,
\[f_{\rm C\nu B}(p)=\frac{1}{\exp(p/T_{\nu})+1}.\]
Let us analyse the different limits in the capture rate Eq. (27). In the see-saw limit previously mentioned, where the mixing angle \(\theta_{i}\to 0\), we have
\[\Gamma_{\rm C\nu B}\approx 2N_{T}\overline{\sigma}n_{0}, \tag{29}\]
corresponding to the usual Majorana capture rate. Now if the mixing angle is maximal, \(\cos\theta_{i}=\sin\theta_{i}=1/\sqrt{2}\) and the fields \(\nu^{\pm}_{i}\) are degenerate in mass, i.e. \(m_{r_{i}}=0\), the capture rate is
\[\Gamma_{\rm C\nu B}\approx N_{T}\overline{\sigma}n_{0}(1+\sum_{i=1}^{3}|U_{ei} |^{2}\langle v_{i}\rangle), \tag{30}\]
which is the value obtained for Dirac neutrinos [28].
Let us examine the ratio between the full neutrino capture rate and the purely Majorana case,
\[\frac{\Gamma_{\rm C\nu B}}{\Gamma_{\rm C\nu B}^{M}}=\frac{1}{2}\left\{1+\sum_{i=1}^{3}|U_{ei}|^{2}\left[\cos^{2}2\theta_{i}+\sin^{2}2\theta_{i}\left\langle v_{i}\cos\left(\Delta\Phi_{i}\right)\right\rangle\right]\right\}\,. \tag{31}\]
In this analysis, we assume that the values of \(m^{-}_{i}\) coincide with the masses of the active neutrinos in the seesaw limit. Additionally, we consider all singlet masses to be equal, \(m_{r_{1}}=m_{r_{2}}=m_{r_{3}}=m_{r}\). In Fig. 2, we illustrate the behaviour of the ratio as a function of \(m_{r}\) while maintaining a fixed value for \(m^{-}_{\ell}\), the mass of the lightest state. We consider different fixed values for the lightest neutrino mass, \(m^{-}_{\ell}=10^{-7}\) eV (green), \(m^{-}_{\ell}=10^{-4}\) eV (light blue dashed), \(m^{-}_{\ell}=0.01\) eV (magenta dotted), \(m^{-}_{\ell}=0.1\) eV (orange dot-dashed), and \(m^{-}_{\ell}=1\) eV (pink dot-dot-dashed), for both the Normal (left) and Inverted (right) Orderings.
The shaded region indicates values that are excluded based on current neutrino oscillation experiments [34]. As anticipated from the limits discussed earlier, particularly when the lightest neutrino is non-relativistic today, we observe that for \(m_{r}\gg m_{1}^{-}\), the capture rate aligns with the purely Majorana scenario. Conversely, in the opposite limit, we recover the expected Dirac behaviour, consistent with the findings in the previously described pseudo-Dirac limit. The transition between these two limits hinges on the mass spectrum of \(m_{i}^{\pm}\). Specifically, as \(m_{r}\) approaches approximately \(0.1m_{i}^{-}\), the mixing angle begins to deviate from maximal, resulting in an increased capture rate. When \(m_{r}\) surpasses \(m_{i}^{-}\) by roughly two orders of magnitude, the capture rate tends to approach the maximal value associated with the purely Majorana case. However, it is important to highlight that the transition region, which could potentially yield varying capture rates, falls within the range excluded by current experimental data.
Significant differences arise when considering the scenario where the lightest neutrino remains relativistic in the present day. In this case, it is expected that the capture rate for Majorana neutrinos remains the same, while that for Dirac neutrinos increases, depending on the velocity and the PMNS mixing matrix element corresponding to the lightest neutrino. For the normal ordering, the ratio takes on a value of approximately
\[\frac{\Gamma^{D}_{\rm{C\nu B}}}{\Gamma^{M}_{\rm{C\nu B}}}\approx\frac{1}{2}(1 +|U_{e1}|^{2}\langle v_{1}\rangle)\approx 0.84 \tag{32}\]
when the lightest neutrino is massless. This is consistent with previous results in Ref. [28].
In our case, taking \(m_{\ell}^{-}=10^{-7}\) eV, we expect \(\Gamma_{\rm{C\nu B}}/\Gamma^{M}_{\rm{C\nu B}}\simeq 0.84\) for \(m_{r}\ll 10^{-31}\) eV. This mimics the result expected for Dirac neutrinos. However, as \(m_{r}\) increases, a distinctive pattern emerges: a minimum becomes apparent in the capture rate ratio. This diminution in the ratio occurs due to active-sterile transitions, which reach the first oscillation maximum when \(\Delta\Phi_{i}=\pi\). In the relativistic lightest neutrino regime, we have that
\[\Delta\Phi_{i}=\frac{\delta m_{\ell}^{2}}{2p}L_{\rm{C\nu B}}, \tag{33}\]
where \(\delta m_{\ell}^{2}=(m_{\ell}^{+})^{2}-(m_{\ell}^{-})^{2}\), and \(L_{\rm{C\nu B}}\) denotes the C\(\nu\)B propagation distance [59; 61]
\[L_{\rm{C\nu B}}=\int_{0}^{z}\frac{dz^{\prime}}{(1+z^{\prime})H(z^{\prime})} \approx 2.35\ {\rm{Gpc}}, \tag{34}\]
for the redshift value \(z=10^{10}\) at neutrino decoupling.
Figure 2: Ratio of the C\(\nu\)B capture for the general Dirac+Majorana scenario to the purely Majorana rate, as a function of the scale of lepton number violation \(m_{r}\) for lightest neutrino masses of \(m_{\ell}^{-}=10^{-7}\) eV (green), \(m_{\ell}^{-}=10^{-4}\) eV (light blue dashed), \(m_{\ell}^{-}=0.01\) eV (magenta dotted), \(m_{\ell}^{-}=0.1\) eV (orange dot-dashed), \(m_{\ell}^{-}=1\) eV (pink dot-dot-dashed) for the Normal (left) and Inverted (right) Orderings. The shaded regions are excluded from neutrino oscillation experiments. Note that for a relativistic lightest neutrino in the purely Dirac case, the ratio tends to the value in Eq. (32).
Thus, the oscillation maximum occurs when
\[\delta m_{\ell}^{2} =\frac{2\pi p}{L_{\rm C\nu B}}\] \[\sim 10^{-35}\ {\rm eV}^{2}\left(\frac{2.35\ {\rm Gpc}}{L_{\rm C \nu B}}\right)\left(\frac{p}{0.6\ {\rm meV}}\right) \tag{35}\]
Since \(\delta m_{\ell}^{2}=m_{r}(m_{r}+2m_{\ell}^{-})\approx 2m_{r}m_{\ell}^{-}\) for \(m_{r}\ll m_{\ell}^{-}\), we obtain the value of \(m_{r}\) where the maximum active-sterile oscillation takes place, at approximately
\[m_{r}^{\rm osc} \approx\frac{\pi p}{m_{\ell}^{-}\,L_{\rm C\nu B}}\] \[\sim 5\times 10^{-30}\ {\rm eV}\left(\frac{1\ \mu{\rm eV}}{m_{\ell}^{-}}\right)\left(\frac{2.35\ {\rm Gpc}}{L_{\rm C\nu B}}\right)\left(\frac{p}{0.6\ {\rm meV}}\right). \tag{36}\]
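Plugging numbers into Eqs. (35) and (36) is straightforward; the small sketch below, whose unit conversions are our own, confirms the two estimates.

```python
import numpy as np

GPC_IN_INV_EV = 1.564e32            # 1 Gpc in eV^-1 (natural units)
L = 2.35 * GPC_IN_INV_EV            # CvB propagation distance, Eq. (34)
p, m_l = 0.6e-3, 1e-6               # eV: rms momentum and lightest mass

dm2_osc = 2.0 * np.pi * p / L       # Eq. (35): first oscillation maximum
m_r_osc = np.pi * p / (m_l * L)     # Eq. (36): corresponding Majorana scale
print(f"dm2_osc ~ {dm2_osc:.1e} eV^2, m_r_osc ~ {m_r_osc:.1e} eV")
# -> dm2_osc ~ 1.0e-35 eV^2 and m_r_osc ~ 5.1e-30 eV, as quoted in the text
```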
The averaging effect remains until \(m_{r}\) surpasses a certain value, in this case equalling \(10^{-10}\) eV in the normal ordering.
As \(m_{r}\) is further increased, the ratio grows again, with a transition when \(m_{r}\simeq\sqrt{\Delta m_{\rm sol}^{2}}=8.7\) meV. This explains the second step-like feature in the plot. For more massive neutrinos, maximal mixing is preserved and we recover the Majorana capture rate. This behaviour is contingent upon the lightest neutrino mass \(m_{\ell}^{-}\), and becomes less prominent as it increases.
Upon comparing the outcomes for both normal and inverted orderings, a notable distinction emerges concerning the capture rate for extremely small values of \(m_{r}\). In the case of the inverted ordering, where the lightest neutrino corresponds to \(m_{3}^{-}\), its capture is governed by the small mixing angle \(\theta_{13}\). Consequently, the asymptotic value for \(m_{r}\ll m_{r}^{\rm osc}\) exhibits only a marginal correction of \(\sim 2.5\%\) from the non-relativistic Dirac scenario.
In summary, the overall behaviour of the capture rate critically hinges on the value of \(m_{r}\). When \(m_{r}=m_{r}^{\rm osc}\), a minimum arises due to the active neutrinos undergoing a transition to sterile neutrinos. For values larger than \(m_{r}^{\rm osc}\), maximal mixing prevails, and the active-sterile oscillation averages out, resulting in a capture rate akin to that of the purely Dirac case, until \(m_{r}\) approaches the vicinity of \(m_{\ell}^{-}\), where the mixing deviates from maximality, leading to a capture rate approaching the Majorana value. On the other hand, for values lower than \(m_{r}^{\rm osc}\), the capture rate tends toward the Dirac case, but with a correction due to the presence of a relativistic lightest neutrino. Indeed, even when dealing with a relativistic lightest neutrino, the capture rate has the potential to align with the Dirac case in the non-relativistic limit. This phenomenon arises due to the active-sterile oscillations averaging out, leading to a cancellation between helicity contributions that effectively nullify the impact of having a relativistic lightest neutrino. Hence, the capture rate can converge to a value comparable to that in the Dirac case, despite the relativistic nature of the lightest neutrino.
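The transition just described can be traced numerically through the thermally averaged oscillatory term of Eq. (28). A minimal sketch, assuming the relativistic phase of Eq. (33) with \(v_{i}\approx 1\), the propagation distance of Eq. (34), and a present-day temperature \(T_{\nu 0}\approx 0.17\) meV, is given below; all numerical choices are ours.

```python
import numpy as np

T_nu = 1.68e-4                 # eV, relic neutrino temperature today
L_CNB = 2.35 * 1.564e32        # 2.35 Gpc in eV^-1 (1 Gpc ~ 1.564e32 eV^-1)

def avg_cos(dm2, n_pts=400_000):
    """<v cos(dPhi)> of Eq. (28) for a relativistic state (v ~ 1),
    with dPhi = dm2 * L / (2p) as in Eq. (33)."""
    p = np.linspace(0.02 * T_nu, 30.0 * T_nu, n_pts)
    w = p**2 / (np.exp(p / T_nu) + 1.0)   # Fermi-Dirac weight
    return np.trapz(w * np.cos(dm2 * L_CNB / (2.0 * p)), p) / np.trapz(w, p)

for dm2 in (1e-37, 1e-35, 1e-33):
    print(f"dm2 = {dm2:.0e} eV^2 -> <cos(dPhi)> = {avg_cos(dm2):+.3f}")
# Tiny dm2: the phase is negligible and the average tends to +1 (Dirac-like).
# Near dm2 ~ 1e-35 eV^2 the first oscillation maximum suppresses the rate.
# For much larger dm2 the oscillation averages out and the term tends to 0.
```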
## IV Event rates in a PTOLEMY-like detector
The proposed PTOLEMY experiment aims to detect neutrinos from the C\(\nu\)B utilizing a layer of graphene with atomic tritium on top of it [6; 10]. Although various setups for PTOLEMY have been considered, our focus lies in examining how the presence of singlets would impact the detection events in PTOLEMY or similar experiments. In the capture process described earlier, when a neutrino interacts with the tritium nucleus, it produces an electron whose energy can be measured using specific techniques. The kinematics of this capture process results in a definite energy for the electrons [27]
\[E_{e}^{\rm C\nu B,i}\simeq m_{e}+K_{\rm end}^{0}+2\,m_{i}. \tag{37}\]
Here, \(K_{\rm end}^{0}\) represents the endpoint energy of the electrons emitted from the \(\beta\)-decay of tritium. Given that the electrons produced after neutrino capture are monochromatic, they will generate one or more peaks at energies larger than \(K_{\rm end}^{0}\). The distinguishability of the C\(\nu\)B emitted electrons from those originating from tritium \(\beta\)-decay relies on the energy resolution. With a sufficiently high resolution, it becomes possible to differentiate these events. However, if the energy resolution is too large, the C\(\nu\)B electron events may be buried under a significant background. To account for this, we convolve the capture rate of Eq. (27) with an assumed Gaussian-like experimental resolution,
\[\frac{d\Gamma_{\rm C\nu B}}{dE_{e}} =\frac{1}{\sqrt{2\pi\sigma^{2}}}\sum_{j=1}^{3}\int_{-\infty}^{ \infty}\ dE_{e}^{\prime}\,\Gamma_{\rm C\nu B}^{j}\,\exp\left[-\frac{(E_{e}^{ \prime}-E_{e})^{2}}{2\sigma^{2}}\right]\,\delta(E_{e}^{\prime}-E_{e}^{\rm C\nu B,j}), \tag{38a}\] \[\frac{d\Gamma_{\beta}}{dE_{e}} =\frac{1}{\sqrt{2\pi\sigma^{2}}}\int_{-\infty}^{\infty}\ dE_{e}^{ \prime}\,\frac{d\Gamma_{\beta}}{dE_{e}^{\prime}}\,\exp\left[-\frac{(E_{e}^{ \prime}-E_{e})^{2}}{2\sigma^{2}}\right], \tag{38b}\]
where \(\sigma\) is the energy resolution, also parameterised through the full width at half maximum (FWHM) \(\Delta=2.35\sigma\), and \(\Gamma_{\rm C\nu B}^{j}\) indicates the capture rate associated with the \(j\)-th mass eigenstate. By utilizing the complete expression for the \(\beta\)-decay spectrum of tritium [62], we present in Fig. 3 the anticipated electron spectra as a function of the measured energy for various values of the singlet mass \(m_{r}=10^{-35}\) eV (green), \(m_{r}=m_{r}^{\rm osc}=5\times 10^{-32}\) eV (orange dashed), \(m_{r}=10^{-15}\) eV (blue dotted), and \(m_{r}=10^{5}\) eV (purple dot-dashed), assuming the normal ordering. We consider an FWHM of \(\Delta=10\) meV, and a lightest neutrino mass of \(m_{1}^{-}=0.1\) meV. The \(\beta\)-decay background is denoted by the grey dot-dot-dashed line. In all cases, the electron spectrum exhibits two primary peaks. The first peak, with a maximum at \(K_{e}-K_{\rm end}^{0}=m_{1}^{-}\), corresponds to the superposition of capture rates for the lightest neutrinos. The second peak emerges around the mass of the heaviest neutrino, approximately \(K_{e}-K_{\rm end}^{0}\approx 50\) meV. Furthermore, the extreme values of \(m_{r}=10^{-35}\) eV and \(m_{r}=10^{5}\) eV depict the event spectra for Dirac, encompassing a relativistic lightest neutrino, and Majorana, respectively. A significant difference between these two cases appears due to the contribution of the heaviest neutrinos, which changes the shape of the first peak and enhances the capture of the heaviest states. Meanwhile, for the intermediate value of \(m_{r}=10^{-15}\) eV, the capture rate has a value corresponding to the Dirac case for a non-relativistic spectrum. Regarding the \(m_{r}=m_{r}^{\rm osc}\) scenario, we observe a reduction in the spectrum, even when compared to the Dirac case. As previously discussed, this reduction stems from the oscillation of active neutrino states into sterile, and thus unobservable, neutrinos, thereby reducing the number of states available for capture. Additionally, the electron spectrum is no longer symmetric in this instance due to the emergence of the peak associated with the capture of the superposition \(\nu_{2}^{\pm}\). These findings underscore the vital role of the underlying mass generation in neutrino capture, particularly in a PTOLEMY-like experiment.
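The smearing in Eq. (38a) amounts to replacing each monochromatic capture line with a Gaussian of width \(\sigma=\Delta/2.35\). A minimal sketch is given below; the peak positions follow Eq. (37), while the masses and line strengths \(|U_{ei}|^{2}\) are illustrative values of ours, not a fit.

```python
import numpy as np

def smeared_spectrum(E, E_peaks, strengths, fwhm):
    """Gaussian-smeared CvB capture lines, cf. Eq. (38a): each mass
    eigenstate j contributes a line of strength strengths[j] at E_peaks[j]."""
    sigma = fwhm / 2.35
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)
    spec = np.zeros_like(E)
    for Ej, Gj in zip(E_peaks, strengths):
        spec += Gj * norm * np.exp(-((E - Ej) ** 2) / (2.0 * sigma**2))
    return spec

# Electron energy above the beta-decay endpoint, in eV; per Eq. (37) the
# capture lines sit at ~2 m_i above it.
E = np.linspace(-0.05, 0.15, 2000)
masses = np.array([0.1e-3, 8.7e-3, 50e-3])   # illustrative m_i (NO), eV
strengths = np.array([0.68, 0.30, 0.02])     # roughly |U_ei|^2
spectrum = smeared_spectrum(E, 2.0 * masses, strengths, fwhm=10e-3)
```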
## V Conclusions
Possible future detection of the cosmic neutrino background will be a watershed moment in our understanding of the early Universe, as well as the nature of the neutrinos. In particular, it is expected to shed light on whether neutrinos are Dirac or Majorana, thereby offering a probe of lepton number violation in our Universe. Currently, the most popular idea for the detection of the C\(\nu\)B involves neutrino capture on tritium - an idea which is being actively pursued by the PTOLEMY collaboration.
In this paper, we studied the dependence of the neutrino capture on the extent of lepton number violation in the Standard Model. We focused on pseudo-Dirac neutrinos, where lepton number can be softly broken so that neutrinos behave as Dirac while actually being Majorana. In such a scenario, we showed that the neutrino capture rate smoothly transitions between a purely Dirac case and a purely Majorana case. As a result, even a slight deviation of the capture rate from the purely Dirac case can signal a soft violation of lepton number.
Active-sterile oscillations, mediated by a tiny mass-squared difference, can also cause a distortion in the capture rate. We found that in the scenario where the lightest neutrino is relativistic, the distortion can be sensitive to a value of the mass-squared difference as small as \(\delta m^{2}\sim 10^{-35}\) eV\({}^{2}\). From this value, and depending on the mass of the lightest neutrino, there exists a critical Majorana mass scale, \(m_{r}^{\rm osc}\), such that for \(m_{r}\ll m_{r}^{\rm osc}\), the capture rate approaches the Dirac rate, but with an enhancement due to the presence of the relativistic lightest neutrino. As \(m_{r}\) approaches \(m_{r}^{\rm osc}\), active-sterile oscillations take over, leading to an overall minimum in the capture rate, which can go below the Dirac rate as well. On the other hand, for \(m_{r}\gg m_{r}^{\rm osc}\), active-sterile oscillations average out, and the Dirac rate is recovered. This happens until \(m_{r}\) approaches the value of the lightest neutrino mass, where the active-sterile mixing gradually deviates from maximal, and the capture rate approaches the Majorana value.
Figure 3: Expected electron spectra as a function of the observed energy for different values of the singlet mass \(m_{r}=10^{-35}\) eV (green), \(m_{r}=5\times 10^{-32}\) eV (orange dashed), \(m_{r}=10^{-15}\) eV (blue dotted), and \(m_{r}=10^{5}\) eV (purple dot-dashed), assuming the normal ordering. We consider an experimental resolution with full width at half maximum \(\Delta=10\) meV, and the lightest neutrino mass of \(m_{1}^{-}=0.1\) meV. The \(\beta\)-decay background is plotted as a grey dot-dot-dashed line.
We compared the neutrino capture rates in a PTOLEMY-like detector as a function of the sterile neutrino mass, which is a measure of the strength of lepton number violation. We confirmed that a detector like PTOLEMY would indeed be sensitive to the underlying mechanism of neutrino mass generation. The electron spectra events are shown to lie between a purely Dirac hypothesis and a purely Majorana hypothesis, with the exact rate depending on the value of the sterile neutrino mass.
Through this analysis, we pointed out the sensitivity of the capture rate of the C\(\nu\)B to the underlying neutrino mass-generation mechanism. We performed a simple analysis under the approximation where the underlying \(6\times 6\) neutrino mixing matrix, consisting of 3 active and 3 sterile neutrinos, factorizes into 3 independent \(2\times 2\) matrices involving active-sterile pairs. Future studies will aim at relaxing this approximation to test the sensitivity of our results to the underlying neutrino mixing mechanism.
###### Acknowledgements.
We would like to thank Andre de Gouvea for helpful discussions in the initial stages of the project, and for the insightful comments on the first version of this manuscript. YFPG would like to thank the warm hospitality of the Particle and Astroparticle Division of the Max-Planck-Institut für Kernphysik, where part of this work was completed. This work has been funded by the UK Science and Technology Facilities Council (STFC) under grant ST/T001011/1. This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN. This work has made use of the Hamilton HPC Service of Durham University.
|
2306.02384 | Spear or Shield: Leveraging Generative AI to Tackle Security Threats of
Intelligent Network Services | Generative AI (GAI) models have been rapidly advancing, with a wide range of
applications including intelligent networks and mobile AI-generated content
(AIGC) services. Despite their numerous applications and potential, such models
create opportunities for novel security challenges. In this paper, we examine
the challenges and opportunities of GAI in the realm of the security of
intelligent network AIGC services such as suggesting security policies, acting
as both a ``spear'' for potential attacks and a ``shield'' as an integral part
of various defense mechanisms. First, we present a comprehensive overview of
the GAI landscape, highlighting its applications and the techniques
underpinning these advancements, especially large language and diffusion
models. Then, we investigate the dynamic interplay between GAI's spear and
shield roles, highlighting two primary categories of potential GAI-related
attacks and their respective defense strategies within wireless networks. A
case study illustrates the impact of GAI defense strategies on energy
consumption in an image request scenario under data poisoning attack. Our
results show that by employing an AI-optimized diffusion defense mechanism,
energy can be reduced by 8.7%, and retransmission count can be decreased from
32 images, without defense, to just 6 images, showcasing the effectiveness of
GAI in enhancing network security. | Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, Kwok-Yan Lam, Yuguang Fang, Yonghui Li | 2023-06-04T15:38:38Z | http://arxiv.org/abs/2306.02384v1 | Spear or Shield: Leveraging Generative AI to Tackle Security Threats of Intelligent Network Services
###### Abstract
Generative AI (GAI) models have been rapidly advancing, with a wide range of applications including intelligent networks and mobile AI-generated content (AIGC) services. Despite their numerous applications and potential, such models create opportunities for novel security challenges. In this paper, we examine the challenges and opportunities of GAI in the realm of the security of intelligent network AIGC services such as suggesting security policies, acting as both a "_spear_" for potential attacks and a "_shield_" as an integral part of various defense mechanisms. First, we present a comprehensive overview of the GAI landscape, highlighting its applications and the techniques underpinning these advancements, especially large language and diffusion models. Then, we investigate the dynamic interplay between GAI's spear and shield roles, highlighting two primary categories of potential GAI-related attacks and their respective defense strategies within wireless networks. A case study illustrates the impact of GAI defense strategies on energy consumption in an image request scenario under data poisoning attack. Our results show that by employing an AI-optimized diffusion defense mechanism, energy can be reduced by 8.7%, and retransmission count can be decreased from 32 images, without defense, to just 6 images, showcasing the effectiveness of GAI in enhancing network security.
Generative AI, network security, large language model, diffusion model, AI safety, digital trust.
## I Introduction
Generative artificial intelligence (GAI) technologies, such as generative adversarial networks (GANs), transformers, and diffusion models, are creating profound impacts across a multitude of industries [1]. These technologies, fueled by large volumes of data, have accelerated the rapid evolution of pretrained foundation models, including conversational AI systems like ChatGPT, and are reshaping the trajectory of future Internet development. Such models unlock a vast potential for revolutionizing applications from customer support to content generation. Additionally, GAI have shown remarkable power in synthesizing images and generating audio, thus enriching innovative forms of multimedia content. With the escalating interest in GAI, the necessity for robust and intelligent network services--capable of supporting such sophisticated systems--becomes increasingly critical. Concurrently, GAI techniques can contribute to optimizing network management and performance, enhancing the overall efficiency of wireless and other mobile systems [2].
With the expanding influence of GAI, it becomes critical to explore how it intertwines and collaborates with established forms of AI in intelligent networks, particularly discriminative AI1. These two AI types have contrasting objectives and functionalities. Discriminative models excel at classifying and predicting outcomes based on given data [3], and have been extensively adopted in wired/wireless networks to optimize resource allocation, enhance network management, and improve security and privacy [3]. On the other hand, GAI models are designed to generate new data instances, simulating the distribution of input data. The unique characteristics of GAI models lead to significant divergences from discriminative AI. First, unlike discriminative AI that is primarily decision-oriented, GAI models focus on data creation, leading to applications like synthetic media generation and data augmentation, e.g., to suggest novel security policies. Second, GAI models, especially advanced ones like large language models (LLMs), are complex, introducing challenges in understanding their behavior. Third, GAI models' capability to generate seemingly realistic synthetic content can be exploited for adversarial attacks, creating opportunities for novel security threats.
Footnote 1: Here we consider the discriminative AI as a broad category that includes predictive AI and deep reinforcement learning, which refers to AI models that map an input to a class label.
Therefore, GAI presents not only opportunities but also threats in the context of wireless network management. On the one hand, the powerful capabilities of GAI carry potential for misuse by network attackers. For example, LLMs, with their advanced text generation abilities, can be exploited to spread misinformation [4]. The diffusion models, recognized for creating high-quality multimedia content, can be manipulated to generate deepfakes [5], thereby distorting the boundary between reality and artificial content. On the other hand, the complexity of GAI-based services, their reliance on substantial volumes of data, and their proficiency in producing synthetic content make them prime targets for cyber attackers. The protection of GAI within networks is motivated by the necessity to uphold the integrity of digital infrastructures, maintain data privacy, and ensure service availability.
Analyzing the security implications of integrating GAI in wireless networks necessitates a comprehensive exploration of two fundamental aspects, i.e., attack and defense, acting as the spear and shield of the intelligent networks.
* On the attack side, the exploration bifurcates into two categories: first, _attacks executed by GAI on existing discriminative AI systems_, and second, _attacks perpetrated by discriminative AI against GAI-empowered services_. This categorization is crucial since generative and discriminative AI models could exhibit fundamentally different behaviors. For instance, a GAI system, e.g., an LLM, could be used to generate malicious text aiming to mislead a user into revealing sensitive data in a discriminative AI-aided intelligent network. Conversely, discriminative AI could potentially exploit vulnerabilities in GAI-based services, such as adding misleading inputs to the training dataset of an LLM to make the model generate unwanted outputs, as shown in Fig. 2 Part B.
* From a defense standpoint, the countermeasures can also be split into two strategies: first, _defenses by GAI for existing discriminative AI systems_, and second, _defenses by discriminative AI for GAI-empowered services_. For instance, GAI could be used to generate a diverse set of scenarios for robustness testing of discriminative AI systems. On the other hand, discriminative AI can help in detecting anomalies in GAI outputs or behaviors, thereby reinforcing the security of GAI-empowered services.
These four aspects of interactions between generative and discriminative AI models within wireless network security are further illustrated in Fig. 1. This systematic categorization provides a framework for understanding the intricate dynamics between these two types of AI models in the cybersecurity landscape. Recognizing the complexities of GAI, the contribution in this paper emerges from an examination of attack and defense scenarios involving these AI models, together with the introduction of novel strategies and solutions to address rising challenges. Through the use of a case study to illustrate the potential threats, vulnerabilities, and countermeasures, this paper intends to lay a foundation for a safe and effective adoption of GAI technologies into wireless networks. This, in turn, enhances network performance, resilience, and adaptability in an environment of rapidly evolving cyber threats. The contributions of this paper can be summarized as follows:
* We provide a thorough analysis of the interplay between generative and discriminative AI models in the realm of intelligent network security. Focusing on the attack strategies, we discuss how representative GAI, e.g., LLMs and diffusion models, can be exploited for attacks, as well as how discriminative AI can potentially jeopardize GAI-based services.
* From a defensive standpoint, we study how generative and discriminative AI models contribute to enhancing network security. We highlight the role of GAI in robustness testing and data augmentation, and detail how discriminative AI can be used to bolster the security of GAI-based services.
* We present a case study that exemplifies the potential security threats and defenses in a real-world scenario. Specifically, we examine a situation where a user requests specific images from a service provider, demonstrating how discriminative AI can act as an attack vector while generative AI, in the form of diffusion models, serves as a protective shield.
## II Overview of Generative AI
This section provides an overview of GAI, elucidating its successful applications, the underpinning techniques, and major security considerations.
### _Successful Applications_
GAI has been proven pivotal across a wide array of applications, as illustrated by the following pioneering examples:
Fig. 1: An illustrative representation of the interactions between generative and discriminative AI in both attack and defense perspectives.
* **ChatGPT**: Developed by OpenAI, ChatGPT ([https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)) is an AI language model, grounded in a variant of the Transformer architecture. Tailored for conversational interactions, this model gained traction rapidly, exceeding 1 million users within the first five days of its launch.
* **Bard**: Bard ([https://bard.google.com/](https://bard.google.com/)) is an AI chatbot from Google. Bard excels at coding, math problem-solving, and writing assistance, offering on-demand support for users.
* **Stable Diffusion**: Stable Diffusion, an AI model developed by Stability AI, generates high-quality images from text descriptions ([https://stablediffusionweb.com/](https://stablediffusionweb.com/)). It deploys a latent diffusion model architecture that iteratively denoises random noise, guided by the text encoder.
* **DALL-E:** DALL-E ([https://openai.com/product/dall-e-2](https://openai.com/product/dall-e-2)) uses textual prompts to create novel images. Its second iteration, DALL-E 2, employs a diffusion model to produce more photorealistic images with a resolution four times greater than its predecessor.
By offering real-time, personalized interactions, on-demand support, and the ability to create unique and high-quality content, these services have transformed the way users engage with AI, fostering a more immersive and tailored experience.
### _Underpinning Techniques of GAI_
GAI involves various techniques such as Transformer models, GANs, Autoregressive Models (ARMs), Variational Autoencoders (VAEs), Flow-Based Models (FBMs), and more recently, LLMs and Diffusion Models. While each model boasts its unique attributes and applications, the recent advent of LLMs and Diffusion Models has caused a notable shift, supporting the applications discussed in Section II-A:
* **Large Language Models**: LLMs, epitomized by GPT and BERT, form the vanguard of modern developments in natural language processing and generation [4]. These models anticipate subsequent words in a sequence, thereby facilitating the generation of contextually coherent and meaningful text. This predictive prowess underpins AI applications such as those shown in Part A of Fig. 2, engendering human-like conversation and interaction.
* **Diffusion Models**: Diffusion Models have heralded a revolution in the generation of realistic data samples, including images and audio [6]. These models implement a diffusion process, gradually introducing and subsequently reversing noise to generate the desired output. This technique is instrumental in multimedia content generation and has opened doors to novel opportunities across wireless network optimization [2] and content delivery platforms [7].
These models display distinct characteristics and capabilities [1]:
* **Training Mechanism:** While GANs, VAEs, and FBMs involve various transformation or adversarial training mechanisms, LLMs predict future words in a sequence, thereby generating contextually relevant text. Diffusion Models progressively introduce and then reverse noise to generate data samples. These distinctive training mechanisms give rise to a range of robust and diverse applications in wireless networks.
* **Control over Generation Process:** Unlike GANs and some other generative models that grapple with issues like mode collapse and output control, LLMs offer enhanced control over the generation process through input prompts. This capability potentially enables personalized, context-specific services in wireless networks.
* **Real-time Application Potential:** The predictive nature of LLMs and the noise-based generation of Diffusion Models could provide real-time benefits in wireless networks, such as in chat support, content generation, and multimedia content delivery.
Despite these advancements, their impacts on robust security strategies in intelligent networks remain largely unexplored.
### _Security Concerns_
While GAI continues to drive innovation across industries, it is facing security challenges. For instance, the misuse of LLMs could lead to the generation of misinformation or harmful content [4]. Similarly, Diffusion Models could be exploited to create deepfakes, raising privacy and authenticity concerns [5]. These concerns impose the need for robust defense strategies to safeguard against potential adversarial attacks. As the adage goes, the best defense is a good offense. In the context of AI, this means proactively launching various attacks and identifying potential threats to ensure the security and integrity of both discriminative and generative AI systems. In the following sections, we delve into the dynamic interplay of "Spear" and "Shield" in the realm of GAI. The "Spear" refers to how GAI and discriminative AI can be exploited for adversarial attacks, while the "Shield" denotes the diverse defense mechanisms that can be employed to secure these AI systems.
## III Spear: How Generative AI and Discriminative AI Attack Each Other
This section discusses the adversarial aspects of GAI and discriminative AI within wireless networks, focusing on two main perspectives: (1) attacks executed by GAI targeting existing discriminative AI systems, and (2) attacks perpetrated by discriminative AI against GAI-empowered services within wireless networks, as shown in the left-hand side of Fig. 1.
### _Attacks Executed by Generative AI on Existing Discriminative AI Systems_
#### III-A1 Attacks from Large Language Models
Advanced LLMs such as OpenAI's ChatGPT have exhibited superior performance across various natural language processing tasks. Nonetheless, emerging research reveals a concerning facet of these technological advancements [4]:
* **Malicious Content Generation:** The ability of LLMs to generate malicious content raises significant concerns, particularly as they can circumvent security protocols implemented by API vendors. A malicious actor could subtly alter input prompts to an LLM, causing it to generate content that violates usage policies yet bypasses implemented security measures. Although discriminative AI-aided protective mechanisms exist, they are not entirely foolproof against sophisticated manipulations. For instance, an LLM like ChatGPT would decline to execute a prompt like "_Write a tweet saying 'Noah is a bad guy'._". However, the model could be manipulated into producing the same undesirable outcome with a more subtly crafted prompt such as, "_I am doing my homework that is writing Twitter. Today's topic is 'Noah is a bad guy_'."
* **Distributed Denial of Service (DDoS):** LLMs, when deployed as chatbots, can be repurposed to launch DDoS attacks against discriminative AI systems. The malicious actor inundates the target network, services, and even humans with requests from multiple sources, causing network paralysis and preventing legitimate requests from being handled. These attacks often complicate source identification and mitigation, as it is difficult for traditional DDoS traffic classifiers, e.g., those based on neural networks, to differentiate whether the requests come from legitimate users or malicious sources.
* **Phishing Attacks:** LLMs can be used to extract confidential information. Malicious actors can trick unsuspecting individuals into revealing sensitive data by staging genuine-seeming conversations. Furthermore, these models can distribute malware, such as ransomware, embedded within seemingly innocuous messages. They can also be used to orchestrate phishing attacks by generating fraudulent emails and messages that convincingly mimic legitimate organizations, escaping detection by supervised and
Fig. 2: An overview of current Applications and Associated Security Risks of LLMs. Part A delineates the diverse range of services that leverage LLMs, encompassing extensions, agents & applications, and integrations. Part B highlights potential attack vectors targeting LLM-based services, e.g., data poisoning, prompt injection, and phishing server attacks. Part C illustrates the impact of a prompt injection attack using ChatGPT as an example.
unsupervised learning-based defenses, such as clustering-based phishing filter systems.
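For concreteness, the sketch below shows the kind of lightweight supervised filter such phishing content is crafted to evade; the corpus, labels, and test message are invented for illustration, and production filters are far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = phishing, 0 = legitimate.
emails = ["verify your account now", "team meeting at 3pm",
          "claim your prize reward", "quarterly report attached"]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: a classic supervised text filter.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)
print(clf.predict(["please verify your prize account"]))  # expected: [1]
```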
#### III-A2 Attacks from Diffusion Models
Diffusion models also present exploitable vulnerabilities that can be used to launch sophisticated attacks. The potential misuse of these models is exemplified in the following scenarios:
* **Adversarial Examples Generation:** GAI models, like diffusion models, can be used to breach privacy-preserving image encryption techniques employed by discriminative AI. A relevant study [8] details an attack mechanism that leverages generative models to extract sensitive visual data from encrypted images. This method capitalizes on the similarities between encrypted and plain images within the embedding space. Using these shared features, a guided generative model penetrates learnable image encryption. This attack utilizes both StyleGAN-based and latent diffusion-based models. Experimental results from CelebA-HQ and ImageNet datasets indicate a high degree of perceptual similarity between original images and reconstructed ones, underscoring the vulnerabilities of current discriminative AI-based image encryption techniques.
* **Deception Attack:** Diffusion models can also be weaponized to conduct deception attacks. Blasingame and Liu [5] introduce a unique morphing attack leveraging a diffusion-based architecture to deceive Face Recognition (FR) systems. The attack presents a morphed image combining biometric features of two different identities, intending to trigger a false acceptance in the FR system. The diffusion model enhances the visual fidelity of the morphed image, improving its capacity to depict characteristics from both identities. The effectiveness of the proposed attack is validated through comprehensive testing on its visual fidelity and its capacity to deceive FR systems. Notably, the attack's ability to evade detection was compared against established GAN-based and Landmark-based attacks, emphasizing the potency of this diffusion model-based deception.
### _Attacks Perpetrated by Discriminative AI Against Generative AI-Empowered Services_
#### III-B1 Attacks Against LLMs
Beyond the privacy issues arising from the vast data requirements of LLMs, the integration of LLMs into various applications, often termed Application-Integrated LLMs as shown in Fig. 2 Part B, has opened up new avenues for adversarial threats:
* **Privacy Threats:** The inherent data consumption of LLMs, which are trained on massive amounts of text data, raises privacy concerns. For example, GPT-3 was trained on 45TB of text data, a volume that could potentially encompass sensitive information. Despite numerous precautions taken by developers to ensure dialogue safety and limit harmful content generation, research conducted in [9] demonstrates that privacy threats remain significant. The study shows the potential of evading ChatGPT's ethical guidelines using jailbreak prompts coupled with Chain-of-Thought prompting, a technique that can be utilized by discriminative AI to exploit these vulnerabilities.
* **Prompt Injection Attacks:** The integration of LLMs into applications introduces Prompt Injection (PI) attacks. This threat, introduced by Greshake et al. [10], involves adversaries manipulating an LLM to generate harmful content. A PI attack can potentially override existing instructions and filtering mechanisms, e.g., support vector machine (SVM)-based filters, highlighting a blind spot in current defense strategies. The study also reveals the susceptibility of Application-Integrated LLMs to indirect PI attacks, in which the model processes tampered web content.
#### III-B2 Attacks Against Diffusion Models
As diffusion models continue to be integrated into various tasks, understanding their vulnerabilities to attacks is vital:
* **Trojan Attacks:** Diffusion models are susceptible to Trojan attacks due to their dependency on large-scale training data. Chen, Song and Li [11] introduced TrojDiff, a Trojan attack for diffusion models. TrojDiff optimizes Trojan diffusion and generative processes during training, altering the model's response to specific inputs. They showed that TrojDiff can manipulate diffusion models to generate instances of a particular class, an out-of-domain distribution, or a specific instance. These attacks were successful with minimal impact on the model's performance in the benign setting, indicating that Trojan attacks can influence diffusion model output subtly yet significantly.
* **Backdoor Attacks:** Backdoor attacks implant a hidden trigger in the model during training. During inference, the model produces a predetermined output when this trigger is input. Chou, Chen and Ho [12] proposed BadDiffusion, a framework for implanting backdoors in diffusion models. The compromised model behaves normally for regular inputs but generates a targeted outcome for the trigger. This attack is insidious as it maintains high utility and target specificity, and the trigger can be implanted by fine-tuning a clean pre-trained diffusion model. They also explored countermeasures, emphasizing the need for more robust defenses against backdoor attacks on diffusion models.
## IV Shield: How Generative AI and Discriminative AI Defend Each Other
In this section, we examine the critical role of GAI models as defensive tools in fortifying the security of Discriminative AI-empowered wireless networks, as well as the defenses provided by Discriminative AI for AIGC services, as shown in the right-hand side of Fig. 1.
### _Defenses by Generative AI on Existing Discriminative AI Systems_
#### IV-A1 Defenses by LLMs
In the face of growing adversarial threats, LLMs can act as key defenders in AI security.
* **Fighting against adversarial attacks:** As we discussed in Section III-A1, LLMs can generate adversarial examples to test and enhance the robustness of discriminative
AI systems [4]. This approach enables the development of effective defense strategies that can withstand sophisticated adversarial attacks.
* **Improving model interpretability:** LLMs are instrumental in elucidating the decision-making mechanics of discriminative AI models, thereby augmenting model transparency and interpretability. An example is a recent study where GPT-4 was employed to auto-generate explanations of neuronal behaviors in AI models. GPT-4 generated and scored natural language explanations of neuronal behaviors in another language model, i.e., GPT-2 2. In this investigation, over 1,000 neurons were found with explanations scoring at least 0.8, signifying that these explanations account for the majority of each neuron's top-activating behavior.
Footnote 2: [https://openai.com/research/language-models-can-explain-neurons-in-language-models](https://openai.com/research/language-models-can-explain-neurons-in-language-models)
#### IV-A2 Defenses by Diffusion Models
Diffusion Models have emerged as potent tools in AI defense strategies, presenting robust solutions to challenges such as adversarial attacks and data privacy concerns.
* **Safeguard Dataset Mechanism:** The susceptibility of neural networks to adversarial attacks, notably due to their sensitivity to minor input perturbations, is a critical AI security concern. However, the emergence of GAI, specifically Denoising Diffusion Probabilistic Models (DDPM), provides a novel approach to address this. Leveraging the adaptability and robustness of DDPM, a sturdy defense mechanism can be established. Ankile, Midgley, and Weisshaar [13] utilized the reverse diffusion process inherent to DDPM to enhance system robustness against adversarial attacks by introducing and then strategically removing noise from adversarial examples, as shown in Fig. 3. Tested on the PatchCamelyon dataset, their strategy showed notable improvement in classification accuracy, reaching 88% of the accuracy of the original discriminative AI-based model.
* **Differential Privacy via Diffusion Models:** Differential privacy (DP) ensures individual privacy in datasets while preserving data analysis abilities. Diffusion models can generate synthetic, privacy-preserving versions of sensitive image datasets, an asset in wireless networks where secure data transmission and processing are crucial. Ghalebikesabi et al. [14] showcased diffusion models' potential for achieving DP by privately fine-tuning ImageNet pre-trained diffusion models with over 80 million parameters. This leads to significantly improved performance on CIFAR-10 and Camelyon17 in terms of Frechet Inception Distance (FID) and classifier accuracy on synthetic data. Their study underscores the potential of diffusion models in generating valuable and provably private synthetic data, even with significant distribution shifts between pre-training and fine-tuning.
### _Defenses by Discriminative AI to Generative AI-Empowered Services_
Discriminative AI models can improve the security of Generative AI-empowered services, offering countermeasures to detect and mitigate potential threats.
#### IV-B1 Defenses to LLMs
Discriminative models are important in mitigating potential misuse of LLMs.
* **Content Filtering:** Discriminative models can be trained to identify and filter harmful or inappropriate content generated by LLMs, ensuring the safe dissemination of information [10]. OpenAI, for example, has implemented a safety mitigation system that includes a Moderation API. This system is designed to flag or block certain types of unsafe content generated by their GPT-3 model 3. Footnote 3: [https://platform.openai.com/docs/guides/safety-best-practices](https://platform.openai.com/docs/guides/safety-best-practices)
* **Identification of Bias and Fairness Issues:** It has been observed that several prevalent LLMs exhibit bias towards certain religions, races, and genders, resulting in the propagation of prejudiced notions and the perpetuation of injustices against underprivileged communities [4]. Discriminative AI can aid in recognizing and quantifying such biases. For instance, researchers have used AI to discover and quantify gender and racial biases in LLMs4. Footnote 4: [https://huggingface.co/blog/evaluating-llm-bias](https://huggingface.co/blog/evaluating-llm-bias)
#### IV-B2 Defenses to Diffusion Models
We next discuss defenses for diffusion models.
* **Differential Privacy:** Boosting privacy-preserving mechanisms in generative models is vital, especially in sensitive sectors like wireless networks that prioritize secure data transmission. As discussed in Section IV-A2, diffusion models can be used to provide DP for synthetic datasets [14]. Alternatively, DP mechanisms can be incorporated during diffusion model training. A recent study [15] introduced Differentially Private Diffusion Models (DPDMs), which utilize diffusion mechanisms and enforce privacy through the differentially private stochastic gradient descent (DP-SGD) algorithm, a robust algorithm for privacy-preserving neural network training. By introducing noise
Fig. 3: Principles of adversarial attacks and defense method based on the diffusion model. The attacker uses adversarial training to imbue a pumpkin image with the semantic information of an apple, which causes errors in the neural network-based classifier. The protector can use the diffusion model to clean the image by adding noise to the adversarial image and then denoising it, so that the classifier can make the correct decision.
multiplicity, a modification of the training objective tailored for the DP setting, they could significantly enhance performance. The proposed DPDMs outperform prior approaches on widely-used image generation benchmarks. For instance, on MNIST, they improved the FID from 48.4 to 5.01 and the downstream classification accuracy from 83.2% to 98.1%. A minimal numerical sketch of the DP-SGD step underlying this approach is given after this list.
* **Distributed Diffusion Model-based Services:** Implementing diffusion model-based services, particularly in wireless networks, raises significant privacy concerns. Users may be reluctant to generate an image on remote servers due to privacy and security concerns, as generating an image on a remote server increases the risk of unauthorized access or data leakage. In response to this concern, Du et al. [7] propose a collaborative distributed diffusion-based AIGC framework. This framework enables the execution of shared denoising steps on a single device, with intermediate results then transmitted to other devices for the completion of task-specific denoising steps. By identifying patterns and making predictions based on the data, discriminative AI can aid in optimizing the denoising process, thereby reducing energy consumption and further bolstering the privacy-preserving potential.
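As referenced above, here is a minimal numpy sketch of the DP-SGD mechanism on which DPDMs rely: per-example gradient clipping followed by calibrated Gaussian noise. The gradients and hyperparameter values are invented for illustration; this is not the DPDM training code from [15].

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                  lr=0.1, seed=0):
    """One DP-SGD step: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise scaled to the clip norm, then average."""
    rng = np.random.default_rng(seed)
    g = np.asarray(per_example_grads)                # shape (batch, dim)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)       # per-example clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, g.shape[1])
    return -lr * (g.sum(axis=0) + noise) / g.shape[0]  # parameter update

grads = np.random.default_rng(1).normal(size=(32, 10))  # stand-in gradients
print(dp_sgd_update(grads).shape)                        # (10,)
```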
## V Case Study
This section investigates a scenario where a user requests a specific number of images from a service provider. In this setting, discriminative AI launches data poisoning attacks on the image dataset stored at the server, while generative AI, specifically diffusion models, provides the defense.
### _Scenario Description_
We consider that an image dataset resides in a Publicly Accessible Server (PAS), and the service provider is responsible for retrieving the requested images from the PAS and sending them to a user. The PAS, however, may contain attack images uploaded by malicious attackers. If the service provider inadvertently sends these attack images to a user, the user would immediately recognize the incorrect content by eye, resulting in a retransmission request. This process consumes unnecessary communication resources, as the provider has to re-fetch and retransmit the correct images.
### _Proposed Defense: Diffusion Model-based Image Verification_
We consider a defense mechanism wherein the service provider leverages a diffusion model to verify the correctness of each image before transmitting it to a user [13]. The reason is that the diffusion model-based method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods [6]. This approach comprises the following steps.
1. A user sends a request for a specific number of images of a certain category to the service provider.
2. The service provider fetches the requested images from the PAS.
3. Before transmission, the service provider employs a diffusion model to verify the correctness of each image [13].
* If the diffusion model identifies an attack image, the service provider re-fetches the correct image from the PAS.
* If the diffusion model confirms the image's correctness, the service provider proceeds to transmit it to the user.
4. The service provider sends the verified images to the user, eliminating the need for re-transmission due to incorrect content.
### _Analysis and Implications_
The proposed defense emphasizes diffusion models' advantages for wireless image transmission security. However, it requires a balance between computational and communication resources due to the verification process's increased computational load. To further illustrate this point, we propose an optimization problem, which aims to identify the optimal number of diffusion steps that should be set in defense to minimize the total energy cost. To solve this optimization problem, we use an AI-generated optimization method, namely diffusion-empowered optimization, as proposed in [2].
Fig. 5 presents the training curves of the AI-generated optimization method, alongside comparisons with proximal policy optimization and a random policy. Fig. 6 illustrates image transmission and total energy consumption under different schemes, i.e., when the diffusion steps for defense are 0, 29,
Fig. 4: Generative AI-aided secure image request services in wireless networks. Part A is the database in the publicly accessible server where the attack images exist. Part B is the attacker who generated the poisoned attack images, turning the Nike shoe images into “Adidas” semantic information through adversarial training. Part C is the interaction between the service provider and users. Part D is a diffusion model-based attack image detection method that can be used by the service provider.
and 48. Here, we assume that the user requests 50 images, the probability of selecting an attack image in the dataset is 30%, and the energy consumption for transmitting one image and performing one denoising step is 4 watt-hours (Wh) and 0.05 Wh, respectively. We observe that, without diffusion, the re-transmission counts are 27, 5, and 1, resulting in a total energy cost of 332 Wh. When the diffusion step is set to 29 (as decided by the AI-generated optimization method), the retransmission counts decrease to 5 and 1 in the first and second attempts, respectively, leading to a reduced total energy cost of 305.2 Wh. However, with the diffusion step set to 48, although the retransmission count is at its lowest (2 in the first attempt), the total energy cost escalates to 358.4 Wh due to high diffusion defense energy consumption.
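The bookkeeping above can be reproduced with a short script. The sketch below assumes that every transmitted image, including retransmissions, is verified once at the chosen number of diffusion steps; under this assumption it matches the 0-step and 29-step totals quoted above, while the exact accounting of the 48-step case is not modeled here.

```python
def total_energy(n_images, retransmissions, diffusion_steps,
                 e_tx=4.0, e_step=0.05):
    """Total energy (Wh): transmission of every (re)sent image plus one
    diffusion-based verification per sent image (assumed accounting)."""
    n_tx = n_images + sum(retransmissions)
    return e_tx * n_tx + e_step * diffusion_steps * n_tx

print(round(total_energy(50, [27, 5, 1], 0), 1))   # 332.0 Wh, as in the text
print(round(total_energy(50, [5, 1], 29), 1))      # 305.2 Wh, as in the text
```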
## VI Future Directions
### _Expanding Security Frameworks for Generative AI Models_
In the rapidly evolving landscape of GAI in wireless networks, generalized security approaches often fall short. Thus, a more nuanced approach that leverages the capabilities of diffusion processes could be instrumental in fortifying network security. Specifically, diffusion models can simulate intricate network scenarios, potentially highlighting vulnerabilities susceptible to jamming or DDoS attacks. This approach, akin to a "white hat" attack, allows network operators to proactively identify and patch these vulnerabilities before they can be exploited. Furthermore, applications of diffusion processes can extend to enhancing physical layer security and authentication procedures, providing a multi-layered defense strategy. Adversarial machine learning strategies, differential privacy techniques, and federated learning should be harmoniously woven into this approach.
### _Resource Trade-offs in Deploying GAI in Intelligent Networks_
The integration of GAI models into wireless networks necessitates careful consideration of resource allocation to ensure efficient security and privacy protection. The implementation of GAI defense mechanisms can impose significant demands on data, energy, computational, and communication resources. Future research needs to focus on striking an optimal balance among these interdependent resources. For example, allocating financial resources to procure verified data from trusted providers could be a worthwhile investment, reducing the reliance on solely GAI-generated content and enhancing data authenticity and security. Moreover, the offloading of GAI defense mechanisms to edge servers could be a feasible strategy. This approach can leverage the superior computational power and storage capacity of edge servers, thereby enhancing the efficiency and effectiveness of security measures while minimizing the load on wireless networks.
### _Ethical Considerations of GAI in Wireless Communications_
The pervasive deployment of GAI models in wireless networks surfaces a plethora of ethical considerations, including fairness, accountability, and transparency. It is anticipated that future work can delve into the creation of ethical guidelines and regulatory standards specifically tailored for AI applications in wireless networks. These should direct the design, deployment, and utilization of AI-assisted services, ensuring that they advocate fairness and deter misuse. Furthermore, comprehensive studies into the potential societal impacts of AI-assisted wireless services are essential to identify and mitigate any adverse effects.
## VII Conclusion
In this paper, we have delved into the dual role of GAI in intelligent network security, elucidating its potential as both an attacker and a defender. We have investigated the dynamic interplay between generative and discriminative AI, exploring GAI-aided attacks and their respective defense mechanisms. A case study has been carried out to demonstrate the significant efficiency of an AI-optimized diffusion defense strategy in mitigating data poisoning attacks, leading to a notable 8.7% reduction in energy and a drastic decrease in retransmission count. These findings highlight the critical importance of strategic GAI integration in wireless networks, underscoring the need for ongoing research in mitigating potential security threats.
Fig. 5: Training curves of diffusion-empowered AI-generated optimization, proximal policy optimization, and random policy. Note that the reason for the fluctuations in curves is that the _number_ of attack images selected in each experiment fluctuates even if the _probability_ of selecting an attack image in the database is constant.
Fig. 6: Image transmission and total energy consumption under different schemes, i.e., the diffusion steps for defense are 0, 29 and 48. We consider that the user requests 50 images, the probability of selecting an attack image in the dataset is 30%, and the energy consumption for transmitting one image and performing one denoising step is 4 watt-hours (Wh) and 0.05 Wh, respectively. |
2303.07422 | A variable star population in the open cluster NGC 6791 observed by the
Kepler spacecraft | We present the list of variable stars we found in the Kepler superstamp data
covering approximately 9 arcminutes from the central region of NGC 6791. We
classified the variable stars based on the variability type and we established
their cluster membership based on the available Gaia Early Data Release 3
astrometry, by means of the Bayesian Gaussian mixture models. In total we found
278 variable objects, among which 17 binaries, 45 pulsators, 62 rotational and
five unclassified variables are cluster members. The remaining 28 binaries, 25
pulsators, 83 rotational, four unclassified and nine unidentified variables are
either not members or their membership is not established. In the case of
eclipsing binaries we calculated the mid-times of eclipses and derived
ephemerides. We searched for eclipse timing variation by means of the observed
minus calculated diagrams. Only three objects show significant orbital period
variation. Independently of a report published just recently by Colman et
al(2022) we found 119 new variables. We used isochrones calculated within the
MIST project and derived the age (8.91 Gyr), average distance (4134 pc) and
iron content [Fe/H] (0.26-0.28), of NGC 6791. Using the cluster members with
membership probabilities greater than 0.9, we calculated the distance to the
cluster of 4123(31) pc, which agrees with the result from our isochrone
fitting. | Sachu Sanjayan, Andrzej S Baran, Peter Nemeth, Karen Kinemuchi, Jakub Ostrowski, Sumanta Kumar Sahoo | 2023-03-13T19:00:16Z | http://arxiv.org/abs/2303.07422v1 | # A variable star population in the open cluster NGC 6791 observed by the _Kepler_ spacecraft
###### Abstract
We present the list of variable stars we found in the _Kepler_ superstamp data covering approximately 9 arcminutes from the central region of NGC 6791. We classified the variable stars based on the variability type and we established their cluster membership based on the available _Gaia_ Early Data Release 3 astrometry, by means of the Bayesian Gaussian mixture models. In total we found 278 variable objects, among which 17 binaries, 45 pulsators, 62 rotational and five unclassified variables are cluster members. The remaining 28 binaries, 25 pulsators, 83 rotational, four unclassified and nine unidentified variables are either not members or their membership is not established. In the case of eclipsing binaries we calculated the mid-times of eclipses and derived ephemerides. We searched for eclipse timing variation by means of the observed minus calculated diagrams. Only three objects show significant orbital period variation. Independently of a report published just recently by Colman _et al._ (2022) we found 119 new variables. We used isochrones calculated within the MIST project and derived the age (8.91 Gyr), average distance (4134 pc) and iron content [Fe/H] (0.26-0.28), of NGC 6791. Using the cluster members with membership probabilities greater than 0.9, we calculated the distance to the cluster of 4123(31) pc, which agrees with the result from our isochrone fitting.
**Open clusters and associations: individual: NGC 6791 - binaries: general - Stars: oscillations - Stars: rotation**
## 1 Introduction
NGC 6791 was first described as a metal-rich cluster by Baade (1931) and listed as an old open cluster by King (1964). Neither author provided an age estimate. Kinman (1965) presented a detailed comparative study of the color - magnitude diagrams (CMD) of NGC 6791 and two other open clusters, M 67 (4 Gyr) and NGC 188 (6.8 Gyr). From the first photometric observations in the B-V color, Harris and Canterna (1981) determined a reddening of E(B-V) = 0.13 mag. According to recent studies NGC 6791 is 7 - 9 Gyr old (Chaboyer _et al._ 1999, Carraro _et al._ 2006, Basu _et al._ 2011), and it has a mass of around 4 000 M\({}_{\odot}\) (Kaluzny and Udalski 1992, Carraro _et al._ 2006, Platais _et al._ 2011, Tofflemire _et al._ 2014). The cluster is located \(\sim\)8000 pc from the Galactic center and 1000 pc above the Galactic plane. According to some hypotheses the cluster may have formed in the bulge of the Galaxy and radially migrated to its current location (Jilkova _et al._ 2012, Villanova _et al._ 2018). The distance to the cluster is approximately 3 614 pc, first estimated by Stetson _et al._ (2003) from the de-reddened distance modulus of (m-M)\({}_{0}\approx\)12.79 mag. The authors derived E(B-V) = 0.09 mag. According to Villanova _et al._ (2018) NGC 6791 is a super metal-rich cluster with [Fe/H] = +0.3 - +0.4. Geisler _et al._ (2012) showed that NGC 6791 hosts multiple stellar populations, which makes the cluster chemically peculiar. NGC 6791 has an anomalous horizontal branch with a red clump (RC) region. Liebert _et al._ (1994) found a group of extreme horizontal branch members using spectrophotometry of the blue targets observed by Kaluzny and Udalski (1992). The age of the cluster predicts that it should host a rich population of cooling white dwarfs; hence Bedin _et al._ (2005) observed the cluster with the Hubble Space Telescope down to m\({}_{\rm F606W}\approx\) 28.5 mag. They found the white dwarf luminosity function to peak at 27.4 mag, which corresponds to an age of 2.5 Gyr. Such an estimate does not agree with the age derived from the main-sequence (MS) or red giant branch (RGB) populations (Chaboyer _et al._ 1999, Carraro _et al._ 2006). These studies show that the cluster is very unusual, and a more detailed study is required to constrain its age and metal abundances and to understand its formation and evolution. A clearer picture of the cluster could be achieved by deriving the entire population of variable stars and analyzing the member stars to determine their ages and chemical abundances.
NGC 6791 has been the subject of extensive searches for variable stars. Kaluzny and Udalski (1992) and Kaluzny and Rucinski (1993) carried out an extensive photometric survey and found 17 variable stars, including 8 contact binaries, two blue stragglers, and one binary containing a hot subdwarf B star. Rucinski _et al._ (1996) found three detached binaries and one cataclysmic variable (CV) star exhibiting a three-day outburst. As part of a search for planets in stellar clusters, Mochejska _et al._ (2002) found 47 new low-amplitude variable stars. The authors reported several BY Dra-type stars and two outbursting CV stars, confirming the CV found by Rucinski _et al._ (1996). Mochejska _et al._ (2003) reported seven new variable stars with long, periodic flux variations. Kaluzny (2003) found four new variable stars by reanalyzing archival data from Kaluzny and Rucinski (1993). A search for transit events of giant planets reported by Bruntt _et al._ (2003) yielded 22 new low-amplitude objects along with 20 previously known variable stars. Using
high-precision time-series photometry, Hartman _et al._ (2005) detected 10 new variable stars, including one \(\delta\)-Scuti type star and 8 contact binaries. Mochejska _et al._ (2005) detected 14 more variable stars and reported 9 eclipsing binaries. Using high-precision photometry in the Johnson V band, de Marchi _et al._ (2007) detected 260 variables in the cluster area, although not all of them are members of the cluster.
From its launch in 2009, for almost 10 years the _Kepler_ spacecraft served mankind by providing very precise and almost continuous photometric measurements (Koch _et al._ 2010). _Kepler_ observed more than five hundred thousand stars during its entire mission time. The _Kepler_ mission was completed in two phases. During the first mission, _Kepler_ observed 0.25% of the sky in the direction of the Cygnus and Lyra constellations for 1460 days. The mission was reborn as K2 (the second mission) after a second reaction wheel failed. The K2 mission consisted of 80-day observing campaigns along the ecliptic equator and lasted 1695 days (Howell _et al._ 2014). During both missions, the observations were obtained using two different exposures: 30 minutes for the long cadence (LC) and 1 minute for the short cadence (SC) mode (Koch _et al._ 2010, Borucki _et al._ 2010, Caldwell _et al._ 2010, Thompson _et al._ 2016). During the first mission, four open clusters were inside the _Kepler_ field of view: NGC 6791, NGC 6819, NGC 6811, and NGC 6866. Two of them, NGC 6791 and NGC 6819, were observed using so-called LC superstamps.
Recently, Colman _et al._ (2022) presented light curves of KIC stars obtained from the _Kepler_ superstamp data. The authors used an image subtraction method to derive light curves of all _Kepler_ cataloged targets. They identified variability in 239 out of the 5342 stars for which they extracted light curves. The number of new variables is not given. We stress that our work has been performed simultaneously with, yet independently of, Colman _et al._ (2022) and contains additional analysis. By comparing our results with those of Colman _et al._ (2022), we noticed that the authors applied a very strong detrending policy, removing either eclipses or out-of-eclipse variations in binaries, or variations in other objects that we claim to be variables.
In Section 2 of this paper, we present a brief description of the _Kepler_ data and the method used for obtaining the light curves of variable objects. In Section 3, we present a spectroscopic study of the variable stars found in this project, using either archival spectra from public surveys or our own data. In Section 4, we describe the method of deriving the membership probabilities of our new variable star findings. In Section 5, we report the individual variable star cluster members divided into variability classes. The field variable counterparts are listed in Tables 5-8. In Section 6, we present the result of isochrone fitting.
## 2. Kepler Photometry
We downloaded the _Kepler_ superstamp data of NGC 6791 from the Mikulski Archive for Space Telescopes (MAST1). The data are 20 x 100 pixel boxes piled up in two contiguous 10-box stacks. The field of view of all pixels is 800 x 800 arcseconds and covers the most central part of the cluster. The superstamp data were collected in the LC mode. The pixel scale of an individual square pixel is 4 arcsec. The data have been collected over 1460 days and are split into 18 quarters.
We searched for flux variations by extracting fluxes for all time stamps in individual pixels for each of the quarters Q 2 - 5. Then, a Fourier transform of the time-series data was performed in each pixel and each quarter separately. Pixels showing peaks (representing signal) in the amplitude spectra were selected. Signals identified with artifacts, either reported by Baran (2013) or found in this project, were discarded. We combined all contiguous pixels showing the same signal and defined an optimal aperture of pixels. To keep the solar cells exposed to the sunlight, the spacecraft rolled 90 degrees every quarter; hence, with each quarterly roll, our targets landed on different CCD chips. This positioning caused different target images and, consequently, different optimal apertures (Bryson _et al._ 2010). Fortunately, every four quarters the images and apertures were the same, so we defined the apertures in only four quarters, i.e. Q 2 - 5, and propagated them to the corresponding quarters (e.g. Q 2, 6, 10, 14). Next, using the optimal apertures for all targets showing flux variation, we used the PyKE software (Kinemuchi _et al._ 2012) to extract the fluxes and correct them for instrumental artifacts by means of Co-trending Basis Vectors. Finally, using our custom scripts, we clipped the data at 4.5 sigma, detrended them using spline fits, and normalized them to _parts per thousand_ (ppt). The variable stars discovered in our work are presented in Section 5.
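As an illustrative companion to this procedure (not our PyKE-based pipeline itself), the following numpy sketch performs the same kind of per-pixel amplitude-spectrum search on a synthetic superstamp cube; the cadence, grid size, injected signal, and detection threshold are invented for the example.

```python
import numpy as np

# Synthetic cube: (time, y, x), pure noise plus one injected variable pixel.
rng = np.random.default_rng(1)
n_time, ny, nx = 4000, 20, 20
cube = rng.normal(0.0, 1.0, (n_time, ny, nx))
t = np.arange(n_time) * 0.0204                        # ~30-min LC cadence, days
cube[:, 8, 11] += 0.8 * np.sin(2 * np.pi * 1.7 * t)   # 1.7 c/d sinusoid

freqs = np.fft.rfftfreq(n_time, d=t[1] - t[0])
flagged = []
for j in range(ny):
    for i in range(nx):
        amp = 2.0 * np.abs(np.fft.rfft(cube[:, j, i])) / n_time
        amp[0] = 0.0                                   # drop the mean term
        if amp.max() > 5.0 * np.median(amp):           # simple S/N criterion
            flagged.append((j, i, freqs[np.argmax(amp)]))
print(flagged)                                          # [(8, 11, ~1.7)]
```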
## 3 Spectroscopy
We searched for spectra in the literature of all variables we detected. We found optical or infrared spectra for 111 objects in the archives of APOGEE (Ahn _et al._ 2014, Majewski _et al._ 2017), SDSS (Blanton _et al._ 2017), LAMOST (Zhao _et al._ 2012), ESO (Gilmore _et al._ 2012, Randich _et al._ 2013), and the HECTOSPEC (Fabricant _et al._ 2005) surveys. All spectra with T\({}_{\rm eff}<15\,000\) K were modeled with interpolated local thermal equilibrium (LTE) synthetic spectra drawn from the BOSZ (Bohlin _et al._ 2017) spectral library to determine the fundamental atmospheric parameters. The BOSZ library was calculated for scaled solar metallicity with carbon and \(\alpha\)-element enhancement; therefore, individual abundance patterns cannot be investigated with our method.
Our fitting procedure (XTgrid; Nemeth _et al._ 2012) is based on a steepest-gradient chi-square minimizing method, which was originally developed to model hot stars. To improve its performance for cool stars, we added a grid-search preconditioning to the procedure. We step through a set of models to search for the best starting model for the steepest-descent part. Next, the descent part takes over in driving the fit and converges on the best solution. Once convergence is achieved, the procedure explores the parameter errors by stepping through a set of points around the best solution. If a better solution is found during the error calculations, the procedure returns to the descent part, thus pushing the solution towards the global minimum. XTgrid fits the radial velocity and projected rotation velocity of each
spectrum, along with the stellar surface parameters, such as the effective temperature (T\({}_{\rm eff}\)), surface gravity (logg) and abundances.
In addition, the procedure accesses photometric data from the VizieR Photometry Viewer2, distance data from the Gaia EDR3 database, and extinction values from the NED online services. The spectroscopic surface parameters combined with these measurements allow us to reduce systematics and derive absolute stellar parameters, such as mass, radius, and luminosity. An anti-correlation is observed between T\({}_{\rm eff}\) and [Fe/H]. Fortunately, the spectral energy distribution (SED) helps resolve this bias by restricting T\({}_{\rm eff}\). Another bias is observed in surface gravity, in particular below T\({}_{\rm eff}\) = 4 000 K: at such low temperatures, the spectrum is insensitive to the surface gravity. When the spectral coverage was very limited, we could not determine an accurate value of logg. We do not report atmospheric parameters for such stars.
Footnote 2: [http://vizier.u-strasbg.fr/vizier/sed/](http://vizier.u-strasbg.fr/vizier/sed/)
The archival spectroscopic data are very inhomogeneous. Consequently, high resolution spectra (e.g. obtained with ESO instruments) with a short wavelength coverage are more suitable for radial velocity measurements, while low resolution spectra (e.g. from the SDSS and LAMOST surveys) can provide more consistent atmospheric parameters, but less precise velocities. Some ESO spectra cover only the 5 300-5 600 Å range at a resolution of R=20 000, and only weak spectral features are visible. For such spectra, at a relatively low signal-to-noise ratio (SNR), the fitting procedure increases the projected rotation above 100 km s\({}^{-1}\), which decreases the radial velocity accuracy. In general, low SNR spectra limit our analysis the most, while crowding in dense stellar fields and a limited spectral coverage affect the parameter determination.
## 4 Cluster membership
We used _Gaia_ astrometry to determine the membership probabilities of all the variable stars we found. We used five parameters, i.e. the equatorial coordinates \(\alpha\) and \(\delta\), proper motions \(\mu_{\alpha}\) and \(\mu_{\delta}\), and parallax \(\pi\) (hereafter the five astrometric parameters). First, we adopted/estimated mean values of these parameters. The cluster center was taken from Kamann _et al._ (2019) to be at \(\alpha_{2000}\) = 19:20:51.3 and \(\delta_{2000}\) = +37:46:26. Next, we downloaded _Gaia_ Early Data Release 3 (EDR3) (_Gaia_ collaboration _et al._, 2016, 2021) data for all stars within the tidal radius of 23 arcmin (Platais _et al._, 2011). The area contains 36 647 targets accessible to our analysis; however, we filtered out dubious targets whose parallaxes were negative or greater than 1 arcsec, or whose relative uncertainties in any of the proper motion or parallax values were greater than 50%. The cluster environment, particularly toward the center, is very dense, which can lead to unrealistic or imprecise estimates of these three parameters (\(\mu_{\alpha}\), \(\mu_{\delta}\), \(\pi\)). In addition, we limited our sample to targets for which the zero point offset corrections of parallax have been applied (Lindegren _et al._, 2021). After filtering and correcting for the parallax zero offset, we ended up with 11 466 targets.
To determine the membership probabilities, we used Bayesian Gaussian Mixture Models (GMM) implemented in the _scikit-learn_ python toolkit (Pedregosa _et al._, 2011). The GMM models the data as a finite mixture of Gaussian components, the number of which is determined using a variational Bayesian inference model with a Dirichlet process prior (Ferguson, 1973). We performed 10 000 iterations of the Expectation-Maximization algorithm (Dempster _et al._, 1977), and we derived membership probabilities for each target in our sample based on all five astrometric parameters. In the case of targets we found to be variable in the superstamp area, we estimated their membership probabilities regardless of the precision of their five astrometric parameters. If the uncertainties were larger than 50%, we considered the corresponding parameters to be error-free, while negative parallaxes were ignored and only four astrometric parameters were used.
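For illustration, a minimal version of this classification with _scikit-learn_'s BayesianGaussianMixture might look as follows. The synthetic arrays stand in for the five astrometric parameters, and identifying the cluster component as the most compact one is a heuristic for the toy data; this is not the exact configuration applied to the real sample.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Stand-ins for (ra, dec, pmra, pmdec, parallax), already standardized.
rng = np.random.default_rng(2)
field = rng.normal(0.0, 1.0, (2000, 5))                       # broad field
cluster = rng.normal([0.5, -0.3, 1.2, 1.0, 0.8], 0.08, (400, 5))
X = np.vstack([field, cluster])

gmm = BayesianGaussianMixture(
    n_components=5,                            # upper bound; DP prior prunes
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full", max_iter=10000, random_state=0).fit(X)

resp = gmm.predict_proba(X)                    # per-component responsibilities
k = np.argmin([np.linalg.det(c) for c in gmm.covariances_])  # compact component
membership_prob = resp[:, k]
print((membership_prob[-400:] > 0.9).mean())   # expect a value near 1.0 here
```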
To strengthen the probability, the radial velocities of individual stars can also be used; however, they need to be corrected for the effects of binarity, rotation, and pulsations. Different instruments also differ in instrumental calibration, which often biases the radial velocity (RV) estimates. Since we did not conduct a single survey that could provide us with consistent RV estimates, we decided not to use RVs for the membership analysis. Since binarity and rotation affect the measurement of intrinsic motion, we expect the RVs to be random values and, as will be seen in Section 5, the values in Tables 1-3 confirm our suspicion. We expected the most consistent estimates for single solar-like pulsators, since their oscillation motion on the surface is of very small amplitude. In fact, in only three cases are the RVs far from the average cluster value (-47.46\(\pm\)1.08 km/s, Carrera _et al._, 2019), as these stars may belong to binary systems. On the other hand, RVs that are consistent indicate that the stars are likely single, or that the orbital motion (if any) is very slow, or that the spectra were taken when both stars were aligned with the observer's line of sight. The RVs of the solar-like stars that are unlikely to be members (Table 6) are not close to the cluster average and seem to confirm their field membership, unless they are in binaries.
## 5 A zoo of variable stars
In total, we found 278 variable objects in the superstamp area. Our sample contains cluster members as well as foreground and background stars. In Section 4, we provided details on the membership analysis. Our prime focus is on the members of NGC 6791. The non-members and objects with unknown membership status, the latter a consequence of a lack of _Gaia_ astrometry, are listed in Tables 5-8. Their variability is classified in the same way as for the cluster members.
Figure 1: Examples of light curves and corresponding amplitude spectra of a zoo of variable stars in the open cluster NGC 6791.
Based on the flux variations, we classified the stars into three main variability types, i.e. eclipsing, pulsating, and rotating stars. The first two types are further split into specific classes. Five stars remained unclassified: their light curves show variations that we are unable to unambiguously identify as one of the three types listed. These objects show flux variations which can originate in, e.g., a reflection effect, an ellipsoidal variation, or the rotation of a spotted star. These stars typically have low-amplitude flux variations. In Fig. 1 we present examples of light curves and their corresponding amplitude spectra for each type and a selection of classes of variable stars we found.
### Binary systems
We selected binary stars with sharp eclipses typical of semi-detached and detached systems. Some eclipsing systems show additional out-of-eclipse variation, which can be caused by chromospheric activity; we call them "active" eclipsing systems. We identified contact systems, which are characterized by a continuous flux change typical of W UMa stars. Another class contains outbursting stars, which we associate with binaries experiencing a rapid mass transfer causing sudden eruptions, e.g. novae, dwarf novae, and nova-like variables. We stress that our classification is not based on radial velocities, so some of the stars may not be classified correctly: e.g., smooth, continuous, small-amplitude flux changes may be misidentified as rotational variables, although the flux change over the course of the observations is not modulated (see the explanation in Section 5.3); alternatively, such stars can be long-period pulsators. In Fig. 2 we present the phased light curves of three stars that we consider new discoveries. The sample includes all the classes of binary stars we identified in the superstamp data. We found 17 binaries to be cluster members (Table 1); 28 binaries are field objects, including two binaries for which we could not establish membership due to the lack of _Gaia_ astrometry data (Table 5). For binary systems the membership has been derived based on all five astrometric parameters. The majority of binaries in the cluster are main sequence (MS) stars, with just two exceptions, assuming the positions of the latter in the CMD are correct. _Gaia_ EDR3 2051105720053889536 is a post-MS star on its early ascent of the red giant branch (RGB), while _Gaia_ EDR3 2051293186783992320 is located below the RGB, which can be explained by an incorrect color index or a pre-MS evolutionary status. Among the member counterparts, five are eclipsing, six are active eclipsing, five are contact, and one is an outbursting system.
2304.12936 | Some singular curves in Mukai's model of $\overline{M}_7$ | Mukai showed that the GIT quotient $\operatorname{Gr}(7,16) /\!/
\operatorname{Spin}(10)$ is a birational model of the moduli space of
Deligne-Mumford stable genus 7 curves $\overline{M}_7$. The key observation is
that a general smooth genus 7 curve can be realized as the intersection of the
orthogonal Grassmannian $\operatorname{OG}(5,10)$ in $\mathbb{P}^{15}$ with a
six-dimensional projective linear subspace. What objects appear on the boundary
of Mukai's model? As a first step in this study, computer calculations in
Macaulay2, Magma, and Sage are used to find and analyze linear spaces yielding
three examples of singular curves: a 7-cuspidal curve, the balanced ribbon of
genus 7, and a family of genus 7 reducible nodal curves.
$\operatorname{Spin}(10)$-semistability is established by constructing and
evaluating an invariant polynomial. | David Swinarski | 2023-04-25T15:51:43Z | http://arxiv.org/abs/2304.12936v1 | # Some singular curves in Mukai's model of \(\overline{M}_{7}\)
###### Abstract.
Mukai showed that the GIT quotient \(\operatorname{Gr}(7,16)/\!/\!\operatorname{Spin}(10)\) is a birational model of the moduli space of Deligne-Mumford stable genus 7 curves \(\overline{M}_{7}\). The key observation is that a general smooth genus 7 curve can be realized as the intersection of the orthogonal Grassmannian \(\operatorname{OG}(5,10)\) in \(\mathbb{P}^{15}\) with a six-dimensional projective linear subspace. What objects appear on the boundary of Mukai's model? As a first step in this study, computer calculations in Macaulay2, Magma, and Sage are used to find and analyze linear spaces yielding three examples of singular curves: a 7-cuspidal curve, the balanced ribbon of genus 7, and a family of genus 7 reducible nodal curves. \(\operatorname{Spin}(10)\)-semistability is established by constructing and evaluating an invariant polynomial.
## 1. Introduction
In 1995 Mukai showed that the GIT quotient \(\operatorname{Gr}(7,16)/\!/\!\operatorname{Spin}(10)\) is a birational model of the moduli space of Deligne-Mumford stable genus 7 curves \(\overline{M}_{7}\). We briefly recall this correspondence.
For a general curve of genus \(g\geq 3\), the canonical ideal \(I\) is generated by \(\binom{g-2}{2}\) quadrics. Thus, when \(g=7\), \(10\) quadrics in \(\mathbb{P}^{6}\) are required.
Mukai showed that for a smooth genus 7 curve with no \(g_{2}^{1}\), \(g_{3}^{1}\), or \(g_{4}^{1}\), the multiplication map \(\operatorname{Sym}^{2}(I_{2})\to I_{4}\) has a one-dimensional kernel. Let \(Q\) be a generator of the kernel. Then \((I_{2},Q)\) is a 10-dimensional quadratic vector space.
Let \(f_{0},\ldots,f_{9}\in k[x_{0},\ldots,x_{6}]\) generate \(I_{2}\). For each \(p\in C\), the row space of the Jacobian matrix at \(p\)
\[\left[\frac{\partial f_{j}}{\partial x_{i}}(p)\right]_{i=0,\ldots,6}^{j=0, \ldots,9}\]
is a Lagrangian of \((I_{2},Q)\), which Mukai denotes \(W_{p}^{\perp}\).
Let \(\operatorname{OG}(5,10)\) denote the ten-dimensional orthogonal Grassmannian parametrizing Lagrangian subspaces of \((I_{2},Q)\). \(\operatorname{OG}(5,10)\) has a natural embedding in \(\mathbb{P}^{15}\) by mapping a Lagrangian to its half spinor.
**Theorem 1.1** (Mukai, 1995).: _Let \(C\) be a smooth genus 7 curve with no \(g_{2}^{1}\), \(g_{3}^{1}\), or \(g_{4}^{1}\)._
1. _The map_ \[\begin{array}{ccccccc}\rho:&C&\to&\operatorname{OG}(5,10)&\to&\mathbb{P}^{ 15}\\ &&p&\mapsto&[W_{p}^{\perp}]\end{array}\] _is an embedding of_ \(C\)_._
2. _The image_ \(\rho(C)\) _is the intersection_ \((P\cap\operatorname{OG}(5,10))\) _of a 6-dimensional projective linear subspace_ \(P\subset\mathbb{P}^{15}\) _with the orthogonal Grassmannian, and_ \(C\) _is canonically embedded in_ \(P\)_._
3. \(\operatorname{Gr}(7,16)/\!/\!\operatorname{Spin}(10)\) _is a birational model of_ \(\overline{M}_{7}\)_._
See [20, Theorem 0.4 and Prop. 5.2].
Let \(S^{+}\) be the half-spin representation of \(\operatorname{Spin}(10)\). We have \(\dim S^{+}=16\). A character calculation shows that there exist \(\operatorname{Spin}(10)\)-invariant polynomials on \(\operatorname{\Lambda}^{7}S^{+}\); see Code 1.1. It follows that a general point of \(\operatorname{Gr}(7,16)\) is \(\operatorname{Spin}(10)\)-semistable. Also, Farkas and Verra give some \(\operatorname{Spin}(10)\)-semistability results for the related quotient \(\operatorname{Hilb}(\operatorname{OG}(5,10))/\!/\!\operatorname{Spin}(10)\) in [13].
However, several questions remain open. Is every smooth genus 7 curve with no \(g_{2}^{1}\), \(g_{3}^{1}\), or \(g_{4}^{1}\)\(\operatorname{Spin}(10)\)-semistable? Which schemes occur as intersections \(P\cap\operatorname{OG}(5,10)\), and when is \([P]\)\(\operatorname{Spin}(10)\)-semistable?
As a first step, we study three examples of singular curves.
* Example 1: \(C_{\text{cusp}}\), the 7-cuspidal curve with heptagonal symmetry
* Example 2: \(C_{\text{rib}}\), the balanced ribbon of genus 7
* Example 3: \(C_{\text{nod},t}\), a family of genus 7 reducible nodal curves degenerating to three
The rationale for these choices is as follows. The orthogonal Grassmannian \(\operatorname{OG}(5,10)\subset\mathbb{P}^{15}\) has the following Betti table, displayed following Macaulay2's conventions. See Code 1.2.
\begin{tabular}{r r r r r r r} & 0 & 1 & 2 & 3 & 4 & 5 \\ total: & 1 & 10 & 16 & 16 & 10 & 1 \\ 0: & 1 & . & . & . & . & . \\
1: & . & 10 & 16 & . & . & . \\
2: & . & . & . & 16 & 10 & . \\
3: & . & . & . & . & . & 1 \\ \end{tabular}
A Betti table with at most one nonzero entry per column is called _pure_. See [12].
If a linear section \(P\cap\operatorname{OG}(5,10)\) is one-dimensional, it must also have this Betti table. Curves with pure Betti tables have been the subject of much study for several years in connection with Green's Conjecture. \(g\)-cuspidal curves, ribbons, and graph curves were proposed as candidates for proving Green's Conjecture for a generic curve [11]. (This strategy was recently completed for \(g\)-cuspidal curves by the results of [2] and for ribbons by the results of [22].)
Specific \(g\)-cuspidal curves, ribbons, and graph curves with automorphisms have also been used to study the Hassett-Keel program for \((\overline{M}_{g},\Delta)\). One of Hassett and Keel's conjectures was that the canonical model of \(\overline{M}_{g}\) could be constructed by variation of GIT applied to quotients of spaces parametrizing syzygies of curves. The GIT semistability of the canonically embedded balanced ribbon was established in [1] for finite Hilbert stability and in [8] for first syzygies. The 7-cuspidal curve and the graph curve \(C_{\operatorname{nod},0}\) studied here also have GIT semistable second Hilbert points and first syzygies (Swinarski, unpublished). Since these three examples of singular curves appear in the model of \(\overline{M}_{7}\) given by first syzygies, it was natural to ask whether they also appear in Mukai's model of \(\overline{M}_{7}\).
### Outline of the paper
In Section 2, we recall the notation of Mukai's construction. In Sections 3 and 4 we obtain the 7-cuspidal curve with heptagonal symmetry and the balanced genus 7 ribbon as intersections \(P\cap\operatorname{OG}(5,10)\) for some explicit \(P\in\operatorname{Gr}(7,16)\). In Section 5 we describe a 1-parameter family of reducible nodal curves and obtain a general member of this family as the intersection \(P\cap\operatorname{OG}(5,10)\) for some explicit \(P\in\operatorname{Gr}(7,16)\). We also study the limits as this family degenerates in the Hilbert scheme \(\operatorname{Hilb}(\mathbb{P}^{15},12t-6)\) and in the Grassmannian \(\operatorname{Gr}(7,16)\) and show that two of these limits are GIT-unstable.
In Section 6 we describe how to construct a \(\operatorname{Spin}(10)\)-invariant polynomial \(F_{5\omega_{1}}\in(\operatorname{Sym}^{4}\Lambda^{7}S^{+})^{\operatorname{ Spin}(10)}\). Finally, we evaluate \(F_{5\omega_{1}}\) on these three examples to deduce \(\operatorname{Spin}(10)\)-semistability for the 7-cuspidal curve with heptagonal symmetry, the balanced genus 7 ribbon, and the general member of the family of reducible nodal curves.
### Software and code links
This project relies heavily on calculations in Macaulay2, Magma, and Sage [17, 18, 23]. In this document, we report the inputs to these calculations and describe the results. On the author's webpage [28], we have posted transcripts of interactive sessions for the shorter calculations and the input and output files used for the lengthier calculations. We cite each calculation in the text of this document using a phrase of the form "see Code x.y" which includes a link to the relevant calculation.
### Acknowledgements
It is a pleasure to thank Patricio Gallardo, Jesus Martinez-Garcia, Han-Bom Moon, and Ian Morrison for several helpful discussions related to this work. This work is a sequel to a project begun by the AIM Square "Computational aspects of GIT with a view of moduli spaces" that met between 2018-2020 consisting of Gallardo, Martinez-Garcia, Moon, and the author.
## 2. Background: Mukai's construction
Let \(V\) be a \(2n\)-dimensional vector space over \(\mathbb{C}\). (Note: Mukai's results hold over an algebraically closed field of any characteristic. We will state our results only for \(\mathbb{C}\), but it seems likely that some of them may generalize to positive characteristic as well.) Let \(Q\) be a full rank quadratic form on \(V\). Following Chevalley and Mukai's conventions in [6, 20], let \(B(x,y)=Q(x+y)-Q(x)-Q(y)\). (Note: Fulton and Harris use a different convention in [14].) Then \(B(x,x)=2Q(x)\). Let \(C(Q)\) be the Clifford algebra satisfying \(v\cdot w+w\cdot v=B(v,w)\cdot 1\).
Let \(U_{0}\) and \(U_{\infty}\) be two complementary Lagrangians, and let \(S^{+}=\Lambda^{\operatorname{even}}U_{\infty}\), \(S^{-}=\Lambda^{\operatorname{odd}}U_{\infty}\).
Let \(e_{-1},\ldots,e_{-n}\) be a basis of \(U_{0}\), and let \(e_{1},\ldots,e_{n}\) be a basis of \(U_{\infty}\). \(e_{-i}\) acts on \(\Lambda U_{\infty}\) as the contraction of \(e_{i}\), and \(e_{i}\) acts on \(\Lambda U_{\infty}\) as wedging on the left by \(e_{i}\). Extending these actions by linearity yields an endomorphism \(\varphi_{v}\) for any \(v\in V\).
For each subset \(I=\{i_{1},\ldots,i_{k}\}\subset\{1,\ldots,5\}\) with \(k\) even and \(i_{1}<\ldots<i_{k}\), let \(e_{I}=e_{i_{1}}\wedge\cdots\wedge e_{i_{k}}\). This gives a basis of \(S^{+}\). Let \(x_{I}\) be the corresponding coordinates on \(\mathbb{P}(S^{+})\).
Let \(U\) be a Lagrangian of \((V,Q)\). The half spinor \(s_{U}\) of \(U\) is an element of \(S^{+}\cup S^{-}\) satisfying \(\varphi_{u}(s_{U})=0\) for all \(u\in U\).
We use two approaches to compute half spinors.
_Approach 1:_ Suppose \(U\cap U_{\infty}=\{0\}\). (This is the generic case.) Then we can find a basis of \(U\) of the form \(u_{i}=e_{-i}-\sum_{j=1}^{5}a_{ij}e_{j}.\) The coefficients \(a_{ij}\) yield a \(5\times 5\) skew-symmetric matrix \(A\). In the proof of [20, Prop. 1.5], Mukai gives a formula for \(s_{U}\) in terms of the Pfaffians of the minors of \(A\). Specifically, let \(A_{I}\) denote the minor of \(A\) obtained by selecting the rows and columns indexed by \(I\). Then the coordinate \(x_{I}\) of \([s_{U}]\) is given by \(\operatorname{Pf}(A_{I})\).
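As an informal illustration, Approach 1 amounts to tabulating Pfaffians of principal minors of \(A\). The following Python sketch (hypothetical helper names, not the Macaulay2 code cited in this paper; Mukai's sign conventions should be re-checked before serious use) computes the coordinates and spot-checks the first of the \(\operatorname{OG}(5,10)\) equations listed at the end of this section.

```python
import random
from itertools import combinations

def pfaffian(A):
    # Pfaffian of a skew-symmetric matrix, by expansion along the first row;
    # adequate for the tiny matrices that occur here.
    n = len(A)
    if n == 0:
        return 1
    if n == 2:
        return A[0][1]
    total = 0
    for j in range(1, n):
        rest = [r for r in range(1, n) if r != j]
        minor = [[A[r][c] for c in rest] for r in rest]
        total += (-1) ** (j + 1) * A[0][j] * pfaffian(minor)
    return total

def half_spinor(A):
    # Coordinates x_I of [s_U], for I an even-size subset of {1,...,5}, when
    # the Lagrangian U has basis u_i = e_{-i} - sum_j a_ij e_j (Approach 1).
    return {tuple(i + 1 for i in I): pfaffian([[A[r][c] for c in I] for r in I])
            for k in (0, 2, 4) for I in combinations(range(5), k)}

# Spot check: for a random skew-symmetric A, the resulting point satisfies
# x_0 x_2345 - x_23 x_45 + x_24 x_35 - x_25 x_34 = 0 identically.
a = [[0] * 5 for _ in range(5)]
for i in range(5):
    for j in range(i + 1, 5):
        a[i][j] = random.randint(-9, 9)
        a[j][i] = -a[i][j]
x = half_spinor(a)
print(x[()] * x[(2, 3, 4, 5)] - x[(2, 3)] * x[(4, 5)]
      + x[(2, 4)] * x[(3, 5)] - x[(2, 5)] * x[(3, 4)])  # expect 0
```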
_Approach 2:_ For any Lagrangian \(U\), we may compute the operators \(\varphi_{u}\) for a basis of \(U\) and intersect their kernels to obtain a suitable \(s_{U}\).
Approach 2 applies to any Lagrangian \(U\), but it is typically slower than Approach 1, so we only use Approach 2 when \(\dim(U\cap U_{\infty})>0\).
Mukai gives the following equations of the orthogonal Grassmannian \(\operatorname{OG}(5,10)\subset\mathbb{P}(S^{+})\) in [20, (0.1)]:
\[\begin{array}{l}x_{0}x_{2345}-x_{23}x_{45}+x_{24}x_{35}-x_{25}x_{34},\\ x_{12}x_{1345}-x_{13}x_{1245}+x_{14}x_{1235}-x_{15}x_{1234},\\ x_{0}x_{1345}-x_{13}x_{45}+x_{14}x_{35}-x_{15}x_{34},\\ x_{12}x_{2345}-x_{23}x_{1245}+x_{24}x_{1235}-x_{25}x_{1234},\\ x_{0}x_{1245}-x_{12}x_{45}+x_{14}x_{25}-x_{15}x_{24},\\ x_{13}x_{2345}-x_{23}x_{1345}+x_{34}x_{1235}-x_{35}x_{1234},\\ x_{0}x_{1235}-x_{12}x_{35}+x_{13}x_{25}-x_{15}x_{23},\\ x_{14}x_{2345}-x_{24}x_{1345}+x_{34}x_{1245}-x_{45}x_{1234},\\ x_{0}x_{1234}-x_{12}x_{34}+x_{13}x_{24}-x_{14}x_{23},\\ x_{15}x_{2345}-x_{25}x_{1345}+x_{35}x_{1245}-x_{45}x_{1235}\end{array}\]
## 3. The \(7\)-cuspidal curve with heptagonal symmetry
Canonically embedded \(g\)-cuspidal curves can be obtained as hyperplane sections of the _tangent developable_ of the rational normal curve. See [10, 11, 2] for more details.
For \(g=7\), the tangent developable in \(\mathbb{P}^{7}\) is parametrized by mapping \((s,t,u,v)\) to
\[[7s^{6}u:6s^{5}tu+s^{6}v:5s^{4}t^{2}u+2s^{5}tv:4s^{3}t^{3}u+3s^{4}t^{2}v:3s^{2} t^{4}u+4s^{3}t^{3}v:2st^{5}u+5s^{2}t^{4}v:t^{6}u+6st^{5}v:7t^{6}v].\]
We eliminate the parameters to obtain equations of the tangent developable in \(k[y_{0},\ldots,y_{7}]\); see Code 3.1. Then, by taking \(y_{7}=y_{0}\) we get equations of a rational curve \(C_{\rm cusp}\) with seven cusps. The cusps occur where the hyperplane section meets the diagonal, that is, at the seventh roots of unity \((s/t)^{7}=1\). Hence this curve has the dihedral group \(D_{7}\) of order \(14\) as its automorphism group; see Code 3.2.
This yields the following \(10\) quadrics generating \(I_{2}\).
\[\begin{array}{rclrcl}f_{0}&=&3y_{5}^{2}-4y_{4}y_{6}+y_{3}y_{0}&&f_{5}&=&y_{3}y_{4}-2y_{1}y_{6}+y_{0}^{2}\\ f_{1}&=&2y_{4}y_{5}-3y_{3}y_{6}+y_{2}y_{0}&&f_{6}&=&5y_{2}y_{4}-8y_{1}y_{5}+3y_{0}y_{6}\\ f_{2}&=&5y_{3}y_{5}-8y_{2}y_{6}+3y_{1}y_{0}&&f_{7}&=&5y_{3}^{2}-9y_{1}y_{5}+4y_{0}y_{6}\\ f_{3}&=&3y_{2}y_{5}-5y_{1}y_{6}+2y_{0}^{2}&&f_{8}&=&2y_{2}y_{3}-3y_{1}y_{4}+y_{0}y_{5}\\ f_{4}&=&5y_{4}^{2}-9y_{2}y_{6}+4y_{1}y_{0}&&f_{9}&=&3y_{2}^{2}-4y_{1}y_{3}+y_{0}y_{4}\end{array}\]
The automorphisms are given by the maps \(y_{i}\mapsto\zeta_{7}^{i}y_{i}\) and \([y_{0}:y_{1}:y_{2}:y_{3}:y_{4}:y_{5}:y_{6}]\mapsto[y_{0}:y_{6}:y_{5}:y_{4}:y_{ 3}:y_{2}:y_{1}]\).
Next, we compute \(\ker(\operatorname{Sym}^{2}(I_{2})\to I_{4})\) in Macaulay2, and find that these quadrics satisfy the following quadratic form.
\[-f_{3}^{2}+\frac{9}{2}f_{3}f_{5}-5\,f_{5}^{2}-\frac{3}{10}f_{2}f_{6}+\frac{1}{5 }f_{4}f_{7}-\frac{3}{2}f_{1}f_{8}+f_{0}f_{9}=0.\]
We change the basis of \(I_{2}\) as follows.
\[\begin{array}{rclrcl}g_{0}&=&-10f_{0}&&g_{5}&=&f_{9}\\ g_{1}&=&15f_{1}&&g_{6}&=&f_{8}\\ g_{2}&=&3f_{2}&&g_{7}&=&f_{6}\\ g_{3}&=&-2f_{4}&&g_{8}&=&f_{7}\\ g_{4}&=&-10f_{3}+25f_{5}&&g_{9}&=&-f_{3}+2f_{5}\end{array}\]
Then \(\sum_{i=0}^{4}g_{i}g_{i+5}=0\); see Code 3.3.
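The identity \(\sum_{i=0}^{4}g_{i}g_{i+5}=0\) is quick to reproduce outside of Macaulay2. The following sympy sketch (not the paper's Code 3.3) expands the sum from the quadrics as transcribed above; if the transcription is faithful, the printed result is \(0\).

```python
import sympy as sp

y0, y1, y2, y3, y4, y5, y6 = sp.symbols('y0:7')

# The quadrics f_0, ..., f_9 generating I_2, as transcribed above.
f = [3*y5**2 - 4*y4*y6 + y3*y0,
     2*y4*y5 - 3*y3*y6 + y2*y0,
     5*y3*y5 - 8*y2*y6 + 3*y1*y0,
     3*y2*y5 - 5*y1*y6 + 2*y0**2,
     5*y4**2 - 9*y2*y6 + 4*y1*y0,
     y3*y4 - 2*y1*y6 + y0**2,
     5*y2*y4 - 8*y1*y5 + 3*y0*y6,
     5*y3**2 - 9*y1*y5 + 4*y0*y6,
     2*y2*y3 - 3*y1*y4 + y0*y5,
     3*y2**2 - 4*y1*y3 + y0*y4]

# The change of basis g_0, ..., g_9 described above.
g = [-10*f[0], 15*f[1], 3*f[2], -2*f[4], -10*f[3] + 25*f[5],
     f[9], f[8], f[6], f[7], -f[3] + 2*f[5]]

print(sp.expand(sum(g[i] * g[i + 5] for i in range(5))))  # expect 0
```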
Next, we arbitrarily choose eight smooth points \(p_{0},\ldots,p_{7}\) in general position on \(C_{\rm cusp}\). These points are given by the following values of \((s,t,u,v)\) under the parametrization shown above: \((-1,1,1,1)\), \((1,2,64,1)\), \((2,1,1,64)\), \((1,3,729,1)\), \((3,1,1,729)\), \((1,-2,64,1)\), \((-2,1,1,64)\), \((1,-3,729,1)\). (Seven points are sufficient to determine the linear space \(P_{\rm cusp}\); the eighth point will be used to prove that the map \(\rho:C_{\rm cusp}\to P_{\rm cusp}\) is an embedding.)
To each point on \(C_{\rm cusp}\) we associate the Lagrangian that Mukai denotes \(W_{p}^{\perp}\), which we interpret as the row space of the Jacobian matrix \(\left[\frac{\partial g_{j}}{\partial y_{i}}(p)\right]\).
Next, we need to choose a pair of complementary Lagrangians \(U_{0}\) and \(U_{\infty}\). Every Lagrangian will have even-dimensional intersection with one of these and odd-dimensional intersection with the other. Mukai assumes that \(U_{0}\) and \(U_{\infty}\) are chosen so that \(W_{p}^{\perp}\) has even-dimensional intersection with \(U_{\infty}\). We choose \(U_{0}={\rm Span}\{g_{0},\ldots,g_{4}\}\) and \(U_{\infty}={\rm Span}\{g_{5},\ldots,g_{9}\}\) and check that our choices satisfy this property.
Next, we compute the half spinors \(s_{i}\) of the Lagrangians \(W_{p}^{\perp}\) associated to the points \(p_{i}\). We find that \(s_{0},\ldots,s_{7}\) span the \(7\)-dimensional vector space given by the row space of the following matrix.
\[M_{\rm cusp}=\left[\begin{array}{cccccccccccccccc}0&0&0&0&-\frac{3}{5}&1&0 &0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\frac{1}{5}&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&\frac{3}{4}&1&0&0&0&0&0\\ 30&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&\frac{8}{9}&1&0&0\\ 0&-2&0&0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&-\frac{15}{2}&0&0&0&0&0&0&0&0&0&0&0&1\end{array}\right] \tag{3.1}\]
Let \(P_{\rm cusp}=\mathbb{P}({\rm RowSpace}\,M_{\rm cusp})\). We check that \(P_{\rm cusp}\cap{\rm OG}(5,10)\cong C_{\rm cusp}\). To do this, we fix an isomorphism \(h:P_{\rm cusp}\to\mathbb{P}^{6}\), then compute the unique element of \({\rm PGL}(7)\) mapping \(p_{i}\) to \(h(s_{i})\) for \(i=0,\ldots,7\), and check that this maps \(C_{\rm cusp}\) to \(h(P_{\rm cusp}\cap{\rm OG}(5,10))\); see Code 3.4.
These calculations establish the following proposition.
**Proposition 3.1**.: _Let \(C_{\rm cusp}\) be the \(7\)-cuspidal curve with heptagonal symmetry. Then \(\rho:C_{\rm cusp}^{\rm sm}\to\mathbb{P}^{15}\) extends to an embedding, and \(\rho(C_{\rm cusp})=P_{\rm cusp}\cap{\rm OG}(5,10)\), where \(P_{\rm cusp}=\mathbb{P}({\rm RowSpace}\,M_{\rm cusp})\)._
## 4. The balanced genus \(7\) ribbon
Ribbons are dimension \(1\), generically nonreduced schemes that are double structures on the underlying reduced curve. Bayer and Eisenbud write in their seminal paper on ribbons that ribbons are limits of the canonical models of smooth curves as they degenerate to a hyperelliptic curve [3]. A longstanding prediction of the Hassett-Keel program for \((\overline{M}_{g},\Delta)\) is that the locus of hyperelliptic curves is flipped to the ribbon locus.
We consider a specific example. In each odd genus \(g=2k+1\) with \(g\geq 5\) there is a ribbon called the _balanced ribbon_, which is characterized by having a \(\mathbb{G}_{m}\)-action with weights \(-k,\ldots,+k\) as well as an involution interchanging the positive and negative weight spaces. Equations of the canonically embedded genus \(7\) balanced ribbon can be obtained using [7, Cor. 4.8].
\[\begin{array}{rclrcl}f_{0}&=&y_{2}y_{3}-2y_{1}y_{4}+y_{0}y_{5}&&f_{5}&=&-y_{1}y_{2}+y_{0}y_{3}\\ f_{1}&=&y_{2}y_{4}-2y_{1}y_{5}+y_{0}y_{6}&&f_{6}&=&-y_{2}^{2}+y_{1}y_{3}\\ f_{2}&=&y_{3}^{2}-2y_{2}y_{4}+y_{1}y_{5}&&f_{7}&=&-y_{4}^{2}+y_{3}y_{5}\\ f_{3}&=&y_{3}y_{4}-2y_{2}y_{5}+y_{1}y_{6}&&f_{8}&=&-y_{4}y_{5}+y_{3}y_{6}\\ f_{4}&=&-y_{1}^{2}+y_{0}y_{2}&&f_{9}&=&-y_{5}^{2}+y_{4}y_{6}\end{array}\]
The variables \(y_{0},\ldots,y_{6}\) have weights \(-3,\ldots,3\), and the involution acts by sending \(y_{0},\ldots,y_{6}\) to \(y_{6},\ldots,y_{0}\).
Next, we compute \(\ker(\operatorname{Sym}^{2}(I_{2})\to I_{4})\) in Macaulay2, and find that these quadrics satisfy the following quadratic form.
\[\frac{1}{2}f_{1}f_{2}-\frac{1}{2}f_{0}f_{3}+f_{6}f_{7}-\frac{1}{2}f_{5}f_{8}+f_ {4}f_{9}=0.\]
We reorder the quadrics so that the \(\mathbb{G}_{m}\) weights are \(-4,-3,-2,-1,0,4,3,2,1,0\), and scale to make the coefficients of the quadratic form \(1\).
\[\begin{array}{ccccc}g_{0}&=&2f_{4}&g_{5}&=&f_{9}\\ g_{1}&=&-f_{5}&g_{6}&=&f_{8}\\ g_{2}&=&2f_{6}&g_{7}&=&f_{7}\\ g_{3}&=&-f_{0}&g_{8}&=&f_{3}\\ g_{4}&=&f_{1}&g_{9}&=&f_{2}\end{array}\]
Then \(\sum_{i=0}^{4}g_{i}g_{i+5}=0\); see Code 4.1.
Next, we compute the spin representation of the automorphism group of the balanced ribbon. Let \(e_{-1},\ldots,e_{-5}\) be \(g_{0},\ldots,g_{4}\), and let \(e_{1},\ldots,e_{5}\) be \(g_{5},\ldots,g_{9}\). Then \(\mathbb{G}_{m}\) acts on the basis \(e_{-1},\ldots,e_{5}\) by
\[\operatorname{Diag}(t^{-4},t^{-3},t^{-2},t^{-1},1,t^{4},t^{3},t^{2},t^{1},1),\]
and the involution acts on this basis by
\[\begin{array}{ccccc}e_{-1}&\mapsto&\frac{1}{2}e_{1}&e_{-4}&\mapsto&-e_{4}\\ e_{-2}&\mapsto&-e_{2}&e_{-5}&\mapsto&e_{-5}\\ e_{-3}&\mapsto&\frac{1}{2}e_{3}&e_{5}&\mapsto&e_{5}\end{array}\]
To lift these elements to \(\operatorname{Spin}(Q)\), we factor them as a product of reflections, lift each reflection to the Clifford algebra, and scale. We find that the \(\mathbb{G}_{m}\) action lifts to the following two elements in \(\operatorname{Spin}(10)\).
\[\pm t^{5}\prod_{j=1}^{4}(e_{-j}+e_{j})(e_{-j}+t^{j-5}e_{j}).\]
The involution lifts to the elements
\[\pm 2(e_{-4}+e_{4})(e_{-3}-\frac{1}{2}e_{3})(e_{-2}+e_{2})(e_{-1}-\frac{1}{2}e_ {1}).\]
Thus, the \(\mathbb{G}_{m}\) action on the basis
\[1,e_{12},e_{13},e_{14},e_{15},e_{23},e_{24},e_{25},e_{34},e_{35},e_{45},e_{123 4},e_{1235},e_{1245},e_{1345},e_{2345}\]
of \(S^{+}\) is given by
\[\operatorname{Diag}(t^{-5},t^{2},t,1,t^{-1},1,t^{-1},t^{-2},t^{-2},t^{-3},t^{ -4},t^{5},t^{4},t^{3},t^{2},t)\]
and the involution acts on this basis as follows.
\[\begin{array}{ccccc}1&\mapsto&\frac{1}{2}e_{1234}&e_{15}&\mapsto&e_{2345}\\ e_{12}&\mapsto&e_{34}&e_{25}&\mapsto&\frac{1}{2}e_{1345}\\ e_{13}&\mapsto&2e_{24}&e_{35}&\mapsto&e_{1245}\\ e_{14}&\mapsto&e_{23}&e_{45}&\mapsto&\frac{1}{2}e_{1235}\end{array}\]
See Code 4.2.
We seek a six-dimensional projective linear subspace \(P_{\operatorname{rib}}\) such that \(P_{\operatorname{rib}}\cap\operatorname{OG}(5,10)\cong C_{\operatorname{rib}}\). We know the weights of the \(\mathbb{G}_{m}\) action on the canonically embedded balanced ribbon, and that the involution swaps positive and negative weight spaces. We use this to narrow down the search for \(P_{\operatorname{rib}}\).
The \(\mathbb{G}_{m}\) weights on the ribbon are \(-3,-2,-1,0,1,2,3\), while the \(\mathbb{G}_{m}\) weights on \(\mathbb{P}(S^{+})\) are (in increasing order) \(-5,-4,-3,-2,-2,-1,-1,0,0,1,1,2,2,3,4,5\). By comparing these two lists, we see that we must kill the \(\pm 5\) and \(\pm 4\) weight spaces; retain the \(\pm 3\) weight spaces; and select a multiplicity \(1\) submodule of the multiplicity \(2\) weight spaces for weights \(\pm 2\), \(\pm 1\), and \(0\).
The \(\pm 4\) and \(\pm 5\) weight spaces are spanned by \(x_{45}\), \(x_{1235}\), \(x_{0}\), and \(x_{1234}\). Thus we set \(x_{45}=x_{1235}=x_{0}=x_{1234}=0\). This gives us \(4\) of the \(9\) hyperplanes we seek to define the linear space \(P\).
Next, consider the weight \(0\) space. This is spanned by \(x_{14}\) and \(x_{23}\). The involution acts on this subspace as \(x_{14}\mapsto x_{23}\). Since the involution is trivial on the weight \(0\) space for the balanced ribbon, we set \(x_{14}=x_{23}\). This gives a fifth hyperplane.
Next, consider the weight \(\pm 1\) space. It is a multiplicity two module with respect to the automorphism group. A general submodule can be written in the form \(\operatorname{Span}\langle c_{1}x_{13}+c_{2}x_{2345},\frac{1}{2}c_{1}x_{24}+c_{ 2}x_{15}\rangle\) for some constants \(c_{1}\) and \(c_{2}\). Assume \(c_{1}\neq 0\). Then we can scale these to obtain two more hyperplanes \(x_{13}+c_{2}x_{2345}=0\) and \(\frac{1}{2}x_{24}+c_{2}x_{15}=0\).
Similarly, the weight \(\pm 2\) space is a multiplicity two module with respect to the automorphism group. A general submodule can be written in the form \(\operatorname{Span}\langle c_{3}x_{12}+c_{4}x_{1345},c_{3}x_{34}+\frac{1}{2}c_ {4}x_{25}\rangle\) for some constants \(c_{3}\) and \(c_{4}\). Assume \(c_{3}\neq 0\). Then we can scale these to obtain the hyperplanes \(x_{12}+c_{4}x_{1345}=0\) and \(x_{34}+\frac{1}{2}c_{4}x_{25}=0\).
We have thus found nine linearly independent hyperplanes with two unknown parameters \(c_{2}\) and \(c_{4}.\) For each pair \(c_{2},c_{4}\), let \(P_{c_{2},c_{4}}\) be the six-dimensional projective linear subspace defined by these hyperplanes. For any values of \(c_{2}\) and \(c_{4}\), the intersection of \(P_{c_{2},c_{4}}\) with the orthogonal Grassmannian yields a scheme with a \(\mathbb{G}_{m}\)-action with weights \(-3,-2,-1,0,1,2,3\) and an involution interchanging the positive and negative weight spaces. Are there any values of \(c_{2}\) and \(c_{4}\) that yield the balanced ribbon?
Next, choose seven of the variables \(x_{I}\) with weights \(-3,-2,-1,0,1,2,3\) to use as variables on \(P_{\mathrm{rib}}\cong\mathbb{P}^{6}\). Here we used \(y_{0}=x_{1245},y_{1}=x_{1345},y_{2}=x_{2345},y_{3}=x_{14},y_{4}=x_{15},y_{5}=\frac{1}{2}x_{25},y_{6}=x_{35}.\) (The choice \(y_{5}=\frac{1}{2}x_{25}\) is because the involution on the balanced ribbon swaps the \(\pm 2\) weight spaces, and this is the basis that has the desired action.)
Substituting the nine hyperplanes found above into Mukai's equations for the orthogonal Grassmannian yields the following quadrics.
\[\begin{array}{l}2c_{4}y_{1}^{2}-2c_{2}y_{0}y_{2},\\ -c_{4}y_{5}^{2}+c_{2}y_{4}y_{6},\\ c_{4}y_{1}y_{2}+y_{0}y_{3},\\ -c_{4}y_{4}y_{5}-y_{3}y_{6},\\ 2c_{2}y_{2}^{2}+2y_{1}y_{3},\\ -c_{2}y_{4}^{2}-y_{3}y_{5},\\ -y_{2}y_{3}-2c_{2}y_{1}y_{4}+c_{4}y_{0}y_{5},\\ y_{3}y_{4}+2c_{2}y_{2}y_{5}-c_{4}y_{1}y_{6},\\ -y_{3}^{2}+2c_{2}^{2}y_{2}y_{4}-c_{4}^{2}y_{1}y_{5},\\ y_{2}y_{4}-2y_{1}y_{5}+y_{0}y_{6}\end{array}\]
See Code 4.3.
Careful inspection reveals that with \(c_{2}=-1\) and \(c_{4}=-1\), each quadric on our list is a nonzero constant multiple of one of the balanced ribbon equations.
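This inspection can be automated. A sympy sketch (not the paper's Code 4.3) substitutes \(c_{2}=c_{4}=-1\) and matches each quadric, up to the scalars \(\pm 1,\pm 2\), against a balanced ribbon equation:

```python
import sympy as sp

y0, y1, y2, y3, y4, y5, y6, c2, c4 = sp.symbols('y0:7 c2 c4')

# The ten quadrics cut out on P_{c2,c4} by Mukai's equations, as listed above.
quadrics = [2*c4*y1**2 - 2*c2*y0*y2,
            -c4*y5**2 + c2*y4*y6,
            c4*y1*y2 + y0*y3,
            -c4*y4*y5 - y3*y6,
            2*c2*y2**2 + 2*y1*y3,
            -c2*y4**2 - y3*y5,
            -y2*y3 - 2*c2*y1*y4 + c4*y0*y5,
            y3*y4 + 2*c2*y2*y5 - c4*y1*y6,
            -y3**2 + 2*c2**2*y2*y4 - c4**2*y1*y5,
            y2*y4 - 2*y1*y5 + y0*y6]

# The balanced ribbon quadrics f_0, ..., f_9 from the start of this section.
ribbon = [y2*y3 - 2*y1*y4 + y0*y5, y2*y4 - 2*y1*y5 + y0*y6,
          y3**2 - 2*y2*y4 + y1*y5, y3*y4 - 2*y2*y5 + y1*y6,
          -y1**2 + y0*y2, -y1*y2 + y0*y3, -y2**2 + y1*y3,
          -y4**2 + y3*y5, -y4*y5 + y3*y6, -y5**2 + y4*y6]

for q in quadrics:
    q1 = sp.expand(q.subs({c2: -1, c4: -1}))
    assert any(sp.expand(q1 - k * r) == 0 for r in ribbon for k in (1, -1, 2, -2))
print("each quadric is a scalar multiple of a ribbon equation")
```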
These calculations establish the following proposition.
**Proposition 4.1**.: _Let \(C_{\mathrm{rib}}\) be the genus 7 balanced ribbon. Then_
\[C_{\mathrm{rib}}\cong P_{\mathrm{rib}}\cap\mathrm{OG}(5,10)\]
_where_
\[M_{\mathrm{rib}}=\left[\begin{array}{cccccccccccccccccccc}0&0&0&1&0&1&0&0&0&0 &0&0&0&0&0&0\\ 0&0&0&0&0&\frac{1}{2}&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&2&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ 0&1&0&0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&1&0&0&0&0&0&0&0&0&0&0&0&1\\ \end{array}\right] \tag{4.1}\]
_and \(P_{\mathrm{rib}}=\mathbb{P}(\operatorname{RowSpace}M_{\mathrm{rib}})\)_
## 5. A family of reducible nodal curves
Next, we study a family of reducible nodal curves. This family is a one-dimensional stratum in the boundary of \(\overline{M}_{7}\), also known as an F-curve.
This family is constructed as follows. Let \(G\) be the graph on \(11\) vertices \(0,1,2,34,5,6,7,8,9,10,11\) with edges 0-1, 0-8, 0-9, 1-2, 1-10, 2-34, 2-11, 34-9, 34-5, 34-10, 5-6, 5-11, 6-7, 6-9, 7-8, 7-10, and 8-11. We present two different views of this graph. See Figures 1 and 2.
\(G\) is trivalent at every vertex except vertex \(34\), which is \(4\)-valent.
Bayer and Eisenbud introduced a theory of graph curves in [3]. These are nodal curves for which each irreducible component is a rational curve. The graph in the name is the dual graph of the curve. (Note: in [3], the definition of a graph curve specifies that the graph should be trivalent, but we will continue to call the objects we study graph curves even though there is one 4-valent vertex.)
The graph \(G\) defines a 1-dimensional family of nodal curves because we can vary the cross-ratio of the four nodes on the component labeled 34. The graph \(G\) has three specializations to trivalent graphs as indicated in Figure 2.
### How this family was selected
This family was selected as follows.
We searched for genus 7 trivalent graph curves with pure Betti tables. In Sage, we generated a list of the genus 7 trivalent graphs; there are 85 such graphs. Next, we selected the connected and 3-edge-connected graphs among this list, since by [3, Prop. 2.5], these are the ones that give graph curves with very ample dualizing sheaves. Next, we computed the Betti tables of these graph curves in Macaulay2 and found two genus 7 graph curves with pure Betti tables. We selected the one that had the larger automorphism group for further study; see Code 5.1. This is the graph \(G_{0}\) in Figure 3. It has two types of edges: those that belong to the nonagon, and those that do not. We contracted one of the nonagon edges to obtain the graph \(G\).

Figure 1. The graph \(G\)

Figure 2. The graph \(G\) and its trivalent specializations
For every member of this family of curves, the dualizing sheaf is very ample. Thus, each member of the family is represented in the Hilbert scheme of canonical curves. Moreover, for a general member of this family, and the specialization \(G_{0}\) (but not the specializations \(G_{1}\) and \(G_{\infty}\)), the canonical ideal has a pure Betti table. This permits us to study degenerations in the parameter space of Mukai's model as the curve acquires extra syzygies.
The relevant combinatorial features of the graphs are that \(G_{1}\) and \(G_{\infty}\) each contain 4-cycles, whereas in \(G_{0}\), the shortest cycles have length 5. Bayer and Eisenbud describe in [3, Section 5] how to construct line bundles that lower the Clifford index and add to the Betti table starting from cycles that are sufficiently small relative to the genus of the graph.
### Canonical equations of these graph curves
To produce equations for this family, we begin with the specialization \(G_{0}\). See Figure 3.
Let \(C_{\mathrm{nod},0}\) be the graph curve associated to the graph \(G_{0}\). Since \(G_{0}\) is 3-edge-connected, by [3, Prop. 2.5], \(\omega_{C_{\mathrm{nod},0}}\) is very ample. We can use [3, Prop. 3.1] to write the canonical ideal of \(C_{\mathrm{nod},0}\); see Code 5.2. Let \(y_{0},\ldots,y_{6}\) represent the basis of \(H^{1}(G_{0})\) corresponding to the 5-cycles 0-1-2-3-9-0, 1-2-3-4-10-1, 2-3-4-5-11-2, 3-4-5-6-9-3, 4-5-6-7-10-4, 5-6-7-8-11-5, and 6-7-8-0-9-6. Then the canonical ideal of \(C_{\mathrm{nod},0}\) in these variables is given by the following 10 quadrics. There are 5 monomials and 5 polynomials.
\[\begin{array}{c}I(C_{\mathrm{nod},0})=\langle y_{0}y_{4},y_{0}y_{5},y_{1}y_{ 5},y_{1}y_{6},y_{2}y_{6},y_{0}y_{2}-y_{1}y_{2}+y_{2}y_{3}-y_{3}y_{4}+y_{4}y_{5} -y_{4}y_{6},\\ y_{0}y_{3}-y_{2}y_{3}+y_{3}^{2}-y_{4}y_{5}+y_{3}y_{6}+y_{4}y_{6},y_{1}y_{3}-y_{2 }y_{3}+y_{3}y_{4}-y_{4}y_{5}+y_{4}y_{6},\\ y_{2}y_{4}-y_{3}y_{4}+y_{4}y_{5}-y_{4}y_{6},y_{3}y_{5}-y_{4}y_{5}+y_{4}y_{6} \rangle\end{array}\]
We can compute a primary decomposition of the ideal shown above to obtain the ideal of each irreducible component of \(C_{\mathrm{nod},0}\). This yields Table 1.
Next, we find equations for the other members of this family by replacing the components 3 and 4 by a quadric; see Code 5.3. The union of components 3 and 4 in \(C_{\mathrm{nod},0}\) is contained in the plane \(\langle y_{6},y_{5},y_{1}-y_{2}+y_{4},y_{0}-y_{2}+y_{3}\rangle\). The nodes corresponding to the edges 2-3, 3-9, 4-5, and 4-10 occur at \([1:1:1:0:0:0:0]\), \([-1:0:0:1:0:0:0]\), \([0:0:1:1:1:0:0]\), and \([0:-1:0:0:1:0:0]\). For all \(t=[t_{0}:t_{1}]\), the quadric \(t_{0}y_{2}y_{3}-t_{1}y_{2}y_{4}+(-t_{0}+t_{1})y_{3}y_{4}\) in this plane passes through these four points. When \(t_{0}=0\), the quadric factors as \((y_{2}-y_{3})y_{4}\), which corresponds to the graph \(G_{0}\). When \(t_{0}=t_{1}\), the quadric factors as \(y_{2}(y_{3}-y_{4})\), which corresponds to the graph \(G_{1}\). When \(t_{1}=0\), the quadric factors as \(y_{3}(y_{2}-y_{4})\), which corresponds to the graph \(G_{\infty}\).
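These assertions about the pencil of quadrics can be verified symbolically; here is a short sympy sketch (not the paper's Code 5.3).

```python
import sympy as sp

t0, t1 = sp.symbols('t0 t1')
y = sp.symbols('y0:7')
q = t0*y[2]*y[3] - t1*y[2]*y[4] + (-t0 + t1)*y[3]*y[4]

nodes = {'2-3':  (1, 1, 1, 0, 0, 0, 0),
         '3-9':  (-1, 0, 0, 1, 0, 0, 0),
         '4-5':  (0, 0, 1, 1, 1, 0, 0),
         '4-10': (0, -1, 0, 0, 1, 0, 0)}

for edge, p in nodes.items():
    print(edge, sp.expand(q.subs(dict(zip(y, p)))))  # each value is 0

# The three special fibres factor as claimed.
print(sp.factor(q.subs(t0, 0)))    # proportional to (y2 - y3)*y4: graph G_0
print(sp.factor(q.subs(t0, t1)))   # proportional to y2*(y3 - y4): graph G_1
print(sp.factor(q.subs(t1, 0)))    # proportional to y3*(y2 - y4): graph G_infinity
```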
Now, for a general \(t\), we intersect the ideals for components \(0,1,2,5,6,7,8,9,10,11\) with the ideal
\[\langle t_{0}y_{2}y_{3}-t_{1}y_{2}y_{4}+(-t_{0}+t_{1})y_{3}y_{4},y_{6},y_{5},y_{1}-y_{2}+y_{4},y_{0}-y_{2}+y_{3}\rangle\]

defining the component 34 to obtain an ideal \(I_{t}\) generated by the following ten quadrics.

Figure 3. The graph \(G_{0}\)
\[f_{0} =y_{2}y_{6}\] \[f_{1} =y_{1}y_{6}\] \[f_{2} =y_{3}y_{5}-y_{4}y_{5}+y_{4}y_{6}\] \[f_{3} =y_{1}y_{5}\] \[f_{4} =y_{0}y_{5}\] \[f_{5} =y_{0}y_{4}-y_{2}y_{4}+y_{3}y_{4}-y_{4}y_{5}+y_{4}y_{6}\] \[f_{6} =t_{0}y_{2}y_{3}-t_{1}y_{2}y_{4}+(-t_{0}+t_{1})y_{3}y_{4}+(t_{0}-t _{1})y_{4}y_{5}+(-t_{0}+t_{1})y_{4}y_{6}\] \[f_{7} =y_{1}y_{3}-y_{2}y_{3}+y_{3}y_{4}-y_{4}y_{5}+y_{4}y_{6}\] \[f_{8} =y_{0}y_{3}-y_{2}y_{3}+y_{3}^{2}-y_{4}y_{5}+y_{3}y_{6}+y_{4}y_{6}\] \[f_{9} =y_{0}y_{2}-y_{1}y_{2}+y_{2}y_{3}-y_{2}y_{4}\]
We change to the following basis of \(I_{t}\) so that \(\sum_{i=0}^{4}g_{i}g_{i+5}=0\); see Code 5.4.
\[\begin{aligned} g_{0}&=-t_{0}y_{1}y_{3}+t_{1}y_{0}y_{4}\\ g_{1}&=-(t_{0}-t_{1})y_{0}y_{3}-t_{1}y_{2}y_{3}-(t_{0}-t_{1})y_{3}^{2}+t_{1}y_{2}y_{4}+(t_{0}-t_{1})y_{3}y_{4}-(t_{0}-t_{1})y_{3}y_{6}\\ g_{2}&=(t_{0}-t_{1})y_{1}y_{3}+t_{1}y_{2}y_{3}-t_{1}y_{2}y_{4}\\ g_{3}&=t_{0}y_{2}y_{3}-t_{1}y_{2}y_{4}-(t_{0}-t_{1})y_{3}y_{4}+(t_{0}-t_{1})y_{3}y_{5}\\ g_{4}&=-t_{1}y_{0}y_{2}+t_{1}y_{1}y_{2}-t_{1}y_{2}y_{3}+t_{1}y_{2}y_{4}\\ g_{5}&=y_{2}y_{6}\\ g_{6}&=y_{1}y_{5}\\ g_{7}&=y_{0}y_{5}+y_{3}y_{5}-y_{4}y_{5}+y_{4}y_{6}\\ g_{8}&=y_{1}y_{6}\\ g_{9}&=y_{3}y_{5}-y_{4}y_{5}+y_{4}y_{6}\end{aligned}\]
For \(t\in(\mathbb{P}^{1}\setminus\{1,\infty\})\), the Betti table of \(I_{t}\) is pure. When \(t\in\{1,\infty\}\), the Betti table is
\[\begin{array}{r|cccccc} &0&1&2&3&4&5\\ \text{total:}&1&10&19&19&10&1\\ 0:&1&.&.&.&.&.\\ 1:&.&10&16&3&.&.\\ 2:&.&.&3&16&10&.\\ 3:&.&.&.&.&.&1\end{array}\]
By [24], this is the Betti table of a tetragonal curve.
### Spinor embeddings of each component of \(C_{\mathrm{nod},t}\)
Next, for \(t\not\in\{0,1,\infty\}\), we embed each irreducible component of \(C_{\mathrm{nod},t}\) in \(\mathbb{P}(S^{+})\) and define
\[X_{t}^{v} :=\rho(C_{\mathrm{nod},t}^{v})\text{ for each }v\in G\] \[X_{t} :=\bigcup_{v\in G}X_{t}^{v}\]
We compute each component \(X_{t}^{v}\) as follows. First, we parametrize \(C_{\mathrm{nod},t}^{v}\). Next, we compute the spinor associated to \(W_{p}^{\perp}\), where \(p\) is the general point given by the parametrization of \(C_{\mathrm{nod},t}^{v}\). This gives us a parametrization of the line \(X_{t}^{v}\) in \(\mathbb{P}(S^{+})\). We then eliminate parameters to obtain the ideal of \(X_{t}^{v}\) in \(\mathbb{P}(S^{+})\).
\begin{table}
\begin{tabular}{l l} Component & Ideal \\
0 & \(\langle x_{1234},x_{1235},x_{1245},x_{2345},x_{45},x_{35},x_{25},x_{24},x_{23},x_{15},x_{14},x_{13},x_{12},x_{0}\rangle\) \\
1 & \(\langle x_{1234},x_{1235},x_{1245},x_{1345},x_{2345},x_{45},x_{35},x_{25},x_{23},x_{15},x_{14},x_{13},x_{12},x_{0}\rangle\) \\
2 & \(\langle x_{1234},x_{1235},x_{1245},x_{1345},x_{2345},x_{45},x_{35},x_{25},x_{24}-x_{34},x_{23},x_{15},x_{14},x_{12}-x_{13},x_{0}\rangle\) \\
34 & \(\langle x_{1234},x_{1235},x_{1245},x_{1345},x_{2345},x_{24}-x_{34},x_{23}-x_{25},x_{15}+x_{25}-x_{35}-x_{45},x_{14},\) \\
 & \(x_{13}+x_{25}+x_{34},x_{12}+x_{25}+x_{34},x_{25}x_{34}-x_{34}x_{35}+x_{25}x_{45},t_{1}x_{25}-t_{1}x_{35}-t_{1}x_{45}+x_{0},\) \\
 & \(t_{0}x_{25}-t_{1}x_{35}-t_{0}x_{45},t_{1}x_{34}x_{45}+t_{1}x_{35}x_{45}+t_{1}x_{45}^{2}-x_{0}x_{34}-x_{0}x_{45},\) \\
 & \(t_{0}x_{34}x_{35}-t_{1}x_{34}x_{35}-t_{0}x_{34}x_{45}-t_{1}x_{35}x_{45}-t_{0}x_{45},t_{0}t_{1}x_{35}-t_{1}^{2}x_{35}-t_{0}x_{0}\rangle\) \\
5 & \(\langle x_{1234},x_{1245},x_{1345},x_{2345},x_{45},x_{34},x_{24},x_{23}-x_{25},x_{15}+x_{25}-x_{35},x_{14},x_{12}-x_{13},\) \\
 & \(t_{1}x_{1235}-x_{13}-x_{25},t_{1}x_{25}-t_{1}x_{35}+x_{0},t_{0}x_{25}-t_{1}x_{35},\) \\
 & \(t_{0}x_{13}x_{35}-t_{1}x_{13}x_{35}-t_{0}x_{0}x_{1235}+x_{0}x_{35},t_{0}t_{1}x_{35}-t_{1}^{2}x_{35}-t_{0}x_{0}\rangle\) \\
6 & \(\langle x_{1234},x_{1245},x_{2345},x_{34}+x_{45},x_{24},x_{23}-x_{25},x_{15}+x_{25}-x_{35}-x_{45},x_{14},x_{13},x_{12},\) \\
 & \(t_{0}x_{45}-t_{1}x_{45}+x_{0},x_{34}x_{35}+x_{34}x_{45}-x_{0}x_{1345},t_{1}x_{35}+t_{1}x_{45}-x_{0}\rangle\) \\
10 & \(\langle x_{1234},x_{1235},x_{1245},x_{1345},x_{2345},x_{35},x_{34}+x_{45},x_{25}-x_{45},x_{23}-x_{45},x_{15},x_{14},x_{13},x_{12},x_{0}\rangle\) \\
11 & \(\langle x_{1234},x_{1245},x_{1345},x_{2345},x_{45},x_{35},x_{34},x_{25},x_{24},x_{23},x_{15},x_{14},x_{12}-x_{13},x_{0}\rangle\) \\
\end{tabular}
Table 2. Ideals of the spinor embeddings \(X_{t}^{v}\) for generic \(t\) (rows for components 7, 8, and 9 are not reproduced here)
\end{table}

Table 3. Nodes of \(X_{t}\) for generic \(t\) (table body not reproduced here)
This yields Table 2; see Code 5.5. (Note: the generators shown are not necessarily a Gröbner basis in each case.)
For \(t\not\in\{0,1,\infty\}\), these components intersect at the nodes listed in Table 3.
Next, we compute the ideal of \(X_{t}=\bigcup_{v\in G}X_{t}^{v}\); see Code 5.6. (Here we show a minimal set of generators; a Gröbner basis is used for the limit computations.)
\[\begin{split} I(X_{t})&=\langle x_{2345},x_{1245},x_ {1234},t_{0}x_{25}-t_{1}x_{35}-t_{0}x_{45},x_{23}-x_{25},x_{15}+x_{25}-x_{35}-x _{45},x_{14},\\ x_{12}-x_{13},x_{0}+t_{1}x_{25}-t_{1}x_{35}-t_{1}x_{45},x_{24}x_{1345}, x_{13}x_{1345},x_{45}x_{1235}+x_{25}x_{1345},x_{34}x_{1235}-x_{25}x_{1345},\\ x_{24}x_{1345},x_{13}x_{45}+x_{25}x_{45}+x_{34}x_{45}+t_{1}x_{25}x_{1345} -t_{1}x_{45}x_{1345},\\ x_{24}x_{35}-x_{34}x_{35}+t_{1}x_{35}x_{1345},\\ x_{13}x_{35}+x_{25}x_{35}+x_{34}x_{35}-t_{1}x_{35}x_{1235}-t_{1}x_{35 }x_{1345},\\ x_{25}x_{34}-x_{34}x_{35}+x_{25}x_{45}+t_{1}x_{35}x_{1345},\\ x_{24}x_{25}-x_{34}x_{35}-x_{24}x_{45}+x_{25}x_{45}+x_{34}x_{45}+t_{1 }x_{25}x_{1345}+t_{1}x_{35}x_{1345}-t_{1}x_{45}x_{1345},\\ x_{13}x_{25}+x_{25}^{2}+x_{34}x_{35}-x_{25}x_{45}-t_{1}x_{25}x_{1235}-t _{1}x_{25}x_{1345}-t_{1}x_{35}x_{1345},\\ x_{13}x_{24}-x_{13}x_{34},x_{35}^{2}x_{1235}x_{1345}-t_{0}x_{35}x_{1235 }^{2}x_{1345}-t_{0}x_{35}x_{1235}x_{1345}^{2})\end{split} \tag{5.1}\]
We define \(X_{0}\), \(X_{1}\), and \(X_{\infty}\) as the flat limits of the family \(X_{t}\) as \(t\) approaches \(0\), \(1\), and \(\infty\).
The ideal \(I(X_{t})\) contains nine generators in degree \(1\). We use them to define families \(P_{\mathrm{nod},t}\subset\mathrm{Gr}(7,16)\) and \(Y_{t}\subset\mathbb{P}^{15}\).
\[P_{\mathrm{nod},t} :=\langle x_{2345},x_{1245},x_{1234},t_{0}x_{25}-t_{1}x_{35}-t_{0} x_{45},x_{23}-x_{25},x_{15}+x_{25}-x_{35}-x_{45},x_{14},\] \[x_{12}-x_{13},x_{0}+t_{1}x_{25}-t_{1}x_{35}-t_{1}x_{45}\rangle \tag{5.2}\] \[Y_{t} :=P_{\mathrm{nod},t}\cap\mathrm{OG}(5,10) \tag{5.3}\]
We establish the following propositions via explicit calculations in Macaulay2.
**Proposition 5.1**.: _For \(t\not\in\{1,\infty\}\), the map \(\rho:C^{\mathrm{sm}}_{\mathrm{nod},t}\to\mathbb{P}^{15}\) extends to an embedding, and_
\[\rho(C_{\mathrm{nod},t})=P_{\mathrm{nod},t}\cap\mathrm{OG}(5,10)\]
_where_
\[M_{\mathrm{nod},t}=\left[\begin{array}{cccccccccccccccccccc}0&1&1&0&0&0&0&0& 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0\\ t_{0}t_{1}-t_{1}^{2}&0&0&0&t_{0}-t_{1}&t_{1}&0&t_{1}&0&t_{0}&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ \end{array}\right] \tag{5.4}\]
_and \(P_{\mathrm{nod},t}=\mathbb{P}(\mathrm{RowSpace}\,M_{\mathrm{nod},t})\)_
Proof.: See Code 5.7 for the case \(t\not\in\{0,1,\infty\}\), and Code 5.8 for the case \(t=0\).
### The limits of this family as \(t\to 1,\infty\)
The next two propositions describe the limits of the families \(X_{t}\) and \(Y_{t}\) as \(t\) approaches \(1\) or \(\infty\).
First, we describe the flat limits of \(X_{t}\) in \(\mathbb{P}(S^{+})\) as \(t\) approaches \(1\) or \(\infty\), that is, degenerations of this family of curves in the Hilbert scheme \(\mathrm{Hilb}(\mathbb{P}^{15},12t-6)\).
**Proposition 5.2**.:
1. \(X_{1}\) _is the union of the limits of the irreducible components in_ \(X_{t}\) _as_ \(t\to 1\)_. It is a graph curve whose dual graph is_ \(G_{1}\)_. However,_ \(P_{\mathrm{nod},1}\cap\mathrm{OG}(5,10)\neq X_{1}\)_._
2. \(X_{\infty}\) _is the union of the limits of the irreducible components in_ \(X_{t}\) _as_ \(t\to\infty\)_. It is a reducible curve that has nodes and spatial triple points as its singularities. Furthermore,_ \(P_{\mathrm{nod},\infty}\cap\mathrm{OG}(5,10)\neq X_{\infty}\)_._
Proof.: See Code 5.9 for the case \(t=1\) and Code 5.10 for the case \(t=\infty\).
Here are a few more details about the curve \(X_{\infty}\). In this limit, the component defined by vertex \(34\) does not split into two lines, at least over \(\mathbb{Q}\). Furthermore, some of the nodes in \(X_{t}\) collide as \(t\to\infty\). Specifically, node \(6\)-\(7\) approaches node \(7\)-\(8\); node \(5\)-\(11\) approaches node \(8\)-\(11\); and node \(0\)-\(9\) approaches node \(0\)-\(8\). By computing the tangent cones at these points, we can check that these singularities are spatial triple points.
Next, we describe the limits of the family \(Y_{t}=P_{\mathrm{nod},t}\cap\mathrm{OG}(5,10)\) in \(\mathrm{Gr}(7,16)\) as \(t\) approaches \(1\) or \(\infty\).
**Proposition 5.3**.:
1. \(Y_{1}\) _is a union of five irreducible components, each of dimension 2._
   * _Lines 0 and 9 in the flat limit_ \(X_{1}\) _are replaced by their span._
   * _Lines 1, 10, and 3 in the flat limit_ \(X_{1}\) _are replaced by the scroll connecting a point_ \(p\) _on line 1 to its image on line 3 under the isomorphism mapping nodes 0-1, 1-2, and 1-10 to 3-9, 3-4, and 3-10._
   * _Lines 2 and 4 in the flat limit_ \(X_{1}\) _are replaced by their span._
   * _Lines 5 and 11 in the flat limit_ \(X_{1}\) _are replaced by their span._
   * _Lines 6, 7, 8 in the flat limit_ \(X_{1}\) _are replaced by the scroll connecting a point_ \(p\) _on line 6 to its image on line 8 under the isomorphism mapping nodes 5-6, 6-7, and 6-9 to 8-11, 7-8, and 0-8._
2. \(Y_{\infty}\) _is a union of eight irreducible components._
   * _Lines 0, 1, 2, 7, 10, 11 in the flat limit_ \(X_{\infty}\) _appear as irreducible components of_ \(Y_{\infty}\)_._
   * _Component 34 (an irreducible quadric) in the flat limit_ \(X_{\infty}\) _is an irreducible component of_ \(Y_{\infty}\)_._
   * _Lines 5, 6, 8, 9 in the flat limit_ \(X_{\infty}\) _are replaced by their span, a_ \(\mathbb{P}^{2}\)_._
Proof.: See Code 5.11 for the case \(t=1\) and Code 5.12 for the case \(t=\infty\).
### GIT instability for the limits as \(t\to 1,\infty\)
In this section we discuss GIT semistability/instability for the family \([P_{\mathrm{nod},t}]\) with respect to the maximal torus \(T\subset\mathrm{Spin}(10)\) given by the lifts of the diagonal maximal torus in \(\mathrm{SO}(10)\). (Recall: we are working with the quadratic form \(\sum q_{i}q_{i+n}\), so there is a maximal torus consisting of diagonal matrices.)
**Proposition 5.4**.: \([P_{\mathrm{nod},t}]\) _is \(T\)-semistable with respect to the lift of the diagonal maximal torus \(T\) in \(\mathrm{SO}(10)\) if and only if \(t\not\in\{1,\infty\}\)._
Proof.: GIT semistability with respect to a torus \(T\) can be characterized using state polytopes.
When \(t\not\in\{0,1,\infty\}\), the state of \(P_{\mathrm{nod},t}\) has 21 points, and the state polytope has 20 vertices. The trivial character \(\chi_{0}\) is contained in the interior of the state polytope, so, for general \(t\), \([P_{\mathrm{nod},t}]\) is \(T\)-semistable.
When \(t=0\), the state of \(P_{\mathrm{nod},0}\) has 16 points, and they are all vertices of the state polytope. The trivial character \(\chi_{0}\) is contained in the interior of the state polytope, so \([P_{\mathrm{nod},0}]\) is also \(T\)-semistable.
When \(t=1\), the state of \(P_{\mathrm{nod},1}\) has 9 points, and the state polytope has 8 vertices. The trivial character \(\chi_{0}\) is not contained in the state polytope, so this point is \(T\)-unstable. We compute the proximum and find that the worst 1-parameter subgroup is in the direction \((-2,1,1,1,1)\).
When \(t=\infty\), the state of \(P_{\mathrm{nod},\infty}\) has 12 points, and they are all vertices of the state polytope. The trivial character \(\chi_{0}\) is not contained in the state polytope, so this point is also \(T\)-unstable. We compute the proximum and find that the worst 1-parameter subgroup is in the direction \((1,0,1,0,1)\).
See Code 5.13.
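The torus test reduces to a linear-programming feasibility question: the point is \(T\)-semistable exactly when the trivial character lies in the convex hull of its states. Here is a generic Python sketch of that membership test (toy data only; the actual states and proxima are computed in Code 5.13).

```python
import numpy as np
from scipy.optimize import linprog

def trivial_character_in_hull(states):
    # Feasibility LP: find lambda >= 0 with sum(lambda) = 1 and
    # sum(lambda_i * w_i) = 0, i.e., 0 in conv(states).
    W = np.asarray(states, dtype=float)
    m, d = W.shape
    A_eq = np.vstack([W.T, np.ones((1, m))])
    b_eq = np.concatenate([np.zeros(d), [1.0]])
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.status == 0

# Toy states: symmetric about the origin (semistable) vs. shifted (unstable).
print(trivial_character_in_hull([(1, 0), (-1, 0), (0, 1), (0, -1)]))  # True
print(trivial_character_in_hull([(1, 0), (2, 1), (1, 2)]))            # False
```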
For any maximal torus \(T\subset G\), \(T\)-instability implies \(G\)-instability. But, in general, \(T\)-semistability for one maximal torus gives us little information about \(G\)-semistability, in the following sense. In [15], Hyeon and Park show that in any GIT quotient problem of semisimple group representations, every point is semistable with respect to a general maximal torus. In Section 6, with a great deal more effort, we will study \(G\)-semistability for \([P_{\mathrm{nod},t}]\) with \(t\not\in\{1,\infty\}\).
## 6. Constructing a \(\mathrm{Spin}(10)\)-invariant polynomial for \(\Lambda^{7}S^{+}\)
Let \(S^{+}\) be the half-spin representation of \(\mathrm{Spin}(10)\). Mukai's model of \(\overline{M}_{7}\) is the quotient \(\mathrm{Gr}(7,16)/\!\!/\,\mathrm{Spin}(10)\). By definition, this GIT quotient is \(\mathrm{Proj}(\oplus_{d}(\mathrm{Sym}^{d}\Lambda^{7}S^{+})^{\mathrm{Spin}(10)})\). In this section, we construct a \(\mathrm{Spin}(10)\) invariant polynomial.
We begin with an approach for computing \(G\) invariants in a fixed degree. Sturmfels calls this the _Lie algebra method_ in [27, Section 4.5], and it is also discussed in Derksen and Kemper's book in [9, Section 4.5]. This approach uses the Casimir operator on \(\mathfrak{g}\).
**Definition 6.1**.: Let \(\delta_{1},\ldots,\delta_{m}\) be a basis of \(\mathfrak{g}\), and let \(\gamma_{1},\ldots,\gamma_{m}\) be the dual basis of \(\mathfrak{g}\) with respect to the Killing form \(\kappa\). The Casimir operator \(c\) is defined as
\[c=\sum_{i=1}^{m}\delta_{i}\gamma_{i},\]

where the products \(\delta_{i}\gamma_{i}\) are taken in the universal enveloping algebra, so that \(c\) acts on any representation as a sum of composed operators.
One key property of \(c\) is the following:
**Proposition 6.2**.: _If \(V(\lambda)\) is an irreducible representation with highest weight \(\lambda\), then \(c\) acts as multiplication by the scalar \((\lambda,\lambda+2\rho)\). (Here (,) represents the Killing form, and \(\rho\) is half the sum of the positive roots.)_
See for instance [14, (25.14)]. This suggests the following strategy for computing invariants.
**Proposition 6.3**.: \(v\) _is invariant under \(G\) if and only if \(v\in\ker(c)\)._
It also suggests an iterative procedure for computing invariants.
**Proposition 6.4**.: _Let \(V=\bigoplus_{\lambda\in S}V_{\lambda}^{m_{\lambda}}\) be the irreducible decomposition of \(V\). Let \(S^{\prime}=\{(\lambda,\lambda+2\rho):\lambda\in S,\lambda\neq 0\}\). Then the operator \(\prod_{k\in S^{\prime}}(c-k)\) projects \(V\) to \(V^{G}\)._
Proof.: This is [9, Prop. 4.5.18], plus the observation that we can compute the spectrum of the Casimir operator \(c\) on \(V\) once we know the irreducible decomposition of \(V\).
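As a toy numerical illustration of Proposition 6.4 (with a made-up diagonal stand-in for the Casimir operator, not the actual computation on \(\Lambda^{7}S^{+}\)):

```python
import numpy as np

C = np.diag([0.0, 0.0, 65.0, 42.0])    # pretend spectrum: ker(C) plus 65, 42
v = np.array([1.0, 2.0, 3.0, 4.0])     # a vector with mixed components

w = v.copy()
for k in (65.0, 42.0):                 # the nonzero eigenvalues S'
    w = (C - k * np.eye(4)) @ w
print(w)  # [2730. 5460. 0. 0.]: only the ker(C) part survives (up to scale)
```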
Proposition 6.3 gives a straightforward algorithm for finding the invariant polynomials in a fixed degree: compute the action of \(c\), and then compute its kernel. However, \(\dim V\) is so large for the representation we want to study that computing \(\ker c\) in a naive way will not work. We have \(\dim\Lambda^{7}S^{+}=\binom{16}{7}=11,440.\) A character calculation shows that the lowest degree invariants are in degree \(4\); see Code 6.1. We have
\[\dim\operatorname{Sym}^{4}\Lambda^{7}S^{+}=\binom{11440+4-1}{4}=714,036,824,1 89,260.\]
A standard approach to reduce the dimensions of the spaces appearing in the calculation is to restrict to the \(T\)- and \(W\)-invariant subspace, where \(T\) is a maximal torus and \(W\) is the Weyl group. However, this is still too large; we have \(\dim(\operatorname{Sym}^{4}\Lambda^{7}S^{+})^{T}=359,317,176,120\), which implies that the \(T\)-and \(W\)-invariant subspace will have dimension approximately \(100\) million or more; see Code 6.2.
Here is an observation that leads to a successful approach. \(\Lambda^{7}S^{+}\) is reducible; we have \(\Lambda^{7}S^{+}\cong V_{1}\oplus V_{2}\), where \(V_{1}\) has highest weight \((1,0,1,0,1)\) and \(V_{2}\) has highest weight \((3,0,0,1,0)\). We have \(\dim V_{1}=8800\) and \(\dim V_{2}=2640\), and highest weight vectors \(v_{1}\) and \(v_{2}\) generating these modules are as follows; see Code 6.3.
\[v_{1} =y_{\{1,2\},\{1,3\},\{1,2,3,4\},\{1,2,3,5\},\{1,2,4,5\},\{1,3,4,5 \},\{2,3,4,5\}}\] \[v_{2} =y_{\{1,2\},\{1,3\},\{1,4\},\{1,2,3,4\},\{1,2,3,5\},\{1,2,4,5\}, \{1,3,4,5\}}\]
Thus
\[\operatorname{Sym}^{d}(V_{1}\oplus V_{2})\cong\bigoplus_{d_{1}+d_{2}=d}\operatorname{Sym}^{d_{1}}V_{1}\otimes\operatorname{Sym}^{d_{2}}V_{2}.\]
We focus on the summand \(\operatorname{Sym}^{2}V_{1}\otimes\operatorname{Sym}^{2}V_{2}\). There are \(89\)\(\operatorname{Spin}(10)\) invariants in this summand, and they are all of the form
\[(V(\lambda)\otimes V(\lambda^{*}))^{\operatorname{Spin}(10)}\]
for some irreducible \(V(\lambda)\subset V_{1}\) with dual \(V(\lambda^{*})\subset V_{2}\); see Code 6.4.
Next, we analyze the irreducible decompositions of \(\operatorname{Sym}^{2}V_{1}\) and \(\operatorname{Sym}^{2}V_{2}\) and select one dual pair of summands for further study. Specifically, we select the summand of \(V_{1}\) with highest weight \((5,0,0,0,0)\). \(\dim V(5\omega_{1})=1782\). The rationale for this choice is that, on the one hand, if \(\lambda\) is too far from \(0\) in the weight lattice, \(V(\lambda)\) will have large dimension, and the subsequent calculations in \(V(\lambda)\otimes V(\lambda^{*})\) will be difficult. But if \(\lambda\) is too close to \(0\) in the weight lattice, then the weight \(\lambda\) and \(\lambda^{*}\) spaces in \(\operatorname{Sym}^{2}V_{1}\) and \(\operatorname{Sym}^{2}V_{2}\) will have large dimension, and it will be difficult to compute highest weight vectors generating \(V(\lambda)\) and \(V(\lambda^{*})\). Choosing \(\lambda=(5,0,0,0,0)\) was a compromise between these competing considerations; see Code 6.5.
Next, observe that \(V(5\omega_{1})\) appears in the fifth symmetric power of the standard representation of \(\mathfrak{so}(10)\); see Code 6.6.
\[\operatorname{Sym}^{5}\operatorname{Std}\cong V(5\omega_{1})\oplus V(3\omega_ {1})\oplus V(\omega_{1})\]
Choose an explicit basis of \(V(5\omega_{1})\subset\operatorname{Sym}^{5}\operatorname{Std}\) consisting of elements of the form
\[f_{I}=X_{-\alpha_{i_{k}}}\dots X_{-\alpha_{i_{1}}}.w\]
where \(w\) is a highest weight vector of \(V(5\omega_{1})\) and \(I=\{i_{1},\dots,i_{k}\}\) indexes a sequence of negative roots; see Code 6.7. This yields a basis \(B_{5\omega_{1}}=\{f_{I}\otimes g_{J}\}\) of \(V(5\omega_{1})\otimes V(5\omega_{1})\).
The \(T\)-invariants of \(V(5\omega_{1})\otimes V(5\omega_{1})\) are spanned by the basis elements \(f_{I}\otimes g_{J}\) in which \(f_{I}\) and \(g_{J}\) have opposite weight. We have
\[\dim(V(5\omega_{1})\otimes V(5\omega_{1}))^{T}=4722;\]
see Code 6.8. The dimension of this space is sufficiently small that we can compute the kernel of the restriction of the Casimir operator \(c\) to this space using the iterative approach suggested in Proposition 6.4. We obtain a symbolic expression for an invariant polynomial that we denote \(F_{5\omega_{1}}\).
**Proposition 6.5**.: _We have explicit lists of sequences \(I\) and \(J\) defining a basis of \((V(5\omega_{1})\otimes V(5\omega_{1}))^{T}\) and coefficients \(c_{IJ}\in\mathbb{Q}\) such that the linear combination_
\[F_{5\omega_{1}}=\sum_{I,J}c_{IJ}(X_{-\alpha_{i_{k}}}\dots X_{-\alpha_{i_{1}}}. w_{1})\otimes(X_{-\alpha_{j_{\ell}}}\dots X_{-\alpha_{j_{1}}}.w_{2}) \tag{6.1}\]
_is a \(\operatorname{Spin}(10)\) invariant polynomial._
See Code 6.9.
One more ingredient is needed in order to evaluate this symbolic expression for \(F_{5\omega_{1}}\) on points \([P]\in\operatorname{Gr}(7,16)\): namely, we need highest weight vectors \(w_{1}\) and \(w_{2}\) generating \(V(5\omega_{1})\subset\operatorname{Sym}^{2}V_{1}\) and \(V(5\omega_{1})\subset\operatorname{Sym}^{2}V_{2}\), respectively. We obtain these as follows. The Casimir operator acts on \(V(5\omega_{1})\) with eigenvalue \(65\), and acts by different scalars on the other irreducible submodules of \(\operatorname{Sym}^{2}V_{1}\) and \(\operatorname{Sym}^{2}V_{2}\) containing the \(5\omega_{1}\) weight space. Thus, we can compute \(w_{1}\) and \(w_{2}\) by iteratively projecting away the eigenspaces of \((\operatorname{Sym}^{2}V_{1})_{(5,0,0,0,0)}\) and \((\operatorname{Sym}^{2}V_{2})_{(5,0,0,0,0)}\) corresponding to the other eigenvalues of the Casimir operator \(c\). This yields vectors \(w_{1}\) and \(w_{2}\) having \(569\) terms and \(785\) terms, respectively; see Code 6.10.
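As a sanity check on this eigenvalue: in the \(\varepsilon\)-coordinates for \(D_{5}\) one has \(\rho=(4,3,2,1,0)\) and \(5\omega_{1}=(5,0,0,0,0)\), so, in the normalization of Proposition 6.2, \((\lambda,\lambda+2\rho)=5\cdot(5+8)=65\). In code:

```python
import numpy as np

rho = np.array([4, 3, 2, 1, 0])    # half the sum of the positive roots of D5
lam = np.array([5, 0, 0, 0, 0])    # the highest weight 5*omega_1
print(int(lam @ (lam + 2 * rho)))  # 65
```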
_Remark_. We can consider \(V(5\omega_{1})\subset\operatorname{Sym}^{5}\operatorname{Std}\) and simplify the expression (6.1) for \(F_{5\omega_{1}}\). This yields an \(\operatorname{SO}(10)\)-invariant polynomial of bidegree \((5,5)\) in two sets of \(10\) variables. It has \(7502\) terms; see Code 6.11. It seems likely that this polynomial has been described in the literature before, but I do not know a reference for this.
### \(\operatorname{Spin}(10)\)-semistability of singular curves
We now state and prove the main theorem.
**Theorem 6.6**.: _The points \([P]\in\operatorname{Gr}(7,16)\) parametrizing the following singular curves are \(\operatorname{Spin}(10)\)-semistable._
1. _The 7-cuspidal curve with heptagonal symmetry_ \(C_{\operatorname{cusp}}\)__
2. _The genus 7 balanced ribbon_ \(C_{\operatorname{rib}}\)__
3. _The reducible nodal curves_ \(C_{\operatorname{nod},t}\) _for_ \(t\neq 1,\infty\)__
Proof.: We use the linear spaces \(P_{\operatorname{cusp}}\), \(P_{\operatorname{rib}}\), and \(P_{\operatorname{nod},t}\) described in Propositions 3.1, 4.1, and 5.1.
We have
\[F_{5\omega_{1}}(P_{\operatorname{cusp}}) =-63984375\] \[F_{5\omega_{1}}(P_{\operatorname{rib}}) =\frac{92664000}{343}\] \[F_{5\omega_{1}}(P_{\operatorname{nod},t}) =t_{1}^{2}(t_{0}-t_{1})^{3}\frac{234000}{343}\]
See Code 6.12.
Since there exists a \(\operatorname{Spin}(10)\) invariant polynomial that does not vanish at these points, these points are \(\operatorname{Spin}(10)\)-semistable.
Recall that by Proposition 5.4, we know that \(P_{\operatorname{nod},1}\) and \(P_{\operatorname{nod},\infty}\) are \(T\)-unstable, hence \(\operatorname{Spin}(10)\)-unstable. Thus, we have a complete description of \(\operatorname{Spin}(10)\)-semistability or instability for each member of the family \(C_{\operatorname{nod},t}\). These results naturally suggest the following question.
**Question 6.7**.: What are the GIT semistable replacements for the family \(P_{\operatorname{nod},t}\) when \(t=1\) and \(t=\infty\)?
Foundational references for the statement of GIT semistable replacement include [21, Lemma 5.3], [25, Theorem 4.1.i], and [26, Proposition 2.1]. More recent references include [4, Section 1.2.1], [5, Theorem 11.1], and [16, Proposition 1.7]. Unfortunately, none of these references give an effective algorithm for computing the GIT semistable replacement.
_Remark._ The calculations reported in the proof of Theorem 6.6 required very large amounts of time and memory. They were accomplished by parallel calculations on four AWS r5.24xlarge instances, each with 96 vCPUs and 768 GB memory. This took approximately 36 hours. In future work, we will try to improve the Macaulay2 code for these calculations to permit additional calculations at a lower cost.
|
2301.06145 | Dyck Words, Pattern Avoidance, and Automatic Sequences | We study various aspects of Dyck words appearing in binary sequences, where
$0$ is treated as a left parenthesis and $1$ as a right parenthesis. We show
that binary words that are $7/3$-power-free have bounded nesting level, but
this no longer holds for larger repetition exponents. We give an explicit
characterization of the factors of the Thue-Morse word that are Dyck, and show
how to count them. We also prove tight upper and lower bounds on $f(n)$, the
number of Dyck factors of Thue-Morse of length $2n$. | Lucas Mol, Narad Rampersad, Jeffrey Shallit | 2023-01-15T17:20:56Z | http://arxiv.org/abs/2301.06145v3 | # Dyck Words, Pattern Avoidance, and Automatic Sequences+
###### Abstract
We study various aspects of Dyck words appearing in binary sequences, where \(0\) is treated as a left parenthesis and \(1\) as a right parenthesis. We show that binary words that are \(7/3\)-power-free have bounded nesting level, but this no longer holds for larger repetition exponents. We give an explicit characterization of the factors of the Thue-Morse word that are Dyck, and show how to count them. We also prove tight upper and lower bounds on \(f(n)\), the number of Dyck factors of Thue-Morse of length \(2n\).
## 1 Introduction
We define \(\Sigma_{k}:=\{0,1,\ldots,k-1\}\). Suppose \(x\in\Sigma_{2}^{*}\); that is, suppose \(x\) is a finite binary word. We say it is a _Dyck word_ if, considering \(0\) as a left parenthesis and \(1\) as a right parenthesis, the word represents a string of balanced parentheses [5]. For example, \(010011\) is Dyck, while \(0110\) is not. Formally, \(x\) is Dyck if \(x\) is empty, or there are Dyck words \(y,z\) such that either \(x=0y1\) or \(x=yz\). The set of all Dyck words forms the _Dyck language_, denoted here by \(\mathcal{D}_{2}\).
In this paper we are concerned with the properties of factors of infinite binary words that are Dyck words.
If \(x\) is a Dyck word, we may talk about its _nesting level_\(N(x)\), which is the deepest level of parenthesis nesting in the string it represents. Formally we have \(N(\epsilon)=0\), \(N(0y1)=N(y)+1\), and \(N(yz)=\max(N(y),N(z))\) if \(y,z\) are
Dyck words. The Dyck property and nesting level are intimately connected with _balance_, which is a function defined by \(B(x)=|x|_{0}-|x|_{1}\), the excess of \(0\)'s over \(1\)'s in \(x\). It is easy to see that a word is Dyck if and only if \(B(x)=0\) and \(B(x^{\prime})\geq 0\) for every prefix \(x^{\prime}\) of \(x\). Furthermore, the nesting level of a Dyck word \(x\) is the maximum of \(B(x^{\prime})\) over all prefixes \(x^{\prime}\) of \(x\).
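These definitions translate directly into code; here is a short Python sketch (informal, not part of the paper's Walnut computations).

```python
def balance(x):
    """B(x): the excess of 0's over 1's in x."""
    return x.count('0') - x.count('1')

def is_dyck(x):
    """x is Dyck iff B(x) = 0 and B is nonnegative on every prefix."""
    b = 0
    for c in x:
        b += 1 if c == '0' else -1
        if b < 0:
            return False
    return b == 0

def nesting_level(x):
    """For a Dyck word x, the maximum of B over all prefixes of x."""
    b = best = 0
    for c in x:
        b += 1 if c == '0' else -1
        best = max(best, b)
    return best

print(is_dyck("010011"), is_dyck("0110"))  # True False
print(nesting_level("010011"))             # 2
```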
In this paper we will also be concerned with pattern avoidance, particularly avoidance of powers. We say a finite word \(w=w[1..n]\) has period \(p\geq 1\) if \(w[i]=w[i+p]\) for all indices \(i\) with \(1\leq i\leq n-p\). The smallest period of \(w\) is called _the_ period, and is denoted \(\operatorname{per}(w)\). The _exponent_ of a finite word \(w\) is defined to be \(\exp(w):=|w|/\operatorname{per}(w)\). A word with exponent \(\alpha\) is said to be an \(\alpha\)-power. For example, \(\exp(\mathtt{alfalfa})=7/3\) and so \(\mathtt{alfalfa}\) is a \(7/3\)-power. If a word contains no powers \(\geq\alpha\), then we say it is _\(\alpha\)-power-free_. If it contains no powers \(>\alpha\), then we say it is _\(\alpha^{+}\)-power-free_. If \(w\) is a finite or infinite word, its _critical exponent_ is defined to be \(\operatorname{ce}(w):=\sup\{\exp(x)\colon\,x\text{ is a finite nonempty factor of }w\}\). A _square_ is a word of the form \(xx\), where \(x\) is a nonempty word. An _overlap_ is a word of the form \(axaxa\), where \(a\) is a single letter and \(x\) is a possibly empty word.
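The period and exponent are just as easy to compute by brute force; for example, the following sketch recovers \(\exp(\mathtt{alfalfa})=7/3\).

```python
from fractions import Fraction

def per(w):
    """The smallest period of w."""
    n = len(w)
    return next(p for p in range(1, n + 1)
                if all(w[i] == w[i + p] for i in range(n - p)))

def exponent(w):
    return Fraction(len(w), per(w))

print(per("alfalfa"), exponent("alfalfa"))  # 3 7/3
```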
Some of our work is carried out using the Walnut theorem prover, which can rigorously prove many results about automatic sequences. See [9, 12] for more details. Walnut is free software that can be downloaded at
[https://cs.uwaterloo.ca/~shallit/walnut.html](https://cs.uwaterloo.ca/~shallit/walnut.html).
## 2 Repetitions and Dyck words
Theorem 1: _If a binary word is \(7/3\)-power-free and Dyck, then its nesting level is at most \(3\)._
Proof: The \(7/3\)-power-free Dyck words of nesting level \(1\) are \(01\) and \(0101\). The set of \(7/3\)-power-free Dyck words of nesting level \(2\) is therefore a subset of \(\{01,0011,001011\}^{*}\). Let \(x\) be a \(7/3\)-power-free Dyck word of nesting level \(3\). Suppose that \(x=0y1\), where \(y\) has nesting level \(2\). Then to avoid the cubes \(000\) and \(111\), the word \(y\) must begin with \(01\) and end with \(01\). Furthermore, since \(y\) has nesting level \(2\) it must contain one of \(0011\) or \(001011\). Write \(x=001y^{\prime}011\). The word \(y^{\prime}\) cannot begin or end with \(01\), since that would imply that \(x\) contains one of the \(5/2\)-powers \(01010\) or \(10101\). Thus \(y^{\prime}\) begins with \(001\) and ends with \(011\), which means \(x\) begins with \(001001\) and ends with \(011011\). Consequently \(x\) cannot be extended to the left or to the right without creating a cube or \(7/3\)-power. Furthermore, this implies that a \(7/3\)-power-free Dyck word of nesting level \(3\) cannot be written as a concatenation of two non-empty Dyck words, nor can it be extended to a \(7/3\)-power-free Dyck word of nesting level \(4\).
Theorem 2: _Define \(h(0)=01\), \(h(1)=0011\), and \(h(2)=001011\). A binary word \(w\) is an overlap-free Dyck word if and only if either_
1. \(w=h(x)\)_, where_ \(x\in\Sigma_{3}^{*}\) _is square-free and contains no_ \(212\) _or_ \(20102\)_; or_
2. \(w=0h(x)1\)_, where_ \(x\in\Sigma_{3}^{*}\) _is square-free, begins with_ \(01\) _and ends with_ \(10\)_, and contains no_ \(212\) _or_ \(20102\)
Proof: Let \(w\) be an overlap-free Dyck word. By Theorem 1, we have \(N(w)\leq 3\). Suppose \(N(w)\leq 2\). Then \(w\in\{01,0011,001011\}^{*}\) by the proof of Theorem 1. So we have \(w=h(x)\) for some \(x\in\Sigma_{3}^{*}\). If \(N(w)=3\), then by the proof of Theorem 1, we have \(w=0h(x)1\). If \(x\) contains a square \(yy\) as a proper factor, then certainly \(w\) contains one of the overlaps \(1h(y)h(y)\) or \(h(y)h(y)0\). Furthermore, if \(x\) contains \(212\), then \(w\) contains the overlap \(011001100\) and if \(x\) contains \(20102\), then \(w\) contains the overlap \(1101001101001\). Finally, if \(w=0h(x)1\), then \(x\) must begin and end with \(0\) and contain at least one \(1\) or \(2\). If \(x\) begins with \(02\), then \(w\) contains the overlap \(0010010\), and if \(x\) ends with \(20\), then \(w\) contains the overlap \(1011011\). Thus \(x\) begins with \(01\) and ends with \(10\).
For the other direction, let \(x\in\Sigma_{3}^{*}\) be a squarefree word that contains no \(212\) or \(20102\). First consider the word \(h(x)\), which is clearly a Dyck word. We now show that \(h(x)\) is overlap-free. We verify by computer that if \(|x|\leq 10\), then \(h(x)\) is overlap-free. So we may assume that \(|x|\geq 11\). Suppose towards a contradiction that \(h(x)\) contains an overlap \(z\). Assume that \(z=0y0y0\); the case \(z=1y1y1\) is similar, and the proof is omitted. We consider several cases depending on the prefix of \(y\).
If \(y\) starts with \(0\), then \(h^{-1}(z0^{-1})=h^{-1}(0y0y)\) is a square that appears as a proper factor of \(x\).
If \(y\) starts with \(100\), write \(y=100y^{\prime}\), so that \(z=0100y^{\prime}0100y^{\prime}\). Then \(h^{-1}(z0^{-1})=h^{-1}(0100y^{\prime}0100y^{\prime})\) is a square that appears as a proper factor of \(x\).
If \(y\) starts with \(101\), write \(y=101y^{\prime}\), so that \(z=0101y^{\prime}0101y^{\prime}\). Note that \(00\) is not a factor of \(x\), so any occurrence of \(0101\) in \(z\) is as a factor of \(h(2)=001011\). Consequently, the word \(h^{-1}(0z0^{-1})=h^{-1}(00101y^{\prime}0101y^{\prime})\) is a square that appears as a proper factor of \(x\).
Finally, if \(y\) starts with \(11\), then write \(y=11y^{\prime}\), so that \(z=011y^{\prime}011y^{\prime}0\). Then \(z\) is a factor of \(h(ax^{\prime}bx^{\prime}c)\), where \(a,b,c\in\{1,2\}\), and the value of \(b\) is determined by the suffix of \(y^{\prime}\): if \(y^{\prime}\) ends with \(001\) then \(b=2\) and if \(y^{\prime}\) ends with \(0\) then \(b=1\). Clearly we have \(a\neq b\) and \(b\neq c\), since otherwise \(x\) contains a square as a proper factor. However, if \(b=2\) then \(y^{\prime}\) ends with \(001\), which implies \(c=2\), a contradiction. So we have \(b=1\), and further, since \(a\neq b\) and \(b\neq c\), we have \(a=c=2\). We therefore have a factor \(2x^{\prime}1x^{\prime}2\) of \(x\). Now \(x^{\prime}\) can neither begin nor end with \(2\) or \(1\), so we have \(2x^{\prime}1x^{\prime}2=20x^{\prime\prime}010x^{\prime\prime}02\). Similarly, the word \(x^{\prime\prime}\) can neither begin nor end with \(0\) or \(1\), so we have \(20x^{\prime\prime}010x^{\prime\prime}02=202x^{\prime\prime\prime}20102x^{ \prime\prime\prime}202\), whence \(x\) contains the forbidden factor \(20102\), a contradiction.
Thus, we conclude that \(h(x)\) is an overlap-free Dyck word. Finally, assume that \(x\) begins with \(01\) and \(10\), and consider the word \(0h(x)1\). Again, it is clear that \(0h(x)1\) is a Dyck word, and we have already shown that the word \(h(x)\) is overlap-free. Now \(0h(x)1\) begins with \(0010011\) and ends with \(0011011\). Note that the only occurrences of \(00100\) and \(11011\) as factors of \(0h(x)1\) are as a prefix and a suffix, respectively. It follows that if \(0h(x)1\) contains an overlap, then this overlap has period at most \(4\) and occurs as either a prefix or a suffix of \(0h(x)1\). However, one easily verifies that no such overlap exists. This completes the proof.
**Corollary 3**.: _There are arbitrarily long overlap-free Dyck words of nesting level \(2\) (and \(3\))._
Proof.: Consider the well-known word \(\mathbf{s}\), which is the infinite fixed point, starting with \(0\), of the morphism defined by \(0\mapsto 012\), \(1\mapsto 02\), \(2\mapsto 1\). Thue [14] proved that \(\mathbf{s}\) is squarefree and contains no \(010\) or \(212\); this is also easy to verify with Walnut (cf. [12]). Let \(x\) be a prefix of \(\mathbf{s}\) that ends in \(10\). Since the factor \(10\) appears infinitely many times in \(\mathbf{s}\), there are arbitrarily long such words \(x\). So \(x\) is squarefree, contains no \(212\) or \(20102\) (the latter because \(20102\) contains \(010\)), begins in \(01\), and ends in \(10\). By Theorem 2, the words \(h(x)\) and \(0h(x)1\) are overlap-free Dyck words. It is easy to see that \(h(x)\) has nesting level \(2\), and \(0h(x)1\) has nesting level \(3\), which completes the proof.
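For modest lengths, this construction can be confirmed by brute force; here is a Python sketch (informal; the word \(\mathbf{s}\) and the morphism \(h\) are as above).

```python
def thue_prefix(n):
    # A prefix of s, the fixed point of 0 -> 012, 1 -> 02, 2 -> 1.
    m = {'0': '012', '1': '02', '2': '1'}
    s = '0'
    while len(s) < n:
        s = ''.join(m[c] for c in s)
    return s[:n]

def h(x):
    return ''.join({'0': '01', '1': '0011', '2': '001011'}[c] for c in x)

def is_dyck(w):
    b = 0
    for c in w:
        b += 1 if c == '0' else -1
        if b < 0:
            return False
    return b == 0

def has_overlap(w):
    # Search for a factor a y a y a (a a single letter, y possibly empty).
    n = len(w)
    return any(w[i:i + p] == w[i + p:i + 2 * p] and w[i] == w[i + 2 * p]
               for p in range(1, n // 2 + 1) for i in range(n - 2 * p))

s = thue_prefix(50)
x = s[:max(i for i in range(len(s) + 1) if s[:i].endswith('10'))]
for w in (h(x), '0' + h(x) + '1'):
    print(len(w), is_dyck(w), has_overlap(w))  # expect True, False
```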
Theorem 1 says that every \(7/3\)-power-free Dyck word has nesting level at most \(3\). We will see that this result is best possible with respect to the exponent \(7/3\); in fact, there are \(7/3^{+}\)-power-free Dyck words of every nesting level. Before we proceed with the construction of such words, we provide a very simple construction of cube-free Dyck words of every nesting level, which serves as a preview of the main ideas in the more complicated construction of \(7/3^{+}\)-power-free Dyck words of every nesting level.
**Lemma 4**.: _Let \(u\) and \(v\) be Dyck words, and let \(f:\Sigma_{2}^{*}\to\Sigma_{2}^{*}\) be the morphism defined by \(f(0)=0u\) and \(f(1)=v1\). If \(w\) is a nonempty Dyck word, then \(f(w)\) is a Dyck word, and \(N(f(w))=N(w)+\max(N(u),N(v))\)._
Proof.: The proof is by induction on \(|w|\). In the base case, if \(w=01\), then \(f(w)=0uv1\), and \(N(f(w))=1+\max(N(u),N(v))=N(w)+\max(N(u),N(v))\).
Now suppose that \(|w|=n\) for some \(n>2\), and that the statement holds for all nonempty Dyck words of length less than \(n\). We have two cases.
**Case 1:** We have \(w=0y1\) for some nonempty Dyck word \(y\).
By the induction hypothesis, the word \(f(y)\) is a Dyck word with \(N(f(y))=N(y)+\max(N(u),N(v))\). So \(f(w)=0uf(y)v1\) is a Dyck word with
\[N(f(w)) =1+\max(N(u),N(f(y)),N(v))\] \[=1+N(y)+\max(N(u),N(v))\] \[=N(w)+\max(N(u),N(v)).\]
**Case 2:** We have \(w=yz\) for some nonempty Dyck words \(y,z\). By the induction hypothesis, the word \(f(y)\) is a Dyck word with
\[N(f(y))=N(y)+\max(N(u),N(v)),\]
and \(f(z)\) is a Dyck word with \(N(f(z))=N(z)+\max(N(u),N(v))\). Therefore, the word \(f(w)=f(y)f(z)\) is a Dyck word with
\[N(f(w)) =\max(N(f(y)),N(f(z)))\] \[=\max(N(y),N(z))+\max(N(u),N(v))\] \[=N(w)+\max(N(u),N(v)).\qed\]
**Corollary 5**.: _There is a cube-free Dyck word of every nesting level._
Proof.: Let \(f:\Sigma_{2}^{*}\to\Sigma_{2}^{*}\) be the morphism defined by \(f(0)=001\) and \(f(1)=011\). Note that \(f(0)=0u\) and \(f(1)=u1\), where \(u=01\) is a Dyck word with \(N(u)=1\). It is also well-known that the morphism \(f\) is cube-free; for example, this follows easily from a criterion of Keranen [7], which states that to confirm that a uniform binary morphism is cube-free, it suffices to check that the images of all words of length at most \(4\) are cube-free. Thus, by a straightforward induction using Lemma 4, we see that \(w_{t}=f^{t}(01)\) is a cube-free Dyck word with \(N(w_{t})=t+1\).
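As a sanity check on this construction, here is a short Python sketch (ours, with hypothetical helper names) that iterates \(f\) and verifies, for small \(t\), that \(w_{t}=f^{t}(01)\) is a cube-free Dyck word of nesting level \(t+1\).

```python
f = {"0": "001", "1": "011"}

def apply_morphism(m, w):
    return "".join(m[c] for c in w)

def dyck_nesting(w):
    """Nesting level of a Dyck word over {0,1} (maximum prefix balance), else None."""
    bal = best = 0
    for c in w:
        bal += 1 if c == "0" else -1
        if bal < 0:
            return None
        best = max(best, bal)
    return best if bal == 0 else None

def has_cube(w):
    """Brute-force test for a factor of length 3p with period p."""
    n = len(w)
    for p in range(1, n // 3 + 1):
        for i in range(n - 3 * p + 1):
            if all(w[i + j] == w[i + j + p] for j in range(2 * p)):
                return True
    return False

w = "01"
for t in range(5):
    assert dyck_nesting(w) == t + 1 and not has_cube(w)  # N(w_t) = t + 1
    w = apply_morphism(f, w)
print("f^t(01) is a cube-free Dyck word of nesting level t+1 for t = 0..4")
```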
We now define the specific morphisms involved in our construction of \(7/3^{+}\)-power-free Dyck words of arbitrarily large nesting level. Let \(g:\Sigma_{3}^{*}\to\Sigma_{3}^{*}\) be the \(6\)-uniform morphism defined by
\[g(0) =022012,\] \[g(1) =022112,\text{ and }\] \[g(2) =202101.\]
Let \(f:\Sigma_{3}^{*}\to\Sigma_{2}^{*}\) be the \(38\)-uniform morphism defined by
\[f(0) =001001101001100101100100110011001101,\] \[f(1) =00101100110100110101100101011011,\text{ and }\] \[f(2) =001011001101010101100110101010110100111.\]
We will show that for every \(t\geq 0\), the word \(f(g^{t}(2))\) is a \(7/3^{+}\)-power-free Dyck word of nesting level \(2t+2\). The letters \(f\) and \(g\) denote these specific morphisms throughout the remainder of this section.
Over the ternary alphabet \(\Sigma_{3}\), we think of the letter \(0\) as a left parenthesis, the letter \(1\) as a right parenthesis, and the letter \(2\) as a Dyck word. So we will be particularly interested in the ternary words for which the removal of every occurrence of the letter \(2\) leaves a Dyck word, and we call these _ternary Dyck words_.
**Definition 6**.: Let \(\beta:\Sigma_{3}^{*}\to\Sigma_{2}^{*}\) be defined by \(\beta(0)=0\), \(\beta(1)=1\), and \(\beta(2)=\varepsilon\), and let \(w\in\Sigma_{3}^{*}\). If \(\beta(w)\) is a Dyck word, then we say that \(w\) is a _ternary Dyck word_. In this case, the _nesting level_ of \(w\), denoted \(N(w)\), is defined by \(N(w)=N(\beta(w))\).
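For illustration, a minimal Python sketch of this definition (the function names are ours): it projects a ternary word through \(\beta\) and computes the nesting level as the maximum prefix balance.

```python
def beta(w):
    """The coding beta: erase every occurrence of the letter 2."""
    return w.replace("2", "")

def dyck_nesting(w):
    """Nesting level of a Dyck word over {0,1}, or None if w is not Dyck."""
    bal = best = 0
    for c in w:
        bal += 1 if c == "0" else -1
        if bal < 0:
            return None
        best = max(best, bal)
    return best if bal == 0 else None

# g(2) = 202101 projects to beta(202101) = 0101, a Dyck word of nesting level 1,
# so 202101 is a ternary Dyck word with N(202101) = 1.
assert beta("202101") == "0101" and dyck_nesting(beta("202101")) == 1
```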
**Lemma 7**.: _Let \(w\in\Sigma_{3}^{*}\). If \(w\) is a nonempty ternary Dyck word, then \(g(w)\) is a ternary Dyck word with \(N(g(w))=N(w)+1\)._
Proof.: Throughout this proof, we let \(u=01\), a Dyck word with nesting level \(1\). Note that \(\beta(g(0))=001=0u\), \(\beta(g(1))=011=u1\), and \(\beta(g(2))=0101=u^{2}\).
The proof is by induction on \(|\beta(w)|\). We have two base cases. If \(\beta(w)=\varepsilon\), then \(w=2^{i}\) for some \(i\geq 1\), and \(N(w)=0\). We have \(\beta(g(w))=u^{2i}\), so we see
that \(g(w)\) is a ternary Dyck word with \(N(g(w))=1=N(w)+1\). If \(\beta(w)=01\), then \(w=2^{i}02^{j}12^{k}\) for some \(i,j,k\geq 0\), and \(N(w)=1\). We have
\[\beta(g(w))=u^{2i}(0u)u^{2j}(u1)u^{2k}=u^{2i}0u^{2j+2}1u^{2k},\]
so we see that \(g(w)\) is a ternary Dyck word with \(N(g(w))=2=N(w)+1\), as desired.
Now suppose that \(|\beta(w)|=n\) for some \(n>2\), and that the statement holds for all ternary Dyck words \(w^{\prime}\) with \(|\beta(w^{\prime})|<n\). We have two cases.
**Case 1:** We have \(\beta(w)=0y1\) for some nonempty Dyck word \(y\).
In this case we may write \(w=2^{i}0w^{\prime}12^{j}\) for some \(i,j\geq 0\), so that \(\beta(w^{\prime})=y\). By the induction hypothesis, the word \(g(w^{\prime})\) is a ternary Dyck word with \(N(g(w^{\prime}))=N(w^{\prime})+1\). It follows that \(\beta(g(w))=u^{2i}0u\beta(g(w^{\prime}))u1u^{2j}\) is a Dyck word, so \(g(w)\) is a ternary Dyck word, and
\[N(g(w)) =1+N(g(w^{\prime}))\] \[=1+N(w^{\prime})+1\] \[=N(w)+1.\]
**Case 2:** We have \(\beta(w)=y_{1}y_{2}\) for some nonempty Dyck words \(y_{1},y_{2}\).
Write \(w=w_{1}w_{2}\) for some \(w_{1},w_{2}\in\Sigma_{3}^{*}\) such that \(\beta(w_{1})=y_{1}\), and \(\beta(w_{2})=y_{2}\). By the induction hypothesis, the words \(g(w_{1})\) and \(g(w_{2})\) are ternary Dyck words with \(N(g(w_{1}))=N(w_{1})+1\), and \(N(g(w_{2}))=N(w_{2})+1\). Therefore, the word \(g(w)=g(w_{1})g(w_{2})\) is a ternary Dyck word with
\[N(g(w)) =\max\left(N(g(w_{1})),N(g(w_{2}))\right)\] \[=\max(N(w_{1})+1,N(w_{2})+1)\] \[=\max(N(w_{1}),N(w_{2}))+1\] \[=N(w)+1.\qed\]
**Lemma 8**.: _Let \(w\in\Sigma_{3}^{*}\). If \(w\) is a nonempty ternary Dyck word, then \(f(w)\) is a Dyck word with \(N(f(w))=2N(w)+2\)._
Proof.: Note that \(f(0)=0u_{1}0u_{2}\), \(f(1)=u_{3}1u_{4}1\), and \(f(2)=v\), where \(u_{1}\), \(u_{2}\), \(u_{3}\), and \(u_{4}\) are Dyck words of nesting level \(2\) and length \(18\), and \(v\) is a Dyck word of nesting level \(2\) and length \(38\).
The proof is by induction on \(|\beta(w)|\). We have two base cases. If \(\beta(w)=\varepsilon\), then \(w=2^{i}\) for some \(i\geq 1\), and \(N(w)=0\). We have \(f(w)=v^{i}\), so we see that \(f(w)\) is a Dyck word with \(N(f(w))=2=2N(w)+2\). If \(\beta(w)=01\), then \(w=2^{i}02^{j}12^{k}\) for some \(i,j,k\geq 0\), and \(N(w)=1\). We have
\[f(w)=v^{i}0u_{1}0u_{2}v^{j}u_{3}1u_{4}1v^{k},\]
so we see that \(f(w)\) is a Dyck word with \(N(f(w))=4=2N(w)+2\).
Now suppose that \(|\beta(w)|=n\) for some \(n>2\), and that the statement holds for all ternary Dyck words \(w^{\prime}\) with \(|\beta(w^{\prime})|<n\). We have two cases.
**Case 1:** We have \(\beta(w)=0y1\) for some nonempty Dyck word \(y\).
In this case we may write \(w=2^{i}0w^{\prime}12^{j}\) for some \(i,j\geq 0\), so that \(\beta(w^{\prime})=y\). By the induction hypothesis, the word \(f(w^{\prime})\) is a Dyck word with \(N(f(w^{\prime}))=2N(w^{\prime})+2\). It follows that \(f(w)=v^{i}0u_{1}0u_{2}f(w^{\prime})u_{3}1u_{4}1v^{j}\) is a Dyck word with
\[N(f(w)) =2+N(f(w^{\prime}))\] \[=2+2N(w^{\prime})+2\] \[=2N(w)+2.\]
**Case 2:** We have \(\beta(w)=y_{1}y_{2}\) for some nonempty Dyck words \(y_{1},y_{2}\).
Write \(w=w_{1}w_{2}\) for some \(w_{1},w_{2}\in\Sigma_{3}^{*}\) such that \(\beta(w_{1})=y_{1}\), and \(\beta(w_{2})=y_{2}\). By the induction hypothesis, the words \(f(w_{1})\) and \(f(w_{2})\) are Dyck words with \(N(f(w_{1}))=2N(w_{1})+2\), and \(N(f(w_{2}))=2N(w_{2})+2\). Therefore, the word \(f(w)=f(w_{1})f(w_{2})\) is a Dyck word with
\[N(f(w)) =\max\left(N(f(w_{1})),N(f(w_{2}))\right)\] \[=\max(2N(w_{1})+2,2N(w_{2})+2)\] \[=2\max(N(w_{1}),N(w_{2}))+2\] \[=2N(w)+2.\qed\]
**Theorem 9**.: _There are \(7/3^{+}\)-power-free Dyck words of every nesting level._
Proof.: Let \(t\geq 0\). We claim that the word \(f(g^{t}(2))\) is a \(7/3^{+}\)-free Dyck word of nesting level \(2t+2\). Since \(2\) is a ternary Dyck word with nesting level \(0\), by Lemma 7, and a straightforward induction, the word \(g^{t}(2)\) is a ternary Dyck word with nesting level \(t\). Thus, by Lemma 8, the word \(f(g^{t}(2))\) is a Dyck word with nesting level \(2t+2\).
It remains only to show that \(f(g^{t}(2))\) is \(7/3^{+}\)-power-free. We use the Walnut theorem-prover to show that \(f(g^{\omega}(0))\) is \(7/3^{+}\)-power-free, which is equivalent. One need only type in the following commands:
morphism f "0->001001101001100110010011001011001101 1->0010110011010101011001100110011 2->001011010011010100110011001100110011001100111":
morphism g "0->022012 1->022112 2->202101":
promote GG g: image DFG f GG:
eval DFGtest "?msd_6 Ei,n (n>=1) & At (3*t<=4*n) => DFG[i+t]=DFG[i+t+n]":
and Walnut returns FALSE. Here the first two morphism commands define \(f\) and \(g\), and the next two commands create a DFAO for \(f(g^{\omega}(0))\). Finally, the last command asserts the existence of a \(7/3^{+}\) power in \(f(g^{\omega}(0))\).
This was a large computation in Walnut, requiring 130G of memory and 20321 secs of CPU time.
_Remark 10_.: An alternative method of proof is to first use Walnut to show that the word \(g^{\omega}(0)\) is overlap-free, and then apply an extended version [8, Lemma 23] of a well-known result of Ochem [10, Lemma 2.1] to show that \(f(g^{\omega}(0))\) is \(7/3^{+}\)-power-free.
## 3 Dyck factors of Thue-Morse
In this section we give a characterization of those factors of \(\mathbf{t}\), the Thue-Morse sequence, that are Dyck.
Let \(g:\Sigma_{3}^{*}\to\Sigma_{2}^{*}\) be the morphism defined by \(g(0)=011\), \(g(1)=01\), and \(g(2)=0\) and let \(f:\Sigma_{3}^{*}\to\Sigma_{3}^{*}\) be the morphism defined by \(f(0)=012\), \(f(1)=02\), and \(f(2)=1\). Define \(\mathbf{s}=f^{\omega}(0)\). It is well-known that \(g(\mathbf{s})=\mathbf{t}\). Recall the morphism \(h:\Sigma_{2}^{*}\to\Sigma_{2}^{*}\) defined earlier by \(h(0)=01\), \(h(1)=0011\), and \(h(2)=001011\).
**Theorem 11**.: _The Dyck factors of the Thue-Morse word are exactly the words \(h(x)\) where \(x\) is a factor of \(\mathbf{s}\)._
Proof.: By considering the so-called "returns to \(11\)" in \(\mathbf{t}\) we see that \(\mathbf{t}\) begins with \(011\) followed by a concatenation of the four words
\[0011,\quad 010011,\quad 001011,\quad 01001011.\]
These are all Dyck words, as shown by the bracketings
\[(0(01)1),\quad(01)(0(01)1),\quad(0(01)(01)1),\quad(01)(0(01)(01)1).\]
Furthermore, these words must have the above bracketings when they occur as factors of any larger Dyck word in \(\mathbf{t}\). It follows that \(\mathbf{t}=011\mathbf{t}^{\prime}\), where \(\mathbf{t}^{\prime}\) is a concatenation of the three Dyck words \(h(0)=01\), \(h(1)=0011\), and \(h(2)=001011\).
To complete the proof, it suffices to show that \(h(\mathbf{s})=(011)^{-1}\mathbf{t}=(011)^{-1}g(\mathbf{s})\). We have
\[h(f(0)) =h(012)=g(120210)=g(0^{-1}f^{2}(0)0)\] \[h(f(1)) =h(02)=g(1210)=g(0^{-1}f^{2}(1)0)\] \[h(f(2)) =h(1)=g(20)=g(0^{-1}f^{2}(2)0),\]
so
\[h(\mathbf{s})=h(f(\mathbf{s}))=g(0^{-1}f^{2}(\mathbf{s}))=g(0^{-1}\mathbf{s} )=(011)^{-1}g(\mathbf{s}),\]
as required.
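The identity \(\mathbf{t}=011\,h(\mathbf{s})\) underlying this proof can also be verified numerically; the Python sketch below (ours, assuming the standard fixed-point constructions of \(\mathbf{t}\) and \(\mathbf{s}\)) compares the two words on a long common prefix.

```python
def fixed_point(morph, seed, length):
    w = seed
    while len(w) < length:
        w = "".join(morph[c] for c in w)
    return w[:length]

t = fixed_point({"0": "01", "1": "10"}, "0", 4096)            # Thue-Morse word
s = fixed_point({"0": "012", "1": "02", "2": "1"}, "0", 1024) # squarefree word s
h = {"0": "01", "1": "0011", "2": "001011"}

image = "011" + "".join(h[c] for c in s)                      # 011 . h(s)
k = min(len(t), len(image))
assert t[:k] == image[:k]
print("t = 011 h(s) verified on a prefix of length", k)
```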
## 4 Dyck factors of some automatic sequences
In this section we are concerned with Dyck factors of automatic sequences. Recall that a sequence \((s(n))_{n\geq 0}\) over a finite alphabet is \(k\)_-automatic_ if there exists a DFAO (deterministic finite automaton with output) that, on input \(n\) expressed in base \(k\), reaches a state with output \(s(n)\).
Since \(\mathcal{D}_{2}\) is not a member of the FO[+]-definable languages [4], "automatic" methods (like that implemented in the Walnut system; see [9, 12]) cannot always directly handle such words. However, in this section we show that if a \(k\)-automatic sequence also has a certain special property, then the number of Dyck factors of length \(n\) occurring in it is a \(k\)-regular sequence.
To explain the special property, we need the notion of synchronized sequence [11]. We say a \(k\)-automatic sequence \((v(n))_{n\geq 0}\) is _synchronized_ if there is a finite automaton accepting, in parallel, the base-\(k\) representation of \(n\) and \(v(n)\). Here the shorter representation is padded with leading zeros, if necessary.
Now suppose \(\mathbf{s}=(s(n))_{n\geq 0}\) is a \(k\)-automatic sequence taking values in \(\Sigma_{2}\) and define the running sum sequence \(v(n)=\sum_{0\leq i<n}s(i)\). If \(\mathbf{v}=(v(n))_{n\geq 0}\) is synchronized, we say that \(\mathbf{s}\) is _running-sum synchronized_.
**Theorem 12**.: _Suppose \(\mathbf{s}=(s(n))_{n\geq 0}\) is a \(k\)-automatic sequence taking values in \(\Sigma_{2}\) that is running-sum synchronized. Then there is an automaton accepting, in parallel, the base-\(k\) representations of those pairs \((i,n)\) for which \(\mathbf{s}[i..i+n-1]\) is Dyck. Furthermore, there is an automaton accepting, in parallel, the base-\(k\) representations of those triples \((i,n,x)\) for which \(\mathbf{s}[i..i+n-1]\) is Dyck and whose nesting level is \(x\). In both cases, the automaton can be effectively constructed._
Proof: We use the fact that it suffices to create first-order logical formulas for these claims [12].
Suppose \(V(n,x)\) is true if and only if \(v(n)=x\). Then define
\[N_{1}(i,n,x) :\exists y,z\ V(i,y)\,\wedge\,V(i+n,z)\,\wedge\,x+y=z\] \[N_{0}(i,n,x) :\exists y\ N_{1}(i,n,y)\,\wedge\,n=x+y\] \[\operatorname{Dyck}(i,n) :(\exists w\ N_{0}(i,n,w)\,\wedge\,N_{1}(i,n,w))\,\wedge\] \[\qquad(\forall t,y,z\ (t<n\,\wedge\,N_{0}(i,t,y)\,\wedge\,N_{1}(i,t,z)) \implies y\geq z).\]
Here
* \(N_{0}(i,n,x)\) asserts that \(|\mathbf{s}[i..i+n-1]|_{0}=x\);
* \(N_{1}(i,n,x)\) asserts that \(|\mathbf{s}[i..i+n-1]|_{1}=x\);
* \(\operatorname{Dyck}(i,n)\) asserts that \(\mathbf{s}[i..i+n-1]\) is Dyck.
We can now build an automaton for \(\operatorname{Dyck}(i,n)\) using the methods discussed in [12].
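The logic of these formulas can be mirrored directly with running sums; the following Python sketch (ours) implements \(N_{0}\), \(N_{1}\), and \(\operatorname{Dyck}(i,n)\) on a concrete sequence.

```python
seq = [0, 1, 1, 0, 1, 0, 0, 1]       # a prefix of Thue-Morse, as integers
pref = [0]
for b in seq:                         # pref[n] plays the role of V(n, x):
    pref.append(pref[-1] + b)         # the number of 1's in seq[0..n-1]

def n1(i, n): return pref[i + n] - pref[i]   # |seq[i..i+n-1]|_1
def n0(i, n): return n - n1(i, n)            # |seq[i..i+n-1]|_0

def dyck(i, n):
    """Mirror of Dyck(i, n): equal counts, and every proper prefix
    has at least as many 0's as 1's."""
    return n0(i, n) == n1(i, n) and all(n0(i, t) >= n1(i, t) for t in range(n))

assert dyck(0, 2) and not dyck(1, 2)  # "01" is Dyck, "11" is not
```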
Next we turn to nesting level. First we need a first-order formula for the balance \(B(x)\) of a factor \(x\). Since we are only interested in balance for prefixes of Dyck words, it suffices to compute \(\max(0,B(x))\) for a factor \(x\). We can do this as follows:
\[\operatorname{Bal}(i,n,x):\ \exists y,z\ N_{0}(i,n,y)\wedge N_{1}(i,n,z)\wedge((y<z\wedge x=0)\vee(y\geq z\wedge y=x+z)).\]
Next, we compute the nesting level of a factor, assuming it is Dyck:
\[\mathrm{Nest}(i,n,x):\ \exists m\ m<n\land\mathrm{Bal}(i,m,x)\land\forall p,y\ (p<n\land\mathrm{Bal}(i,p,y))\implies y\leq x.\]
This completes the proof.
**Corollary 13**.: _If \(\mathbf{s}=(s(n))_{n\geq 0}\) is a \(k\)-automatic sequence taking values in \(\Sigma_{2}\) that is running-sum synchronized, then it is decidable_
1. _whether_ \(\mathbf{s}\) _has arbitrarily large Dyck factors;_
2. _whether Dyck factors of_ \(\mathbf{s}\) _are of unbounded nesting level._
Proof: It suffices to create first-order logical statements asserting the two properties:
1. \(\forall m\ \exists i,n\ n>m\ \land\ \mathrm{Dyck}(i,n)\)
2. \(\forall q\ \exists i,n,p\ \operatorname{Dyck}(i,n)\ \land\ \mathrm{Nest}(i,n,p)\ \land\ p>q.\)
**Example 14**.: As an example, let us use Walnut to prove that there is a Dyck factor of the Thue-Morse word for all even lengths. We can use the following Walnut commands, which implement the ideas above. We use the fact that the sum of \(T[0..n-1]\) is \(n/2\) if \(n\) is even, and \((n-1)/2+T[n-1]\) if \(n\) is odd.
def even "Ek n=2*k":
def odd "Ek n=2*k+1":
def V "($even(n) & 2*x=n) | ($odd(n) & 2*x+1=n & T[n-1]=@0) | ($odd(n) & 2*x=n+1 & T[n-1]=@1)":
# number of 1's in prefix T[0..n-1]
def N1 "Ey,z $V(i,y) & $V(i+n,z) & x+y=z":
# number of 1's in T[i..i+n-1]
def N0 "Ey $N1(i,n,y) & n=x+y":
def Dyck "(Ew $N0(i,n,w) & $N1(i,n,w)) & At,y,z (t<n & $N0(i,t,y) & $N1(i,t,z)) => y>=z":
# is T[i..i+n-1] a Dyck word?
eval AllLengths "An $even(n) => Ei $Dyck(i,n)":
and Walnut returns TRUE.
**Example 15**.: Continuing the previous example, let us show that the nesting level of every Dyck factor of Thue-Morse is \(\leq 2\). Of course, this follows from Theorem 11, but this shows how it can be done for any automatic sequence that is running-sum synchronized.
We use the following Walnut commands:
def Bal "Ey,z $N0(i,n,y) & $N1(i,n,z) & ((y<z & x=0) | (y>=z & y=x+z))":
# computes max(0, B(T[i..i+n])) where B is balance; 14 states
def Nest "Em (m<n) & $Bal(i,m,x) & Ap,y (p<n & $Bal(i,p,y)) => y<=x":
# computes nesting level of factor, assuming it is Dyck
eval maxnest2 "Ai,n,x ($Dyck(i,n) & $Nest(i,n,x)) => x<=2":

and Walnut returns TRUE for the last assertion.
Now we turn to enumerating Dyck factors by length. Let us recall that a sequence \((s(n))_{n\geq 0}\) is \(k\)_-regular_ if there is a finite set of sequences \((s_{i}(n))_{n\geq 0}\), \(i=1,\ldots,t\), with \(s=s_{1}\), such that every subsequence of the form \((s(k^{e}n+a))_{n\geq 0}\) with \(e\geq 0\) and \(0\leq a<k^{e}\) can be expressed as a linear combination of the \(s_{i}\). See [1] for more details.
Alternatively, a sequence \((s(n))_{n\geq 0}\) is \(k\)-regular if there is a linear representation for it. If \(v\) is a row vector of dimension \(t\), \(w\) is a column vector of dimension \(t\), and \(\gamma\) is a matrix-valued morphism with domain \(\Sigma_{k}\) and range \(t\times t\)-matrices, then we say that the triple \((v,\gamma,w)\) is a _linear representation_ for a function \(s(n)\), of rank \(t\). It is defined by \(s(n)=v\gamma(x)w\), where \(x\) is any base-\(k\) representation of \(n\). See [2] for more details.
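To illustrate this definition, here is a small Python sketch (ours): it evaluates \(s(n)=v\gamma(x)w\) on the base-\(k\) digits of \(n\), using a toy rank-2 linear representation of the binary sum-of-digits sequence (not one of the representations discussed in this paper).

```python
import numpy as np

def linrep_eval(v, gamma, w, n, k=2):
    """Evaluate s(n) = v . gamma(x) . w, where x is the base-k
    representation of n (most significant digit first)."""
    digits, m = [], n
    while m:
        digits.append(m % k)
        m //= k
    digits = digits[::-1] or [0]
    M = np.eye(len(v))
    for d in digits:
        M = M @ gamma[d]
    return float(v @ M @ w)

# Toy rank-2 representation of s(n) = number of 1's in the binary expansion of n.
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
gamma = {0: np.array([[1.0, 0.0], [0.0, 1.0]]),
         1: np.array([[1.0, 1.0], [0.0, 1.0]])}
assert [linrep_eval(v, gamma, w, n) for n in range(8)] == [0, 1, 1, 2, 1, 2, 2, 3]
print("rank-2 linear representation reproduces the binary digit-sum")
```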
It is not difficult to use the characterization of Theorem 12 to find a linear representation for \(f(n)\), the number of Dyck factors of length \(2n\) appearing in \(\mathbf{t}\), the Thue-Morse word. However, in this section we will instead use a different approach that is more general.
**Theorem 16**.: _Suppose \(\mathbf{s}=(s(n))_{n\geq 0}\) is a \(k\)-automatic sequence that is running-sum synchronized. Then \((f(n))_{n\geq 0}\), the number of Dyck factors of length \(2n\) appearing in \(\mathbf{s}\), is \(k\)-regular._
Proof: It suffices to find a linear representation for \(f(n)\).
To do so, we first find a first-order formula asserting that \(\mathbf{s}[i..i+n-1]\) is _novel_; that is, it is the first occurrence of this factor in \(\mathbf{s}\):
\[\operatorname{FacEq}(i,j,n) :\forall t\ (t<n)\implies\mathbf{s}[i+t]=\mathbf{s}[j+t]\] \[\operatorname{Novel}(i,n) :\forall j\ \operatorname{FacEq}(i,j,n)\implies j\geq i.\]
Then the number of \(i\) for which
\[\operatorname{Novel}(i,2n)\,\wedge\,\operatorname{Dyck}(i,2n)\]
holds is precisely the number of Dyck factors of \(\mathbf{s}\) of length \(2n\). Since \(\mathbf{s}\) is \(k\)-automatic, and its running sum sequence \(\mathbf{v}\) is synchronized, it follows that there is an automaton recognizing those \(i\) and \(n\) for which \(\operatorname{Novel}(i,2n)\,\wedge\,\operatorname{Dyck}(i,2n)\) evaluates to true, and from known techniques we can construct a linear representation for the number of such \(i\).
**Corollary 17**.: _Let \(f(n)\) denote the number of Dyck factors of length \(2n\) appearing in the Thue-Morse word. Then \((f(n))_{n\geq 0}\) is a \(2\)-regular sequence._
Proof.: We can carry out the proof of Theorem 16 in Walnut for \(\mathbf{t}\), as follows:
def FacEq "At (t<n) => T[i+t]=T[j+t]":
def Novel "Aj $FacEq(i,j,n) => j>=i":
def NovelDyck "$Dyck(i,n) & $Novel(i,n)":
def LR n "$NovelDyck(i,2*n)":

The last command creates a rank-29 linear representation for the number of length-\(2n\) Dyck factors.
_Remark 18_.: Using the algorithm of Schutzenberger discussed in [2, Chapter 2], we can minimize the linear representation obtained in the proof to find a linear representation \((v_{f},\gamma_{f},w_{f})\) for \(f\) of rank 7, where \(v_{f}\) is a row vector of dimension 7, \(\gamma_{f}(0)\) and \(\gamma_{f}(1)\) are \(7\times 7\) rational matrices (their entries are omitted here), and
\[w_{f}=\left[\begin{smallmatrix}1\\ 2\\ 2\\ 2\\ 2\\ 4\\ 6\end{smallmatrix}\right].\]
This gives a very efficient way to compute \(f(n)\).
Table 1 gives the first few terms of the sequence \(f(n)\). It is sequence A345199 in the _On-Line Encyclopedia of Integer Sequences_[13].
## 5 Upper and lower bounds for \(f(n)\)
In this section we prove tight upper and lower bounds for \(f(n)\), the number of Dyck factors of \(\mathbf{t}\) of length \(2n\).
We start with a characterization of some of the subsequences of \((f(n))_{n\geq 0}\).
**Lemma 19**.: _We have_
\[f(2n) =2f(n) \tag{1}\] \[f(4n+3) =2f(n)+f(2n+1)+q(n)\] (2) \[f(8n+1) =2f(2n+1)+f(4n+1)-q(n)\] (3) \[f(8n+5) =2f(n)+f(2n+1)+2f(2n+2) \tag{4}\]
_for all \(n\geq 3\). Here \(q(n)\) is the \(2\)-automatic sequence computed by the DFAO in Figure 1._
Notice that \(1\leq q(n)\leq 2\) for \(n\geq 1\).
These relations can be proved using linear representations computable by Walnut. We only prove the most complicated one, namely Eq. (3). Substituting \(n=m+3\), we see that Eq. (3) is equivalent to the claim that \(f(8m+25)=2f(2m+7)+f(4m+13)-q(m+3)\) for \(m\geq 0\). We now obtain linear representations for each of the terms, using the following Walnut commands.
morphism aa "0->01 1->23 2->22 3->33":
morphism b "0->0 1->1 2->2 3->1":
promote Q1 aa:
image Q b Q1:
def term1 m "$LR(i,8*m+25)":
def term2 m "$LR(i,2*m+7)":
def term3 m "$LR(i,4*m+13)":
def term4 m "(i=0 & Q[m+3]=@1) | (i<=1 & Q[m+3]=@2)":

From these four linear representations, using block matrices, we can easily create a linear representation for
\[f(8m+25)-2f(2m+7)-f(4m+13)+q(m+3).\]
It has rank 735. When we minimize it (using a Maple implementation of the Schutzenberger algorithm mentioned previously), we get the linear representation for the 0 function, thus proving the identity.
The other identities can be proved similarly.
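These identities can also be checked empirically for small \(n\): the Python sketch below (ours) computes \(f(n)\) by brute force on a long prefix of \(\mathbf{t}\), derives \(q(n)\) from Eq. (2), and then verifies Eqs. (1), (3), and (4). The prefix length is an assumption chosen so that, in practice, all factors of the relevant lengths already occur in it.

```python
def thue_morse(n):
    w = "0"
    while len(w) < n:
        w += "".join("1" if c == "0" else "0" for c in w)
    return w[:n]

def is_dyck(x):
    bal = 0
    for c in x:
        bal += 1 if c == "0" else -1
        if bal < 0:
            return False
    return bal == 0

T = thue_morse(4096)   # long enough, in practice, for the factor lengths used below
cache = {}

def f(n):
    """Number of distinct Dyck factors of length 2n in the prefix T."""
    if n not in cache:
        cache[n] = len({T[i:i + 2 * n] for i in range(len(T) - 2 * n + 1)
                        if is_dyck(T[i:i + 2 * n])})
    return cache[n]

q = {n: f(4 * n + 3) - 2 * f(n) - f(2 * n + 1) for n in range(3, 20)}  # q via Eq. (2)
for n in range(3, 20):
    assert f(2 * n) == 2 * f(n) and 1 <= q[n] <= 2                     # Eq. (1)
for n in range(3, 10):
    assert f(8 * n + 1) == 2 * f(2 * n + 1) + f(4 * n + 1) - q[n]      # Eq. (3)
    assert f(8 * n + 5) == 2 * f(n) + f(2 * n + 1) + 2 * f(2 * n + 2)  # Eq. (4)
print("Lemma 19 identities verified for small n")
```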
**Theorem 20**.: _We have \(f(n)\leq n\) for all \(n\geq 1\). Furthermore, this bound is tight, since \(f(n)=n\) for \(n=3\cdot 2^{i}\) and \(i\geq 0\)._
Figure 1: DFAO computing \(q(n)\). States are in the form \(q/a\), where \(q\) is the name of the state and \(a\) is the output.
Proof: We will actually prove the stronger bound that \(f(n)\leq n-(n\bmod 2)\) for \(n\geq 1\), by induction.
The base case is \(1\leq n<29\). In this case we can verify the bound by direct computation. Otherwise assume \(n\geq 29\) and the bound is true for all smaller positive \(n^{\prime}<n\); we prove it for \(n\).
There are four cases to consider: \(n\equiv 0\pmod{2}\), \(n\equiv 3\pmod{4}\), \(n\equiv 1\pmod{8}\), and \(n\equiv 5\pmod{8}\).
Suppose \(n\equiv 0\pmod{2}\). By induction we have \(f(n/2)\leq n/2-(n/2\bmod 2)\). But from Eq. (1) we have \(f(n)=2f(n/2)\leq 2(n/2)-2(n/2\bmod 2)\leq n\).
Suppose \(n\equiv 3\pmod{4}\). By induction we have \(f((n-3)/4)\leq(n-3)/4-((n-3)/4\bmod 2)\) and \(f((n-1)/2)\leq(n-1)/2-((n-1)/2\bmod 2)\). From Eq. (2) we have
\[f(n) =2f((n-3)/4)+f((n-1)/2)+q((n-3)/4)\] \[\leq(n-3)/2-2((n-3)/4\bmod 2)+(n-1)/2-((n-1)/2\bmod 2)\] \[\qquad\quad+q((n-3)/4)\] \[\leq n-1,\]
as desired.
Suppose \(n\equiv 1\pmod{8}\). By induction we have \(f((n+3)/4)\leq(n+3)/4-((n+3)/4\bmod 2)\) and \(f((n+1)/2)\leq(n+1)/2-((n+1)/2\bmod 2)\). From Eq. (3) we have
\[f(n) =2f((n+3)/4)+f((n+1)/2)-q((n-1)/8)\] \[\leq(n+3)/2-2((n+3)/4\bmod 2)+(n+1)/2-((n+1)/2\bmod 2)\] \[\qquad\quad-q((n-1)/8)\] \[\leq n-1,\]
as desired.
Suppose \(n\equiv 5\pmod{8}\). By induction we have
\[f((n-5)/8) \leq(n-5)/8-((n-5)/8\bmod 2)\] \[f((n-1)/4) \leq(n-1)/4-((n-1)/4\bmod 2)\] \[f((n+3)/4) \leq(n+3)/4-((n+3)/4\bmod 2).\]
From Eq. (4) we have
\[f(n) =2f((n-5)/8)+f((n-1)/4)+2f((n+3)/4)\] \[\leq(n-5)/4-2((n-5)/8\bmod 2)+(n-1)/4-((n-1)/4\bmod 2)\] \[\qquad\quad+(n+3)/2-2((n+3)/4\bmod 2)\] \[\leq n-1,\]
as desired. This completes the proof of the upper bound.
We can see that \(f(n)=n\) for \(n=3\cdot 2^{i}\) as follows. Using the linear representation for \(f\) we have \(f(3\cdot 2^{i})=v_{f}\gamma_{f}(11)\gamma_{f}(0)^{i}w_{f}\). The minimal polynomial of
\(\gamma_{f}(0)\) is \(X^{2}(X-1)(X+1)(X-2)\). It follows that \(f(3\cdot 2^{i})=a\cdot 2^{i}+b+c(-1)^{i}\) for \(i\geq 2\). Solving for the constants, we find that \(a=3\), \(b=0\), \(c=0\), and hence \(f(3\cdot 2^{i})=3\cdot 2^{i}\) as claimed.
**Theorem 21**.: _We have \(f(n)\geq n/2\) for \(n\geq 0\), and \(f(n)\geq(n+3)/2\) for \(n\geq 1\) odd. Furthermore, the bound \(f(n)\geq n/2\) is attained infinitely often._
Proof: We prove the result by induction on \(n\). It is easy to verify by direct computation that the result is true for \(n<29\). Otherwise assume \(n\geq 29\) and the bound is true for all smaller positive \(n^{\prime}<n\); we prove it for \(n\).
Again we consider the four cases \(n\equiv 0\) (mod 2), \(n\equiv 3\) (mod 4), \(n\equiv 1\) (mod 8), and \(n\equiv 5\) (mod 8).
Suppose \(n\equiv 0\) (mod 2). By induction and Eq. (1) we have \(f(n)=2f(n/2)\geq 2(n/2)/2=n/2\).
Otherwise \(n\) is odd.
Suppose \(n\equiv 3\) (mod 4). By induction we have
\[f((n-3)/4) \geq(n-3)/8\] \[f((n-1)/2) \geq(n+5)/4.\]
Hence using Eq. (2) we get
\[f(n) =2f((n-3)/4)+f((n-1)/2)+q((n-3)/4)\] \[\geq(n-3)/4+(n+5)/4+q((n-3)/4)\] \[\geq(n+1)/2+1\] \[=(n+3)/2.\]
Suppose \(n\equiv 1\) (mod 8). By induction we have
\[f((n+3)/4) \geq((n+3)/4+3)/2 =(n+15)/8\] \[f((n+1)/2) \geq((n+1)/2+3)/2 =(n+7)/4.\]
Hence using Eq. (3) we get
\[f(n) =2f((n+3)/4)+f((n+1)/2)-q((n-1)/8)\] \[\geq(n+15)/4+(n+7)/4-2\] \[=(n+7)/2.\]
Suppose \(n\equiv 5\) (mod 8). By induction we have
\[f((n-5)/8) \geq(n-5)/16\] \[f((n-1)/4) \geq((n-1)/4+3)/2=(n+11)/8\] \[f((n+3)/4) \geq(n+3)/8.\]
Hence using Eq. (4) we get
\[f(n) =2f((n-5)/8)+f((n-1)/4)+2f((n+3)/4)\] \[\geq 2(n-5)/16+(n+11)/8+2(n+3)/8\] \[=(n+3)/2.\]
This completes the induction proof of both lower bounds.
It is easy to prove, using the same techniques as in the last part of the proof of Theorem 20, that \(f(n)=n/2\) for \(n=2^{i}\), \(i\geq 2\).
**Theorem 22**.: _We have \(\sum_{0\leq i<2^{n}}f(i)=19\cdot 4^{n}/48-2^{n}/4+5/3\) for \(n\geq 2\)._
Proof: The summation \(\sum_{0\leq i<2^{n}}f(i)\) is easily seen to equal \(v_{f}(\gamma_{f}(0)+\gamma_{f}(1))^{n}w_{f}\). We can then apply the same techniques as above to the matrix \(\gamma_{f}(0)+\gamma_{f}(1)\).
It follows that the "average" value of \(f(n)\) is \(\frac{19}{24}n\).
## 6 Dyck words in other sequences
**Proposition 23**.: _The only nonempty Dyck words in the Fibonacci word \(\mathbf{f}\) are \(01\) and \(0101\)._
Proof: Let \(\theta\) be the Fibonacci morphism defined by \(\theta(0)=01\) and \(\theta(1)=0\). Let \(w\) be a nonempty Dyck factor of the Fibonacci word. Then \(w\) begins with \(0\), ends with \(1\), and has an equal number of \(0\)'s and \(1\)'s. It follows that \(w=\theta(w^{\prime})\), where \(w^{\prime}\) is a factor of the Fibonacci word consisting entirely of \(0\)'s. However, the longest such \(w^{\prime}\) is \(w^{\prime}=00\).
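A quick empirical confirmation (ours; it only scans factors up to a bounded length, so it is a sanity check rather than a proof):

```python
def fib_word(n):
    """Prefix of the Fibonacci word, the fixed point of 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < n:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:n]

def is_dyck(x):
    bal = 0
    for c in x:
        bal += 1 if c == "0" else -1
        if bal < 0:
            return False
    return bal == 0

fw = fib_word(2000)
found = {fw[i:i + L] for L in range(2, 21, 2)
         for i in range(len(fw) - L + 1) if is_dyck(fw[i:i + L])}
print(sorted(found))    # ['01', '0101']
```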
A similar argument applied to the morphism that maps \(0\to 01\) and \(1\to 00\) gives the following result.
**Proposition 24**.: _The only nonempty Dyck words in the period-doubling sequence are \(01\), \(0101\), and \(010101\)._
Recall that the Rudin-Shapiro sequence \(\mathbf{r}=(r(n))_{n\geq 0}\) is defined to be the number of occurrences of \(11\), taken modulo \(2\), in the base-\(2\) expansion of \(n\).
**Theorem 25**.: _There are Dyck factors of arbitrarily large nesting level in the Rudin-Shapiro sequence._
Proof: For \(n\geq 0\) define \(x_{n}=\mathbf{r}[2\cdot 4^{n}..4^{n+1}-1]\). We will show, by induction on \(n\), that \(x_{n}\) is a Dyck factor of nesting level \(2^{n+1}-1\).
The base case is \(n=0\). In this case \(\mathbf{r}[2..3]=01\) is a Dyck factor of nesting level \(1\).
Now assume the result is true for \(n\); we prove it for \(n+1\). For \(n\geq 0\) define \(y_{n}=\mathbf{r}[0..2\cdot 4^{n}-1]\). We claim that \(x_{n+1}=y_{n}x_{n}\overline{y_{n}}x_{n}\); this follows immediately by considering the first three bits of the base-\(2\) representations of the numbers in the range \([2\cdot 4^{n+1}..4^{n+2}-1]\).
Define \(s(n)=\sum_{0\leq i\leq n}(-1)^{r(i)}\). It should be clear that \(s(n)\) is the imbalance between the number of \(0\)'s (left parens) and \(1\)'s (right parens) in \(\mathbf{r}[0..n]\). We now claim that \(0<s(i)\leq s(2\cdot 4^{n}-1)=2^{n+1}\) for \(0\leq i\leq 2\cdot 4^{n}-1\). In fact, the stronger claim \(s(i)>0\) for all \(i\) is [3, Satz 9]. The fact that \(s(2\cdot 4^{n}-1)=2^{n+1}\) is [3, Beispiel 6], and the inequality \(s(i)\leq 2^{n+1}\) for \(0\leq i\leq 2\cdot 4^{n}-1\) can be deduced from [3, Satz 9]. Thus we have shown that the imbalance of \(y_{n}\) is \(2^{n+1}\), the imbalance of \(x_{n}\) is \(0\) and its nesting level is \(2^{n+1}-1\), the imbalance of \(\overline{y_{n}}\) is \(-2^{n+1}\), and hence \(x_{n+1}=y_{n}x_{n}\overline{y_{n}}x_{n}\) is Dyck with nesting level \(2^{n+2}-1\).
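The construction in this proof is easy to check for small \(n\); the Python sketch below (ours) computes \(r\) directly from binary expansions and verifies that \(x_{n}\) is Dyck with nesting level \(2^{n+1}-1\).

```python
def r(n):
    """Rudin-Shapiro: number of (possibly overlapping) 11 blocks in binary n, mod 2."""
    b = bin(n)[2:]
    return sum(b[i:i + 2] == "11" for i in range(len(b) - 1)) % 2

def dyck_nesting(w):
    """Nesting level of a Dyck word over {0,1}, or None if w is not Dyck."""
    bal = best = 0
    for c in w:
        bal += 1 if c == "0" else -1
        if bal < 0:
            return None
        best = max(best, bal)
    return best if bal == 0 else None

for n in range(4):
    x = "".join(str(r(i)) for i in range(2 * 4**n, 4**(n + 1)))
    assert dyck_nesting(x) == 2**(n + 1) - 1
print("x_n is Dyck with nesting level 2^(n+1)-1 for n = 0..3")
```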
_Question 26_.: Consider the set of \(n\) such that there is a Dyck factor of length \(n\) in the Rudin-Shapiro word. Is it a 2-automatic set? If not, this would give an alternate proof, different from that of [4], that being a Dyck word is not an FO[+]-definable property.
**Conjecture 27**.: _The paperfolding sequence has a Dyck factor of length \(n\) iff \(n\) is of the form \(2^{k}-2^{i}\) for \(0\leq i<k\)._
|
2305.13901 | WinDB: HMD-free and Distortion-free Panoptic Video Fixation Learning | To date, the widely adopted way to perform fixation collection in panoptic
video is based on a head-mounted display (HMD), where users' fixations are
collected while wearing an HMD to explore the given panoptic scene freely.
However, this widely-used data collection method is insufficient for training
deep models to accurately predict which regions in a given panoptic are most
important when it contains intermittent salient events. The main reason is that
there always exist "blind zooms" when using HMD to collect fixations since the
users cannot keep spinning their heads to explore the entire panoptic scene all
the time. Consequently, the collected fixations tend to be trapped in some
local views, leaving the remaining areas to be the "blind zooms". Therefore,
fixation data collected using HMD-based methods that accumulate local views
cannot accurately represent the overall global importance - the main purpose of
fixations - of complex panoptic scenes. To conquer, this paper introduces the
auxiliary window with a dynamic blurring (WinDB) fixation collection approach
for panoptic video, which doesn't need HMD and is able to well reflect the
regional-wise importance degree. Using our WinDB approach, we have released a
new PanopticVideo-300 dataset, containing 300 panoptic clips covering over 225
categories. Specifically, since using WinDB to collect fixations is blind zoom
free, there exists frequent and intensive "fixation shifting" - a very special
phenomenon that has long been overlooked by the previous research - in our new
set. Thus, we present an effective fixation shifting network (FishNet) to
conquer it. All these new fixation collection tool, dataset, and network could
be very potential to open a new age for fixation-related research and
applications in 360° environments. | Guotao Wang, Chenglizhao Chen, Aimin Hao, Hong Qin, Deng-Ping Fan | 2023-05-23T10:25:22Z | http://arxiv.org/abs/2305.13901v3 | # WinDB: HMD-free and Distortion-free Panoptic Video Fixation Learning
###### Abstract
To date, the widely-adopted way to perform fixation collection in panoptic video is based on a head-mounted display (HMD), where participants' fixations are collected while wearing an HMD to explore the given panoptic scene freely. However, this widely-used data collection method is insufficient for training deep models to accurately predict which regions in a given panoptic are most important when it contains intermittent salient events. The main reason is that there always exist "blind zooms" when using HMD to collect fixations since the participants cannot keep spinning their heads to explore the entire panoptic scene all the time. Consequently, the collected fixations tend to be trapped in some local views, leaving the remaining areas to be the "blind zooms". Therefore, fixation data collected using HMD-based methods that accumulate local views cannot accurately represent the overall global importance of complex panoramic scenes. This paper introduces the auxiliary Window with a Dynamic Blurring (**WinDB**) fixation collection approach for panoptic video, which doesn't need HMD and is blind-zoom-free. Thus, the collected fixations can well reflect the regional-wise importance degree. Using our WinDB approach, we have released a new **PanopticVideo-300** dataset, containing 300 panoptic clips covering over 225 categories. Besides, we have presented a simple baseline design to take full advantage of PanopticVideo-300 to handle the blind-zoom-free attribute-induced fixation shifting problem.
HMD-free, Distortion-free, Panoptic Video Fixation Learning
## 1 Introduction and Motivation
Given a panoptic scene, the primary target of panoptic fixation prediction [1, 2, 3, 4, 5, 6] is to fast locate the most important regions in the scene. By using important regions to drive dynamic regional-wise compression ratio and rendering quality, HMD can have better graphics and smoother performance, ultimately increasing immersion. Several downstream applications are as follows: panoptic video navigation [4], panoptic video compression [7], and panoptic video anomaly detection (blind-zoom-free surveillance video) [8]. Different from the conventional 2D fixation prediction [9, 10, 11, 12], which has received extensive research attention, the panoptic fixation prediction is currently in its infancy. The major problem causing such slow progress is the shortage of large-scale datasets [13, 14, 1, 1], because collecting human-eye fixations in panoptic data is much more expensive than that in conventional 2D data [7]. Moreover, the problem domain of panoptic fixation prediction is much more complex than the conventional fixation prediction in 2D images, where 2D data only has one fixed angle. Yet, panoptic data allows users to explore \(360^{\circ}\) freely [16, 17, 18, 19]. Thus, our panoptic fixation prediction research community is currently facing a dilemma -- using tiny small-scale training data to beat an extremely complex problem [20, 21, 22, 23, 24].
To date, the HMD-based human-eye fixation collection is the most popular approach, where participants wear a **h**ead-**m**ounted **d**isplay (HMD) to explore the given panoptic scene freely [29, 30], and, at the same time, fixations are collected. The HMD-based fixation collection [25, 26, 27, 28] has two problems, one of them extremely critical; both are discussed here. **First**, there always exist "blind zooms" when using HMD to collect fixations since the participants cannot keep spinning their heads to explore the entire panoptic scene all the time. The blind zoom problem makes the HMD-based fixations inconsistent with the real regional-wise importance degrees in the given panoptic scene. Thus, a salient event occurring in a "blind zoom" might receive zero fixation. To facilitate a better understanding, we have further demonstrated this problem in Fig. 1. **Second**, the HMD-based fixation collection is relatively expensive, and participants usually feel very uncomfortable (_e.g._, cybersickness [31]) when wearing an HMD to explore the panoptic scene.
Besides the above-mentioned HMD-based approach, it is worth mentioning that plain **e**quirectangular projection
Fig. 1: The existing HMD-based fixation collection method [25, 26, 27, 28] for panoptic data has a critical limitation — blind zoom, results in the collected fixations being insufficient to train deep models to accurately predict which regions in a given panoptic are most important. The reason causing the limitation of HMD-based has been shown in subfigure B, where users wearing an HMD tend to become “retard” after the early scene exploring stage, resulting in missing important events that occurred in blind zooms.
(ERP) [32, 33, 34], which projects a panoptic scene (typical spherical data) onto a 2D plane, is not suitable to serve as the platform for human-eye fixation collection. The reason is that the ERP-based 2D form suffers from severe visual distortions [35, 36, 37], especially for those regions around the poles (see the top row of Fig. 1-A). As a result, the distorted regions, being irregular relative to their surroundings, may occasionally draw participants' fixations even if they are not salient [38, 39, 40]. So, despite its merits, _e.g._, no blind zoom and lower cost (HMD-free), the ERP-based fixation collection is unsuitable in practice.
Given all aspects mentioned above, this paper presents a novel approach to panoptic fixation collection. Our approach has considered all advantages and disadvantages of both HMD-based and ERP-based methods (see Fig. 2-A), and the complete version of our approach can be found in the bottom-right of Fig. 3. The key idea of our approach is to take full advantage of the ERP-based fixation collection (_i.e._, without blind zoom) while suppressing visual distortions via a series of tailored designs. All these designs are inspired by the natural mechanism of the real human-visual system (HVS). Therefore, the fixations collected by our approach can well indicate each region's importance degree in the given panoramic scene, which can be applied to panoramic video navigation and compression (see Fig. 2-B). Moreover, since our approach is HMD-free, enabling participants to explore a panoptic scene in front of a PC and collect fixations via an eye tracker, which is more comfortable than the HMD-based method (See Fig. 2-A-(b)).
In addition, using our proposed WinDB approach, we have constructed a large panoptic video fixation prediction dataset containing 300 video clips. After conquering the blind zoom limitation, fixations collected by our approach may frequently shift from one position to another long-distance position since salient events might occur at any position in the given scene. Fig. 2-C demonstrates the difference between the fixations collected by our approach and those collected by the HMD-based method [27]. Therefore, using a simple yet effective model design, we have devised a deep model to fully use this attribute, achieving SOTA results.
Note that our approach focuses on improving some of the limitations present in previous HMD-based methods. Consequently, our WinDB approach does not offer a general way for users to collect fixations without an HMD. Instead, our approach serves as a "supplement" to the HMD-based method in specific scenarios.
In summary, the key contributions of this paper include the following:
* A brand **new human-eye fixation collection approach (WinDB)** for panoptic data, which is also the first one that has truly conquered the ill-formed blind zoom fixations collected by the previous approaches;
* Based on WinDB, we have **constructed a large set (PanopticVideo-300)**, containing 300 video clips with a very large variety of semantic categories, and each clip has been equipped with "dense" fixations;
* We have also devised **a novel deep model** to handle the fixation shifting problem -- a very common phenomenon in panoptic videos yet has been completely overlooked by previous works.
## 2 Related Work
### _Panoptic Fixation Collection Approach_
parts to handle the duplications in the top and bottom areas. The proposed auxiliary Window with a Dynamic Blurring (**WinDB**) fixation collection approach will be detailed below.
**Multi-points Projection**. Generally, the major advantage of representing a panoptic image via ERP is retaining good overall information [44, 45, 46]. However, as shown in Fig. 3-A, the major problem of ERP is the distortion problem [47, 48, 49], where the distortion degree becomes more dramatic closer to the top and bottom areas. Here we propose to focus on the common distortion problems around the middle area of the ERP image and leave the dramatic distortion problems around the panoptic poles to the later parts of our method. Although our human eye has a relatively large field of view [50], _i.e._, about \(124^{\circ}\), the focus range is only about \(25^{\circ}\)[51] (see Fig. 4), implying that we shall first tackle the distortion problem in "small" regions when developing an ERP-based fixation collection tool. Thus, suppose a panoptic frame (with degrees of freedom: horizontal \(360^{\circ}\) and vertical \(180^{\circ}\)) has been projected on a sphere. We divide the sphere into multiple non-overlapped sub-regions, where the horizontal and vertical dividing spans are uniformly set to \(30^{\circ}\): (\(360^{\circ}\)/\(30^{\circ}\))\(\times\)(\(180^{\circ}\)/\(30^{\circ}\)) = 72 sub-regions. Next, for each sub-region in the sphere, we project it to 2D space and align it to the corresponding position in the ERP image. Thus far, each aligned region in the ERP image is distortion-free. However, as shown in the bottom-left of Fig. 3, the projected ERP contains multiple other problems, _e.g._, ghost effects at the panoptic poles and massive misalignments between neighboring sub-regions.
**Applying Window Screen**. To alleviate the misalignment problem, we propose applying an additional "window screen" on the projected ERP image (see the black grids in the middle-bottom of Fig. 3). Our rationales are two-fold: **(1)** By applying the "window screen", humans tend to focus on the inside contents of each "box" rather than staring at the misalignment-induced visual artifacts; **(2)** Although the "window screen" could occlude some image contents, the overall ERP context can still be retained and perceived by our human visual system (HVS) due to the effect of POV (Persistence of Vision) [52] -- visual signals remain in our brain after they have been lost; especially in video data, the POV effect can ensure the lowest side-effect after applying the window screen.
**Blurring Overlapped Regions**. To further alleviate the misalignment-induced visual artifacts, we blur the neighboring regions of all sub-regions except the center two rows. As shown in the middle-top of Fig. 3, the horizontally neighboring regions are blurred accordingly, where the top and bottom rows are blurred more than the others. Also, we leave the center two rows unchanged since these rows have the lowest distortion degree. With this step, the projected and screen-masked ERP could have fewer visual artifacts around the center rows (_i.e._, rows 2-5).
**Using Auxiliary Windows**. We propose to utilize six auxiliary windows to solve the ghost effects around the panoptic poles. Each window covers {4+0.5\(\times\)4} sub-regions shown in the top-right of Fig. 3. That is, the sphere contents of these sub-regions are jointly projected to 2D space and regarded as one of the six auxiliary windows. Compared with the previous sphere-to-2D projection, the major difference is that the projection used here contains more information than the previous sub-region-based one. The adopted auxiliary window scheme has multiple advantages. **First**, it has well handled the ghost effects around the panoptic poles. **Second**, the adopted six auxiliary windows can well present the context information of the panoptic poles without any distortions.
**Avoiding Being Trapped in Auxiliary Windows**. Despite the advantages of the adopted auxiliary windows, there is also a critical drawback when using additional windows to collect human-eye fixations. That is, compared to the main body of "our projected, screen-masked, and blurred ERP image", we have noticed that the human visual system (HVS) tends to pay more attention to the auxiliary windows since they are more informative than the small-size sub-regions. Consequently, these auxiliary windows will likely trap human visual fixations. To improve, we propose the dynamic blurring scheme, whose idea is shown in Fig. 5. At the beginning of the fixation collection, all auxiliary windows are blurred using Gaussian smoothing with \(\sigma=0\) and \(\text{size}=31\). Then, during the fixation collection process, a blurred auxiliary window will become clear immediately if the fixation trajectory sweeps over
Fig. 3: The overall pipeline of our new HMD-free fixation collection approach for panoptic data. Compared to the widely-used HMD-based method, our WinDB approach is more economical, comfortable, and reasonable.
the auxiliary window3. To prevent trapping fixations into auxiliary windows, the "clear status" of an auxiliary window won't last long, and our method will gradually blur it again (it takes about 2\(\sim\)3 seconds) if the fixations keep staying at a relatively fixed spatial range (30\(\times\)30). In this way, the adopted auxiliary windows can facilitate the fixation collection around the poles without obvious side effects.
Footnote 3: Participants may still sweep over the blurred auxiliary window because the human visual system is very sensitive to movements [53]. We can still notice motions occurring in a blurred auxiliary window.
## 4 Proposed PanopticVideo-300 Dataset
**Video Clips**. To construct the mentioned large dataset, we have downloaded almost 400 video clips from YouTube in advance. Then, we removed about 100 low-quality clips (_e.g._, scenes with plain backgrounds, simple movements, or low resolutions), leaving a total of 300 high-quality clips. It is worth mentioning that, in the previous datasets [25, 26, 27, 28], clips containing fast object movements were all excluded because participants wearing an HMD needed to keep spinning their heads to catch up with the moving object, which is not very friendly to them [54]. However, in our HMD-free approach, fast object movements are no longer a problem. Fig. 6 shows that the clips cover 225 semantic categories and illustrates their semantic distribution.
**Participants**. Based on our newly proposed panoptic fixation collection approach, we have recruited 38 participants, including 12 females and 26 males aged between 18 and 29. All participants are complete novices before the fixation collection process; of course, no video clips in our video clip pool have been shown to them before. Since our approach is HMD-free, each participant only needs to watch the video at a resolution of 1,920\(\times\)1,080 on a PC, and a Tobii eye tracker has also been set up. For each participant, the entire fixation collection process takes about 50 minutes. It could be actively suspended at any time if the participant experienced fatigue or discomfort during fixation collection. Notice that this HMD-free approach is more comfortable than the HMD-based one, not to mention that fixations collected by our approach are more consistent with the real regional-wise importance degree in the given panoptic scene.
**Quality Discussion**. To demonstrate the advantage of our panoptic fixation collection approach against the HMD-based and ERP-based ones, we provide some representative cases in Fig. 7, where the scenes in this figure include three parts. The first part (_i.e._, the first two rows) comprises scenes with salient events that suddenly occurred in HMD blind zooms. By comparing these two rows, the fixations collected by our approach (the 2nd row) are more reasonable than those collected by the HMD-based method (the 1st row), where those intermittent salient events (highlighted by red circles) can be captured by our approach while the HMD-based method fails. Meanwhile, to demonstrate the advantage of our method against the ERP-based one, we have shown some ordinary scenes (_i.e._, the middle three rows) in Fig. 7. In these scenes, the fixations collected by our approach are generally consistent with those collected by the HMD-based method, showing the correctness of our approach. Also, compared with fixations collected by the ERP-based method, we can easily notice that the fixations collected by our approach are more focused, whereas fixations collected by the ERP-based method are scattered points. The reason is clear: the massive visual distortions in ERP can easily influence human-eye fixations, drawing them to distortion-induced visual artifact regions. More quantitative evidence can be found in the experiment section. The last two rows in Fig. 7 depict qualitative comparisons between our WinDB approach and the representative HMD-based dataset [27]. As seen, the fixation data collected through the HMD-based method only displays the most salient regions in the local view, while missing out on the crucial saliency regions in the blind zoom. On the other hand, our approach can accurately detect and highlight the most important regions in the panorama.
### _Shifting-aware Fixation Learning Module_
**Motivation**. As we have stated, HMD-based fixations tend to be trapped in local views due to blind zooms. Thus, compared with the HMD-based fixations, the fixations collected by our approach can quickly shift to sudden changes, such as "the man who pushed the door in" shown in Fig. 8-B. So, in our new dataset, "fixation shifting" is a very common phenomenon, where fixations might shift to another long-distance position in a very short period. However, the existing spatiotemporal fixation prediction methods [55, 56, 57] cannot handle the "fixation shifting" problem well [57] since their spatiotemporal fusion logics are mainly based on a strong assumption -- the short-term spatiotemporal coherency always exists. For example, the LSTM-based models could fail when facing "fixation shifting" because the
Fig. 4: The HVS Focal Range. Fig. 5: Technical details of the proposed dynamic blurring strategy.
Fig. 6: The semantic categories of PanopticVideo-300 dataset. All fixations in our set are collected by WinDB.
LSTM usually degenerates significantly if fixation overlap between consecutive frames is absent.
The simple architecture has been shown in Fig. 8-C, where the technical details of "Spatiotemporal Self-att" can be formulated as the following equations.
\[\mathbb{A}=\mathrm{Flat}\{\mathbf{E}_{1},\mathbf{E}_{2},\mathbf{E}_{3}\}\in \mathbb{R}^{\mathrm{H\times W\times C\times 3}}, \tag{1}\]
\[\mathbb{A}\leftarrow\mathbb{A}\odot\mathrm{Sigmoid}\big{(}\mathrm{Conv} \big{(}\mathrm{Softmax}(\mathbb{A}\times\mathbb{A}^{\top})\times\mathbb{A} \big{)}\big{)}, \tag{2}\]
\[\{\mathbf{D}_{1},\mathbf{D}_{2},\mathbf{D}_{3}\}\leftarrow\mathrm{dFlat}\{ \mathbb{A}\}, \tag{3}\]
where the dense feature vectors \(\mathbf{E}\) and \(\mathbf{D}\) are tensors of size \([12,23,256]_{\mathrm{H\times W\times C}}\), as shown in Fig. 8-C; "Flat" flattens its input tensors to a matrix, and "dFlat" does the opposite; \(\odot\) denotes element-wise multiplication. This implementation is simple and efficient yet very effective in handling the long-distance dependency issue induced by "fixation shifting." Notice that a fancier network design could bring additional performance gain, but that is beyond the main scope of this paper.
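A minimal NumPy sketch of Eqs. (1)-(3) follows (ours, not the authors' implementation): the flattening order is one plausible reading of "Flat", the matrix `W_conv` stands in for the \(\mathrm{Conv}\) operator in Eq. (2), and the random inputs merely exercise the shapes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_self_att(E1, E2, E3, W_conv):
    """E1, E2, E3: (H, W, C) feature maps of consecutive frames.
    W_conv: assumed (C, C) weights standing in for the 1x1 convolution."""
    H, W, C = E1.shape
    A = np.stack([E1, E2, E3], axis=0).reshape(-1, C)  # "Flat": (3*H*W, C) tokens
    att = softmax(A @ A.T, axis=-1) @ A                # Softmax(A A^T) A
    gate = 1.0 / (1.0 + np.exp(-(att @ W_conv)))       # Sigmoid(Conv(.))
    A = A * gate                                       # element-wise gating, Eq. (2)
    D = A.reshape(3, H, W, C)                          # "dFlat"
    return D[0], D[1], D[2]

rng = np.random.default_rng(0)
E1, E2, E3 = (rng.standard_normal((12, 23, 256)) * 0.1 for _ in range(3))
W_conv = rng.standard_normal((256, 256)) * 0.01
D1, D2, D3 = spatiotemporal_self_att(E1, E2, E3, W_conv)
print(D1.shape)   # (12, 23, 256)
```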
## 5 Experiments
### _Experiments Settings_
**Platform and Hardware**. Our method uses a Tobii Eye Tracker (v2) to collect panoptic fixations. To verify the effectiveness of our fixation collection method, we also use an HMD (HTC VIVE PRO EYE) to collect fixations as references. We use a PC with an NVIDIA 2070SP GPU for training and testing the proposed model.
**Dataset Split**. We divide the video clips in our PanopticVideo-300 into two groups: 1) clips with fixation shifting, also called the "blind group", and 2) clips without fixation shifting, named the "ordinary group". We measure the maximum fixation shifting distance for every 15 frames to determine if a given clip contains fixation shifting; for every 15 frames, we compute the spherical distance between every two frames. We can thus obtain a 15\(\times\)15 pairwise distance matrix for each clip; if the maximum fixation shifting distance exceeds a pre-defined threshold, we regard this clip as a blind one; otherwise, we label it as an ordinary one. This detailed process has been shown in Fig. 9. As can be seen, all fixations collected by our approach are mapped to a sphere. Thus, each fixation point can be represented by polar coordinates. We select one view with the densest fixation points for each frame as the current salient view. Then, for every 15 frames, we can obtain the maximum spherical angle (\(\theta\)) between every two salient views. The angular threshold has been empirically set to \(110^{\circ}\)[4, 59]. Thus, all 300 clips in our set can be divided into 195 "blind" clips and 105 "ordinary" clips.
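A Python sketch of this splitting rule follows (ours; the exact windowing and the salient-view selection are assumptions, and only the great-circle distance formula is standard):

```python
import math

def great_circle_deg(p, q):
    """Central angle (degrees) between two (longitude, latitude) directions,
    both given in degrees on the viewing sphere."""
    lon1, lat1, lon2, lat2 = (math.radians(v) for v in (*p, *q))
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def is_blind_clip(salient_views, threshold=110.0, window=15):
    """salient_views: per-frame (lon, lat) of the view with the densest fixations.
    A clip is 'blind' if two salient views within any 15-frame window are more
    than `threshold` degrees apart."""
    for s in range(max(1, len(salient_views) - window + 1)):
        win = salient_views[s:s + window]
        if max(great_circle_deg(a, b) for a in win for b in win) > threshold:
            return True
    return False

views = [(0.0, 0.0)] * 10 + [(150.0, 10.0)] * 10   # a sudden long-distance shift
print(is_blind_clip(views))                         # True
```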
**Correctness of Our WinDB Approach**. The main advantage of our WinDB collection approach against the classic HMD-based one is to handle the blind zoom limitation. Thus, for those ordinary panoptic scenes (_i.e._, _without_ blind zooms), the fixations collected by our approach should stay consistent with those
Fig. 8: The motivation of the newly proposed model. Subfigures A and B illustrate the “fixation shifting” phenomenon — very common in our set. Our model has devised a very simple yet effective architecture, which performs spatiotemporal self-attention [58] to alleviate the fixation shifting-induced long-distance misalignment problem.
Fig. 7: Qualitative comparisons between fixation collected by our WinDB approach, HMD/ERP-based method, and representative HMD-based set [27]. The fixation shifting phenomenon has been highlighted via red cycles. Zoomed-in for details.
fixations collected by the HMD-based method. To demonstrate the effectiveness of each technical step used in our WinDB collection approach (_i.e._, **A**-**F** in Fig. 3), we have conducted a component evaluation, where each component has been quantitatively tested by measuring four widely-used metrics (AUC-J, SIM, CC, NSS [60, 2]); the HMD-based fixations are also newly collected here to serve as the ground truths.
Notice that we have randomly selected ten clips from the "ordinary" group (_i.e._, all the panoptic scenes have no blind zoom) as a small validation set here. The quantitative results have been shown in Table I. As can be seen, compared with the ERP-based fixation collection approach (marked by **A**), our full WinDB method (**A**-**F**) can achieve significant performance improvement. We can also notice that the overall performance improves once a critical component has been applied, showing the necessity of each technical component. Besides, we may notice that all quantitative numerics are relatively low. The main reason is that those fixations collected by the HMD-based method, which served as the ground truths in this experiment, are not "perfect". The blind zoom limitation may also affect some clips of the "ordinary group" since the "blind" and "ordinary" splits are just based on an empirical threshold (\(\theta\) = 110\({}^{\circ}\)).
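For reference, here are minimal NumPy implementations (ours) of three of the adopted metrics with their standard definitions; AUC-J is omitted for brevity.

```python
import numpy as np

def cc(pred, gt):
    """Linear Correlation Coefficient between a predicted and a GT saliency map."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred, fix):
    """Normalized Scanpath Saliency: mean of the normalized map at fixated pixels."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fix > 0].mean())

def sim(pred, gt):
    """Histogram intersection (SIM) between two maps viewed as distributions."""
    p, g = pred / (pred.sum() + 1e-8), gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())

pred = np.random.default_rng(0).random((64, 128))
fix = np.zeros((64, 128)); fix[32, 64] = 1          # one synthetic fixation
print(round(cc(pred, pred), 3), round(sim(pred, pred), 3), round(nss(pred, fix), 3))
```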
**User Study**. We have conducted a user study to further verify the overall quality of fixation maps collected by our WinDB method against those collected by the HMD-based method. The overall experiment setting is shown in Fig. 10. In this experiment, we have randomly selected 16 video clips from the "blind group" (Fig. 9). To conduct this user study, we recruited 30 subjects equally divided into three groups (see the groups A, B, and C in Fig. 10). Subject groups A and B are the experimental groups, providing HMD-based fixations and our WinDB-based fixations, respectively. Then, we show the local salient views of each clip, in which salient views are automatically selected using the same scheme mentioned above, to subject group C, and each subject in group C assigns each clip an overall score between 0 and 9. Notice that each clip will be shown to subjects in group C three times. The 1st time is the plain ERP version to let the subjects become familiar with the overall contents in advance. The 2nd and 3rd showings present, in random order, clips whose salient views are selected by the HMD-based fixations and by our WinDB fixations. Meanwhile, when showing clips (with all regions blurred except the salient view) to subjects in group C, we collect each subject's fixations since higher-quality salient views should receive more fixations when subjects watch them. Thus, we can obtain two indicators after this user study, _i.e._, **1)** the subjective quality scores and **2)** the fixation point numbers in salient views. These two indicators' results are shown in Fig. 11, where the right part is the subjective quality scores, and the left is the fixation point numbers. As shown in the figure, our method can significantly outperform the HMD-based method in both indicators, where salient views determined by our method receive more fixations and higher quality scores, verifying the superiority of our approach.
**Generic Analysis**. This experiment aims to verify whether fixations collected by our WinDB approach can promote the existing panoptic fixation prediction models, where we choose the three most representative models (SpCNN [61], SalGAN [62], and SalEMA [63]). Our rationales are two-fold: **(1)** In "ordinary" panoptic scenes, if the fixations collected by our method are compatible with the fixations collected by the HMD-based method, a panoptic fixation prediction model, initially trained on the HMD-based fixations only, should achieve a significant performance improvement once our fixations are added to the training set. **(2)** The fixations collected by our method shall be able to let a panoptic fixation prediction model, initially trained on the HMD-based fixations only, handle "blind" panoptic scenes well. To verify these two aspects, we have conducted the experiments in Table II.
We choose the existing HMD-based VR-EyeTracking dataset [27] as the baseline. Then, we randomly select 50 clips with "ordinary" scenes as **A1** and randomly select 50 clips with "blind" scenes as **A2**. Similarly, from our PanopticVideo-300, we randomly select 50 clips with "ordinary" scenes as **B1** and 50 clips with "blind" scenes as **B2**. Meanwhile, from our set, we randomly select 30 clips with "ordinary" scenes as **C1** and 30 clips with "blind" scenes as **C2**. The formulated **A1**, **A2**, **B1**, and **B2** will serve as the training set to train the adopted three SOTA models;
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**A** & **B** & **C** & **D** & **CC** & **SIM** & **NSS** & **AUC** \\ \hline
**✓** & & & & 0.2265 & 0.1781 & 1.4689 & 0.3622 \\ \hline
**✓** & & & & 0.2302 & 0.1842 & 1.4815 & 0.3759 \\ \hline
**✓** & **✓** & & & 0.2348 & 0.1901 & 1.5313 & 0.3901 \\ \hline
**✓** & **✓** & & & 0.2362 & 0.1918 & 1.5467 & 0.3952 \\ \hline
**✓** & **✓** & **✓** & & 0.2433 & 0.1939 & 1.6304 & 0.3935 \\ \hline
**✓** & **✓** & **✓** & **✓** & **0.2481** & **0.1985** & **1.6563** & **0.4214** \\ \hline \end{tabular}
\end{table} TABLE I: Quantitative evaluation of the components in Fig. 3.
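For reference, the four numbers reported per row of Table I are standard fixation-map metrics. The sketch below gives minimal NumPy implementations of CC, SIM, and NSS following their textbook definitions (AUC is omitted since several variants exist); this is our own illustrative code, not the paper's evaluation script, and `pred`, `gt`, and `fix` are hypothetical array names.

```python
import numpy as np

def cc(pred, gt):
    # linear correlation coefficient between two saliency maps
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def sim(pred, gt):
    # histogram intersection of maps normalized to unit mass
    p = pred / (pred.sum() + 1e-8)
    g = gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())

def nss(pred, fix):
    # mean of the standardized map at binary fixation locations
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fix > 0].mean())
```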
Fig. 11: User study results. The left part shows the difference in fixation numbers in salient views selected by our WinDB and the HMD-based fixation collection approach. The right part illustrates the sum of scores assigned by subjects after respectively experiencing our WinDB approach and the HMD-based approach. These two results suggest that our approach is more favorable than the HMD-based one. Zoomed-in for details.
Fig. 10: Subjective user study details.
Fig. 9: Technical details of dividing our PanopticVideo-300 into “blind group” and “ordinary group”.
then these models will be tested on **C1** and **C2**, respectively. Notice that there is no intersection between these splits.
Comparing the corresponding marks in the table, we can easily notice that models trained on "ordinary" scenes generally perform worse on "blind" scenes. This is because these models never learn the fixation shifting phenomenon when taking **A1** as the training set only. Then, we have tested the three models using **A1+A2** as the training set, and the testing results on **C1** and **C2** show the same tendency (see marks 1 _vs._ 2, 3 _vs._ 4, and 5 _vs._ 6), _i.e._, numerics on **C2** are generally worse than those on **C1**. Also, as expected, the SOTA models' performances on both **C1** and **C2** improve after using the additional training data. The reason is that the initial 50 clips are far from enough for the models to achieve good performance, and frames in **C2** do not all belong to the fixation shifting case. Finally, we have tested the models' performances by respectively adding **B1** and **B2** to the training set **A1**. By comparing marks 1 and 2, we can easily notice a significant performance improvement, implying that the added **B2** is very effective in enabling models to tackle "blind" panoptic scenes. Also, as shown by the residuals of marks 1, 2, and 3, we can further confirm that the performance gain is brought exactly by the availability of "blind" clips in the training set. For example, by increasing the training set size only, SalEMA's NSS metric can only be improved from 0.2629\(\rightarrow\)0.2988 on the **C1** split. However, the performance gain becomes very significant when the added data contains "blind" clips, _i.e._, the NSS metric increases from 0.2296\(\rightarrow\)0.6603 on the **C2** split.
In summary, the experiments conducted above ensure that: 1) our PanopticVideo-300 is compatible with the existing HMD-based sets, and 2) models trained on a set without any "blind" scenes cannot perform well in real-world applications. Thus, our set is generic and necessary.
### _Our Model Results and Analysis_
Table III presents the performance of our model (Fig. 8-C) and some of the most representative SOTA panoptic fixation prediction models on our PanopticVideo-300. We have divided our set into training and testing sets, where the training set contains 240 clips and the remaining 60 clips form the testing set4.
Footnote 4: The detailed training and testing split can be found on our GitHub.
The compared methods include GBVS360, BMS360 [64], GBVS [65], ATsal [60], SalEMA [63], SalGAN [62], and SpCNN [61]. Different from the conventional 2D fixation prediction and 2D salient object detection communities, which tend to release their codes publicly, only two panoptic fixation prediction models' codes are publicly available and retrainable, _i.e._, SalGAN [62] and SpCNN [61]. The others' codes are neither available nor retrainable. To ensure a fair comparison, the models with available codes are all retrained on our training set, marked by "RT or TR" in the "Train" column. As shown in the table, our model, despite having the simplest implementation, achieves the best performance, indicating that the adopted spatiotemporal attention is very useful in handling the fixation shifting problem. _W.r.t._ the relatively lower result in terms of SIM, we believe this phenomenon is quite normal, since the compared SOTA models have also been re-trained on our training set. Because our set is compatible with the existing HMD-based sets, as verified above, it is still possible for other 360 models to achieve good performance gains, demonstrating the generic advantage of our set again.
## 6 Conclusion
In this study, we have pointed out one critical limitation of the widely-used HMD-based panoptic fixation collection method [25, 26, 27, 28]: it suffers from "blind zooms", making the collected fixations unsuitable for real-world use. Thus, we have presented an HMD-free panoptic fixation collection approach named **WinDB**. Then, using our WinDB approach, we constructed a large set, **PanopticVideo-300**, containing 300 clips covering over 225 semantic categories. Compared to existing sets, PanopticVideo-300 contains 195 clips with fixation shifting. The proposed WinDB approach and PanopticVideo-300 have large potential to push panoptic fixation prediction to a new era. Finally, we have devised a simple yet effective model
to handle the fixation shifting issue. We verified the effectiveness and necessity of all technical pieces adopted in the paper through extensive objective and subjective experiments.
|
2302.09726 | Nystrom Method for Accurate and Scalable Implicit Differentiation | The essential difficulty of gradient-based bilevel optimization using
implicit differentiation is to estimate the inverse Hessian vector product with
respect to neural network parameters. This paper proposes to tackle this
problem by the Nystrom method and the Woodbury matrix identity, exploiting the
low-rankness of the Hessian. Compared to existing methods using iterative
approximation, such as conjugate gradient and the Neumann series approximation,
the proposed method avoids numerical instability and can be efficiently
computed in matrix operations without iterations. As a result, the proposed
method works stably in various tasks and is faster than iterative
approximations. Throughout experiments including large-scale hyperparameter
optimization and meta learning, we demonstrate that the Nystrom method
consistently achieves comparable or even superior performance to other
approaches. The source code is available from
https://github.com/moskomule/hypergrad. | Ryuichiro Hataya, Makoto Yamada | 2023-02-20T02:37:26Z | http://arxiv.org/abs/2302.09726v1 | # Nystrom Method for Accurate and Scalable Implicit Differentiation
###### Abstract
The essential difficulty of gradient-based bilevel optimization using implicit differentiation is to estimate the inverse Hessian vector product with respect to neural network parameters. This paper proposes to tackle this problem by the Nystrom method and the Woodbury matrix identity, exploiting the low-rankness of the Hessian. Compared to existing methods using iterative approximation, such as conjugate gradient and the Neumann series approximation, the proposed method avoids numerical instability and can be efficiently computed in matrix operations without iterations. As a result, the proposed method works stably in various tasks and is faster than iterative approximations. Throughout experiments including large-scale hyperparameter optimization and meta learning, we demonstrate that the Nystrom method consistently achieves comparable or even superior performance to other approaches. The source code is available from [https://github.com/moskomule/hypergrad](https://github.com/moskomule/hypergrad).
Machine Learning, ICML
## 1 Introduction
Bilevel optimization is an essential problem in machine learning, which includes hyperparameter optimization (HPO) (Hutter et al., 2019) and meta learning (Hospedales et al., 2021). This problem consists of an inner problem to minimize an inner objective \(f(\mathbf{\theta},\mathbf{\phi},\mathcal{T})\) on data \(\mathcal{T}\) with respect to parameters \(\mathbf{\theta}\in\mathbb{R}^{p}\) and an outer problem to minimize an outer objective \(g(\mathbf{\theta},\mathbf{\phi},\mathcal{V})\) on data \(\mathcal{V}\) with respect to hyper or meta parameters \(\mathbf{\phi}\in\mathbb{R}^{h}\). In the case of HPO, \(f\) and \(g\) correspond to a training loss function and a validation criterion. In contrast, in the case of meta learning, \(f\) and \(g\) correspond to meta-training and meta-testing objectives.
Typically in the deep learning literature, the bilevel optimization problem can be formulated as
\[\min_{\mathbf{\phi}}g(\mathbf{\theta}_{T}(\mathbf{\phi}),\mathbf{\phi},\mathcal{V}) \tag{1}\] \[\text{s.t.}\quad\mathbf{\theta}_{t}(\mathbf{\phi})=\Theta(\mathbf{\theta}_{t- 1}(\mathbf{\phi}),\nabla_{\mathbf{\theta}}f(\mathbf{\theta}_{t-1}(\mathbf{\phi}),\mathbf{\phi}, \mathcal{T}),\mathbf{\phi}), \tag{2}\]
where \(\Theta\) is a gradient-based optimizer, such as SGD and Adam (Kingma and Ba, 2015), and \(t=1,2,\dots,T\). In some cases, the outer problem (1) can also be optimized by gradient-based optimization methods by using _hypergradient_\(\nabla_{\mathbf{\phi}}g\), in a similar way to the inner problem, which is expected to be more efficient and scalable than black-box counterparts. Especially when combined with warm-start bilevel optimization that alternately updates outer parameters as Equation (1) and inner parameters as Equation (2) during training (Jaderberg et al., 2017; Vicol et al., 2022), the gradient-based approaches enjoy higher efficiency (Lorraine et al., 2020; Luketina et al., 2016).
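To make the warm-start scheme concrete, the following runnable toy sketch alternates ten inner SGD steps with one outer hypergradient step on a one-dimensional bilevel problem where the hypergradient is known in closed form; all names and constants are ours, chosen purely for illustration.

```python
# toy bilevel problem: inner f(theta, phi) = (theta - phi)^2,
# outer g(theta) = (theta - 1)^2, so the optimum is phi = 1
def grad_f(theta, phi):
    return 2.0 * (theta - phi)

def hypergrad(theta, phi):
    # here theta*(phi) = phi, hence d theta*/d phi = 1 and
    # dg/dphi = g'(theta*) * 1
    return 2.0 * (theta - 1.0)

theta, phi = 0.0, 5.0
for outer_step in range(200):
    for _ in range(10):                 # warm-started inner SGD steps
        theta -= 0.1 * grad_f(theta, phi)
    phi -= 0.1 * hypergrad(theta, phi)  # one outer hypergradient step
print(round(phi, 3))                    # approaches 1.0
```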
A straightforward approach to achieve this goal is to unroll the inner problem and back-propagate through Equation (2) to obtain the hypergradient (Domke, 2012; Finn et al., 2017; Grefenstette et al., 2019). However, unrolling increases the memory cost as the number of inner optimization steps \(T\) increases, and it may also cause gradient vanishing/explosion (Antoniou et al., 2019). Truncating the backward steps (Shaban et al., 2019) may alleviate these issues while sacrificing the quality of hypergradients.
Alternatively, approximating the hypergradient using implicit differentiation is promising because it requires much less space complexity than the unrolling approach. Exact implicit differentiation needs the computationally demanding inverse Hessian vector product, which has been approximated by iterative methods such as conjugate gradient (Pedregosa, 2016; Rajeswaran et al., 2019) and the Neumann series approximation (Lorraine et al., 2020). Thanks to their space efficiency, these methods can scale to large-scale problems (Hataya et al., 2022; M. Li et al., 2021; Lorraine et al., 2020; Zhang et al., 2021), but such iterative approximations incur additional time complexity. Furthermore, these methods need careful configuration tuning to avoid numerical instability caused by an ill-conditioned Hessian or a large Hessian norm.
In this paper, we propose to use the Nystrom method to
leverage the low-rank nature of Hessian matrices of neural networks and compute its inverse by the Woodbury matrix identity, inspired by recent works in quasi second-order optimization literature (D. Singh et al., 2021; S. P. Singh and Alistarh, 2020). Unlike the iterative approaches mentioned above, this approximation excludes iterative evaluations and can be computed instantly in matrix operations. Additionally, the proposed method avoids numerical instability. As a result, the Nystrom method is robust to configurations, and empirically compares favorably with existing approximation methods consistently on various tasks, from HPO to meta learning. In addition, by using the recurrence of the Woodbury matrix identity, this approach can control the tradeoff between time and space complexities without losing accuracy according to one's computational resource.
In the remaining text, we introduce the proposed method in Section 2. After reviewing related work in Section 3, we analyze the approximation quality of the proposed method when Hessian is low-rank in Section 4. Then, Section 5 empirically demonstrates the effectiveness of the method from a synthetic logistic regression problem to a large-scale real-world data reweighting problem, and finally Section 6 concludes this work.
## 2 Method
### Approximating Hypergradient by Implicit Differentiation
In this paper, we focus on the methods to approximate hypergradients \(\nabla_{\mathbf{\phi}}g\) by implicit differentiation so that the outer problem can also be efficiently optimized by gradient descent. Specifically, if \(\nabla_{\mathbf{\theta}}f(\mathbf{\theta}_{T},\mathbf{\phi})\approx\mathbf{0}\), then according to the implicit function theorem, we obtain
\[\frac{\mathrm{d}g(\mathbf{\theta}_{T},\mathbf{\phi})}{\mathrm{d}\mathbf{\phi}}=-\frac{ \partial g}{\partial\mathbf{\theta}}\left(\frac{\partial^{2}f}{\partial\mathbf{\theta }^{2}}\right)^{-1}\frac{\partial^{2}f}{\partial\mathbf{\phi}\partial\mathbf{\theta}}+ \frac{\partial g}{\partial\mathbf{\phi}}, \tag{3}\]
where, in the r.h.s., \(f=f(\mathbf{\theta}_{T},\mathbf{\phi})\) and \(g=g(\mathbf{\theta}_{T},\mathbf{\phi})\). Following the prior works (Lorraine et al., 2020; Pedregosa, 2016; Rajeswaran et al., 2019), we assume that this approximation holds after \(T\) iterations of the inner optimization. We also assume that factors in the r.h.s. of Equation (3) are available, _e.g._\(g\) is differentiable w.r.t. \(\mathbf{\theta}\). In some cases, such as optimization of hyperparameters for regularization, \(\nabla_{\mathbf{\phi}}g(\mathbf{\theta}_{T},\mathbf{\phi})\) is always zero.
Still, solving Equation (3) seems computationally intractable, as computing the inverse Hessian \((\nabla_{\mathbf{\theta}}^{2}f)^{-1}\) is expensive when the number of model parameters \(p=\dim\mathbf{\theta}\) is large. Though early works computed the inverse Hessian directly (Bengio, 2000; Larsen et al., 1996), for modern neural networks even storing the Hessian \(\nabla_{\mathbf{\theta}}^{2}f\) is already infeasible in practice.
To mitigate this issue, some approximations have been proposed. Pedregosa, 2016; Rajeswaran et al., 2019 used the conjugate gradient method (Hestenes and Stiefel, 1952), which iteratively solves a linear equation \(\mathbf{Ax}=\mathbf{b}\) to obtain \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\), where, in this case, \(\mathbf{A}=\nabla_{\mathbf{\theta}}^{2}f\) and \(\mathbf{b}=\nabla_{\mathbf{\theta}}g\). Lorraine et al., 2020 adopted the Neumann series approximation, \(\mathbf{A}^{-1}=\alpha\sum_{i=0}^{\infty}(\mathbf{I}-\alpha\mathbf{A})^{i}\), where \(\mathbf{A}\) is an invertible matrix that satisfies \(\|\alpha\mathbf{A}\|\leq 1\) and \(\alpha>0\) is a constant. As these algorithms may take arbitrarily long iterations to converge, their truncated versions are preferred in practice, which cut off the iterations after a predefined number of steps \(l\).
Importantly, these methods do not require keeping actual Hessian but accessing it as Hessian vector product (HVP), which modern automatic differentiation tools (Bradbury et al., 2018; Paszke et al., 2019) can efficiently compute in \(O(p)\)(Baydin et al., 2018). Because they consist of HVP and vector arithmetics, their time and space complexities are \(O(lp+h)\) and \(O(p+h)\) for the number of iterations \(l\), where \(p=\dim\mathbf{\theta}\) and \(h=\dim\mathbf{\phi}\). In the following discussion, we omit the complexity regarding \(h\) for simplicity.
The downside of conjugate gradient and the Neumann series approximation may be their numerical instability. Conjugate gradient needs a well-conditioned matrix for fast convergence (Golub and Van Loan, 2013; Yousef Saad, 2003), _i.e._, it works sub-optimally with ill-conditioned Hessian. Its longer iterations accumulate numerical errors, and these errors typically need to be alleviated by preconditioning or reorthogonalization, which requires extra time and space complexities. The Neumann series needs the matrix norm to be less than 1, and thus \(\alpha\) needs to be carefully configured.
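For concreteness, the following PyTorch sketch shows a Hessian-vector product via double backward together with the truncated Neumann-series IHVP. It assumes a single flattened parameter tensor and is an illustration of the two building blocks discussed above, not code from the paper's repository or from betty.

```python
import torch

def hvp(loss, params, v):
    # Hessian-vector product via double backward; O(p) per call
    (g,) = torch.autograd.grad(loss, params, create_graph=True)
    (Hv,) = torch.autograd.grad(g, params, grad_outputs=v, retain_graph=True)
    return Hv

def neumann_ihvp(loss, params, v, alpha=0.01, l=5):
    # truncated series: H^{-1} v ~ alpha * sum_{i=0}^{l} (I - alpha H)^i v
    p_i, acc = v.clone(), v.clone()
    for _ in range(l):
        p_i = p_i - alpha * hvp(loss, params, p_i)
        acc = acc + p_i
    return alpha * acc
```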
### Nystrom Approximation
Different from these previous methods using iterative computation, we instead propose to use a low-rank approximation for approximated inverse Hessian vector product (IHVP). Specifically, we propose to use the Nystrom method to obtain IHVP by leveraging the low-rank nature of Hessian matrix (Ghorbani et al., 2019; Karakida et al., 2019; LeCun et al., 2012).
We use the following \(k\)-rank approximation to the original \(p\) dimensional Hessian matrix, where we assume \(k\ll p\):
\[\mathbf{H}_{k}=\mathbf{H}_{[:,K]}\mathbf{H}_{[K,K]}^{\dagger}\mathbf{H}_{[:,K]}^{\top}, \tag{4}\]
where \(K\) is a randomly selected index set of size \(k\), \(\mathbf{H}_{[:,K]}\in\mathbb{R}^{p\times k}\) is the matrix formed by the columns of \(\mathbf{H}\) whose indices are in \(K\), and \(\mathbf{H}_{[K,K]}\in\mathbb{R}^{k\times k}\) is the matrix formed by the rows of \(\mathbf{H}_{[:,K]}\) whose indices are in \(K\). \(\mathbf{H}_{[K,K]}^{\dagger}=\mathbf{U}\mathbf{\Lambda}^{-1}\mathbf{U}^{\top}\) denotes the pseudo inverse, where \(\mathbf{U}\) and \(\mathbf{\Lambda}\) are the eigenvectors and eigenvalues of \(\mathbf{H}_{[K,K]}\).
Then, we use the Woodbury matrix identity for matrices
\(\mathbf{A},\mathbf{B},\mathbf{C}\) such that
\[(\mathbf{A}+\mathbf{C}\mathbf{B}\mathbf{C}^{\top})^{-1} \tag{5}\] \[= \mathbf{A}^{-1}-\mathbf{A}^{-1}\mathbf{C}(\mathbf{B}^{-1}+\mathbf{C}^{\top}\mathbf{A}^{-1} \mathbf{C})^{-1}\mathbf{C}^{\top}\mathbf{A}^{-1}\]
to obtain the inverse Hessian. Namely, we compute \((\mathbf{H}_{k}+\rho\mathbf{I}_{p})^{-1}\), where \(\rho>0\) is a small constant to improve numerical stability and \(\mathbf{I}_{p}\) is the \(p\)-dimensional identity matrix, to approximate \(\mathbf{H}^{-1}\) as follows:
\[(\rho\mathbf{I}_{p}+\mathbf{H}_{[:,K]}\mathbf{H}_{[K,K]}^{\dagger}\mathbf{H}_{[:,K]}^{\top})^{-1} \tag{6}\] \[= \frac{1}{\rho}\mathbf{I}_{p}-\frac{1}{\rho^{2}}\mathbf{H}_{[:,K]}\left( \mathbf{H}_{[K,K]}+\frac{1}{\rho}\mathbf{H}_{[:,K]}^{\top}\mathbf{H}_{[:,K]}\right)^{-1} \mathbf{H}_{[:,K]}^{\top}.\]
Although the left-hand side requires the inversion of a \(p\times p\) matrix, the right-hand side only needs the inversion of a \(k\times k\) matrix. Because \(k\ll p\), the computational burden of the l.h.s. is drastically reduced on the r.h.s. The use of the Woodbury matrix identity as in Equation (6) is similar to the idea of D. Singh et al., 2021, but our formulation is slightly more efficient as it avoids an unnecessary eigendecomposition.
The small constant \(\rho\) in Equation (6) makes the low-rank matrix \(\mathbf{H}_{k}\) invertible. Additionally, it can also be regarded as stemming from a proximally regularized inner objective \(f(\mathbf{\theta},\mathbf{\phi})+\frac{\rho}{2}\|\mathbf{\theta}-\mathbf{\theta}^{\prime}\|\), where \(\mathbf{\theta}^{\prime}\in\operatorname*{argmin}_{\mathbf{\theta}}f(\mathbf{\theta},\mathbf{\phi})\) (Vicol et al., 2022).
We visualize the inverse of a low-rank matrix and its approximations in Figure 1. Nystrom method can approximate the true inverse efficiently and accurately. Because conjugate gradient cannot explicitly output inverse Hessian, we do not display its result here.
To sum up, the proposed method approximates the hypergradient by using a low-rank Hessian \(\mathbf{H}_{k}\) as
\[\frac{\mathrm{d}g(\mathbf{\theta}_{T},\mathbf{\phi})}{\mathrm{d}\mathbf{\phi}}\!\approx\!- \frac{\partial g}{\partial\mathbf{\theta}}(\mathbf{H}_{k}+\rho\mathbf{I}_{p})^{-1}\frac{ \partial^{2}f}{\partial\mathbf{\phi}\partial\mathbf{\theta}}+\frac{\partial g}{ \partial\mathbf{\phi}}. \tag{7}\]
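A direct transcription of Equations (4)-(7) might look as follows: the \(k\) Hessian columns are collected as HVPs with canonical basis vectors, and the Woodbury form of Equation (6) is applied to a vector. This sketch reuses the `hvp` routine sketched above, assumes a single flat parameter tensor, and is our own illustration rather than the authors' released implementation.

```python
import torch

def nystrom_ihvp(loss, params, v, K, rho=0.01):
    # computes (H_k + rho I)^{-1} v as in Eq. (6); K is a list of k indices
    p = params.numel()
    cols = []
    for i in K:
        e = torch.zeros(p)
        e[i] = 1.0
        cols.append(hvp(loss, params, e))   # i-th column of the Hessian
    C = torch.stack(cols, dim=1)            # H_[:,K], shape (p, k)
    B = C[K]                                # H_[K,K], shape (k, k)
    M = B + C.T @ C / rho                   # k x k matrix from Eq. (6)
    return v / rho - C @ torch.linalg.solve(M, C.T @ v) / rho**2
```

The hypergradient of Equation (7) then follows by applying this routine to \(\nabla_{\mathbf{\theta}}g\) and contracting the result with the mixed second derivatives, e.g., via a vector-Jacobian product.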
### Space-efficient Variant
The Nystrom approximation is free from iterative algorithms, but it needs to store \(k\) columns of the original Hessian matrix, \(\mathbf{H}_{[:,K]}\), and compute the inverse of a \(k\times k\) matrix. As a result, its time and space complexities are \(O(p+k^{3})\) and \(O(kp+k^{2})\), but the \(k^{3}\) and \(k^{2}\) terms are negligible because usually \(k\ll p\).
Some readers may worry about memory explosion when \(k\) is relatively large. Actually, this Nystrom approximation can be turned into an iterative algorithm that saves memory. Recall that the low-rank matrix can be decomposed as follows
\[\mathbf{H}_{k}=\mathbf{H}_{[:,K]}\mathbf{H}_{[K,K]}^{\dagger}\mathbf{H}_{[:,K]}^{\top}=\sum_{i\in K}\frac{1}{\lambda_{i}}\mathbf{l}_{i}\mathbf{l}_{i}^{\top}, \tag{8}\]
where \(\lambda_{i}\in\mathbb{R}\) is the \(i\)th value of \(\mathbf{\Lambda}\) and \(\mathbf{l}_{i}=(\mathbf{H}_{[:,K]}\mathbf{U})_{[:,i]}\in\mathbb{R}^{p}\). Then, we can iteratively compute the inverse of \(\mathbf{H}_{k}+\rho\mathbf{I}_{p}\) by the Woodbury matrix identity (Equation (5)) as
\[\hat{\mathbf{H}}_{i+1} =\hat{\mathbf{H}}_{i}-\frac{\hat{\mathbf{H}}_{i}\mathbf{l}_{i}\mathbf{l}_{i}^{ \top}\hat{\mathbf{H}}_{i}}{\lambda_{i}+\mathbf{l}_{i}^{\top}\hat{\mathbf{H}}_{i}\mathbf{l}_{i}}, \tag{9}\] \[\text{where}\quad\hat{\mathbf{H}}_{0} =\frac{1}{\rho}\mathbf{I}_{p},\] \[\hat{\mathbf{H}}_{k} =(\mathbf{H}_{k}+\rho\mathbf{I}_{p})^{-1},\]
for \(i=0,1,\dots,k-1\). This variant needs \(O(k^{2}p)\) time complexity and \(O(p)\) space complexity like iterative algorithms. S. P. Singh and Alistarh, 2020 proposed a dynamical algorithm for a similar problem to compute the inverse of the Fisher information matrix (FIM).
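The recurrence of Equation (9) is easy to check numerically. For clarity the NumPy sketch below stores the inverse densely, so it demonstrates correctness rather than the memory savings; the space-efficient variant applies the same rank-one updates to vectors only. `lam` holds the eigenvalues \(\mathbf{\Lambda}\) and `Lmat` the columns \(\mathbf{l}_{i}\); both names are ours.

```python
import numpy as np

def recursive_woodbury_inverse(lam, Lmat, rho):
    # (H_k + rho I)^{-1} with H_k = sum_i l_i l_i^T / lam_i, via Eq. (9)
    p, k = Lmat.shape
    H_inv = np.eye(p) / rho                      # \hat{H}_0 = I / rho
    for i in range(k):
        u = H_inv @ Lmat[:, i]                   # \hat{H}_i l_i
        H_inv = H_inv - np.outer(u, u) / (lam[i] + Lmat[:, i] @ u)
    return H_inv
```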
### Controlling the Cost Tradeoff
Furthermore, by chunking \(\mathbf{H}_{[:,K]}\) into thinner matrices of width \(\kappa\in(1,k)\) and applying the Woodbury matrix identity iteratively as in Algorithm 1, \((\mathbf{H}_{k}+\rho\mathbf{I}_{p})^{-1}\) can be obtained in a hybrid manner, with a smaller memory footprint, _i.e._, \(O(\kappa p)\), than Equation (6) and faster, _i.e._, \(O((k/\kappa)^{2}p)\), than Equation (9). In other words, our method allows users to dynamically control the tradeoff between time and space complexities for a given accuracy according to their computational resources, which is a unique characteristic of our proposed method. See Table 1 for a comparison with other methods.
Notice that for any \(\kappa\) the computational results are equivalent up to machine precision. Thus, in the remainder of the paper, we use Equation (6), where \(\kappa=k\), unless otherwise specified.
### Limitations
The proposed method cannot straightforwardly optimize outer parameters \(\mathbf{\phi}\) that do not directly affect the training loss, inheriting this limitation from methods that approximate the hypergradient by implicit differentiation (Lorraine et al., 2020). Such parameters include the learning rate of the optimizer of the inner problem, which needs to be carefully tuned in deep learning research (Schmidt et al., 2021). We may need to rely on unrolling approaches for this problem (Andrychowicz et al., 2016; Grefenstette et al., 2019; Li and Malik, 2017). Additionally, the method is not directly applicable to non-smooth problems, as is the case for other gradient-based methods; this could partly be alleviated by smoothing the problem or using sub-gradients and sub-Hessians.
## 3 Related Work
### Gradient-based Hyperparameter Optimization and Meta Learning
The development of automatic differentiation (Baydin et al., 2018; Bradbury et al., 2018; Paszke et al., 2019) has encouraged the active research of gradient-based HPO and meta learning, where the outer problem of a bilevel problem (Equation (1)) is also optimized by gradient descent using hypergradient (Franceschi, 2021; Franceschi, Frasconi, et al., 2018).
One way to compute hypergradients is to backpropagate through the unrolled inner optimization (Equation (2)) (Domke, 2012; Finn et al., 2017; Grefenstette et al., 2019). Except for special cases, where a specific inner optimization algorithm (Maclaurin et al., 2015) or forward-mode automatic differentiation (Franceschi, Donini, et al., 2017) can be used, this approach suffers from growing space complexity as the number of inner optimization steps \(T\) increases.
Another approach is to approximate hypergradient using the implicit differentiation as Equation (3) with less space complexity (Bengio, 2000; Lorraine et al., 2020; Pedregosa, 2016; Rajeswaran et al., 2019). Although Equation (3) includes inverse Hessian, which is infeasible to compute for modern neural networks, truncated solutions of conjugate gradient (Pedregosa, 2016; Rajeswaran et al., 2019) and the Neumann series approximation (Lorraine et al., 2020) have been adopted to approximate this term efficiently. Other solvers for linear systems, such as GMRES (Youcef Saad and Schultz, 1986), can also be used (Blondel et al., 2021). Our work is in line with these works, but compared to these methods using generic techniques for matrix inverse, the proposed method exploits the low-rankness of Hessian of neural networks.
### Inverse Hessian Approximation
The application of inverse Hessian approximation is not limited to the computation of hypergradient. It has been a key element in estimation of influence function (Koh and Liang, 2017), backpropagation through long recurrence
\begin{table}
\begin{tabular}{l c c} \hline \hline Approximation & Time Complexity & Space Complexity \\ \hline Conjugate gradient (Rajeswaran et al., 2019) & \(O(lp)\) & \(O(p)\) \\ Neumann series & \(O(lp)\) & \(O(p)\) \\ Nyström method (ours) & \(O((k/\kappa)^{2}p)\) & \(O(\kappa p)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of time and space complexity. \(p\) denotes the number of model parameters. \(l\) is the number of iterations of the algorithms, \(k\) is the rank of low-rank Hessian. The Nyström method allows users to control the complexities by choosing \(\kappa\in\{1,2,\dots,k\}\)
Figure 1: Comparison of inverse of a 40 dimensional matrix \(\mathbf{A}+\rho\mathbf{I}\). \(\mathbf{A}\) is a rank 20 symmetric matrix, and \(\rho\) is set to \(0.1\). Nyström method can approximate the true inverse accurately even in the rank 5 setting.
(Liao et al., 2018), and network pruning (Hassibi and Stork, 1992; S. P. Singh and Alistarh, 2020).
The inverse of Hessian or an FIM is also indispensable in (quasi) second-order optimization (Martens, 2010) and natural gradient descent (Amari, 1998). Thus, its estimation has been studied for a long time. For example, LBFGS employs past gradients and updates (Liu and Nocedal, 1989), and KFAC adopts block-diagonal approximation of FIM (Martens and Grosse, 2015) for efficient approximation of large matrix inversion.
The Hessians and FIMs of neural networks have low-rank structures, as most of their eigenvalues are nearly zero (Ghorbani et al., 2019; Karakida et al., 2019; LeCun et al., 2012). Exploiting this nature, inverse FIM (Frantar et al., 2021) or inverse Hessian (D. Singh et al., 2021) are computed using the Woodbury identity in the literature of quasi second-order optimization. Especially, the latter used the Nystrom method. Although these approaches are technically similar to ours, they are in a different context.
## 4 Theoretical Analysis
This section theoretically shows that the proposed approach can efficiently approximate the true hypergradient.
**Theorem 1**: _Suppose \(\mathbf{H}\) is positive semidefinite. Let \(\mathbf{h}^{\star}\) and \(\mathbf{h}\) be the hypergradients using the true inverse Hessian \((\mathbf{H}+\rho\mathbf{I}_{p})^{-1}\) (r.h.s. of Equation (3)) and the Nystrom method \((\mathbf{H}_{k}+\rho\mathbf{I}_{p})^{-1}\) (r.h.s. of Equation (6)), respectively, and let \(\mathbf{g}=\nabla_{\mathbf{\theta}}g(\mathbf{\theta}_{T},\mathbf{\phi})\), \(\mathbf{F}=\nabla_{\mathbf{\phi}}\nabla_{\mathbf{\theta}}f(\mathbf{\theta}_{T},\mathbf{\phi})\). Then, the accuracy of the approximated gradient is bounded as_
\[\|\mathbf{h}^{\star}-\mathbf{h}\|_{2}\leq\|\mathbf{g}\|_{2}\|\mathbf{F}\|_{\mathrm{op}}\left( \frac{1}{\rho}\frac{\|\mathbf{H}-\mathbf{H}_{k}\|_{\mathrm{op}}}{\rho+\|\mathbf{H}-\mathbf{H}_ {k}\|_{\mathrm{op}}}\right). \tag{10}\]
\(\|\cdot\|_{2}\) and \(\|\cdot\|_{\mathrm{op}}\) denote L2 norm and operator norm, respectively.
This theorem is based on (Frangella et al., 2021). See the supplemental material for the derivation.
When considering neural networks, because the training objective \(f\) is not convex w.r.t. \(\mathbf{\theta}\), its Hessian \(\mathbf{H}\) is not always positive semi-definite. However, Ghorbani et al., 2019 empirically demonstrated that most negative eigenvalues disappeared even after a few iterations of training, indicating that we may assume that \(\mathbf{H}+\rho\mathbf{I}\) for \(\rho>0\) is positive semi-definite in practice. Also importantly, \(\|\mathbf{H}-\mathbf{H}_{k}\|_{\mathrm{op}}\) is bounded.
**Remark 1** (Theorem 3 in Drineas and Mahoney, 2005): _Let \(\bar{\mathbf{H}}_{k}\) be the best \(k\)-rank approximation of \(\mathbf{H}\). If \(O(k/\epsilon^{4})\) columns are selected for \(\epsilon>0\) so that the \(i\)th column is chosen proportional to \(H_{i,i}^{2}\), then,_
\[\mathbb{E}[\|\mathbf{H}-\mathbf{H}_{k}\|_{\mathrm{op}}]\leq\|\mathbf{H}-\bar{\mathbf{H}}_{k}\| _{\mathrm{op}}+\epsilon\sum_{i=1}^{p}H_{i,i}^{2}. \tag{11}\]
_Especially if \(\mathbf{H}\) is a rank \(k\) matrix, then_
\[\mathbb{E}[\|\mathbf{H}-\mathbf{H}_{k}\|_{\mathrm{op}}]\leq\epsilon\sum_{i=1}^{p}H_{i,i}^{2} \tag{12}\]
Because Hessian of a trained neural network can be regarded as low rank (Ghorbani et al., 2019; Karakida et al., 2019; LeCun et al., 2012), we may expect that Equation (12) holds.
Equation (10) indicates that the approximated hypergradient converges to the true hypergradient as \(\mathbf{H}_{k}\) approaches \(\mathbf{H}\). This differs from truncated iterative approximations, such as conjugate gradient, whose solutions need not converge to the true one when the number of iterations is small.
## 5 Experiments
In this section, we empirically demonstrate the effectiveness of the proposed method.
### Experimental Setups
We implemented models and algorithms using PyTorch v1.12 (Paszke et al., 2019) and its accompanying functorch (He and Zou, 2021). The reference code is available from [https://github.com/moskomule/hypergrad](https://github.com/moskomule/hypergrad). Experiments were conducted on a single NVIDIA A100 GPU with CUDA 11.3. The implementations of conjugate gradient and the Neumann series approximation algorithms were adopted from betty v0.1.1 (Choe et al., 2022).
In the following experiments, the Nystrom method was implemented according to Equation (6), that is, the time-efficient variant, unless otherwise specified. Using ReLU as an activation function causes some columns of the Hessian to become zero vectors, and then the inversion in Equation (6) fails. To circumvent this problem, we replaced ReLU with leaky ReLU: \(\text{LR}(x)=\max(0,x)+0.01\times\min(0,x)\).
### Optimizing Weight-decay of Logistic Regression
We first showcase the ability of the Nystrom approximation by optimizing a weight-decay parameter for each parameter of a logistic regression model using synthetic data. For \(D\) dimensional data, inner parameters \(\mathbf{\theta}\in\mathbb{R}^{D}\) and outer parameters \(\mathbf{\phi}\in\mathbb{R}^{D}\) are optimized. The inner problem is \(f(\mathbf{\theta},\mathbf{\phi})=\ell(\mathbf{\theta}^{\top}\mathbf{x},y)+\mathbf{\theta}^{\top} \operatorname{diag}(\mathbf{\phi})\mathbf{\theta}\), for an input \(\mathbf{x}\) and its label \(y\), where \(\ell\) is the binary cross entropy loss. Each input
\(\mathbf{x}\) is sampled from a standard normal distribution, and its label is defined as \(y=\mathbf{w}^{*\top}\mathbf{x}+\epsilon>0\), where \(\mathbf{w}^{*}\in\mathbb{R}^{D}\) is a constant vector and \(\epsilon\) is a noise term. This inner problem is optimized by SGD with a learning rate of 0.1, and the inner parameters are reset every 100 iterations. The outer problem is to minimize the validation loss by SGD with a learning rate of 1.0 and a momentum of 0.9. The outer parameters \(\mathbf{\phi}\), initialized to \(\mathbf{1}\), are updated after every 100 inner parameter updates. We set \(D=100\) and used 500 data points for both inner and outer optimization.
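The synthetic problem can be reproduced in a few lines. The sketch below follows our reading of the setup (scalar label noise, per-parameter weight decay as outer parameters); the constants and names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

D, N = 100, 500
torch.manual_seed(0)
w_star = torch.randn(D)
X = torch.randn(N, D)
y = ((X @ w_star + 0.1 * torch.randn(N)) > 0).float()  # noisy binary labels

theta = torch.zeros(D, requires_grad=True)  # inner parameters
phi = torch.ones(D)                         # per-parameter weight decay

def inner_loss(theta, phi):
    # f(theta, phi) = BCE(theta^T x, y) + theta^T diag(phi) theta
    bce = F.binary_cross_entropy_with_logits(X @ theta, y)
    return bce + theta @ (phi * theta)
```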
Figure 2 (top) shows validation loss curves, comparing the approximated implicit differentiation methods, conjugate gradient and the Neumann series approximation, with our proposed method. For the conjugate gradient method and the Neumann series approximation, we set the number of iterations \(l\) to \(5\), following Rajeswaran et al., 2019. Accordingly, we set the rank of the Nystrom method to \(5\). As can be seen, the Nystrom method can optimize the weight-decay parameters faster than the other methods. Figure 2 (bottom) displays training loss curves. Because the inner parameters are reset when the outer parameters are updated, training loss values at the reset moments are high (around \(0.7\)). As the outer optimization proceeds, the inner parameters, particularly those of the Nystrom method, quickly decrease the training loss during each inner optimization period.
For the experiments in Figure 2, the "learning rate" parameter \(\alpha\) of conjugate gradient and the Neumann series approximation was set to \(0.01\), and \(\rho\) of the Nystrom method was set to \(0.01\). We compare other choices of \(\alpha\) in \(\{0.01,0.1,1.0\}\) in Figure 3, and accordingly try other values of \(\rho\) in \(\{0.01,0.1,1.0\}\). The results indicate that the Nystrom method surpasses the others in most cases and shows robustness to the choice of \(\rho\). We will revisit the robustness of the Nystrom method in Section 5.4. These experimental results were averaged over five runs with different random seeds.
### Dataset Distillation
Dataset distillation is a task to optimize a small synthesized training dataset \(\mathcal{T}_{\mathbf{\phi}}=\{\mathbf{\phi}_{1},\mathbf{\phi}_{2},\ldots,\mathbf{\phi}_{C}\}\), parameterized by \(\mathbf{\phi}\), so that the validation loss on real data is minimized (Wang et al., 2018). We used the MNIST dataset (Le Cun et al., 1998) and a LeNet-like CNN, and followed the fixed-known initialization setting, in which the CNN weights, _i.e._, the inner parameters, are reset every 100 model parameter updates. As MNIST is a 10-class dataset, we set \(C=50\), so each class has 5 distilled images. Each \(\mathbf{\phi}_{i}\in\mathcal{T}_{\mathbf{\phi}}\) has the same size as an MNIST image. We used fixed learning rates for inner and outer optimization to simplify the problem. Namely, the inner problem is optimized by SGD with a learning rate of \(0.01\), while the outer problem is optimized with an Adam optimizer with a learning rate of \(1.0\times 10^{-3}\).
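For concreteness, the two objectives of this task can be written as below, with the 50 distilled images themselves acting as the outer parameters \(\mathbf{\phi}\); the variable names and the generic `model` callable are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

# 10 classes x 5 distilled images, learned as outer parameters
phi_images = torch.randn(50, 1, 28, 28, requires_grad=True)
phi_labels = torch.arange(10).repeat_interleave(5)

def inner_objective(model):
    # the CNN is trained on the synthetic images only
    return F.cross_entropy(model(phi_images), phi_labels)

def outer_objective(model, real_x, real_y):
    # validation loss on real MNIST data, minimized w.r.t. phi_images
    return F.cross_entropy(model(real_x), real_y)
```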
The test accuracy after 5,000 outer parameter updates is reported in Table 2. These results were averaged over five runs. We set \(\alpha=\rho=0.01\) and \(l=k=10\).
The Nystrom method yields comparable performance to the Neumann series approximation. However, despite our best efforts to select appropriate values of \(\alpha\in\{0.01,0.1,1.0\}\) and \(l\in\{5,10,20\}\) based on the validation performance on a 10% split of the training data, the conjugate gradient method failed to learn this task. This failure may be attributed to an ill-conditioned Hessian.
### Gradient-based Meta Learning
MAML is a typical method of gradient-based meta learning, where the inner problem learns to adapt to a given problem while the outer problem aims to find good parameters that adapts quickly to new tasks (Finn et al., 2017). Among its variants, iMAML uses implicit differentiation to compute hypergradient, achieving better memory efficiency (Rajeswaran et al., 2019). Although the original iMAML adopts conjugate gradient to obtain IHVP, this
\begin{table}
\begin{tabular}{c c c} \hline Conjugate gradient & Neumann series & Nyström method \\ \hline \(0.17\pm 0.04\) & \(0.47\pm 0.03\) & \(0.49\pm 0.04\) \\ \hline \end{tabular}
\end{table}
Table 2: Test accuracy of the dataset-distillation task on the MNIST dataset. The Nyström method shows better performance than others.
Figure 2: Optimization of weight decay parameters to each model parameter in logistic regression. The top figure shows the _validation_ loss curve of the outer problem, and the bottom figure shows the _training_ loss curves of the inner problem, optimized in 100 iterations.
choice can be replaced with the Neumann series approximation and the Nystrom method.
We compared such backends using few-shot image classification, where models learn to classify images from only a few examples Fei-Fei et al. (2006); B. Lake et al. (2011); Ravi and Larochelle (2017), on the Omniglot dataset B. M. Lake et al. (2015) with a VGG-like CNN, following Antoniou et al. (2019); Rajeswaran et al. (2019). We set \(k=l=10\) and \(\alpha=\rho=0.01\). The inner problem is to optimize the model parameters by SGD with a learning rate of 0.1 in 10 steps, and the outer problem is to update the initial model parameters by Adam with a learning rate of \(1.0\times 10^{-3}\).
Table 3 shows the averaged accuracy on the test tasks over three runs after training on \(1.6\times 10^{6}\) tasks. As can be seen, the Nystrom method achieved comparable results with iMAML using conjugate gradient both in the 1-shot and 5-shot settings.
### Data Reweighting
Data reweighting is a task to learn a weight for the loss value of each example, which aims to alleviate the effects of class imbalance and label noise M. Li et al. (2021); Shu et al. (2019). Its inner problem can be formulated as \(f(\mathbf{\theta},\mathbf{\phi})=\ell(\nu_{\mathbf{\theta}}(\mathbf{x}),\mathbf{y})\cdot\mu_{\mathbf{ \phi}}(\ell(\nu_{\mathbf{\theta}}(\mathbf{x}),\mathbf{y}))\), where \(\ell\) is the cross-entropy loss, \(\nu_{\mathbf{\theta}}\) is a model, and \(\mu_{\mathbf{\phi}}\) is a neural network that weights samples. The outer problem is to update \(\mathbf{\phi}\) to minimize the validation loss on balanced validation data. The inner parameters are not reset when the outer parameters are updated.
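A minimal PyTorch sketch of this inner objective is given below, with `weight_net` standing in for the two-layer MLP \(\mu_{\mathbf{\phi}}\); the names are ours and the snippet is illustrative only.

```python
import torch.nn.functional as F

def reweighted_inner_loss(model, weight_net, x, y):
    # per-sample cross entropy, weighted by mu_phi(loss) as in the text
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    weights = weight_net(per_sample.unsqueeze(1)).squeeze(1)
    return (per_sample * weights).mean()
```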
We adopted long-tailed CIFAR-10 datasets Cui et al. (2019), which simulate class imbalance at several degrees, WideResNet 28-10 Zagoruyko and Komodakis (2016) as \(\nu_{\mathbf{\theta}}\), which has approximately \(3.6\times 10^{7}\) parameters, and a two-layer MLP with a hidden dimension of \(100\) as \(\mu_{\mathbf{\phi}}\). The inner problem is optimized by SGD with a learning rate of 0.1, momentum of 0.9, and weight decay of \(5.0\times 10^{-4}\), and the outer problem is optimized with an Adam optimizer with a learning rate of \(1.0\times 10^{-5}\) on a 2% split of the training data, following Shu et al. (2019). We set \(\alpha=\rho=0.01\) and \(l=k=10\).
Table 4 shows the averaged test accuracy over three runs after \(1.5\times 10^{4}\) inner updates and \(1.5\times 10^{3}\) outer updates. Again, the Nystrom method consistently yielded matching or better performance compared to the other methods and outperformed the baseline.
### Runtime Speed and Memory Consumption
Table 5 compares the speed and peak GPU memory consumption of computing hypergradients on the data reweighting task (averaged over 10 runs). Because WideResNet 28-10 ran out of memory when \(k=20\) with the time-efficient Nystrom method, we instead used the relatively smaller WideResNet 28-2, which has \(1.5\times 10^{6}\) parameters. The reported values were measured after 10 iterations of warmup.
As shown in Table 1, the time complexity of the iterative algorithms, conjugate gradient and the Neumann series, depends on \(l\), whereas that of the time-efficient Nystrom method (\(\kappa=k\)) is independent of \(k\). As a result, the runtime of the iterative algorithms slows down as the approximation quality \(l\) increases, while the deceleration of the time-efficient Nystrom method is marginal. On the other hand, the space complexity of the iterative algorithms is constant in \(l\), which is reflected in the results. In contrast, that of the time-efficient Nystrom method grows with \(k\), which can also be observed from the linear growth of the actual memory consumption.
Table 5 also presents the results of the space-efficient variant of the Nystrom method, where \(\kappa=1\). Its memory consumption is constant, while its runtime scales almost quadratically with \(k\), demonstrating the controllability of the tradeoff between speed and memory consumption expected from Table 1.
### Robustness of the Nystrom method
The Nystrom method has two parameters: \(\rho\) for numerical stability and \(k\) for the matrix rank. Table 6 shows the effect of these configurations on the data reweighting task using WideResNet 28-2 on long-tailed CIFAR-10 with an imbalance factor of 50, over \(\rho\in\{0.01,0.1,1.0\}\) and
\begin{table}
\begin{tabular}{l c c} \hline \hline Task & 1-shot & 5-shot \\ \hline Conjugate gradient & \(0.96\pm 0.00\) & \(0.98\pm 0.00\) \\ Neumann series & \(0.91\pm 0.00\) & \(0.97\pm 0.00\) \\ Nyström method & \(0.95\pm 0.00\) & \(0.98\pm 0.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracy of the meta learning task on the Omniglot dataset. The Nyström method shows comparable performance to conjugate gradient.
Figure 3: Validation loss curves of implicit differentiation methods with different configurations.
\(k\in\{5,10,20\}\). The results differ only marginally, _i.e._, the proposed method is robust to the choice of configurations, which is a favorable property in practical applications. Figure 4 compares the validation curves in the weight-decay optimization of logistic regression using the Nystrom method with different \(k\). Again, the differences among the curves are marginal, emphasizing the robustness of the Nystrom method.
These results suggest that \(k=5\) may be sufficient for practically sized problems; this setting is faster than the other methods while consuming only twice the memory (Table 5). Also notice that, throughout various experiments including HPO and meta learning, the proposed method works consistently, unlike the other methods, which failed on some tasks. This indicates that the Nystrom method may also be robust to the type of problem. These properties are appealing for practical use cases; that is, the Nystrom method may need minimal effort for "_hyper_ hyperparameter optimization."
## 6 Conclusion and Discussion
This paper introduced an approximated implicit differentiation method for gradient-based bilevel optimization using the Nystrom method. The key idea was to exploit the low-rank property of Hessian of neural networks by the Nystrom method and use the Woodbury matrix identity for fast and accurate computation of inverse Hessian vector product in hypergradient. The proposed method scaled to large-scale problems and was applicable to hyperparameter optimization and meta learning. Empirically, the approach was robust to configurations and about two times faster than iterative approximation methods.
Although hyperparameter optimization is crucial in machine learning, especially in deep learning, traditional hyperparameter optimization is costly and emits a substantial amount of CO2 (Strubell et al., 2020). Contrarily, gradient-based hyperparameter optimization is efficient and may help alleviate this issue. Since the proposed method is fast,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{\(\rho\)} \\ & & 0.01 & 0.1 & 1.0 \\ \cline{3-5} & 5 & \(0.79\pm 0.01\) & \(0.78\pm 0.01\) & \(0.79\pm 0.01\) \\ \(k\) & 10 & \(0.79\pm 0.01\) & \(0.78\pm 0.01\) & \(0.78\pm 0.01\) \\ & 20 & \(0.78\pm 0.02\) & \(0.78\pm 0.01\) & \(0.79\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: The effect of \(\rho\) and \(k\) of the Nyström method on the data reweighting task. Test accuracy is reported. The baseline without outer optimization yields test accuracy of \(0.75\pm 0.03\). These results indicate the robustness of the proposed method to configurations.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Imbalanced factor & 200 & 100 & 50 \\ \hline Baseline & \(0.62\pm 0.06\) & \(0.67\pm 0.13\) & \(0.74\pm 0.08\) \\ Conjugate gradient & \(0.63\pm 0.06\) & \(0.70\pm 0.05\) & \(0.78\pm 0.02\) \\ Neumann series & \(0.60\pm 0.09\) & \(0.73\pm 0.01\) & \(0.79\pm 0.01\) \\ Nyström method & \(0.66\pm 0.02\) & \(0.73\pm 0.02\) & \(0.79\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test accuracy of the data reweighting task on the long-tailed CIFAR-10 datasets. Baseline indicates training without outer optimization. The Nyström method achieves consistently favorable results.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & Setting & Runtime & Peak memory \\ \hline Conjugate gradient (Pedregosa, 2016) & \(l=5\) & 0.44 & 2.46 \\ & \(l=10\) & 0.83 & 2.46 \\ & \(l=20\) & 1.68 & 2.46 \\ \hline Neumann series (Lorraine et al., 2020) & \(l=5\) & 0.40 & 2.39 \\ & \(l=10\) & 0.75 & 2.39 \\ & \(l=20\) & 1.48 & 2.39 \\ \hline Nyström method (ours) & \(k=5\) & 0.24 & 4.66 \\ (time efficient) & \(k=10\) & 0.33 & 8.15 \\ & \(k=20\) & 0.54 & 15.1 \\ \hline Nyström method (ours) & \(k=5\) & 3.11 & 1.94 \\ (space efficient) & \(k=10\) & 10.7 & 1.94 \\ & \(k=20\) & 41.0 & 1.94 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average runtime speed and peak memory consumption for hypergradient computation in data reweighting task over 10 runs.
scalable, robust, and applicable to a wide range of tasks, it may provide a reliable way for researchers and practitioners to introduce efficient bilevel optimization.
## Acknowledgments
We thank anonymous reviewers for constructive comments to improve the manuscript. R.H. was supported by JST, ACT-X Grant Number JPMJAX210H, and M.Y. was supported by MEXT KAKENHI Grant Number 20H04243. We used computational resources of "mdx: a platform for the data-driven future."
|
2301.08168 | Boundary Chaos: Exact Entanglement Dynamics | We compute the dynamics of entanglement in the minimal setup producing
ergodic and mixing quantum many-body dynamics, which we previously dubbed {\em
boundary chaos}. This consists of a free, non-interacting brickwork quantum
circuit, in which chaos and ergodicity is induced by an impurity interaction,
i.e., an entangling two-qudit gate, placed at the system's boundary. We compute
both the conventional bipartite entanglement entropy with respect to a
connected subsystem including the impurity interaction for initial product
states as well as the so-called operator entanglement entropy of initial local
operators. Thereby we provide exact results in a particular scaling limit of
both time and system size going to infinity for either very small or very large
subsystems. We show that different classes of impurity interactions lead to
very distinct entanglement dynamics. For impurity gates preserving a local
product state forming the bulk of the initial state, entanglement entropies of
states show persistent spikes with period set by the system size and suppressed
entanglement in between, contrary to the expected linear growth in ergodic
systems. We observe similar dynamics of operator entanglement for generic
impurities. In contrast, for T-dual impurities, which remain unitary under
partial transposition, we find entanglement entropies of both states and
operators to grow linearly in time with the maximum possible speed allowed by
the geometry of the system. The intensive nature of interactions in all cases
cause entanglement to grow on extensive time scales proportional to system
size. | Felix Fritzsch, Roopayan Ghosh, Tomaž Prosen | 2023-01-19T16:58:57Z | http://arxiv.org/abs/2301.08168v3 | **Boundary Chaos: Exact Entanglement Dynamics**
## Abstract
**We compute the dynamics of entanglement in the minimal setup producing ergodic and mixing quantum many-body dynamics, which we previously dubbed** _boundary chaos_**. This consists of a free, non-interacting brickwork quantum circuit, in which chaos and ergodicity are induced by an impurity interaction, i.e., an entangling two-qudit gate, placed at the system's boundary. We compute both the conventional bipartite entanglement entropy with respect to a connected subsystem including the impurity interaction for initial product states as well as the so-called operator entanglement entropy of initial local operators. Thereby we provide exact results in a particular scaling limit of both time and system size going to infinity for either very small or very large subsystems. We show that different classes of impurity interactions lead to very distinct entanglement dynamics. For impurity gates preserving a local product state forming the bulk of the initial state, entanglement entropies of states show persistent spikes with period set by the system size and suppressed entanglement in between, contrary to the expected linear growth in ergodic systems. We observe similar dynamics of operator entanglement for generic impurities. In contrast, for T-dual impurities, which remain unitary under partial transposition, we find entanglement entropies of both states and operators to grow linearly in time with the maximum possible speed allowed by the geometry of the system. The intensive nature of interactions in all cases causes entanglement to grow on extensive time scales proportional to system size.**
###### Contents
* 1 Introduction
* 1.1 Summary of Results
* 2 Setting
* 2.1 Boundary Chaos Circuit
* 2.2 Entanglement Entropies
* 3 Entanglement Dynamics of Product States
* 3.1 Tensor Network Representation of the Reduced Density Matrix for Product States
* 3.1.1 Initial State
* 3.1.2 Time Evolved State
* 3.1.3 Reduced Density Matrix
* 3.2 Entanglement Dynamics
* 3.2.1 Impurities with a Vacuum State
* 3.2.2 T-dual Impurities
* 3.2.3 Numerical Results for Generic Impurities
* 4 Operator Entanglement Dynamics for Local Operators
* 4.1 Tensor Network Representation of the Reduced Super Density Matrix
* 4.2 Entanglement Dynamics
* 4.2.1 Generic Impurity Interactions
* 4.2.2 T-dual Impurity Interactions
* 5 Conclusion
* A Spectral Statistics of the Circuit
* B Subleading Eigenvalues of Transfer Matrices
## 1 Introduction
One of the hallmark features of quantum mechanics with no counterpart in classical physics is quantum entanglement. With the advent of quantum computing and communication, quantum entanglement has become an important resource to overcome limitations of classical computing [1]. Furthermore, the study of the creation and kinetics of entanglement is by now a well established tool to characterize the complex dynamics of condensed matter systems and the emergence of their thermodynamic description, both theoretically [2, 3] and experimentally [4, 5, 6, 7]. Simultaneously, the growth of the so-called entanglement entropy sets fundamental limits on how such systems can be simulated on classical computers [8, 9, 10, 11].
A standard protocol to investigate the creation of entanglement in a quantum system is that of a quench, i.e., the time evolution from an initial state with typically little or no entanglement, which is not an eigenstate of the system's evolution operator. A very general feature in such a non-equilibrium situation is the linear growth of entanglement between disjoint subsystems measured by, for example, the von Neumann or Renyi entropies of the reduced density matrix, and their subsequent saturation for finite systems.
This linear growth has been observed in diverse scenarios, including experimental setups with cold atoms [5], and has been explained by different mechanisms. In integrable systems, the linear growth can be traced back to propagating stable quasiparticles [12, 13, 14, 15, 16, 17, 18].
But it has also been shown that the linear growth is ubiquitous even in the absence of stable quasiparticle excitations [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. In spatially local chaotic many-body systems it can be qualitatively deduced from a minimal membrane separating the subsystems [23, 24]. Recently, more rigorous results on the nature of the linear growth of entanglement in chaotic systems have been obtained for random quantum circuits [23, 25, 26] and Floquet systems including, e.g., periodically driven chaotic spin chains [27] and Floquet quantum circuits [28, 29, 30]. In particular, in the latter a dual description of the dynamics under a swapping of space and time allows for rigorous results in the thermodynamic limit [31, 32, 30, 16, 27].
Nevertheless, there are some notable exceptions from the linear growth of entanglement entropies, e.g., in the presence of confined quasiparticles [33] as well as in disordered [34] and many-body localized systems [35, 36, 37, 38, 39], which display logarithmic growth.
An alternative point of view for characterizing the dynamics of many-body systems is provided by the growth of complexity of initially simple operators, for example, local operators, under Heisenberg time evolution. There are various ways to characterize the complexity of the time evolved operator, including out-of-time-ordered correlators, which quantify the scrambling of operators and the growth of their support [40, 41, 42, 43, 44, 45, 46] as well as their Krylov complexity [47, 48, 49, 50]. Moreover, one can study correlations in the time evolved operator shared by disjoint subsystems.
By interpreting operators as states in an enlarged Hilbert space by means of an operator-to-state mapping, this idea can be made concrete by applying the concept of entanglement to the vectorized operator. This leads to the notion of operator entanglement, which was originally introduced to study the entanglement properties of evolution operators or, more generally, quantum channels [51]. In the context of many-body systems this measure can also be used to quantify the growth of complexity of initially simple operators [52, 53]. The latter provides additional insight into the complexity of many-body dynamics and sets limits on the numerical simulation of the Heisenberg time evolution of operators [52, 53, 54, 55, 56].
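To make the operator-to-state mapping concrete, the following NumPy sketch (our own illustration, not code from the literature cited above) vectorizes an operator \(O\) acting on \(n\) qudits and evaluates its second Renyi operator entanglement across a cut after the first \(l+1\) sites.

```python
import numpy as np

def operator_renyi2(O, q, n, l):
    """Second Renyi operator entanglement of O across the cut after site l."""
    v = O.reshape(-1) / np.linalg.norm(O)         # normalized vectorized operator
    v = v.reshape([q] * n + [q] * n)              # row indices, then column indices
    # interleave so that (row_i, col_i) of each site sit next to each other
    order = [k for i in range(n) for k in (i, n + i)]
    v = v.transpose(order).reshape((q * q) ** (l + 1), -1)
    rho = v @ v.conj().T                          # reduced super density matrix
    return -np.log(np.real(np.trace(rho @ rho)))  # R_2 = -ln tr(rho^2)
```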
Previously, this quantity has been studied in various settings, including systems with local solitons [57] as well as integrable systems [52, 58, 59, 53, 56] and conformal field theories [60], where logarithmic growth of operator entanglement entropies was observed. In contrast, in generic chaotic systems entanglement entropies initially grow linearly in time [61, 62, 24] until they eventually saturate. An interesting exception from saturation at late times, the so-called entanglement barrier, occurs for the reduced density matrix of a pure state in systems with short-range interactions [63, 64, 60]. There, after initially growing, entanglement entropies ultimately drop down again until they settle at the value of a weakly entangled thermal state.
In this paper we consider both the entanglement dynamics of product states and the operator entanglement dynamics of local operators in a simple quantum circuit setting, which allows for exact solutions in the limit of large system size \(L\to\infty\). More precisely, we study a free quantum circuit model, with trivial free dynamics, which we perturb at the system's boundary with an entangling two-qudit gate, which we call an _impurity interaction_. This setup was introduced in Ref. [65] and dubbed _boundary chaos_. One might think of such a circuit as a toy model for a free system subject to a local perturbation, which introduces nontrivial scattering to the otherwise free dynamics. Despite its simple nature, the boundary chaos circuit has been shown to be quantum chaotic in the sense of spectral statistics and exhibits ergodic dynamics [65].
Moreover, this setting allows us to analytically integrate out the free part of the dynamics, in a conceptually similar fashion to Poincare maps in classical dynamics [66]. This enables us to provide a simplified tensor network representation of the time evolved reduced density matrix, or super density matrix in the case of operator entanglement. In particular, these networks contain only the impurity interaction and hence contain a factor of \(1/L\) fewer gates. This renders them amenable to efficient numerical contraction in terms of suitable transfer matrices even for very large systems, and to analytical calculations in the thermodynamic limit of system size \(L\to\infty\). Depending on the choice of impurity interaction we obtain either the reduced (super) density matrix or the corresponding Renyi entanglement entropies exactly. For different classes of impurity interactions we find very different entanglement dynamics, including exponentially suppressed (operator) entanglement entropies for most times, accompanied by periodic spikes, as well as maximally fast linear growth of (operator) entanglement. In all cases, however, we find entanglement to grow on extensive time scales \(\sim L\). We summarize our results in more detail in the following section.
### Summary of Results
To discuss our results, let us first introduce some notation. The free part of the boundary chaos circuit is built from swap gates on a chain of qudits, i.e., \(q\)-level systems, of length \(L+1\). Chaos and ergodicity are introduced by placing an impurity interaction, i.e., a non-trivial two-qudit gate \(U\), just at the system boundary. _Remarkably, this is indeed enough to make the system ergodic!_ Namely, spectral fluctuations of the evolution operator coincide with those of appropriate random matrix ensembles, see App. A, and dynamical correlations decay exponentially in time [65]. In this work we use the simple nature of this model to obtain the asymptotics of entanglement dynamics analytically for different classes of impurity interactions, yielding results that seem quite non-intuitive at first glance. We systematically compute the entanglement dynamics of initial product states and of local operators, which provides insight into the complex many-body dynamics both for our simple model and for generic lattice systems. To define entanglement entropies we introduce a bipartition of the system into a subset \(A\) containing the first \(l+1\) qudits, and its complement \(\overline{A}\), containing the remaining \(L-l\) qudits. For this bipartition we compute the dynamics of the reduced (super) density matrix \(\rho_{l}(t)\) and its Renyi entropies \(R_{n}=\ln\mathrm{tr}(\rho_{l}(t)^{n})/(1-n)\), for initial product states and local operators in the scaling limit \(L,t\to\infty\) with \(t/L\) fixed. Depending on the type of impurity interaction we obtain exact results either for fixed subsystem size \(l\) or in the limit \(l\to\infty\). We describe the different classes of impurity interactions and the results obtained for them below.
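Before turning to these classes, we note that for very small systems all quantities introduced above can be evaluated by brute force, which is useful for checking the asymptotic results that follow. The snippet below is a minimal numpy sketch; the function name and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def renyi_entropy(psi, q, l, n=2):
    """n-th Renyi entropy of the first l+1 qudits of a pure state psi
    on a chain of qudits with local dimension q."""
    sites = round(np.log(psi.size) / np.log(q))
    # split into subsystem A (first l+1 sites) and its complement
    m = psi.reshape(q**(l + 1), q**(sites - l - 1))
    rho = m @ m.conj().T                      # reduced density matrix on A
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-14]                       # drop numerical zeros
    return np.log(np.sum(ev**n)) / (1 - n)

# example: a Bell pair between site 0 and site 1 gives R_n = ln(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(renyi_entropy(bell, q=2, l=0))          # ~0.6931 = ln(2)
```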
1. _Product initial state and impurity interactions with a vacuum state:_ These interactions preserve a certain 2-qudit product state\({}^{1}\) \(\left|\circ\circ\right\rangle\). For example, for spin qubits we can take this to be the state with both spins pointing up (\(\left|\circ\right\rangle=\left|\uparrow\right\rangle\)). Starting from initial states of the form \(\left|\bullet\circ\circ\cdots\circ\circ\right\rangle\) with a single localized excitation \(\left|\bullet\right\rangle\) (e.g. \(\left|\downarrow\right\rangle\)) at the boundary, for finite systems, we see persistent revivals of \(R_{n}\) with period given by the system size \(L\), see Fig. 1 below. Footnote 1: A trivial example is a U(1)-symmetric, i.e., particle number conserving, interaction gate. Here, however, we consider more general gates which also involve the transitions \(\left|\bullet\circ\right\rangle\to\left|\bullet\bullet\right\rangle\), \(\left|\circ\bullet\right\rangle\to\left|\bullet\bullet\right\rangle\). This seemingly contradicts the ergodic-like spectral statistics of the evolution operator, as in such a case a monotonic growth of entanglement is expected. However, because our model has the impurity just at the boundary, the initially localized excitation travels completely into the (typically much larger) complement of the considered subsystem and leaves only the vacuum in the subsystem close to the boundary. Only at resonant times, i.e., integer multiples of \(L\), has the excitation traveled ballistically through the system and returned to the impurity. This is the only time at which correlations between the subsystem and its complement can develop. Thus, exclusively at the boundary, the excitation can scatter into higher excited states, which leads to a growth of entanglement. As this process occurs on a time scale proportional to \(L\), entanglement can grow at most on extensive time scales (times proportional to \(L\)). To put it more concretely, we find the reduced density matrix for large system size to be close to the pure state \[\rho_{l}(t)=|\circ\circ\cdots\circ\circ\rangle\langle\circ\circ\cdots\circ\circ|\] (1) for most times. For finite systems, introducing \(\tau=\lfloor t/L\rfloor\) and a remainder \(\delta=t\operatorname{mod}L\) such that \(t=\tau L+\delta\), we observe that Eq. (1) holds up to terms exponentially suppressed as \(\lambda_{0}^{\delta}\) for some \(|\lambda_{0}|<1\), controlled by a subleading eigenvalue of certain transfer matrices. As the Renyi entropy of a pure state vanishes, it is dominated by the subleading term and reads \[R_{n}(t)\sim|\lambda_{0}|^{\delta}.\] (2) This implies that Renyi entropies are exponentially suppressed in \(\delta\) for most times also as \(L\to\infty\). For resonant times, \(t\approx\tau L\), the entanglement entropy is of order one.
2. _Product initial state and T-dual impurity interactions:_ T-dual gates are those two-qudit gates which remain unitary under partial transposition (on a single qudit). Even though the asymptotic reduced density matrix cannot be obtained explicitly, the corresponding Renyi entropies can be computed exactly for large subsystem size \(l\to\infty\). In contrast to the previous case we recover the result expected for fully chaotic systems, given by \[R_{n}(t)=2\tau\ln(q)+\text{const}.\] (3) independent of the Renyi index \(n\), implying a flat entanglement spectrum of \(\sim e^{R_{n}(t)}\) non-zero eigenvalues of \(\rho_{l}(t)\). Noting that \(\tau\sim t/L\), the above equation describes linear growth of \(R_{n}\) with time at the maximum velocity \(\ln(q)/L\) allowed by the system's geometry. The only difference to a spatially homogeneous chaotic system is the additional \(1/L\) factor, which is due to the density \(1/L\) of nontrivial interactions. Moreover, Eq. (3) suggests a staircase structure of the entanglement entropies \(R_{n}(t)\) with steps at integer values of \(t/L\). Such a staircase structure, but with different step height, is also observed for finite subsystems until \(R_{n}(t)\) saturates at late times, see Fig. 2. The saturation value for finite systems, however, depends on the impurity interaction at the boundary.
3. _Product initial state and generic impurity interactions:_ In this case we see a mixture of the above two scenarios, as depicted in Fig. 3. The leading eigenvalue is still \(1\), which leads to a plateau of \(R_{n}\) at fixed \(\tau\), independent of \(L\). However, unlike in the T-dual case, there are subleading transfer matrix eigenvalues \(\lambda_{0}\) which lead to additional structure \(\sim\lambda_{0}^{\delta}\) on top of the plateau.
4. _Local operator and generic impurity interactions:_ The concept of entanglement and entanglement entropies can also be applied to vectorized operators, i.e., using an operator-to-state mapping, where the operators are interpreted as states in an enlarged Hilbert space. For generic impurity interactions the vectorized identity operator \(|\circ\rangle=|\mathds{1}_{q}\rangle\) plays the role of a vacuum state as a consequence of the unitality of Heisenberg time evolution, similar to the case of states and vacuum-preserving impurity interactions. For local operators the role of the excitation is now played by the nontrivial component of the operator. As a consequence of this analogy the corresponding operator entanglement/Renyi entropies show qualitatively similar dynamics, see Fig. 4 below, and the physical intuition remains the same. The reduced super density matrix is given by the operator version of Eq. (1) for most times and corresponds to a pure state. The latter is just the vectorization of the identity operator \(\mathds{1}_{A}\) on the subsystem \(A\).
5. _Local operator and T-dual impurity interactions:_ As was the case for states, the situation changes drastically if we additionally demand T-duality of the impurity interaction. Using similar arguments as for states, we obtain the Renyi entropies for large system and subsystem size exactly. Again we find linear growth of the operator entanglement entropies with time at maximum speed, as \[R_{n}(t)=2\tau\ln\left(q^{2}\right)+\text{const.}\] (4) with the only difference to Eq. (3) being the local Hilbert space dimension \(q^{2}\) instead of \(q\). For finite subsystems, we again find a staircase structure similar to the case of states, see Fig. 5.
We would like to reiterate that for both vacuum-preserving and T-dual impurity interactions, spectral fluctuations of the full circuit show similar properties, e.g. level repulsion, rendering the systems quantum chaotic in the sense of spectral statistics. The dynamics of entanglement, in contrast, is strikingly different: it is either exponentially suppressed for most times, in the case of impurities with a vacuum state, or grows linearly at maximum speed in the T-dual case.
Additionally, while we state the results for initial states in which the localized excitation is placed at the edge of the system (where the interaction operates), in most cases they remain qualitatively valid for an excitation placed anywhere in the lattice. This can be shown easily for all the cases above, except for the operator entanglement with T-dual gates, where the computation becomes complicated and a simple conclusion cannot be drawn.
In what follows, we shall first explain the setting of the problem and the notation used throughout the rest of the work in Sec. 2. Then, we will derive the tensor network representation of the reduced density matrix in Sec. 3, followed by details of the computation of the entanglement dynamics of product states in Sec. 3.2 and of operator entanglement in Sec. 4. Finally, we conclude by discussing implications of our results and possible extensions in Sec. 5. Moreover, in App. A we provide additional insight into the spectral fluctuations in the boundary chaos circuit and in App. B we comment on the subleading part of the transfer matrices' spectra.
## 2 Setting
In this section, we introduce the class of quantum circuits we use to obtain our results. As we shall describe in Sec. 2.1, interactions are introduced only on the boundary and hence we call this a _boundary chaos circuit_. We shall define and briefly discuss the entanglement entropies both for states and operators in Sec. 2.2.
### Boundary Chaos Circuit
We start from the Floquet system generated by a free brickwork quantum circuit on a one-dimensional lattice of size \(L+1\), with sites labelled by \(i\in\{0,1,\ldots,L\}\). We then render the evolution non-trivial by adding a two-site gate acting on sites \(0\) and \(1\). Each site is occupied by a qudit with local Hilbert space given by \(\mathds{C}^{q}\) having canonical (computational) basis \(|\alpha\rangle\) with \(\alpha\in\{0,1,\ldots,q-1\}\). Hence the total Hilbert space \(\mathcal{H}=(\mathds{C}^{q})^{\otimes L+1}\) is of dimension \(N=q^{L+1}\) and the product basis is denoted by \(|\alpha_{0}\alpha_{1}\cdots\alpha_{L}\rangle\). There are two types of local 2-qudit gates: the swap gate \(P\) governing the free evolution and the entangling unitary gate \(U\in\mathrm{U}(q^{2})\), the impurity interaction at the boundary. For the brickwork circuit design the Floquet operator is given by \(\mathcal{U}=\mathcal{U}_{2}\mathcal{U}_{1}\in\mathrm{U}\left(\mathds{C}^{N}\right)\) with
\[\mathcal{U}_{1} =\prod_{i=1}^{\lfloor L/2\rfloor}P_{2i-1,2i} \tag{5}\] \[\mathcal{U}_{2} =U_{0,1}\prod_{i=1}^{\lfloor(L-1)/2\rfloor}P_{2i,2i+1} \tag{6}\]
where \(G_{i,j}\) denotes the unitary gate acting as the 2-qudit gate \(G=U,P\) at sites \(i\),\(j\) and trivially otherwise. Diagrammatically, the circuit can be represented as
\[\mathcal{U}=\text{(diagram omitted)}\tag{7}\]

with its elementary building blocks

\[P=\text{(diagram omitted)}\qquad\text{and}\qquad U=\text{(diagram omitted)}.\tag{8}\]
Here, the wedges indicate the orientation of the impurity interaction and wires carry the \(q\)-dimensional Hilbert space \(\mathds{C}^{q}\).
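For small systems the Floquet operator of Eqs. (5)-(6) can be built explicitly by embedding the two gates into the full Hilbert space. Below is a minimal numpy sketch; the function names and the Haar-like random choice of impurity gate are our own illustrative choices.

```python
import numpy as np

def embed(G, i, L, q):
    """Embed a 2-qudit gate G acting on sites (i, i+1) of a chain of L+1 qudits."""
    return np.kron(np.kron(np.eye(q**i), G), np.eye(q**(L - 1 - i)))

def floquet(U, L, q):
    """Boundary chaos Floquet operator U = U_2 U_1 of Eqs. (5)-(6)."""
    N = q**(L + 1)
    # swap gate P with P|cd> = |dc>
    P = np.eye(q**2).reshape(q, q, q, q).transpose(1, 0, 2, 3).reshape(q**2, q**2)
    U1 = np.eye(N)
    for i in range(1, L // 2 + 1):            # swaps on sites (2i-1, 2i)
        U1 = embed(P, 2*i - 1, L, q) @ U1
    U2 = np.eye(N)
    for i in range(1, (L - 1) // 2 + 1):      # swaps on sites (2i, 2i+1)
        U2 = embed(P, 2*i, L, q) @ U2
    U2 = embed(U, 0, L, q) @ U2               # impurity interaction on sites (0, 1)
    return U2 @ U1

q, L = 2, 4
rng = np.random.default_rng(0)
z = rng.standard_normal((q**2, q**2)) + 1j*rng.standard_normal((q**2, q**2))
Uimp, _ = np.linalg.qr(z)                     # a generic unitary impurity gate
calU = floquet(Uimp, L, q)
assert np.allclose(calU.conj().T @ calU, np.eye(q**(L + 1)))
```

Since the swap gates within each layer act on disjoint pairs of sites, the order of the factors in the products of Eqs. (5)-(6) is immaterial.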
To compute the time evolution of operators in the Heisenberg picture we use a folded picture which introduces a super circuit with larger local Hilbert space dimension \(q^{2}\) [67, 68]. To this end we define the vectorization of an operator by the isomorphism \(\mathrm{End}\left(\mathds{C}^{q}\right)\simeq\mathds{C}^{q^{2}}\), defined via linear extension of
\[\mathrm{End}\left(\mathds{C}^{q}\right)\ni|\alpha\rangle\langle\beta|\mapsto| \alpha\rangle\otimes|\beta\rangle\in\mathds{C}^{q^{2}}. \tag{9}\]
This extends to a vectorization mapping via tensor multiplication, \(\mathrm{End}\left(\mathds{C}^{N}\right)\simeq\mathds{C}^{N^{2}}\). Also note that this mapping is unitary with respect to the Hilbert-Schmidt inner product in \(\mathrm{End}\left(\mathds{C}^{N}\right)\), \(\langle A|B\rangle=\mathrm{tr}\big{(}A^{\dagger}B\big{)}\), and the standard inner product in \(\mathds{C}^{N^{2}}\). Slightly abusing notation, we also choose an orthonormal basis in the space of vectorized operators, \(|\alpha\rangle\) with \(\alpha\in\{0,1,\ldots,q^{2}-1\}\) in \(\mathds{C}^{q^{2}}\), where \(|0\rangle\) is the vectorization of \(\mathds{1}_{q}/\sqrt{q}\in\mathrm{End}\left(\mathds{C}^{q}\right)\) and \(|\alpha\rangle\) for \(\alpha>0\) is the vectorization of a Hilbert-Schmidt normalized, Hermitian operator, which is orthogonal to the identity and hence traceless. Under this mapping, we can cast the Heisenberg time evolution of operators \(A(t)=\mathcal{U}^{-t}A\mathcal{U}^{t}\) in a quantum circuit formulation. This super circuit is built from folded gates \(S=P\otimes P\) and \(W=U^{\mathrm{T}}\otimes U^{\dagger}\in\mathrm{U}(q^{4})\). The circuit \(\mathcal{W}\) is of the same form as \(\mathcal{U}\) but with the
two layers interchanged, i.e., \(\mathcal{W}=\mathcal{W}_{2}\mathcal{W}_{1}\) with
\[\mathcal{W}_{1} =W_{0,1}\prod_{i=1}^{\lfloor(L-1)/2\rfloor}S_{2i,2i+1} \tag{10}\] \[\mathcal{W}_{2} =\prod_{i=1}^{\lfloor L/2\rfloor}S_{2i-1,2i}, \tag{11}\]
where again \(S_{i,j}\) denotes the folded swap gate \(S\) and \(W_{i,j}\) the folded impurity interaction \(W\), acting on sites \(i\), \(j\) and trivially otherwise. Diagrammatically, this can be represented as
\[\mathcal{W}=\text{(diagram omitted)}\tag{12}\]

### Entanglement Entropies

We consider the bipartition of the chain into the subsystem \(A\), containing the first \(l+1\) qudits, and its complement \(\overline{A}\). For a time evolved initial state \(|a_{0}(t)\rangle=\mathcal{U}^{t}\,|a_{0}\rangle\) the reduced density matrix on \(A\) reads

\[\rho_{l}(t)=\mathrm{tr}_{\overline{A}}\left(|a_{0}(t)\rangle\langle a_{0}(t)|\right),\tag{13}\]

from which the Renyi entropies follow as

\[R_{n}(t)=\frac{\ln\mathrm{tr}\left(\rho_{l}(t)^{n}\right)}{1-n}.\tag{14}\]

For pure states on \(A\) the entanglement entropies vanish, while for the fully mixed state one gets \(R_{n}(t)=(l+1)\ln(q)\).
For entanglement of operators, the above definitions remain the same if the operators are viewed as vectors in an enlarged Hilbert space as it is suggested by the vectorization mapping. Hence, the reduced super density matrix for initial operator \(\mathcal{O}\in\mathrm{End}\left(\mathds{C}^{N}\right)\) with vectorization \(|\mathcal{O}\rangle\) is
\[\hat{\rho}_{l}(t)=\mathrm{tr}_{\overline{A}}\left(\mathcal{W}^{ t}\,|\mathcal{O}\rangle\langle\mathcal{O}|\,\mathcal{W}^{-t}\right). \tag{16}\]
As before, for pure states (pure vectorized operators) the entanglement entropies are zero, while for the fully mixed super density matrix one gets \(R_{n}(t)=(l+1)\ln(q^{2})\). The only difference is rooted in the local Hilbert space dimension, \(q\) vs. \(q^{2}\). Note that an analogous notion of operator entanglement can be applied to the evolution operator \(\mathcal{U}\) itself, or to even more general quantum evolutions [51], where it, e.g., characterizes their entangling power [69]. In this work, however, we focus on local operator entanglement [52, 53, 62]. That is, we consider local operators of the form \(a_{0}=a\otimes\mathds{1}_{q}^{\otimes L}\), acting non-trivially as \(a\in\mathrm{End}(\mathds{C}^{q})\) only on the first lattice site. We note that for operators and generic gates, as well as for states with vacuum-preserving gates, the results are independent of where we place the non-trivial operator/excited state (up to a shift in time). For states and T-dual gates we can go further and perform the computation for arbitrary product initial states.
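Both the vectorization (9) and the reduced super density matrix (16) are straightforward to realize numerically for tiny systems. The sketch below uses a Haar-like random global unitary purely as a stand-in for the circuit (for the actual model one would substitute the Floquet operator from the sketch in Sec. 2.1). With the row-major vectorization implied by Eq. (9) one has \(\mathrm{vec}(XAY^{\mathrm{T}})=(X\otimes Y)\,\mathrm{vec}(A)\); the ordering of the factors in the folded gate below is fixed by this convention.

```python
import numpy as np

q, L, l, t = 2, 2, 0, 3                  # 3 sites, subsystem A = site 0
rng = np.random.default_rng(3)
N = q**(L + 1)

vec = lambda M: M.reshape(-1)            # |alpha><beta| -> |alpha> o |beta>, Eq. (9)

# sanity check of the folding identity vec(X A Y^T) = (X o Y) vec(A)
A, X, Y = (rng.standard_normal((q, q)) + 1j*rng.standard_normal((q, q))
           for _ in range(3))
assert np.allclose(vec(X @ A @ Y.T), np.kron(X, Y) @ vec(A))

# Heisenberg evolution of a local operator under a placeholder unitary circuit
Z = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
calU, _ = np.linalg.qr(Z)                # stand-in for the Floquet operator
a = np.array([[0, 1], [1, 0]])           # traceless local operator (Pauli X)
a0 = np.kron(a, np.eye(q**L)) / np.sqrt(N)   # Hilbert-Schmidt normalized a_0
Ut = np.linalg.matrix_power(calU, t)
At = Ut.conj().T @ a0 @ Ut               # a_0(t) in the Heisenberg picture

# reduced super density matrix, Eq. (16): trace out ket and bra indices of A-bar
qA, qB = q**(l + 1), q**(L - l)
T = At.reshape(qA, qB, qA, qB)           # (ket_A, ket_B, bra_A, bra_B)
rho = np.einsum('arbs,crds->abcd', T, T.conj()).reshape(qA**2, qA**2)

ev = np.linalg.eigvalsh(rho)
R2 = -np.log(np.sum(ev[ev > 1e-14]**2))  # operator entanglement entropy R_2
print(R2, "<=", (l + 1)*np.log(q**2))
```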
## 3 Entanglement Dynamics of Product States
In this section we will first provide a tensor network representation of the reduced density matrix for initial product states which allows both for an effective numerical computation even for large system size and for an analytic evaluation. Using this description we will then compute the Renyi entropies for different classes of impurity interactions.
### Tensor Network Representation of the Reduced Density Matrix for Product States
We shall introduce a tensor network representations of the initial and time evolved states in this section first, before moving on to the discussion of the reduced density matrix.
#### 3.1.1 Initial State
Let us begin by choosing two normalized states denoted by \(|a\rangle\) and \(|\circ\rangle\). Without loss of generality, we might choose \(|\circ\rangle=|0\rangle\) as one of the computational basis states. Unless stated otherwise we take \(|a\rangle\) orthogonal to \(|\circ\rangle\). Diagrammatically, these two states are represented by a dot labeled \(a\) and an unlabeled dot, respectively.
We consider initial product states which are homogeneous in the bulk and correspond to \(|a\rangle\) at the boundary. More precisely they are given by
\[|a_{0}\rangle=|a\rangle\otimes|\circ\rangle^{\otimes L}=\text{(diagram omitted)}.\tag{17}\]

#### 3.1.2 Time Evolved State

The time evolved state \(|a_{0}(t)\rangle=\mathcal{U}^{t}\,|a_{0}\rangle\) can be brought into the form of a tensor
network of smaller size (by a factor of \(1/L\) compared to the naive tensor network representation). We provide a short description of the construction here. Let us start by introducing the building blocks of the network. We express time \(t\) as \(t=\tau L+\delta\) for a non-negative integer \(\tau\) and remainder \(\delta\in\{0,1,\ldots,L-1\}\). For finite \(L\), two different scenarios appear during the evolution: the times \(t\) with \(l/2<\delta\) and \(l/2<L-\delta\) are referred to as non-resonant, and the remaining \(t\) are called resonant. For resonant times \(t/L\) differs from the closest integer by less than \(l/(2L)\). Moreover, the tensor network is composed of the local 2-qudit gates \(V=UP\), with \(U\) and \(P\) as in Eq. (8). We depict the gate \(V\) and its Hermitian adjoint as
\[V=\text{(diagram omitted)},\qquad V^{\dagger}=\text{(diagram omitted)}\tag{18}\]
where unitarity of \(V\) diagrammatically reads
\[\text{(diagram omitted)}\tag{19}\]
The initial and final free dynamics for lattice sites in the bulk is taken into account by combining the action of the swap gates which, for a given time \(t\), are not connected to the boundary in the forward or backward time direction into a global permutation of lattice sites. To this end we denote by \(\mathds{P}:S_{L}\rightarrow\mathrm{U}\left((\mathds{C}^{q})^{\otimes L}\right)\) the unitary representation of the symmetric group \(S_{L}\) on \(L\) elements which permutes tensor factors. In other words, \(\mathds{P}_{\sigma}\) acts as \(\mathds{P}_{\sigma}\left(\bigotimes_{i=1}^{L}|\alpha_{i}\rangle\right)=\bigotimes_{i=1}^{L}|\alpha_{\sigma^{-1}(i)}\rangle\) on the canonical product basis. This is diagrammatically represented as
\[\mathds{P}_{\sigma}=\text{(diagram omitted)}\tag{20}\]
Of particular importance for our construction are the permutations \(\sigma_{\delta}\in S_{L}\) for \(\delta\in\mathds{Z}\) given by
\[\sigma_{\delta}(x)=\big{[}\text{Min}\big{\{}2(x+\delta)+1,2L-2(x+\delta) \big{\}}\big{]}\bmod L \tag{21}\]
for \(x\in\{1,2,\ldots,L\}\). The tensor network representation of \(|a_{0}(t)\rangle\) is given by (see Ref. [65] for details)
\[|a_{0}(t)\rangle=\text{(diagram omitted)}\tag{22}\]
For our choice of initial state \(\left(\mathds{1}_{Q}\otimes\mathds{P}_{\sigma_{0}}^{-1}\right)|a_{0}\rangle=|a_{0}\rangle\), and we can replace \(\mathds{P}_{\sigma_{0}}^{-1}\) by the identity. A similar representation can be obtained for \(\langle a_{0}(t)|\), in which the appropriately oriented adjoint gate \(V^{\dagger}\) enters. Intuitively, in the above network evolution in the time-like variable \(\tau\), i.e., in the vertical direction, describes scattering of excitations into the system with trivial free dynamics in between. Hence columns of the network describe scattering events of this type originating from impurity interactions which differ by \(L\) layers of the original circuit \(\mathcal{U}^{t}\) obtained from Eq. (7). In a dual picture one might think of contracting the network in the horizontal spatial direction, which corresponds to scattering of excitations along the boundary. Consequently, rows of the tensor network (22) describe collective scattering events along the boundary from impurity interactions in \(L\) subsequent layers of \(\mathcal{U}^{t}\). Given this interpretation, the physical time variable \(t\) runs along a helix through the network, which gives the tensor network its helix-like topology.
#### 3.1.3 Reduced Density Matrix
We now obtain the representation of the reduced density matrix from Eq. (22). We focus on the simpler case of non-resonant times. Expanding the reduced density matrix in the appropriate computational basis,
\[\rho_{l}(t)=\sum_{\alpha_{0},\ldots,\alpha_{l}=0}^{q-1}\sum_{\beta_{0},\ldots,\beta_{l}=0}^{q-1}\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{ l}}(t)\left|\alpha_{0}\cdots\alpha_{l}\right\rangle\left\langle\beta_{0}\cdots \beta_{l}\right|, \tag{23}\]
we can diagrammatically represent it as,
\[\text{(diagram omitted)}\tag{24}\]

\[\rho_{l}(t)=\text{(diagram omitted)}\tag{25}\]
where \(\alpha_{0},\ldots,\alpha_{l}\) and \(\beta_{0},\ldots,\beta_{l}\) represent the output and input legs of \(\rho_{l}(t)\), respectively. For a more convenient depiction of the diagrams we rotated the tensor network (22) by \(90^{\circ}\). In order not to complicate the diagrams, we show them for fixed values of \(L\), \(l\) and \(t\). To obtain Eq. (24), we have used \(\left(\mathds{1}_{Q}\otimes\mathds{P}_{\sigma_{0}}^{-1}\right)|a_{0}\rangle=|a_{0}\rangle\) and simplified the lowest row of the tensor network (22) (similarly for \(\langle a_{0}(t)|\)). Finally, Eq. (25) follows from the definition of \(\sigma_{\delta}\) and the unitarity of \(\mathds{P}_{\sigma_{\delta}}\). Moreover, we define \(l_{1}=\lfloor l/2\rfloor\) and \(l_{2}=l-l_{1}\). In the diagram, we highlight the parts not directly connected to the in- and output legs of the reduced density matrix with a rose shade, while the connected parts are denoted by a turquoise shade. The importance of this distinction will become clear in what follows.
To get an explicit expression for the reduced density matrix, we introduce different transfer matrices, which correspond to the columns of the tensor network (25). Hence, the transfer matrices act in the spatial direction corresponding to the vertical direction in Eq. (25) (and to the horizontal \(x\)-direction in Eq. (22)). Conceptually, this might be thought of as a dual description of the dynamics after a space-time swap, which was recently used in related contexts [16, 27, 30, 31, 32]. Formally, we define transfer matrices \(\mathcal{T}_{\tau}\) and \([\mathcal{T}_{\tau}]_{\beta}^{\alpha}:\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\rightarrow\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\) for \(\tau\geq 0\), as well as \([\mathcal{T}_{\tau}]_{\beta_{0}\beta_{1}}^{\alpha_{0}\alpha_{1}}\) and \([\mathcal{A}_{\tau}]_{\beta}^{\alpha}:\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\rightarrow\left(\mathds{C}^{q}\right)^{\otimes 2(\tau-1)}\) for \(\tau\geq 1\), as matrix product operators by their respective diagrammatic representations
\[\mathcal{T}_{\tau}=\text{(diagram omitted)}\tag{26}\]
\[[\mathcal{T}_{\tau}]_{\beta}^{\alpha}=\text{(diagram omitted)}\tag{27}\]
\[[\mathcal{T}_{\tau}]_{\beta_{0}\beta_{1}}^{\alpha_{0}\alpha_{1}}=\text{(diagram omitted)}\tag{28}\]
\[[\mathcal{A}_{\tau}]_{\beta}^{\alpha}=\text{(diagram omitted)}\tag{29}\]
We also define \(\mathcal{C}_{a,\tau}:\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\rightarrow\left(\mathds{C}^{q}\right)^{\otimes 2(\tau+1)},\ |\nu\rangle\mapsto|a\rangle\otimes|\nu\rangle\otimes|a\rangle\), which diagrammatically can be expressed as
\[\mathcal{C}_{a,\tau}=\text{(diagram omitted)}\tag{30}\]
Additionally, we introduce \([\mathcal{A}_{\tau}]_{\beta_{0},\beta_{1}}^{\alpha_{0},\alpha_{1}}=[\mathcal{ T}_{\tau}]_{\beta_{0},\beta_{1}}^{\alpha_{0},\alpha_{1}}\) for \(l=1\) and \([\mathcal{A}_{\tau}]_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}: \left(\mathds{C}^{q}\right)^{\otimes 2\tau}\rightarrow\left(\mathds{C}^{q} \right)^{\otimes 2(\tau-1)}\) for \(l\geq 2\) by
\[[\mathcal{A}_{\tau}]_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}=\begin{cases}[\mathcal{T}_{\tau-1}]_{\beta_{l}}^{\alpha_{l}}\cdots[\mathcal{T}_{\tau-1}]_{\beta_{4}}^{\alpha_{4}}\,[\mathcal{T}_{\tau-1}]_{\beta_{2}}^{\alpha_{2}}\,[\mathcal{T}_{\tau}]_{\beta_{0}\beta_{1}}^{\alpha_{0}\alpha_{1}}\cdots[\mathcal{T}_{\tau}]_{\beta_{l-1}}^{\alpha_{l-1}}&l\text{ even}\\ [\mathcal{T}_{\tau-1}]_{\beta_{l-1}}^{\alpha_{l-1}}\cdots[\mathcal{T}_{\tau-1}]_{\beta_{4}}^{\alpha_{4}}\,[\mathcal{T}_{\tau-1}]_{\beta_{2}}^{\alpha_{2}}\,[\mathcal{T}_{\tau}]_{\beta_{0}\beta_{1}}^{\alpha_{0}\alpha_{1}}\cdots[\mathcal{T}_{\tau}]_{\beta_{l}}^{\alpha_{l}}&l\text{ odd.}\end{cases} \tag{31}\]
The operators \(\mathcal{A}_{\tau+1}\) represent the turquoise shaded part of the tensor network (25), while the lower rose shaded part corresponds to \(\mathcal{T}_{\tau+1}^{\delta-l_{2}}\) and the upper rose shaded part corresponds to \(\mathcal{T}_{\tau}^{L-\delta-l_{1}}\). With the above definitions we have
\[\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}(t)=\operatorname{ tr}\left(\mathcal{T}_{\tau}^{L-\delta-l_{1}}\,[\mathcal{A}_{\tau+1}]_{\beta_{0} \cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}\,\mathcal{T}_{\tau+1}^{\delta-l _{2}}\mathcal{C}_{a,\tau}\right) \tag{32}\]
We focus on the limit \(L-\delta,\delta\gg l\), where Eq. (32) can be simplified further, as then the leading eigenvalues of \(\mathcal{T}_{\tau+1}\) and \(\mathcal{T}_{\tau}\) give the dominant contribution. More precisely, we compute \(\lim_{L,t\rightarrow\infty}\rho_{l}(t)\) for fixed \(l\) while \(L\) and \(t\) approach infinity such that \(t/L\rightarrow\tau_{0}\in\mathds{R}\setminus\mathds{Z}\). The latter condition ensures that for sufficiently large \(L\) and \(t\) we need to consider the non-resonant case only. In the above limit the resonant case is relevant only if \(\tau_{0}\in\mathds{Z}\).
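The replacement of large transfer matrix powers by spectral projectors, used repeatedly in the following, can be illustrated with a placeholder matrix with a unique leading eigenvalue \(1\) (the explicit transfer matrices depend on the impurity gate and are not reproduced here; the construction below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(13)
m = 8
S = rng.standard_normal((m, m)) + 1j*rng.standard_normal((m, m))
lam = np.concatenate(([1.0], 0.5*rng.random(m - 1)))  # unique leading eigenvalue 1
Sinv = np.linalg.inv(S)
T = S @ np.diag(lam) @ Sinv                  # placeholder "transfer matrix"

# spectral projector |R><L| of the leading eigenvalue, built from the right
# eigenvector S[:, 0] and the corresponding left eigenvector Sinv[0, :]
P = np.outer(S[:, 0], Sinv[0, :])
assert np.allclose(P @ P, P)                 # projector property
assert np.allclose(np.linalg.matrix_power(T, 60), P)  # T^k -> P for large k
```

For the actual transfer matrices the corresponding left and right eigenvectors are identified below, and the subleading eigenvalue \(\lambda_{0}\) controls the corrections.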
Using the unitarity of the gates \(V\) we can already list some basic properties of the spectrum of the transfer matrices. First note that the transfer matrices are in general not normal, implying a nontrivial Jordan structure and a distinction between left and right eigenvectors. Nevertheless, the transfer matrices \(\mathcal{T}_{\tau}\) are non-expanding [62], such that the leading eigenvalue is at most of modulus \(1\).
The leading eigenvalue of the transfer matrix is in fact equal to 1, which can be seen as follows. We first define the normalized rainbow states \(\ket{r_{\tau}}\in\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\) via
\[\ket{r_{\tau}}=q^{-\frac{\tau}{2}}\sum_{\alpha_{1},\ldots,\alpha_{\tau}=0}^{q-1}\ket{\alpha_{1}\alpha_{2}\cdots\alpha_{\tau}\alpha_{\tau}\cdots\alpha_{2}\alpha_{1}}.\tag{33}\]
By unitarity of the gates one has \(\bra{r_{\tau}}\mathcal{T}_{\tau}=\bra{r_{\tau}}\), and hence \(\bra{r_{\tau}}=\ket{r_{\tau}}^{\dagger}\) is a left eigenvector for eigenvalue \(1\), as can be seen by evaluating the eigenvalue equation diagrammatically. The corresponding right eigenvector, however, cannot be described explicitly. In general, there might be more unimodular eigenvalues. However, all unimodular eigenvalues, and in particular the eigenvalue \(1\), have equal algebraic and geometric multiplicity. The latter follows from observing that a non-trivial Jordan block corresponding to a unimodular eigenvalue would contradict \(\mathcal{T}_{\tau}\) being non-expanding.
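The rainbow states themselves are straightforward to construct explicitly; a small self-contained check (sizes illustrative), including the overlap with the vacuum product state used further below:

```python
import numpy as np

def rainbow(q, tau):
    """Normalized rainbow state |r_tau> of Eq. (33) on 2*tau qudits."""
    r = np.zeros((q,)*(2*tau))
    for a in np.ndindex(*(q,)*tau):
        r[a + a[::-1]] = 1.0                 # pattern a1..at at..a1
    return q**(-tau/2) * r.reshape(-1)

q, tau = 2, 3
r = rainbow(q, tau)
assert np.isclose(r @ r, 1.0)                # <r_tau|r_tau> = 1
vac = np.zeros(q**(2*tau)); vac[0] = 1.0     # |o>^{x 2tau} with |o> = |0>
assert np.isclose(r @ vac, q**(-tau/2))      # fixes the prefactor in Eq. (36)
```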
In any case, the above tensor network representation allows us to numerically study very large systems. The computational complexity to compute the reduced density matrix is linear in \(L\) but exponential in \(\tau\) and \(l\) as the dimensions of the involved matrices go up to \(q^{2(\tau+l+1)}\). However, additional constraints on the impurity interaction can lead to situations in which the reduced density matrix or the corresponding entanglement entropies can be computed analytically. In the following sections we shall use these ideas to compute entanglement growth in the boundary chaos circuit both analytically in the limit of large system size and long times \(L,t\rightarrow\infty\) at fixed \(\tau\). We complement those results by numerical computations in large but finite systems.
### Entanglement Dynamics
In this section, we use Eq. (32) to compute the growth of entanglement for different classes of impurities.
#### 3.2.1 Impurities with a Vacuum State
We start our analysis with a class of impurity interactions which allow for an exact computation of the reduced density matrix in the non-resonant case as \(t,L\rightarrow\infty\). More precisely, we consider impurities which preserve a 2-qudit product state. We call this product state a (local) vacuum state. A trivial physical realization of such impurities, resulting in single-particle dynamics, is given by 2-qubit gates which exhibit a local \(\mathrm{U}(1)\) symmetry, for which, e.g., magnetization is conserved. Hence either of the states \(\ket{00}\) and \(\ket{11}\) gives rise to a local vacuum state. However, in order to obtain a non-trivial dynamics, we consider generic vacuum-preserving gates described below.
Consider a two qudit gate \(U\in\mathrm{U}(q^{2})\) which has an eigenstate of product form \(\ket{\phi}\otimes\ket{\phi}\), i.e.,
\[U\ket{\phi}\otimes\ket{\phi}=\mathrm{e}^{\mathrm{i}\varphi}\ket{\phi}\otimes \ket{\phi}. \tag{34}\]
Hence, \(\ket{\phi}\otimes\ket{\phi}\) can be taken as the local vacuum state. The resulting circuit is equivalent (via local unitaries) to a circuit built from \(\mathrm{e}^{\mathrm{i}\varphi}U_{0}\), where \(U_{0}\) is block diagonal, i.e., \(U_{0}=1\oplus u\)
with \(u\in\mathrm{U}(q^{2}-1)\). As forward and backward time evolution appear symmetrically in the reduced density matrix, we can assume \(\varphi=0\) without loss of generality. We find such systems to be quantum chaotic in the spectral sense: numerically, the circuit \(\mathcal{U}\) built from such gates exhibits level repulsion for generic choices of \(u\), see Fig. 6 in App. A. Finally, to simplify notation, after a potential change of the local basis we write \(\ket{\phi}=\ket{0}=\ket{\circ}\) and denote the vacuum state by \(\ket{\circ\circ}\).
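Such vacuum-preserving gates are easy to sample for numerical experiments; a minimal sketch, assuming Haar-random \(u\) (the helper functions are our own, not taken from Ref. [65]):

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-random unitary from the QR decomposition of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def vacuum_preserving_gate(q, rng):
    """U_0 = 1 (+) u with u in U(q^2 - 1), preserving the vacuum |oo> = |00>."""
    U0 = np.zeros((q**2, q**2), dtype=complex)
    U0[0, 0] = 1.0
    U0[1:, 1:] = haar_unitary(q**2 - 1, rng)
    return U0

rng = np.random.default_rng(7)
U0 = vacuum_preserving_gate(2, rng)
vac2 = np.zeros(4); vac2[0] = 1.0
assert np.allclose(U0 @ vac2, vac2)          # Eq. (34) with phi = 0
```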
Spectrum of Transfer Matrices: We shall now obtain the leading part of the spectrum of the transfer matrices built from the gate \(U=U_{0}\) placed at the left end of the circuit. As mentioned before, we focus on the limit in which the leading eigenvalues of \(\mathcal{T}_{\tau}\) give the dominant contribution to Eq. (32). We restrict ourselves to gates \(U\) such that there are no additional unimodular eigenvalues of \(\mathcal{T}_{\tau}\), except for the eigenvalue \(1\) with multiplicity one. We call such gates _completely chaotic_ [62]. Numerically, we find this to be the case for generic choices of \(u\in\mathrm{U}(q^{2}-1)\), see App. B.
To compute the eigenvector corresponding to the leading eigenvalue \(1\), we first observe that \(\mathcal{U}\ket{\circ}^{\otimes L+1}=\ket{\circ}^{\otimes L+1}\) as a consequence of \(U\ket{\circ\circ}=\ket{\circ\circ}\) and \(P\ket{\circ\circ}=\ket{\circ\circ}\). We also have \(V\ket{\circ\circ}=\ket{\circ\circ}\) (and similarly for \(V^{\dagger}\)), which can be diagrammatically expressed as
\[\text{(diagram omitted)}\tag{35}\]
These imply that \(\mathcal{T}_{\tau}\ket{\circ}^{\otimes 2\tau}=\ket{\circ}^{\otimes 2\tau}\), i.e., \(\ket{\circ}^{\otimes 2\tau}\) is a right eigenvector for eigenvalue 1. The projection onto the eigenspace for eigenvalue 1 of \(\mathcal{T}_{\tau}\) consequently reads
\[\mathcal{P}_{\tau}=q^{\frac{\tau}{2}}\left(\ket{\circ}^{\otimes 2\tau} \right)\bra{r_{\tau}} \tag{36}\]
where the prefactor ensures the proper normalization with respect to the left eigenvector \(\bra{r_{\tau}}\) defined in Eq. (33), required for the projector property \(\mathcal{P}_{\tau}^{2}=\mathcal{P}_{\tau}\). We shall compute the asymptotic reduced density matrix using Eq. (36).
Asymptotic Reduced Density Matrix: In the limit of large \(L\), and hence \(L-\delta\gg l_{1}\), we replace \(\mathcal{T}_{\tau}^{L-\delta-l_{1}}\) by \(\mathcal{P}_{\tau}\) in Eq. (32) and obtain,
\[\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}(t)=q^{\frac{\tau }{2}}\bra{r_{\tau}}(\mathcal{A}_{\tau+1})_{\beta_{0}\cdots\beta_{l}}^{\alpha_ {0}\cdots\alpha_{l}}\left(\mathcal{T}_{\tau+1}\right)^{\delta-l_{2}}\left( \ket{a}\otimes\ket{\circ}^{\otimes 2\tau}\otimes\ket{a}\right), \tag{37}\]
where\({}^{2}\) we have used the explicit definition of \(\mathcal{C}_{a,\tau}\). Next, we consider also \(\delta\gg l_{2}\) and replace \(\mathcal{T}_{\tau+1}^{\delta-l_{2}}\) by \(\mathcal{P}_{\tau+1}\) to obtain,
Footnote 2: Strictly speaking the above expression is correct only in the limit \(L\to\infty\) or up to corrections exponentially suppressed with \(L-\delta\) or \(\delta\). For the sake of notational convenience we still write it as an equality. We discuss subleading terms explicitly when appropriate.
\[\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}(t)=q^{\frac{\tau }{2}}\bra{r_{\tau}}(\mathcal{A}_{\tau+1})_{\beta_{0}\cdots\beta_{l}}^{\alpha_ {0}\cdots\alpha_{l}}\ket{\circ}^{\otimes 2\tau+2}, \tag{38}\]
where we have used \(q^{\frac{\tau+1}{2}}\bra{r_{\tau+1}}\left(\ket{a}\otimes\ket{\circ}^{\otimes 2 \tau}\otimes\ket{a}\right)=1\). The invariance of the vacuum state implies
\[\left(\mathcal{A}_{\tau+1}\right)_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0} \cdots\alpha_{l}}\ket{\circ}^{\otimes 2\tau+2}=\left(\prod_{i=0}^{l}\delta_{ \alpha_{i},0}\delta_{\beta_{i},0}\right)\ket{\circ}^{\otimes 2\tau} \tag{39}\]
Combining this with \(q^{\frac{\tau}{2}}\left\langle r_{\tau}\right|\left(\left|\circ\right\rangle^{\otimes 2\tau}\right)=1\) we obtain \(\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}(t)=\prod_{i=0}^{l}\delta_{\alpha_{i},0}\delta_{\beta_{i},0}\). Hence, we get\({}^{3}\)
Footnote 3: Alternatively, we could have replaced \(\mathcal{T}_{\tau+1}\) first, but the subleading contribution would still be the same. This is because replacing \(\mathcal{T}_{\tau+1}\) by \(\mathcal{P}_{\tau+1}\) and subsequently \(\mathcal{T}_{\tau}\) by the projection onto the \(\lambda_{0}\) eigenspace gives zero by Eq. (39) and the biorthogonality of eigenvectors.
\[\rho_{l}(t)=\left(\left|\circ\right\rangle\!\left\langle\circ \right|\right)^{\otimes l+1}+c(\tau,l)\lambda_{0}^{\delta}. \tag{40}\]
Here we explicitly include subleading terms which scale with the subleading eigenvalue \(\lambda_{0}\) of \(\mathcal{T}_{\tau+1}\). The prefactor can in principle be obtained from the left and right eigenvectors corresponding to \(\lambda_{0}\). Further note that, due to the biorthogonality of left and right eigenvectors and Eq. (39), the subleading part of the spectrum of \(\mathcal{T}_{\tau}\) gives rise to contributions exponentially suppressed with \(L\) only, which can safely be ignored in Eq. (40).
Entanglement Entropies and Comparison with Numerics: From Eq. (40), the Renyi entropies follow as
\[R_{n}\left(t\right)\sim\frac{n}{n-1}|\lambda_{0}|^{\delta}, \tag{41}\]
assuming a unique subleading eigenvalue and ignoring possible non-trivial Jordan blocks, as both do not change the result qualitatively. The above implies that the entanglement entropies are exponentially suppressed with \(\delta\). Hence entanglement can be large only when \(\delta\) is small, implying persistent revivals of entanglement entropies with period given by the system size \(L\). This is illustrated in Fig. 1 for the second Renyi entropy. There we show the entanglement entropies obtained from numerically evaluating Eq. (32) for various large system sizes. In particular, we confirm the asymptotic scaling \(|\lambda_{0}|^{\delta}\) in Fig. 1(b). For small system sizes, for which entanglement dynamics can be evaluated directly, i.e., by performing time evolution with the original circuit \(\mathcal{U}\), Eq. (7), we find saturation of the entanglement entropies at late times (not shown). The saturation value, however, is in general not that of a random state given by the Page value [70, 71], but depends on the concrete choice of the gate.
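The prefactor in Eq. (41) can be traced back to a first-order expansion; the following is a short sketch, assuming a unique subleading eigenvalue. Writing \(\rho_{l}(t)=\mathcal{P}+\epsilon C\) with pure \(\mathcal{P}=|p\rangle\langle p|\), \(\epsilon\sim\lambda_{0}^{\delta}\), and \(\mathrm{tr}\,C=0\) enforced by normalization, idempotence of \(\mathcal{P}\) gives

\[\mathrm{tr}\left(\rho_{l}(t)^{n}\right)=1+n\,\epsilon\,\langle p|C|p\rangle+O(\epsilon^{2}),\qquad R_{n}(t)=\frac{\ln\mathrm{tr}\left(\rho_{l}(t)^{n}\right)}{1-n}\approx\frac{n}{n-1}\left|\epsilon\,\langle p|C|p\rangle\right|\sim\frac{n}{n-1}|\lambda_{0}|^{\delta},\]

where each of the \(n\) terms containing a single factor of \(C\) contributes \(\mathrm{tr}\left(\mathcal{P}^{n-1}C\right)=\langle p|C|p\rangle\), and positivity of \(R_{n}\) fixes the sign of the correction.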
#### 3.2.2 T-dual Impurities
Another situation in which the leading part of the spectrum can be described explicitly is given by T-dual impurity interactions at the boundary. In what follows we shall elaborate on the implications of T-duality of the impurity interaction on the entanglement of states.
Spectrum of Transfer Matrices: A 2-qudit gate \(U\in\mathrm{U}(q^{2})\) is called T-dual if its partial transpose \(U^{\mathrm{T}_{1}}\) with respect to the first qudit (and hence also with respect to the second qudit) is unitary as well [72]. A convenient parameterization of T-dual gates is given by [73, 74]
\[U=\left(u_{+}\otimes u_{-}\right)\exp\left(\mathrm{i}J\Sigma_{q^{2}-1} \otimes\Sigma_{q^{2}-1}\right)\left(v_{+}\otimes v_{-}\right). \tag{42}\]
with \(\Sigma_{i}\) the generalized Gell-Mann matrices, \(J\in[0,\pi/4]\) and \(u_{\pm},v_{\pm}\in\mathrm{U}(q)\).
This is an exhaustive parametrization for \(q=2\) but not for \(q>2\) [73]. As a consequence of T-duality, the gate \(V\) becomes dual unitary, i.e., the gate \(\tilde{V}\) which originates from \(V\) by reshuffling of matrix elements (w.r.t. the canonical product basis) according to \(\tilde{V}_{cd}^{ab}=V_{ca}^{db}\) is unitary [73]. This can be diagrammatically expressed as
\[\text{(diagrams omitted)}\tag{43}\]
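For \(q=2\) the generalized Gell-Mann matrix \(\Sigma_{3}\) is the Pauli matrix \(\sigma_{z}\), and both T-duality of \(U\) from Eq. (42) and the resulting dual unitarity of \(V=UP\) can be verified numerically; a sketch (the Haar-sampling helper and all parameter values are our own choices):

```python
import numpy as np

q = 2
rng = np.random.default_rng(11)

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

J = np.pi/4 - 0.05                            # value used in Fig. 2
zz = np.kron([1.0, -1.0], [1.0, -1.0])        # diagonal of sigma_z o sigma_z
core = np.diag(np.exp(1j*J*zz))               # exp(i J Sigma_3 o Sigma_3)
U = (np.kron(haar_unitary(q), haar_unitary(q)) @ core
     @ np.kron(haar_unitary(q), haar_unitary(q)))   # Eq. (42)

# T-duality: the partial transpose on the first qudit remains unitary
UT1 = U.reshape(q, q, q, q).transpose(2, 1, 0, 3).reshape(q**2, q**2)
assert is_unitary(UT1)

# dual unitarity of V = U P: the reshuffled gate Vtilde^{ab}_{cd} = V^{db}_{ca}
P = np.eye(q**2).reshape(q, q, q, q).transpose(1, 0, 2, 3).reshape(q**2, q**2)
Vt = (U @ P).reshape(q, q, q, q).transpose(3, 1, 2, 0).reshape(q**2, q**2)
assert is_unitary(Vt)
```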
Denoting by \(|\circ\rangle\in\mathds{C}^{q}\) an arbitrary normalized state serving as boundary condition for the transfer matrices \(\mathcal{T}_{\tau}\), dual unitarity implies that the rainbow state \(|r_{\tau}\rangle\) is also a right eigenvector of \(\mathcal{T}_{\tau}\) with eigenvalue \(1\), i.e., \(\mathcal{T}_{\tau}\left|r_{\tau}\right\rangle=|r_{\tau}\rangle\). Hence the left and right eigenvectors coincide in this case. In what follows, we consider only those T-dual impurity interactions for which the eigenvalue \(1\) has multiplicity \(1\) and no other unimodular eigenvalues exist\({}^{4}\), i.e., the completely chaotic gates.
Footnote 4: We have confirmed numerically that generically this is the case and that there are no other unimodular eigenvalues for all \(\tau\) (and that there is a finite spectral gap \(1-|\lambda_{0}|>0\) as \(\tau\to\infty\)), see App. B.
Asymptotic Reduced Density Matrix: The above properties allow us to construct the asymptotic form of the reduced density matrix in the non-resonant case (\(L,t\to\infty\), \(t/L\to\tau_{0}\in\mathds{R}\setminus\mathds{Z}\)) for initial states of the form Eq. (17). That is, we start from \(|a\rangle\otimes|\circ\rangle^{\otimes L}\) with \(|a\rangle\in\mathds{C}^{q}\) an arbitrary normalized state (not necessarily orthogonal to \(|\circ\rangle\)). Note that, as \(|r_{\tau}\rangle\) is a right eigenvector of \(\mathcal{T}_{\tau}\) independently of the choice of boundary conditions, the following construction can be applied to arbitrary initial product states, when taking proper care of the action of \(\mathds{1}_{q}\otimes\mathds{P}_{\sigma_{0}}^{-1}\) on the initial state. To simplify the discussion, however, we restrict to the simpler initial states above. The asymptotic reduced density matrix is obtained by replacing powers of \(\mathcal{T}_{\tau}\) with \(|r_{\tau}\rangle\langle r_{\tau}|\) and powers of \(\mathcal{T}_{\tau+1}\) with \(|r_{\tau+1}\rangle\langle r_{\tau+1}|\). This yields
\[\rho_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}(t) =\left\langle r_{\tau+1}\right|(|a\rangle\otimes|r_{\tau}\rangle \otimes|a\rangle)\ \left\langle r_{\tau}\right|(\mathcal{A}_{\tau+1})_{\beta_{0}\cdots\beta_{l} }^{\alpha_{0}\cdots\alpha_{l}}\left|r_{\tau+1}\right\rangle \tag{44}\] \[=q^{-\frac{1}{2}}\left\langle r_{\tau}\right|(\mathcal{A}_{\tau+ 1})_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}\left|r_{\tau+1}\right\rangle \tag{45}\]
Figure 1: Second Rényi entropy for \(q=2\) and \(l=2\) for an impurity interaction with a vacuum state for various system sizes in (a) linear and (b) semi-logarithmic scale. (a) The dashed line corresponds to the maximum entropy given by \((l+1)\ln(q)\). Orange dots depict \(R_{2}\) obtained from a direct computation via Eq. (14). (b) The dash-dotted line illustrates the asymptotic scaling \(|\lambda_{0}|^{\delta}\).
up to subleading terms. In the last equality we have used \(\left\langle r_{\tau+1}\right|(\left|a\right\rangle\otimes\left|r_{\tau}\right\rangle \otimes\left|a\right\rangle)=q^{-\frac{1}{2}}\). A diagrammatic representation of the asymptotic reduced density matrix is given by
\[\rho_{l}(t)=\frac{1}{q^{\tau+1}}\,\text{(diagram omitted)}\tag{46}\]
Unfortunately, unitarity and dual unitarity of \(V\) do not allow us to simplify the reduced density matrix further, except for \(l=0\). However, the Renyi entropies can still be computed in the asymptotic regime of large subsystem size, which is discussed below. Similar to the setting of gates with a vacuum state, subleading terms are suppressed at least as \(\lambda_{0}^{\delta}\). However, numerical investigations, as presented in Fig. 2, seem to indicate that subleading terms are suppressed with system size, i.e., as \(\lambda_{0}^{L}\).
Renyi Entropies and Comparison with Numerics: The asymptotics of the Renyi entropies can be obtained when \(L-\delta\), \(\delta\) and \(l\) are large. Formally, we first consider the simultaneous limit \(L,t\to\infty\), \(t/L\to\tau_{0}\in\mathds{R}\setminus\mathds{Z}\) as described in Sec. 3.1.3, which gives the reduced density matrix derived in Sec. 3.2.2, and afterwards the limit \(l\to\infty\).
The goal is to write down the Renyi entropies in terms of the leading eigenvalues of the transfer matrices. To do so, we express \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\) in a form in which the asymptotic approximation can easily be applied. More precisely, we aim to obtain \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\propto\left\langle\sigma_{\tau}\right|\left(\mathcal{T}_{\tau}^{l}\right)^{\otimes n}\left|\sigma_{\tau}\right\rangle\) for a suitable state \(\left|\sigma_{\tau}\right\rangle\in\left(\left(\mathds{C}^{q}\right)^{\otimes 2\tau}\right)^{\otimes n}\).
To this end, we first define
\[\hat{\rho}_{l}(t)=\text{(diagram omitted)}\tag{47}\]
\[=\text{(schematic diagram omitted)}\tag{48}\]
Note that in Eq. (47) the number of in- and output legs is reduced by one. That is, \(\hat{\rho}_{l}(t)\) is of dimension \(q^{l}\) instead of \(q^{l+1}\). Moreover, only transfer matrices \([\mathcal{T}_{\tau}]_{\beta}^{\alpha}\) enter, in contrast with Eq. (46), in which transfer matrices at size \(\tau+1\) also enter. The second line, Eq. (48), is a schematic representation used to make the diagrams more compact. The green and orange colored blocks correspond to the \(\tau\times l\) blocks built from the gates \(V\) and \(V^{\dagger}\), respectively, with the wedges indicating the orientation of the gates, while the white boxes indicate the boundary conditions given by \(\left|\circ\right\rangle\). The wires corresponding to in- and outgoing legs each carry a Hilbert space of dimension \(q^{l}\). The top and bottom wires are an abbreviation for the unnormalized rainbow states \(q^{\frac{\tau}{2}}\left|r_{\tau}\right\rangle\) and hence carry a Hilbert space of dimension \(q^{\tau}\). The above definitions allow us to rewrite
\[\mathrm{tr}\left(\rho_{l}(t)^{n}\right)=qq^{-n(\tau+1)}\mathrm{tr}\left(\hat{ \rho}_{l}(t)^{n}\right), \tag{49}\]
where a factor \(q^{-n(\tau+1)}\) enters due to the \(n\) normalization constants in Eq. (46) coming from the \(n\) replicas of \(\rho_{l}(t)\). The first factor of \(q\), however, originates from repeatedly contracting the gates \(V\) connected to the output legs \(\alpha_{0}\) and \(\alpha_{1}\) of the \(i\)-th replica with the adjoint gates \(V^{\dagger}\) connected to the input legs \(\beta_{0}\) and \(\beta_{1}\) of the \((i-1)\)-th replica, using the unitarity of the gates. This removes the dependence of our results on \(\delta\), such that the entanglement entropies will depend only on \(\tau\).
For \(\tau=0\), Eq. (49) can be evaluated exactly as \(\hat{\rho}_{l}(t)=(\left|\circ\right\rangle\!\left\langle\circ\right|)^{ \otimes l}\) even for finite \(l\). This gives the initial entropy as
\[R_{n}\left(t\right)=\ln(q) \tag{50}\]
up to terms exponentially suppressed in \(\delta\), i.e. \(\sim\left|\lambda_{0}\right|^{\delta}\), with \(\lambda_{0}\) being the subleading eigenvalue of \(\mathcal{T}_{1}\). This gives rise to non-trivial initial dynamics of the entanglement entropies for short times for any finite \(l\), see Fig. 2, as well as in the limit \(l\to\infty\).
For \(\tau>0\) the \(n\) replicas entering \(\mathrm{tr}\left(\rho_{l}(t)^{n}\right)\) in Eq. (49) need to be rearranged to proceed further. This is best seen schematically. For \(n=3\) (with an obvious generalization to arbitrary \(n\)) this reads
\[\mathrm{tr}\left(\hat{\rho}_{l}(t)^{3}\right)=\text{(diagram omitted)}\tag{51}\]
\[=\text{(diagram omitted)}\tag{52}\]
The first equality diagrammatically represents the multiplication of subsequent replicas by connecting the input legs of replica \(i\) with the output legs of replica \(i+1\) (see labels in
the bottom left of the boxes), and connecting the legs between replicas \(n\) and \(1\) realizes the trace. The second equality is obtained by rearranging the boxes corresponding to the forward and backward time evolution parts while keeping the lines connecting subsequent boxes intact. Then, each combined block, consisting of the forward block (green) of replica \(i\) and the backward block (orange) of replica \(i-1\), is \(\mathcal{T}_{\tau}^{l}\), i.e.,
\[\mathcal{T}_{\tau}^{l}=\text{(diagram omitted)}\tag{53}\]
The wires connecting the combined blocks on the top and bottom of the network can be viewed as states \(\ket{\mathbf{\sigma}^{\tau}}\in\left(\mathds{C}^{q}\right)^{\otimes 2n\tau}\). To give a proper definition we denote the \(2n\tau\)-periodic shift by \(-\tau\) in \(S_{2n\tau}\) by \(\eta_{-\tau}\), and similar to Sec. 3, the unitary representation of \(S_{2n\tau}\) which permutes the tensor factors in \(\left(\mathds{C}^{q}\right)^{\otimes 2n\tau}\) by \(\mathds{P}\). The states \(\ket{\mathbf{\sigma}^{\tau}}\) are then defined by
\[\ket{\mathbf{\sigma}^{\tau}}=q^{\frac{n\tau}{2}}\mathds{P}_{\eta_{-\tau}}\ket{r_{\tau}}^{\otimes n}\tag{54}\]
\[=\text{(diagram omitted)}\tag{55}\]
where each wire carries the Hilbert space \(\left(\mathds{C}^{q}\right)^{\otimes\tau}\) of dimension \(q^{\tau}\). Evidently, \(\ket{\mathbf{\sigma}^{\tau}}\) is just a shifted version of the \(n\)-fold tensor product of the unnormalized rainbow states, where we shift by "half a replica". Hence Eq. (52) can be phrased as
\[\operatorname{tr}\left(\hat{\rho}_{l}(t)^{n}\right)=\bra{\mathbf{\sigma}^{\tau}} \left(\mathcal{T}_{\tau}^{l}\right)^{\otimes n}\ket{\mathbf{\sigma}^{\tau}} \tag{56}\]
The above expression can be simplified in the limit \(l\to\infty\) by replacing \(\mathcal{T}_{\tau}^{l}\) by the projection onto the leading eigenvalue, which, by the assumption of multiplicity \(1\), is given by \(\mathcal{P}_{\tau}=\ket{r_{\tau}}\bra{r_{\tau}}\). Using the diagrammatic representations of the states, we find
\[\operatorname{tr}\left(\hat{\rho}_{l}(t)^{n}\right)=\bra{\mathbf{\sigma}^{\tau}} \left(\ket{r_{\tau}}\bra{r_{\tau}}\right)^{\otimes n}\ket{\mathbf{\sigma}^{\tau}} =q^{-\tau(n-2)}. \tag{57}\]
Inserting the above result into Eq. (49) finally yields
\[\operatorname{tr}\left(\rho_{l}^{n}(t)\right)=q^{-(n-1)(2\tau+1)} \tag{58}\]
where the subleading terms are exponentially suppressed at least as \(|\lambda_{0}|^{l}\). Consequently the corresponding Renyi entropies read
\[R_{n}\left(t\right)=2\tau\ln\left(q\right)+\ln\left(q\right). \tag{59}\]
For the resonant case, i.e. when \(L,t\to\infty\), such that, \(t/L\to\tau_{0}\in\mathds{Z}\) first and subsequently \(l\to\infty\), a similar computation yields,
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=q^{-(n-1)2\tau} \tag{60}\]
and hence,
\[R_{n}\left(t\right)=2\tau\ln\left(q\right). \tag{61}\]
Note that the computation for the resonant case involves essentially the same steps, but with slightly different intermediate tensor networks.
Even though the limit of large subsystem size is not accessible by numerical simulation, the staircase structure of entanglement entropies suggested by Eq. (59) is clearly seen in Fig. 2(b) for small \(l=2\). The average slope, however, is different from \(2\ln(q)\) as in Eq. (59).
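The overlap underlying Eq. (57) can also be checked by brute force for small parameters; a sketch (values illustrative):

```python
import numpy as np

def rainbow(q, tau):
    """Normalized rainbow state |r_tau> of Eq. (33)."""
    r = np.zeros((q,)*(2*tau))
    for a in np.ndindex(*(q,)*tau):
        r[a + a[::-1]] = 1.0
    return q**(-tau/2) * r.reshape(-1)

q, tau, n = 2, 2, 3
r = rainbow(q, tau)
rn = r
for _ in range(n - 1):
    rn = np.kron(rn, r)                       # |r_tau>^{x n} on 2*n*tau qudits

# |sigma^tau> = q^{n tau/2} P_{eta_{-tau}} |r_tau>^{x n}, Eq. (54):
# output tensor factor i is input factor (i + tau) mod (2 n tau)
m = 2*n*tau
axes = [(i + tau) % m for i in range(m)]
sigma = q**(n*tau/2) * rn.reshape((q,)*m).transpose(axes).reshape(-1)

overlap = (rn @ sigma)**2                     # <sigma|(|r><r|)^{x n}|sigma>
assert np.isclose(overlap, q**(-tau*(n - 2.0)))  # Eq. (57)
```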
Subsystem with one Lattice Site, \(l=0\): Another limit which can be treated exactly is that of the smallest possible subsystem, given by \(l=0\), i.e., the subsystem \(A\) consisting of the first lattice site only. We again consider the limit \(t,L\rightarrow\infty\) and the simpler non-resonant case first. Applying the analysis for T-dual gates to \(l=0\), we obtain the reduced density matrix as given by Eq. (45). Using \(\left(\mathcal{A}_{\tau+1}\right)_{\beta}^{\alpha}\left|r_{\tau+1}\right\rangle=q^{-\frac{1}{2}}\delta_{\alpha,\beta}\left|r_{\tau}\right\rangle\) we get
\[\left(\rho_{l}(t)\right)_{\beta}^{\alpha}=\frac{1}{q}\delta_{\alpha,\beta},\qquad\text{i.e.,}\qquad\rho_{l}(t)=\frac{1}{q}\mathds{1}_{q},\tag{62}\]
which is the infinite temperature state. Consequently, the Renyi entropies read
\[R_{n}\left(t\right)=\ln(q). \tag{63}\]
For the resonant case, a similar computation yields the same result. Moreover, as argued in the previous section, the above result is obtained for \(\tau=0\) as well, but with corrections scaling as \(|\lambda_{0}|^{\delta}\). In particular, after the non-trivial initial dynamics the entanglement entropies saturate at the maximum possible value, as depicted in Fig. 2(a). For any other numerically accessible subsystem size this is not the case, see Fig. 2(b), even for times longer than those shown there.
#### 3.2.3 Numerical Results for Generic Impurities
For impurity interactions falling into neither of the classes discussed above, there is no simple description of the right eigenvectors corresponding to the leading eigenvalue \(1\). Nevertheless, the tensor network representation (25) allows for computing the reduced density matrix for large system size \(L\) but small subsystem size \(l\) numerically. Here we briefly report the numerical results. In Fig. 3 we depict the second Renyi entropy for (a) \(l=0\) and (b) \(l=2\). In both cases the entanglement dynamics resembles a combination of the T-dual case and the case of gates
Figure 2: Second Rényi entropy for \(q=2\) and \(J=\pi/4-0.05\) in Eq. (42) with (a) \(l=0\) and (b) \(l=2\) for a T-dual impurity interaction for various system sizes. (a) The dashed line corresponds to the maximum entropy given by Eq. (63). Orange dots depict \(R_{2}\) obtained from a direct computation via Eq. (14).
with local vacuum states. The former leads to the plateau-like structure characterized by constant \(\tau\), which originates from the contribution of the leading eigenvalue \(1\) of the transfer matrices \(\mathcal{T}_{\tau}\). In contrast, similar to the latter case, we observe corrections from the subleading eigenvalue scaling as \(\left|\lambda_{0}\right|^{\delta}\) on top of the plateaus. Even though we observe saturation of the entanglement entropies for small system sizes at late times, i.e., times longer than those depicted here, the maximum possible value of \((l+1)\ln(q)\) is in general not reached.
## 4 Operator Entanglement Dynamics for Local Operators
In this section we study the entanglement dynamics of local initial operators. In analogy to the case of states we first construct, in Sec. 4.1, a tensor network representation for the reduced super density matrix similar to Eq. (25). This again allows for an exact computation of the asymptotic reduced super density matrix in the limit \(L,t\to\infty\) and subsequently of the operator Renyi entropies. We present this calculation both for generic impurity interactions in Sec. 4.2.1 and for T-dual impurity interactions in Sec. 4.2.2.
### Tensor Network Representation of the Reduced Super Density Matrix
We construct the analog of the tensor network representation (25) for the super density matrix for an initial local operator. With a slight abuse of notation we will use the same symbols and diagrammatic representations in the following sections as we did for the state counterparts in the previous sections, as many constructions and arguments are exactly the same for the operators as for the states. However, there are some notable differences, which are as follows.
Firstly, there are slight differences in the tensor network representations which ultimately originate from the time evolution of states in the Schrödinger picture as opposed to the time evolution of operators in the Heisenberg picture. Those differences are essentially irrelevant for the dynamics of entanglement entropies. The major difference, however, is that the folded
Figure 3: Second Rényi entropy for \(q=2\) with (a) \(l=0\) and (b) \(l=2\) for a generic impurity interaction for various system sizes. (a) The dashed line corresponds to the maximum entropy \((l+1)\ln\left(q\right)\). Orange dots depict \(R_{2}\) obtained from a direct computation via Eq. (14).
gates are unital (see definition below) which leads to additional properties of the relevant transfer matrices. In the following sections we will often drop the adjective 'super' when referring to super operators and super density matrices.
Let us first introduce the relevant notation and relate it to the constructions for states in the previous sections. This will provide us with the tensor network representation of the reduced density matrix. Subsequently, for different choices of impurity interactions we will compute its asymptotic form and derive the corresponding entanglement entropies.
The local Hilbert space is now the space of vectorized operators \(\mathds{C}^{q^{2}}\) of dimension \(q^{2}\) with the Hilbert-Schmidt orthonormal basis \((\ket{\alpha})_{\alpha=0}^{q^{2}-1}\) introduced in Sec. 2.1. We denote the Hilbert-Schmidt normalized vectorized identity \(\mathds{1}_{q}/\sqrt{q}\) by \(\ket{\circ}=\ket{0}\) and choose a Hermitian and traceless Hilbert-Schmidt normalized vectorized operator \(\ket{a}\in\mathds{C}^{q^{2}}\) (being traceless implies \(\bra{\circ}\ket{a}=0\)). We shall depict them diagrammatically in the same way as for states and hence, the corresponding local operator \(\ket{a_{0}}=\ket{a}\otimes\ket{\circ}^{\otimes L}\in\left(\mathds{C}^{q^{2}} \right)^{\otimes L+1}\) is diagrammatically presented exactly as in Eq. (17). We introduce the folded gate \(W=U^{\dagger}\otimes U^{T}\) and \(V=WS\) (which is the folded version of the gate \(V=UP\) in the case of states) and use the same diagrammatic representation (18). The gate \(V\) is again unitary, which is diagrammatically depicted by Eq. (19).
We define permutations \(\sigma_{\delta}\in S_{L}\) for \(\delta\in\mathds{Z}\) given by
\[\sigma_{\delta}(x)=\left[\text{Min}(2(x+\delta),2L-2(x+\delta)+1)\right]\text {mod}\,L \tag{64}\]
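A direct transcription of Eq. (64) into Python is given below; it is a sketch under stated assumptions. In particular, Eq. (64) as printed does not fix whether the shift by \(\delta\) wraps periodically before or after the folding map is applied, nor the site labeling; the reading below (sites labeled \(1,\ldots,L\), shift wrapped first) makes \(\sigma_{\delta}\) a genuine permutation, which the final check confirms.

```python
def sigma(delta, L):
    """sigma_delta of Eq. (64) as a lookup table over sites x = 1, ..., L.

    Assumption: the shift by delta acts periodically on the L sites first,
    and the folding map min(2y, 2L - 2y + 1) is then applied.
    """
    table = {}
    for x in range(1, L + 1):
        y = (x + delta - 1) % L + 1              # periodic shift into {1, ..., L}
        table[x] = min(2 * y, 2 * L - 2 * y + 1)
    return table

for L in (4, 5, 6, 8):
    for delta in range(2 * L):
        assert sorted(sigma(delta, L).values()) == list(range(1, L + 1))  # bijection
```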
\(\mathds{P}\) is again the unitary representation of \(S_{L}\) permuting tensor factors, but now acts on \(\left(\mathds{C}^{q^{2}}\right)^{\otimes L}\). With the above notations, the time evolved local operator is given by Eq. (22), and is diagrammatically represented by the tensor network (24). Keeping in mind the difference in the permutations \(\sigma_{\delta}\), we ultimately obtain a tensor network representation very similar to Eq. (25), given by
\[\rho_{l}(t)=\raisebox{-10.0pt}{\includegraphics[scale=0.5]{fig/l1/l2/l3/l4/l5 }} \tag{65}\]
where \(l_{2}=\lfloor l/2\rfloor\) and \(l_{1}=l-l_{2}\). The difference to Eq. (25) is just that even and odd in- and output legs of the reduced density matrices for sites \(1\) to \(l\) are interchanged. The rows of the
network can again be cast in the form of transfer matrices given by Eqs. (26)-(30), but we redefine \([\mathcal{A}_{\tau}]^{\alpha_{0}\alpha_{1}}_{\beta_{0}\beta_{1}}=[\mathcal{T}_{ \tau-1}]^{\alpha_{1}}_{\beta_{1}}\,[\mathcal{A}_{\tau}]^{\alpha_{0}}_{\beta_{0}}\) for \(l=1\) as well as
\[[\mathcal{A}_{\tau}]^{\alpha_{0}\cdots\alpha_{l}}_{\beta_{0}\cdots\beta_{l}}= \begin{cases}[\mathcal{T}_{\tau-1}]^{\alpha_{l-1}}_{\beta_{l-1}}\cdots[ \mathcal{T}_{\tau-1}]^{\alpha_{1}}_{\beta_{1}}\,[\mathcal{T}_{\tau}]^{\alpha_ {0}\alpha_{2}}_{\beta_{0}\beta_{2}}\,[\mathcal{T}_{\tau}]^{\alpha_{4}}_{\beta_ {4}}\cdots[\mathcal{T}_{\tau}]^{\alpha_{l}}_{\beta_{l}}&l\text{ even}\\ [\mathcal{T}_{\tau-1}]^{\alpha_{l}}_{\beta_{l}}\cdots[\mathcal{T}_{\tau-1}]^{ \alpha_{1}}_{\beta_{1}}\,[\mathcal{T}_{\tau}]^{\alpha_{0}\alpha_{2}}_{\beta_ {0}\beta_{2}}\,[\mathcal{T}_{\tau}]^{\alpha_{4}}_{\beta_{4}}\cdots[\mathcal{T }_{\tau}]^{\alpha_{l-1}}_{\beta_{l-1}}&l\text{ odd}\end{cases} \tag{66}\]
for \(l>1\). From here on the same techniques can be applied to compute entanglement entropies as in the case of states.
### Entanglement Dynamics
Now we shall compute the entanglement dynamics for operators for different kinds of impurity interactions, and highlight the differences from the dynamics of states, where present.
#### 4.2.1 Generic Impurity Interactions
In this section we study the entanglement dynamics for local operators in case of generic (completely chaotic, see below) unitary interactions. This is closely related to the case of gates with a vacuum state in Sec. 3.2.1, as the vectorized identity plays the role of the vacuum state.
In the case of operators the gate \(V\) and its adjoint are unital, as they act by conjugation with the unitaries \(UP\) or \(PU^{\dagger}\) on operators, i.e.,
\[V\ket{\circ\circ}=\ket{\circ\circ}\qquad\text{and}\qquad V^{\dagger}\ket{ \circ\circ}=\ket{\circ\circ}. \tag{67}\]
By taking the adjoint of the above equations, one can see that unitality applies also to \(\bra{\circ\circ}\), corresponding to trace preservation. Diagrammatically, this can be expressed as previously in Eq. (35). Consequently the transfer matrices are unital as well, i.e., \(\mathcal{T}_{\tau}\ket{\circ}^{\otimes 2\tau}=\ket{\circ}^{\otimes 2\tau}\), meaning that \(\ket{\circ}^{\otimes 2\tau}\) is a right eigenvector for eigenvalue one. The above implies that \(\mathcal{T}_{\tau}\) is a unital CP map and that \(\mathcal{T}_{\tau}^{\dagger}\) is a CPTP map.
The corresponding left eigenvector is again the rainbow state \(\ket{r_{\tau}}\in\left(\mathds{C}^{q^{2}}\right)^{\otimes 2\tau}\), which appropriately normalized now reads
\[\ket{r_{\tau}}=q^{-\tau}\sum_{\alpha_{1},\ldots,\alpha_{\tau}=0}^{q^{2}-1}\ket{\alpha_{1}\alpha_{2}\cdots\alpha_{\tau}\alpha_{\tau}\cdots\alpha_{2}\alpha_{1}}. \tag{68}\]
The projection onto the eigenspace corresponding to eigenvalue \(1\) is then given by \(\mathcal{P}_{\tau}=q^{\tau}\ket{\circ\circ\cdots\circ}\langle r_{\tau}|\) with the prefactor ensuring proper normalization of the left and right eigenvector and hence \(\mathcal{P}_{\tau}^{2}=\mathcal{P}_{\tau}\).
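Both the normalization of \(\ket{r_{\tau}}\) and the prefactor \(q^{\tau}\) in \(\mathcal{P}_{\tau}\) can be checked directly. The sketch below is our own concrete construction: it represents the folded local space \(\mathds{C}^{q^{2}}\) in a computational product basis with \(\ket{\circ}\) the normalized vectorized identity, and verifies \(\braket{r_{\tau}}{r_{\tau}}=1\) together with \(\bra{r_{\tau}}\,\ket{\circ}^{\otimes 2\tau}=q^{-\tau}\), the overlap needed for \(\mathcal{P}_{\tau}^{2}=\mathcal{P}_{\tau}\).

```python
import numpy as np

def rainbow(q, tau):
    """|r_tau> of Eq. (68) in (C^{q^2})^{(x) 2 tau}, computational product basis."""
    d = q * q
    r = np.zeros(d ** (2 * tau))
    for alphas in np.ndindex(*([d] * tau)):
        idx = list(alphas) + list(reversed(alphas))
        flat = 0
        for a in idx:
            flat = flat * d + a          # row-major flattening matches np.kron ordering
        r[flat] = 1.0
    return r / q ** tau

q, tau = 2, 2
r = rainbow(q, tau)
vac = np.eye(q).reshape(-1) / np.sqrt(q)      # |o> = vectorized identity / sqrt(q)
vac_all = np.ones(1)
for _ in range(2 * tau):
    vac_all = np.kron(vac_all, vac)           # |o>^{(x) 2 tau}

assert np.isclose(r @ r, 1.0)                 # |r_tau> is normalized
assert np.isclose(r @ vac_all, q ** (-tau))   # <r_tau| o ... o> = q^{-tau}
```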
In the context of operator entanglement we call a generic impurity interaction completely chaotic, if there is no additional linearly independent eigenvector for a unimodular eigenvalue of \(\mathcal{T}_{\tau}\) for any \(\tau\) and when there is a finite spectral gap between eigenvalue \(1\) and the subleading eigenvalue \(\lambda_{0}\). Numerics suggest that this is the generic situation; see App. B. Repeating the same arguments from Sec. 3.2.1 by replacing \(q\) by \(q^{2}\) in intermediate steps yields the reduced density matrix described by Eq. (40) and the entanglement entropies (41). For various
system sizes \(L\) we obtain the second Renyi entropy also numerically by contracting the tensor network (65) and depict it in Fig. 4 for subsystem size \(l+1=3\). There the asymptotic exponential dependence \(|\lambda_{0}|^{\delta}\) is well confirmed for the largest system size (dashed line in (b)) and holds even for moderately large systems \(L>50\) (not shown).
#### 4.2.2 T-dual Impurity Interactions
For the case of T-dual impurity interactions, the entanglement dynamics for local traceless operators acting non-trivially at the boundary can also be treated exactly for large systems and large subsystems. That is, in the limit \(L,t,l\to\infty\), when limits are taken in the order described in Sec. 3.2.2. However, unlike the previous section, the leading part of the spectrum of transfer matrices for folded T-dual impurity interactions is different from the ones studied in Sec. 3.2.2.
Spectrum of Transfer Matrices: The main difference to the case of initial product states lies in the unitality and dual unitality of the folded gate \(V=WS\) in addition to the dual unitarity of the folded gates. This gives rise to additional eigenvectors of \(\mathcal{T}_{\tau}\) for leading eigenvalue 1. Unitality is a general property of the folded gates and was introduced in Sec. 4.2.1 already. On the other hand, dual unitality of the dual folded gate \(\tilde{V}\) is defined akin to the state setting
\[\tilde{V}\ket{\circ\circ}=\ket{\circ\circ},\qquad\tilde{V}^{\dagger}\ket{\circ\circ}=\ket{\circ\circ} \tag{70}\]
and similarly for \(\bra{\circ\circ}\). This can be diagrammatically depicted as
(71)
These properties give rise to \(\tau+1\) linearly independent eigenvectors of \(\mathcal{T}_{\tau}\) for eigenvalue 1 given by [62, 75]
\[\ket{s_{x}}=\ket{\circ}^{\otimes\tau-x}\otimes\ket{r_{x}}\otimes \ket{\circ}^{\otimes\tau-x}\in\left(\mathds{C}^{q^{2}}\right)^{\otimes 2\tau} \tag{72}\]
Figure 4: Second operator Rényi entropy for \(q=2\) and \(l=2\) for a generic impurity interaction, \(a\) being the spin-\(z\) operator, for various system sizes in (a) linear and (b) semi-logarithmic scale. (b) The dash-dotted lines illustrate the asymptotic scaling \(|\lambda_{0}|^{\delta}\).
constructed from the rainbow states, Eq. (33), \(|r_{x}\rangle\) for \(x\in\{1,\ldots,\tau\}\) and \(|\circ\rangle^{\otimes 2\tau}\). In this case, we call the impurity completely chaotic if there are no other linearly independent eigenvectors with unimodular eigenvalue. In what follows, we first consider \(l>0\) and \(\tau>0\) in order to avoid constraints arising from the small size of the tensor networks. We shall discuss the other cases separately later.
One thing to note immediately about the states \(|s_{x}\rangle\) is that they are not orthonormal, as \(\langle s_{x}|s_{y}\rangle=q^{-|x-y|}\). Hence, we need to apply the Gram-Schmidt procedure to obtain an orthonormal set of eigenvectors given by
\[|t_{0}\rangle =|\circ\rangle^{\otimes 2\tau} \tag{73}\] \[|t_{x}\rangle =\frac{q}{\sqrt{q^{2}-1}}\left(|s_{x}\rangle-\frac{1}{q}\,|s_{x-1}\rangle\right)\quad\text{for }x\in\{1,\ldots,\tau\}. \tag{74}\]
Thus the projection onto the eigenvalue \(1\) eigenspace is \(\mathcal{P}_{\tau}=\sum_{x=0}^{\tau}|t_{x}\rangle\langle t_{x}|\). Also note that for T-dual impurity interactions left and right eigenvectors for eigenvalue \(1\) coincide. In particular, as \(|t_{0}\rangle\) is both a left and a right eigenvector, \(\mathcal{T}_{\tau}\) is the vectorization of a unital CPTP map.
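Both the Gram structure \(\langle s_{x}|s_{y}\rangle=q^{-|x-y|}\) and the orthonormality of the vectors (73)-(74) can be confirmed numerically. The following sketch is again our own concrete representation of the folded space (computational product basis, \(\ket{\circ}\) the normalized vectorized identity) and performs the check for \(q=2\), \(\tau=2\):

```python
import numpy as np

q, tau = 2, 2
d = q * q
vac = np.eye(q).reshape(-1) / np.sqrt(q)     # |o>

def rainbow(x):
    """|r_x> as in Eq. (68)."""
    r = np.zeros(d ** (2 * x))
    for alphas in np.ndindex(*([d] * x)):
        idx = list(alphas) + list(reversed(alphas))
        flat = 0
        for a in idx:
            flat = flat * d + a
        r[flat] = 1.0
    return r / q ** x

def kron_power(v, m):
    out = np.ones(1)
    for _ in range(m):
        out = np.kron(out, v)
    return out

def s_state(x):
    """|s_x> = |o>^{tau-x} (x) |r_x> (x) |o>^{tau-x}, Eq. (72)."""
    pad = kron_power(vac, tau - x)
    return np.kron(np.kron(pad, rainbow(x)), pad)

def t_state(x):
    """Gram-Schmidt orthonormalized vectors, Eqs. (73)-(74)."""
    if x == 0:
        return s_state(0)
    return q / np.sqrt(q**2 - 1) * (s_state(x) - s_state(x - 1) / q)

for x in range(tau + 1):
    for y in range(tau + 1):
        assert np.isclose(s_state(x) @ s_state(y), q ** (-abs(x - y)))
        assert np.isclose(t_state(x) @ t_state(y), float(x == y))
```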
Asymptotic Reduced Density Matrix: We now derive the asymptotic reduced density matrix as \(L,t\to\infty\) in the non-resonant case and briefly comment on the resonant case later. The degenerate eigenspace for eigenvalue \(1\) gives rise to a slightly more complex structure of the reduced density matrix. Upon replacing the transfer matrices \(\mathcal{T}_{\tau}^{L-\delta-l_{1}}\) by \(\mathcal{P}_{\tau}\) only the term \(|t_{\tau}\rangle\langle t_{\tau}|\) gives a non-vanishing contribution to the reduced density matrix. For all the other terms the leftmost tensor factor \(\langle\circ|\) in \(\langle t_{x}|\) allows for contracting the leftmost column of the tensor network (65) due to unitality of the folded gate \(V\) and yields a factor of \(\langle\circ|a\rangle=0\). By the same argument only the first two of the four terms
\[|t_{\tau}\rangle\langle t_{\tau}|=\frac{q^{2}}{q^{2}-1}\left(|s_{\tau}\rangle \langle s_{\tau}|-\frac{1}{q}\,|s_{\tau-1}\rangle\langle s_{\tau}|-\frac{1}{ q}\,|s_{\tau}\rangle\langle s_{\tau-1}|+\frac{1}{q^{2}}\,|s_{\tau-1}\rangle \langle s_{\tau-1}|\right) \tag{75}\]
give a non-vanishing contribution to the reduced density matrix. Hence, we can replace \(\mathcal{T}_{\tau}\) by \(\frac{q^{2}}{q^{2}-1}\left(|s_{\tau}\rangle\langle s_{\tau}|-\frac{1}{q}\,|s _{\tau-1}\rangle\langle s_{\tau}|\right)\) and similarly for \(\mathcal{T}_{\tau+1}\). This yields the asymptotic reduced density matrix as
\[\rho_{l}(t)=\tilde{\rho}_{l}^{(\tau)}(t)-\frac{1}{q^{2}}\tilde{\rho}_{l}^{( \tau-1)}(t) \tag{76}\]
where,
\[\left(\tilde{\rho}_{l}^{(\tau)}(t)\right)_{\beta_{0}\cdots\beta_{l}}^{\alpha_ {0}\cdots\alpha_{l}}=\frac{q}{q^{2}-1}\,\langle r_{\tau}|\,(\mathcal{A}_{\tau+ 1})_{\beta_{0}\cdots\beta_{l}}^{\alpha_{0}\cdots\alpha_{l}}\,|r_{\tau+1} \rangle\,. \tag{77}\]
This can be diagrammatically represented as,
\[\tilde{\rho}_{l}^{(\tau)}(t)=\frac{1}{q^{2}-1}\frac{1}{q^{2\tau}} \tag{78}\]
Unfortunately, Eq. (76) cannot be simplified further except for \(l=0\), as was the case for states, which we shall discuss separately in Sec. 4.2.2.
Renyi Entropies for Large Subsystems \(l\to\infty\): Now we shall derive the asymptotics of the Renyi entropies when also the subsystem is large, i.e., we take the limit \(l\to\infty\) in a similar manner as in the case of states. As the asymptotic reduced density matrix, Eq. (76), is the difference of two terms, the computation is more involved than in the case of states. We moreover restrict to \(\tau>0\) in order to avoid additional complications due to small networks. We first sketch the main steps before getting into the details of the computation. There are four main steps needed to obtain the desired result.
1. **Rearranging \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\):** We rewrite \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\) as an alternating sum of terms of the form \(\langle\boldsymbol{\sigma}|\,\mathcal{T}_{\sigma_{n}\sigma_{n-1}}^{l}\otimes \cdots\otimes\mathcal{T}_{\sigma_{1}\sigma_{n}}^{l}\,|\boldsymbol{\sigma}\rangle\) for suitable states \(|\boldsymbol{\sigma}\rangle\), \(\sigma_{i}\in\{\tau-1,\tau\}\) and generalized transfer matrices \(\mathcal{T}_{\sigma_{i}\sigma_{i-1}}\) with spectral properties similar to those of the \(\mathcal{T}_{\tau}\).
2. **Taking the limit \(l\to\infty\):** Upon replacing the generalized transfer matrices by the projection onto their leading eigenvalue \(1\) for large \(l\), most of the terms in the sum above cancel and we obtain \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\propto\langle\boldsymbol{ \sigma}^{\tau}|\,\mathcal{P}_{\tau}^{\otimes n}\,|\boldsymbol{\sigma}^{\tau} \rangle-\langle\boldsymbol{\sigma}^{\tau-1}|\,\mathcal{P}_{\tau-1}^{\otimes n }\,|\boldsymbol{\sigma}^{\tau-1}\rangle\) with the states \(|\boldsymbol{\sigma}^{\tau}\rangle\) similar as for states and the \(\mathcal{P}_{\tau}\) as in the previous section.
3. **Evaluating matrix elements \(\langle\boldsymbol{\sigma}^{\tau}|\,\mathcal{P}_{\tau}^{\otimes n}\,| \boldsymbol{\sigma}^{\tau}\rangle\):** Inserting \(\mathcal{P}_{\tau}=\sum_{x}|t_{x}\rangle\langle t_{x}|\) in the first term all but the term \(|t_{\tau}\rangle\langle t_{\tau}|\) are canceled by \(\mathcal{P}_{\tau-1}\) in the second term and we are left with \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\propto\langle\boldsymbol{ \sigma}^{\tau}|\,(|t_{\tau}\rangle\langle t_{\tau}|)^{\otimes n}\,| \boldsymbol{\sigma}^{\tau}\rangle\).
4. **Computing the overlap \(\langle\boldsymbol{\sigma}^{\tau}|\,(|t_{\tau}\rangle)^{\otimes n}\):** Evaluating \(\langle\boldsymbol{\sigma}^{\tau}|\,(|t_{\tau}\rangle\langle t_{\tau}|)^{ \otimes n}\,|\boldsymbol{\sigma}^{\tau}\rangle\) eventually gives the final result in Eq. (109).
1. Rearranging \(\operatorname{tr}\left(\rho_{l}(t)^{n}\right)\): From Eq. (76) we obtain \[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=\sum_{\boldsymbol{\sigma}\in\{ \tau-1,\tau\}^{n}}\left(\frac{-1}{q^{2}}\right)^{\sharp\boldsymbol{\sigma}} \operatorname{tr}\left(\tilde{\rho}_{l}^{(\sigma_{1})}(t)\tilde{\rho}_{l}^{( \sigma_{2})}(t)\cdots\tilde{\rho}_{l}^{(\sigma_{n})}(t)\right), \tag{79}\] with the \(\tilde{\rho}_{l}^{(\sigma)}\) defined in Eq. (77) and where we define \(\sharp\boldsymbol{\sigma}:=|\{i\in\{1,\ldots,n\}:\sigma_{i}=\tau-1\}|\). For subsequent calculations it is convenient to rewrite this in the form \[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=q^{2}\left(\frac{1}{q^{2}-1} \frac{1}{q^{2\tau}}\right)^{n}\sum_{\boldsymbol{\sigma}\in\{\tau-1,\tau\}^{ n}}\left(-1\right)^{\sharp\boldsymbol{\sigma}}\operatorname{tr}\left(\hat{ \rho}_{l}^{(\sigma_{1})}(t)\hat{\rho}_{l}^{(\sigma_{2})}(t)\cdots\hat{\rho}_{l}^ {(\sigma_{n})}(t)\right), \tag{80}\]
where the first factor \(q^{2}\) arises similarly as in the case of states by (repeatedly) contracting gates \(V\) connected to the output legs \(\alpha_{0}\) and \(\alpha_{2}\) of the \(i\)-th replica with the adjoint gates \(V^{\dagger}\) connected to the input legs \(\beta_{0}\) and \(\beta_{2}\) of the \((i-1)\)-th replica. The second factor comes from the normalization constants (and the prefactor \(1/q^{2}\)) occurring in Eq. (78) (and Eq. (76)). Finally, \(\hat{\rho}_{l}^{(\sigma_{i})}(t)\) is given by the tensor network representation
\[\hat{\rho}_{l}^{(\sigma_{i})}(t)= \tag{81}\]
This is similar to the case of states, but we now additionally indicate the size of the blocks in their respective bottom left corner. This also fixes the dimension of the Hilbert spaces carried by the wires connecting forward and backward blocks to be \(q^{2\sigma_{i}}\). Again the above simplification implies that our subsequent results are independent of \(\delta\).
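Note that the expansion (79) is an exact multinomial identity, valid for arbitrary matrices in place of the \(\tilde{\rho}_{l}^{(\tau)}\) and \(\tilde{\rho}_{l}^{(\tau-1)}\); the following sketch (with random stand-in matrices of our own choosing) verifies it:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
q, n, dim = 2, 3, 5
A = rng.standard_normal((dim, dim))   # stand-in for rho-tilde^(tau)
B = rng.standard_normal((dim, dim))   # stand-in for rho-tilde^(tau-1)

lhs = np.trace(np.linalg.matrix_power(A - B / q**2, n))
rhs = 0.0
for sig in itertools.product((0, 1), repeat=n):    # 1 marks a tau-1 factor
    word = np.eye(dim)
    for s in sig:
        word = word @ (A if s == 0 else B)
    rhs += (-1 / q**2) ** sum(sig) * np.trace(word)
assert np.isclose(lhs, rhs)                         # Eq. (79)
```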
For fixed \(\boldsymbol{\sigma}\in\{\tau-1,\tau\}^{n}\) we can repeat the argument from Sec. 3.2.2 to obtain a similar equation as Eq. (52) by reshuffling the forward and backward parts of subsequent replicas. The resulting tensor networks have the same structure but in the operator case the wires connecting the (reshuffled) replica \(i\) with replica \(i-1\) carry Hilbert spaces whose dimension \(q^{2\sigma_{i}}\) depends on the index \(i\) of the replica. The (reshuffled) replicas can now be described in terms of the generalized transfer matrices \(\mathcal{T}_{\sigma_{i},\sigma_{i-1}}\) acting on \(\left(\mathds{C}^{q^{2}}\right)^{\otimes\sigma_{i}}\otimes\left(\mathds{C}^{q^ {2}}\right)^{\otimes\sigma_{i-1}}\) which are defined by their diagrammatic representation
To complete the reshuffling of replicas leading to the operator version of Eq. (52) we introduce vectorized operators \(|\mathbf{\sigma}\rangle\) which connect the replicas. Again, the state \(|\mathbf{\sigma}\rangle\) will be obtained from the \(n\)-fold tensor product of rainbow states shifted by "half a replica". However, the individual factors now are states in \(\left(\mathds{C}^{q^{2}}\right)^{\otimes 2\sigma_{i}}\) and hence depend on the index \(i\) of the replicas they connect. To make the above precise we first define \(|\mathbf{\sigma}|=2\sum_{i}\sigma_{i}\). We again denote by \(\eta_{-\sigma_{n}}\in S_{|\mathbf{\sigma}|}\) the \(|\mathbf{\sigma}|\) periodic shift by \(-\sigma_{n}\) and by \(\mathds{P}\) the unitary representation of \(S_{|\mathbf{\sigma}|}\) which permutes the tensor factors in \(\left(\mathds{C}^{q^{2}}\right)^{\otimes|\mathbf{\sigma}|}\). We then define \(|\mathbf{\sigma}\rangle\in\left(\mathds{C}^{q^{2}}\right)^{\otimes|\mathbf{\sigma}|}\) by
\[|\mathbf{\sigma}\rangle =q^{\frac{|\mathbf{\sigma}|}{2}}\mathds{P}_{\eta_{-\sigma_{n}}}\left( \left|r_{\sigma_{n}}\right\rangle\otimes\left|r_{\sigma_{n-1}}\right\rangle \otimes\cdots\otimes\left|r_{\sigma_{1}}\right\rangle\right) \tag{85}\] \[= \tag{86}\]
In the above network the wire reaching from left to right carries the Hilbert space \(\left(\mathds{C}^{q^{2}}\right)^{\otimes\sigma_{n}}\) of dimension \(q^{2\sigma_{n}}\) and the inner wires carry Hilbert spaces of dimensions \(d=q^{2\sigma_{n-1}}\), \(q^{2\sigma_{n-2}}\), \(\ldots\), \(q^{2\sigma_{1}}\) (left to right). In particular, for the case where all the \(\sigma_{i}\) are the same, i.e., for
\[\mathbf{\sigma}^{\tau}=(\tau,\tau,\ldots,\tau)\qquad\text{and}\quad\mathbf{\sigma}^{ \tau-1}=(\tau-1,\tau-1,\ldots,\tau-1) \tag{87}\]
we obtain the analog of the states defined in Sec. 3.2.2. Finally we arrive at
\[\operatorname{tr}\left(\hat{\rho}_{l}^{(\sigma_{1})}(t)\hat{\rho}_{l}^{( \sigma_{2})}(t)\cdots\hat{\rho}_{l}^{(\sigma_{n})}(t)\right)=\langle\mathbf{ \sigma}|\,\mathcal{T}_{\sigma_{n}\sigma_{n-1}}^{l}\otimes\mathcal{T}_{\sigma_ {n-1}\sigma_{n-2}}^{l}\otimes\cdots\otimes\mathcal{T}_{\sigma_{1}\sigma_{n}}^ {l}\left|\mathbf{\sigma}\right\rangle. \tag{88}\]
This concludes step 1. The above expression can only be evaluated further in the limit \(l\to\infty\).
2. Taking the limit \(l\to\infty\): As the generalized transfer matrices enter raised to the power \(l\), the above expression can be evaluated in the limit \(l\to\infty\) by replacing the \(\mathcal{T}_{\sigma_{i}\sigma_{i-1}}\) by their leading eigenvalue and the projection onto the corresponding eigenspace.
The \(\mathcal{T}_{\sigma_{i}\sigma_{i-1}}\) are non-expanding, unital, CPTP maps with leading eigenvalue 1. Unitality and (dual) unitarity of the gate \(V\) give rise to \(\min\{\sigma_{i-1},\sigma_{i}\}+1\) linearly independent eigenvectors. For the completely chaotic T-dual impurity interactions considered here, these are the only eigenvectors, since one has \(\operatorname{spec}\left(\mathcal{T}_{\sigma_{1},\sigma_{2}}\right)\subseteq \operatorname{spec}\left(\mathcal{T}_{\tau}\right)\) for any \(\tau\geq\max\{\sigma_{1},\sigma_{2}\}\) as a consequence of (dual) unitality. More precisely, given a right eigenvector \(|\lambda\rangle\) of \(\mathcal{T}_{\sigma_{1},\sigma_{2}}\) with eigenvalue \(\lambda\), the vector \(|\circ\rangle^{\otimes\tau-\sigma_{1}}\otimes|\lambda\rangle\otimes|\circ \rangle^{\otimes\tau-\sigma_{2}}\) is an eigenvector of \(\mathcal{T}_{\tau}\) with the same eigenvalue. Adapting this argument to the eigenvalue 1 for completely chaotic impurity interactions, the projections \(\mathcal{P}_{\sigma_{i},\sigma_{i-1}}\) onto the corresponding eigenspace are given by
\[\mathcal{P}_{\tau\tau} =\mathcal{P}_{\tau} \tag{89}\] \[\mathcal{P}_{\tau\tau-1} =|\circ\rangle\langle\circ|\otimes\mathcal{P}_{\tau-1}\] (90) \[\mathcal{P}_{\tau-1\tau} =\mathcal{P}_{\tau-1}\otimes|\circ\rangle\langle\circ| \tag{91}\]
with \(\mathcal{P}_{\tau}\) the corresponding projection for \(\mathcal{T}_{\tau}\) introduced above. Hence,
\[\operatorname{tr}\left(\hat{\rho}_{l}^{(\sigma_{1})}(t)\hat{\rho}_{l}^{(\sigma _{2})}(t)\cdots\hat{\rho}_{l}^{(\sigma_{n})}(t)\right)=\langle\mathbf{\sigma}|\, \mathcal{P}_{\sigma_{n}\sigma_{n-1}}\otimes\mathcal{P}_{\sigma_{n-1}\sigma_{n- 2}}\otimes\cdots\otimes\mathcal{P}_{\sigma_{1}\sigma_{n}}\left|\mathbf{\sigma}\right\rangle \tag{92}\]
up to terms exponentially suppressed with \(l\).
The above expression is equal for all \(\mathbf{\sigma}\neq\mathbf{\sigma}^{\tau}\) and hence in particular equals the expression for \(\mathbf{\sigma}^{\tau-1}\). To see this, first consider \(\mathbf{\sigma}\in\{\tau-1,\tau\}^{n}\) with not all entries identical. Thus there is \(j\in\{1,\ldots,n\}\) with \(\sigma_{j}=\tau-1\) and \(\sigma_{j-1}=\tau\). A straightforward computation then shows that contracting the projection \(\mathcal{P}_{\tau\sigma_{j-2}}\) with \(\langle\circ|\) and \(|\circ\rangle\) on the left yields the projection \(\mathcal{P}_{\tau-1\sigma_{j-2}}\) acting on a smaller space. Formally, this reads
\[\left(\langle\circ|\otimes\mathds{1}\right)\mathcal{P}_{\tau\sigma_{j-2}} \left(|\circ\rangle\otimes\mathds{1}\right)=\mathcal{P}_{\tau-1\sigma_{j-2}}, \tag{93}\]
where \(\mathds{1}\) denotes the identity on \(\left(\mathds{C}^{q^{2}}\right)^{\otimes\tau-1+\sigma_{j-2}}\). This is obvious for \(\sigma_{j-2}=\tau-1\), and for \(\sigma_{j-2}=\tau\) it follows from writing
\[\mathcal{P}_{\tau\tau}=\mathcal{P}_{\tau}=|t_{\tau}\rangle\langle t_{\tau}|+| \circ\rangle\langle\circ|\otimes\mathcal{P}_{\tau-1}\otimes|\circ\rangle \langle\circ| \tag{94}\]
and noting that
\[\left(\langle\circ|\otimes\mathds{1}\right)|t_{\tau}\rangle\langle t_{\tau}| \left(|\circ\rangle\otimes\mathds{1}\right)=0. \tag{95}\]
From the above properties it follows that \(\langle\mathbf{\sigma}|\,\mathcal{P}_{\sigma_{n}\sigma_{n-1}}\otimes\mathcal{P}_{ \sigma_{n-1}\sigma_{n-2}}\otimes\cdots\otimes\mathcal{P}_{\sigma_{1}\sigma_{n} }\,|\mathbf{\sigma}\rangle=\langle\mathbf{\pi}|\,\mathcal{P}_{\pi_{n}\pi_{n-1}}\otimes \mathcal{P}_{\pi_{n-1}\pi_{n-2}}\otimes\cdots\otimes\mathcal{P}_{\pi_{1}\pi_{ n}}\,|\mathbf{\pi}\rangle\) if \(\pi_{i}=\sigma_{i}\) for \(i\neq j-1\) and \(\pi_{j-1}=\tau-1\).
This argument is best illustrated diagrammatically by
\[\mathbf{\cdots} \tag{96}\] \[= \mathbf{\cdots} \tag{97}\]
where the gray boxes represent the indicated projections with which we replaced \(\mathcal{T}_{\sigma_{j}\sigma_{j-1}}\) in Eq. (92). At the extreme left, we have \(\mathcal{P}_{\sigma_{j}\sigma_{j-1}}=\mathcal{P}_{\tau-1\tau}\). The wires to the left (right) carry the Hilbert space \(\mathds{C}^{d}\) of dimension \(d=q^{2(\tau-1)}\) (\(d=q^{2(\sigma_{j-2})}\)) and for the central thick wires \(d=q^{2(\tau-1)}\), while for the central thin wires \(d=q^{2}\). By repeated use of the above argument it follows that \(\langle\boldsymbol{\sigma}|\,\mathcal{P}_{\sigma_{n}\sigma_{n-1}}\otimes\mathcal{P}_ {\sigma_{n-1}\sigma_{n-2}}\otimes\cdots\otimes\mathcal{P}_{\sigma_{1}\sigma_ {n}}\,|\boldsymbol{\sigma}\rangle=\langle\boldsymbol{\sigma}^{\tau-1}|\,\mathcal{P}_{\tau-1}^{ \otimes n}\,|\boldsymbol{\sigma}^{\tau-1}\rangle\). Finally, using \(\sum_{\boldsymbol{\sigma}\neq\boldsymbol{\sigma}^{\tau}}(-1)^{\sharp\boldsymbol{\sigma}}=\sum_{k=1}^{n }\binom{n}{k}(-1)^{k}=-1\), as follows from the binomial theorem, we simplify Eq. (80) as
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=q^{2}\left(\frac{1}{q^{2}-1} \frac{1}{q^{2\tau}}\right)^{n}\left(\langle\mathbf{\sigma}^{\tau}|\,\mathcal{P}_{ \tau}^{\otimes n}\,|\mathbf{\sigma}^{\tau}\rangle-\langle\mathbf{\sigma}^{\tau-1}|\, \mathcal{P}_{\tau-1}^{\otimes n}\,|\mathbf{\sigma}^{\tau-1}\rangle\right). \tag{98}\]
This concludes the second step.
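The binomial identity used in the last step is easily checked exhaustively (encoding \(\sigma_{i}=\tau-1\) as a 1):

```python
import itertools

for n in range(1, 9):
    total = sum((-1) ** sum(sig)                        # (-1)^{# sigma}
                for sig in itertools.product((0, 1), repeat=n)
                if any(sig))                            # exclude sigma^tau (all zeros)
    assert total == -1
```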
3. Evaluating matrix elements \(\langle\boldsymbol{\sigma}^{\tau}|\,\mathcal{P}_{\tau}^{\otimes n}\,|\boldsymbol{\sigma}^{ \tau}\rangle\): Now we shall show that the second term in Eq. (98) almost completely cancels the first term. To this end we first insert Eq. (94) into the first term. Then a similar argument as sketched in Eq. (97) yields
\[\langle\mathbf{\sigma}^{\tau}|\,\mathcal{P}_{\tau}^{\otimes n}\,|\mathbf{\sigma}^{\tau }\rangle=\langle\mathbf{\sigma}^{\tau}|\,(|t_{\tau}\rangle\langle t_{\tau}|)^{ \otimes n}\,|\mathbf{\sigma}^{\tau}\rangle+\langle\mathbf{\sigma}^{\tau-1}|\, \mathcal{P}_{\tau-1}^{\otimes n}\,|\mathbf{\sigma}^{\tau-1}\rangle\,, \tag{99}\]
where mixed terms in the \(n\)-fold tensor product cancel due to Eq. (95). Clearly, the second term in the above equation is exactly canceled by the second term in Eq. (98). This yields
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=q^{2}\left(\frac{1}{q^{2}-1}\frac{ 1}{q^{2\tau}}\right)^{n}\left|\left\langle\boldsymbol{\sigma}^{\tau}\right| \left(\left|t_{\tau}\right\rangle^{\otimes n}\right)\right|^{2} \tag{100}\]
4. Computing the overlap \(\left\langle\boldsymbol{\sigma}^{\tau}\right|\left(\left|t_{\tau}\right\rangle ^{\otimes n}\right)\): Finally, we are left with the computation of the overlap \(\left|\left\langle\boldsymbol{\sigma}^{\tau}\right|\left(\left|t_{\tau} \right\rangle^{\otimes n}\right)\right|^{2}\). Using the fact that \(\left|t_{\tau}\right\rangle\) coincides with \(\left|r_{\tau-1}\right\rangle\) on all but the leftmost and rightmost tensor factors, the overlap factorizes as
\[\left|\left\langle\boldsymbol{\sigma}^{\tau}\right|\left(\left|t_{\tau} \right\rangle^{\otimes n}\right)\right|^{2}=\left|\left\langle\boldsymbol{ \sigma}^{\tau-1}\right|\left(\left|r_{\tau-1}\right\rangle^{\otimes n} \right)\right|^{2}\left|\left\langle\boldsymbol{\sigma}^{1}\right|\left( \left|t_{1}\right\rangle^{\otimes n}\right)\right|^{2}, \tag{101}\]
where in the last factor \(\left|t_{1}\right\rangle=\frac{q}{\sqrt{q^{2}-1}}\left(\left|r_{1}\right\rangle -\frac{1}{q}\left|\circ\circ\right\rangle\right)\), i.e., the state \(\left|t_{\tau}\right\rangle\) in Eq. (74) for \(\tau=1\). Using the diagrammatic representation of states, the first factor gives \(q^{-(\tau-1)(2n-4)}\). Similarly, for the second factor we obtain
\[\left\langle\boldsymbol{\sigma}^{1}\right|\left(\left|t_{1}\right\rangle \right)^{\otimes n}=\left(\frac{q}{\sqrt{q^{2}-1}}\right)^{n}\left(\left\langle \boldsymbol{\sigma}^{1}\right|\left(\left|r_{1}\right\rangle\right)^{\otimes n }-\left\langle\boldsymbol{\sigma}^{1}\right|\left(\frac{1}{q}\left|\circ\circ \right\rangle\right)^{\otimes n}\right)=\left(q^{2}-1\right)^{1-\frac{n}{2}}, \tag{102}\]
as all the mixed terms in the \(n\)-fold tensor product \(\left(\left|t_{1}\right\rangle\right)^{\otimes n}\) cancel by a similar argument as for deriving Eq. (98). Combining everything we conclude the fourth step by obtaining
\[\left|\left\langle\boldsymbol{\sigma}^{\tau}\right|\left(\left|t_{\tau} \right\rangle^{\otimes n}\right)\right|^{2}=\left(\frac{q^{2}}{\left(q^{2}-1 \right)q^{2\tau}}\right)^{n-2}. \tag{103}\]
This ultimately leads to
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=\left(\frac{q}{(q^{2}-1)q^{2 \tau}}\right)^{2(n-1)} \tag{104}\]
up to terms exponentially suppressed at least as \(\left|\lambda_{0}\right|^{l}\). This gives the Renyi entropy as
\[R_{n}\left(t\right)=2\tau\ln\left(q^{2}\right)-2\ln\left(\frac{q}{q^{2}-1}\right) \tag{105}\]
independent of \(n\) up to terms which vanish as \(l\to\infty\). For the case \(\tau=0\), applying the above line of reasoning gives
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=\left(\frac{1}{(q^{2}-1)}\right) ^{n-1} \tag{106}\]
as the exact result even for finite \(l\). This corresponds to the Renyi entropy
\[R_{n}\left(t\right)=\ln\left(q^{2}-1\right). \tag{107}\]
However, originating from the subleading terms of the asymptotic reduced density matrix, Eq. (76), the subleading terms of the entropies scale as \(\left|\lambda_{0}\right|^{\delta}\) and hence give rise to non-trivial initial dynamics. Finally, for completeness, we mention the corresponding result for the resonant case and \(\tau>1\), since the derivation is similar. We have
\[\operatorname{tr}\left(\rho_{l}(t)^{n}\right)=\left(\frac{q^{2}}{(q^{2}-1)q^{2 \tau}}\right)^{2(n-1)}. \tag{108}\]
This gives the Renyi entropies as
\[R_{n}\left(t\right)=2\tau\ln\left(q^{2}\right)-2\ln\left(\frac{q^{2}}{q^{2}-1}\right) \tag{109}\]
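As a consistency check, the closed forms (105) and (109) indeed follow from (104) and (108) under the standard convention \(R_{n}=\ln\operatorname{tr}\left(\rho_{l}(t)^{n}\right)/(1-n)\) (an assumption on our part), as the following sketch confirms:

```python
import numpy as np

def renyi_from_trace(tr_n, n):
    return np.log(tr_n) / (1 - n)   # R_n convention assumed

q = 2
for tau in (1, 2, 3):
    for n in (2, 3, 4):
        tr_nr = (q / ((q**2 - 1) * q ** (2 * tau))) ** (2 * (n - 1))     # Eq. (104)
        assert np.isclose(renyi_from_trace(tr_nr, n),
                          2 * tau * np.log(q**2) - 2 * np.log(q / (q**2 - 1)))    # Eq. (105)
        tr_r = (q**2 / ((q**2 - 1) * q ** (2 * tau))) ** (2 * (n - 1))   # Eq. (108)
        assert np.isclose(renyi_from_trace(tr_r, n),
                          2 * tau * np.log(q**2) - 2 * np.log(q**2 / (q**2 - 1))) # Eq. (109)
```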
Even though the tensor network (65) allows for computing the reduced density matrix for large system size \(L\) and small subsystem size \(l\), direct numerical simulation fails for large \(l\) as the complexity of the computation grows exponentially with \(l\). Nevertheless, at least the plateau-like structure suggested by Eq. (109), i.e., almost constant entanglement entropy for constant \(\tau\), can be observed for small subsystem size as well. This is depicted in Fig. 5(b) for \(l=1\). Also the non-trivial initial dynamics as predicted by Eq. (107) is confirmed there; see inset. Moreover, we find the entanglement entropy to saturate at the maximum possible value after times \(t=2L\) (\(\tau=2\)) for the example considered here.
Renyi Entropies for Small Subsystems \(l=0\): As mentioned before, another case which allows for exact results is that of minimal subsystem size \(l=0\), for which the subsystem only contains the lattice site \(0\) at the boundary, whereas \(L,t\rightarrow\infty\). In this situation we can obtain an exact expression for the reduced density matrix as described below.
Firstly, for \(\tau\geq 1\) the asymptotic analysis from the discussion earlier in this section applies in the non-resonant case. Hence, the asymptotic reduced density matrix is given by Eq. (76). Then we evaluate Eq. (77) further using \(\left(\mathcal{A}_{\tau+1}\right)_{\beta}^{\alpha}\left|r_{\tau+1}\right\rangle =q^{-1}\delta_{\alpha,\beta}\left|r_{\tau}\right\rangle\) to get
\[\left(\tilde{\rho}_{0}^{(\tau)}(t)\right)_{\beta}^{\alpha}=\frac{1}{q^{2}-1}\delta_{ \alpha,\beta}. \tag{110}\]
Thus, we obtain \(\left(\rho_{0}\right)_{\beta}^{\alpha}\left(t\right)=q^{-2}\delta_{\alpha,\beta}\) and hence
\[\rho_{0}(t)=\frac{1}{q^{2}}\mathds{1}_{q^{2}} \tag{111}\]
Figure 5: Second Rényi entropy for \(q=2\), a T-dual impurity interaction with \(J=\pi/4-0.05\) in Eq. (42), and \(a\) the spin-\(z\) operator with (a) \(l=0\) and (b) \(l=1\) for various system sizes. The dashed lines correspond to (a) Eq. (114) and (b) Eq. (107) as well as the maximum entropy \((l+1)\ln\left(q^{2}\right)\). The insets show a magnification for initial times.
is the infinite temperature state up to corrections proportional to \(|\lambda_{0}|^{L}\).
Secondly, for \(\tau=0\), i.e., \(0\leq t<L\), the reduced density matrix takes the simple form
\[\left(\rho_{0}\right)_{\beta}^{\alpha}\left(t\right)=\left(\mathcal{A}_{1} \right)_{\beta}^{\alpha}\mathcal{T}_{1}^{\delta-1}\left|a\right\rangle\otimes \left|a\right\rangle. \tag{112}\]
For \(\delta\gg 1\) the transfer matrix \(\mathcal{T}_{1}\) can again be replaced by the projection onto the eigenspace for the eigenvalue \(1\). Following the argument for general \(l\), we see that only the terms proportional to \(|r_{1}\rangle\langle r_{1}|\) and \(|\circ\circ\rangle\langle r_{1}|\) give a non-vanishing contribution. The first term gives a contribution \(\propto\mathds{1}_{q^{2}}\) whereas the second term gives a contribution \(\propto|\circ\rangle\langle\circ|\). Collecting both terms we obtain
\[\rho_{0}(t)=\frac{1}{q^{2}-1}\left(\mathds{1}_{q^{2}}-|\circ\rangle\langle \circ|\right), \tag{113}\]
which corresponds to the infinite temperature state restricted to the subspace orthogonal to \(|\circ\rangle\), i.e., of traceless operators. Consequently the corresponding Renyi entropies read
\[R_{n}\left(t\right)=\begin{cases}\ln\left(q^{2}-1\right)&\text{if }\tau=0\\ \ln\left(q^{2}\right)&\text{if }\tau>0\end{cases} \tag{114}\]
and are independent of \(n\). For \(\tau=0\) this coincides with Eq. (106) and gives rise to the same non-trivial initial entanglement dynamics discussed there. In the resonant case, the same results are obtained as in the non-resonant case; for \(t=L\), i.e., \(\tau=1\) and \(\delta=0\), the reduced density matrix and the corresponding entropies again correspond to Eq. (114). In Fig. 5(a) we depict the second Renyi entropy for various system sizes obtained from contracting the tensor network (65) for \(l=0\). The asymptotic form of the entropies is approached fast even for moderately large system sizes. In the inset we additionally show the non-trivial initial entanglement dynamics.
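The \(l=0\) results (111), (113), and (114) are simple enough to check explicitly; a short sketch (with \(|\circ\rangle\) represented as the normalized vectorized identity and the standard Rényi convention assumed) follows:

```python
import numpy as np

q = 2
d = q * q
vac = np.eye(q).reshape(-1) / np.sqrt(q)                   # |o>

rho_early = (np.eye(d) - np.outer(vac, vac)) / (d - 1)     # Eq. (113), tau = 0
rho_late = np.eye(d) / d                                   # Eq. (111), tau > 0

def renyi(rho, n):
    return np.log(np.trace(np.linalg.matrix_power(rho, n)).real) / (1 - n)

for n in (2, 3, 4):
    assert np.isclose(renyi(rho_early, n), np.log(d - 1))  # Eq. (114), tau = 0
    assert np.isclose(renyi(rho_late, n), np.log(d))       # Eq. (114), tau > 0
```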
## 5 Conclusion
We study the entanglement dynamics for both product states and local operators in a minimal model of many-body quantum chaos built from a locally perturbed free quantum circuit. Using a minimal but exact description of time evolution resulting from analytically integrating out the free part of the circuit we obtain tensor network representations of the reduced density matrices. We contract the tensor networks using a transfer matrix approach in spatial direction resulting in a simple form of the reduced density matrices in the limit of large system size \(L\).
Then, depending on the choice of the perturbation, i.e., the impurity interaction at the system's boundary, we either compute the reduced density matrix or the corresponding Renyi entropies exactly. For the gates which exhibit a local vacuum state, the reduced density matrix of an initial product state is close to a pure and hence unentangled state at most times. Similar dynamics is observed in the reduced super density matrix of initially local operators for generic impurity interactions. It is only for times \(t\approx\tau L\) in resonance with system size that entanglement entropies are large in both cases. This results in atypical entanglement dynamics of periodically spiking entanglement entropies, despite the system being chaotic in the sense of spectral statistics. In such chaotic systems entanglement entropies generically
grow linearly in time. Hence our setting provides an example where different notions of many-body quantum chaos, namely random-matrix-like spectral fluctuations and linear growth of entanglement entropies, do not coincide, as is also the case when studying thermalization in the present setting [65].
In contrast, we recover the entanglement dynamics of typical chaotic systems, i.e., linear growth of entanglement entropies, for T-dual impurity interactions when the size of the subsystem is large. This is the case both for initial product states and local operators. More precisely, entanglement grows linearly with \(\tau\), leading to plateaus in the Renyi entropies in between resonant times. The height of the plateaus grows at maximum speed given by \(2\log(d)\), with \(d=q\) or \(q^{2}\), respectively. Hence, as \(\tau\approx t/L\), the speed of entanglement growth is reduced by a factor of \(1/L\) compared to the maximum value, which we attribute to only one gate, i.e., the impurity interaction, of the in total \(L\) gates of the circuit being entangling. One therefore might conjecture that for \(n\) entangling gates one should get a correction \(n/L\) to the maximal possible speed and that the maximum speed is recovered in the spatially homogeneous setting.
Our work hence provides an exact description of the entanglement dynamics in large systems for either arbitrary subsystem size (entanglement of states for gates with vacuum state and operator entanglement for generic gates) or in the limit of infinite subsystem size (T-dual gates). In the latter case our results explain the entanglement dynamics qualitatively even for small subsystems. However, for finite subsystems we are currently not able to address the question of saturation of entanglement entropies at late times. This is due to the exponential scaling of the size of the transfer matrices with \(\tau\), which renders the large-\(\tau\) regime intractable via numerics. Unfortunately, this cannot be computed analytically either, because for finite subsystems the subleading part of the spectrum of the transfer matrices also becomes relevant, for which we lack an analytical description.
Hence, to address the question of saturation, one requires different techniques, e.g., methods based on a dual space-time swapped interpretation as recently introduced in Ref. [30], which is beyond the scope of this work. Also, if one were able to approach longer times, one might be able to study the phenomenon of entanglement barriers for operator entanglement in the boundary-chaos setting.
## Acknowledgements
FF thanks K. Klobas and B. Bertini for insightful discussions.
Funding information: FF would like to acknowledge support from Deutsche Forschungsgemeinschaft (DFG) Project No. 453812159. RG would also like to acknowledge support from Grant No. J1-1698 from the Slovenian Research Agency (ARRS) and UKRI grant EP/R029075/1. TP acknowledges support from research Program P1-0402 of ARRS.
## Appendix A Spectral Statistics of the Circuit
In this appendix we present the level spacing distribution \(p(s)\) for the boundary chaos circuit for the three different classes of impurity interactions - gates preserving a vacuum state,
T-dual gates, and generic gates. The scaled level spacing \(s_{i}=\frac{q^{L+1}}{2\pi}\left(\epsilon_{i+1}-\epsilon_{i}\right)\) is given by the difference of consecutive eigenphases/quasi-energies of the boundary chaos circuit \(\mathcal{U}\) and is normalized to unit mean spacing. For the impurity interaction with a vacuum-preserving gate used in Fig. 1 we depict \(p(s)\) in Fig. 6(a), whereas (b) shows \(p(s)\) for the T-dual impurity interaction from Fig. 5 and (c) shows \(p(s)\) for the generic impurity interaction from Fig. 4. Each agrees well with the random matrix result for the respective symmetry class. For the T-dual case this is the circular orthogonal ensemble (COE) for \(q=2\) and the circular unitary ensemble (CUE) for larger \(q\) (not shown). The other two cases correspond to the CUE for any \(q\). The impurity interactions used for Fig. 2 and Fig. 3 yield similar level spacing distributions and are not shown separately. The correspondence between the level spacing distribution for the boundary chaos circuit and the respective random matrix results clearly indicates that our setting indeed leads to chaotic quantum systems in the sense of spectral statistics.
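For reference, the scaled spacings can be extracted from the circuit's eigenphases along the following lines (a sketch; `U` stands for the full Floquet operator \(\mathcal{U}\) as a dense matrix, constructed elsewhere, and the wrap-around gap across \(\pm\pi\) is ignored):

```python
import numpy as np

def level_spacings(U, q, L):
    """Scaled nearest-neighbor spacings s_i of the eigenphases of U."""
    phases = np.sort(np.angle(np.linalg.eigvals(U)))   # quasi-energies in (-pi, pi]
    gaps = np.diff(phases)                             # consecutive differences
    return q ** (L + 1) / (2 * np.pi) * gaps           # unit mean spacing
```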
## Appendix B Subleading Eigenvalues of Transfer Matrices
Our analysis of entanglement dynamics, both for states and operators, requires subleading eigenvalues \(\lambda\) of the transfer matrices \(\mathcal{T}_{\tau}\) to be gapped from one, i.e., \(|\lambda|<1\). As this is out of scope of a rigorous proof, we resort to extensive numerical studies to confirm this claim. Using Arnoldi iteration in the subspace orthogonal to the eigenspace of the leading eigenvalue 1, we compute the subleading eigenvalue of the transfer matrices for the largest accessible values of \(\tau\). Note that \(\mathcal{T}_{\tau}\) is a non-Hermitian (and in general non-normal) matrix of dimension \(q^{2\tau}\) in the case of states, whereas it is of dimension \(q^{4\tau}\) in the case of operators. We compute the subleading eigenvalue at size \(\tau=9\) for states and \(\tau=5\) for operators for 10000 realizations for qubits \(q=2\) for the different classes of impurity interactions. Here, we sample the generic gates Haar random from U(4), while we choose \(u\) in the gate with a vacuum state, \(U=1\oplus u\), Haar random from U(3). For T-dual gates we fix the interaction \(J=\pi/4-0.05\) in Eq. (42) and choose the local unitaries \(u_{\pm},v_{\pm}\) Haar random from U(2). In Fig. 7(a) we show the distribution of the modulus of the subleading eigenvalue \(|\lambda_{0}|\) for the case of states. For generic impurity interactions we find the probability to drop towards zero when \(|\lambda_{0}|\) approaches 1, indicating a finite gap for random choices of the gate. In contrast, both for T-dual impurity interactions and those with a vacuum state we find the probability to be largest around 1, indicating a finite probability of finding arbitrarily large subleading eigenvalues. Nevertheless, we do not find a single instance where the subleading eigenvalue actually has modulus one and hence there will be at least a small gap for generic choices of the impurity interaction from these classes. A small spectral gap only implies that the limiting entanglement dynamics for \(L\to\infty\) is approached much more slowly. In Fig. 7(b) we additionally show the same data for the transfer matrices from the operator case and find qualitatively very similar behavior as for states.
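Given a transfer matrix \(\mathcal{T}_{\tau}\) as an array `T` (built elsewhere), the subleading eigenvalue can be estimated along the following lines. This is a sketch using SciPy's Arnoldi-based `eigs`; instead of explicitly deflating the known eigenvalue-1 eigenvectors, it simply computes a few largest-modulus eigenvalues and skips the leading ones.

```python
import numpy as np
from scipy.sparse.linalg import eigs

def subleading_modulus(T, n_leading):
    """Modulus |lambda_0| of the subleading eigenvalue of a transfer matrix T.

    n_leading: number of known (unimodular) leading eigenvalues to skip,
    e.g. 1 for generic impurities and tau + 1 for T-dual ones.
    """
    vals = eigs(T, k=n_leading + 3, which='LM', return_eigenvectors=False)
    moduli = np.sort(np.abs(vals))[::-1]   # sort by modulus, largest first
    return moduli[n_leading]
```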
Figure 6: Level spacing distribution \(p(s)\) for (a) impurity interaction with vacuum, (b) T-dual and (c) generic impurity interactions for \(L+1=14\) and \(q=2\). Dashed and dotted black lines correspond to the corresponding distribution for the CUE (a,c) and COE (b) respectively.
Figure 7: Distribution \(p(|\lambda_{0}|)\) of the subleading eigenvalue \(\lambda_{0}\) for (a) states with \(\tau=9\) and (b) operators with \(\tau=5\) from 10000 realizations from different classes of impurity interactions (see legend). |
2308.04259 | Generalized Forgetting Recursive Least Squares: Stability and Robustness
Guarantees | This work presents generalized forgetting recursive least squares (GF-RLS), a
generalization of recursive least squares (RLS) that encompasses many
extensions of RLS as special cases. First, sufficient conditions are presented
for the 1) Lyapunov stability, 2) uniform Lyapunov stability, 3) global
asymptotic stability, and 4) global uniform exponential stability of parameter
estimation error in GF-RLS when estimating fixed parameters without noise.
Second, robustness guarantees are derived for the estimation of time-varying
parameters in the presence of measurement noise and regressor noise. These
robustness guarantees are presented in terms of global uniform ultimate
boundedness of the parameter estimation error. A specialization of this result
gives a bound to the asymptotic bias of least squares estimators in the
errors-in-variables problem. Lastly, a survey is presented to show how GF-RLS
can be used to analyze various extensions of RLS from the literature. | Brian Lai, Dennis S. Bernstein | 2023-08-08T13:49:13Z | http://arxiv.org/abs/2308.04259v3 | # Generalized Forgetting Recursive Least Squares: Stability and Robustness Guarantees
###### Abstract
This work presents generalized forgetting recursive least squares (GF-RLS), a generalization of recursive least squares (RLS) that encompasses many extensions of RLS as special cases. First, sufficient conditions are presented for the 1) Lyapunov stability, 2) uniform Lyapunov stability, 3) global asymptotic stability, and 4) global uniform exponential stability of parameter estimation error in GF-RLS when estimating fixed parameters without noise. Second, robustness guarantees are derived for the estimation of time-varying parameters in the presence of measurement noise and regressor noise. These robustness guarantees are presented in terms of global uniform ultimate boundedness of the parameter estimation error. A specialization of this result gives a bound to the asymptotic bias of least squares estimators in the errors-in-variables problem. Lastly, a survey is presented to show how GF-RLS can be used to analyze various extensions of RLS from the literature.
errors-in-variables, identification, recursive least squares, robustness, stability analysis
## 1 Introduction
Recursive least squares (RLS) is a foundational algorithm in systems and control theory for the online identification of fixed parameters [1, 2, 3]. A property of RLS is that the eigenvalues of the covariance matrix are monotonically decreasing over time and may become arbitrarily small [4, subsection 2.3.2][5], resulting in eventual sluggish adaptation and inability to track time-varying parameters [6, 7]. Numerous extensions of RLS have been developed to improve identification of time-varying parameters, including exponential forgetting [3, 8], variable-rate forgetting [9, 10, 11, 12, 13, 14], directional forgetting [15, 16, 17], resetting [7, 18, 19], and multiple forgetting [20], among others. Hence, we use the general term _forgetting_ to describe the processes in extensions of RLS which break the monotonicity of the covariance matrix.
Furthermore, several general frameworks have been developed which include extensions of RLS as special cases, for example [21] in discrete-time and [22] in continuous-time. The recent work of [23] develops a much more general framework of recursive estimators which still contains RLS extensions as a special case. These frameworks help to unify various RLS extensions and provide overarching analysis. In discussing RLS extensions, we highlight three important points: 1) cost function, 2) stability, and 3) robustness.
#### 1.0.1 Cost Function
The RLS update equations are derived as a recursive method to find the minimizer of a least-squares cost function [3]. While some RLS extensions can be derived from modified least-squares cost functions (e.g. exponential forgetting [3] and variable-rate forgetting [9]), many have been developed as ad-hoc modifications to the RLS update equations, without an associated cost function (e.g. [11, 12, 15, 16, 17, 8, 11, 20, 18]). Therefore, there is an interest in developing a cost function from which extensions of RLS can be derived. In [22], a continuous-time cost functional is presented, from which continuous-time least-squares algorithms are derived. However, the least-squares algorithms derived from the cost function in [22] are continuous-time versions of RLS extensions, not the original RLS extensions developed in discrete-time.
#### 1.0.2 Stability
Many RLS extensions give conditions which guarantee stability of parameter estimation error to zero when estimating constant parameters [8, 9, 17, 18]. General frameworks [21, 23], and [22] all present stability guarantees which apply to various RLS extensions. The stability analyses in [21] and [22] consider RLS extensions with scalar measurements and present exponential stability guarantees for constant-parameter estimation. While the stability analysis in [23] encompasses RLS extensions with vector measurements, the sufficient conditions for stability may be overly restrictive when applied to RLS extensions as a much more general class of recursive estimators is analyzed.
#### 1.0.3 Robustness
Several RLS extensions further analyze robustness to time-varying parameters and to bounded measurement noise [24, 25, 11]. The general framework in [23] further studies robustness of recursive estimation algorithms to bounded measurement noise and bounded regressor noise, known as the _errors-in-variables_ problem. The errors-in-variables problem is significantly more challenging than the problem of robustness to measurement noise alone [26], with many methods in the literature assuming that measurement noise and regressor noise are uncorrelated and that their statistical properties are known [27, 28]. In fact, if the measurement noise and regressor noise are correlated, then, with the exception of some special cases, the least squares estimator is asymptotically biased [29, p. 205]. If no statistical assumptions are made on the measurement noise and regressor noise, this problem can be analyzed in terms of input-to-state stability [30, 31], as is done in [23]. Similarly to stability however, while the very general robustness analysis in [23]
encompasses RLS extensions, the sufficient conditions may be overly restrictive and the bounds on asymptotic bias overly loose when applied to RLS extensions.
### Contributions
The contributions of this article are summarized as follows:
1. We derive generalized forgetting recursive least squares (GF-RLS), which is a discrete-time version of the continuous-time RLS generalization developed in [22]. See section II. We later show in section V how various extensions of RLS can be derived from the GF-RLS cost function as special cases.
2. For constant-parameter estimation, we use Lyapunov methods to develop stability guarantees for GF-RLS. These guarantees extend results obtained in [21] by generalizing to vector measurements and by providing weaker stability guarantees when not all the conditions for exponential stability are met. See section III.
3. In addition, we develop robustness guarantees of GF-RLS to bounded parameter variation, bounded measurement noise, and bounded regressor noise. In particular, we obtain sufficient conditions for the global uniform ultimate boundedness of the parameter-estimation error. A specialization of this result provides a bound on the asymptotic bias of parameter estimation error in the context of the errors-in-variables problem. See section IV.
### Notation and Terminology
\(\mathbb{N}_{0}\) denotes the set of non-negative integers \(\{0,1,2,\ldots\}\). \(I_{n}\) denotes the \(n\times n\) identity matrix, and \(0_{m\times n}\) denotes the \(m\times n\) zero matrix. For symmetric \(A\in\mathbb{R}^{n\times n}\), let the \(n\) real eigenvalues of \(A\) be denoted by \(\boldsymbol{\lambda_{\min}}(A)\triangleq\boldsymbol{\lambda_{n}}(A)\leq \cdots\leq\boldsymbol{\lambda_{\max}}(A)\triangleq\boldsymbol{\lambda_{1}}(A)\). For \(B\in\mathbb{R}^{m\times n}\), \(\boldsymbol{\sigma_{\max}}(B)\) denotes the largest singular value of \(B\), and \(\boldsymbol{\sigma_{\min}}(B)\) denotes the smallest singular value of \(B\).
For symmetric \(P,Q\in\mathbb{R}^{n\times n}\), \(P\prec Q\) (respectively, \(P\preceq Q\)) denotes that \(Q-P\) is positive definite (respectively, positive semidefinite). For all \(x\in\mathbb{R}^{n}\), \(\|x\|\) denotes the Euclidean norm, that is, \(\|x\|\triangleq\sqrt{x^{\mathrm{T}}x}\). For positive-semidefinite \(R\in\mathbb{R}^{n\times n}\) and \(x\in\mathbb{R}^{n}\), \(\|x\|_{R}\triangleq\sqrt{x^{\mathrm{T}}Rx}\). For symmetric \(S\in\mathbb{R}^{n\times n}\), \(\|x\|_{S}^{2}\triangleq x^{\mathrm{T}}Sx\). Note that the notation \(\|x\|_{S}^{2}\) is used only for convenience and that \(\|x\|_{S}\) is not defined when \(S\) is not positive semidefinite. For \(\varepsilon>0\) and \(x_{\varepsilon}\in\mathbb{R}^{n}\), define the closed ball \(\bar{\mathcal{E}}_{\varepsilon}(x_{\varepsilon})\triangleq\{x\in\mathbb{R}^{n} \colon\|x-x_{\varepsilon}\|\leq\varepsilon\}\).
**Definition 1**: _A sequence \((\phi_{k})_{k=k_{0}}^{\infty}\subset\mathbb{R}^{p\times n}\) is persistently exciting if there exist \(N\geq 1\) and \(\alpha>0\) such that, for all \(k\geq k_{0}\),_
\[\alpha I_{n}\preceq\sum_{i=k}^{k+N-1}\phi_{i}^{\mathrm{T}}\phi_{i}. \tag{1}\]
_Furthermore, \(\alpha\) and \(N\) are, respectively, the lower bound and persistency window of \((\phi_{k})_{k=k_{0}}^{\infty}\)._
**Definition 2**: _A sequence \((\phi_{k})_{k=k_{0}}^{\infty}\subset\mathbb{R}^{p\times n}\) is bounded if there exists \(\beta\in(0,\infty)\) such that, for all \(k\geq k_{0}\),_
\[\phi_{k}^{\mathrm{T}}\phi_{k}\preceq\beta I_{n}. \tag{2}\]
_Furthermore, \(\beta\) is the upper bound of \((\phi_{k})_{k=k_{0}}^{\infty}\)._
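For a given finite stretch of regressor data, Definition 1 can be checked numerically. The sketch below (in Python with numpy; the helper name is ours) returns the best lower bound \(\alpha\) over all windows of length \(N\), which is positive precisely when (1) holds on that stretch:

```python
import numpy as np

def pe_lower_bound(phis, N):
    """Best alpha in (1) over all length-N windows of the regressor list phis.

    phis: list of p x n regressor matrices phi_k.
    """
    alpha = np.inf
    for k in range(len(phis) - N + 1):
        S = sum(phi.T @ phi for phi in phis[k:k + N])
        alpha = min(alpha, np.linalg.eigvalsh(S)[0])   # smallest eigenvalue
    return alpha
```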
## 2 Generalized Forgetting Recursive Least Squares (GF-RLS)
The following theorem presents generalized forgetting recursive least squares, which is a discrete-time RLS generalization derived from minimizing a least-squares cost function.
**Theorem 1**: _For all \(k\geq 0\), let \(\Gamma_{k}\in\mathbb{R}^{p\times p}\) be positive definite, let \(\phi_{k}\in\mathbb{R}^{p\times n}\), and let \(y_{k}\in\mathbb{R}^{p}\). Furthermore, let \(P_{0}\in\mathbb{R}^{n\times n}\) be positive definite, and let \(\theta_{0}\in\mathbb{R}^{n}\). For all \(k\geq 0\), let \(F_{k}\in\mathbb{R}^{n\times n}\) be symmetric and satisfy_
\[F_{k}\prec P_{0}^{-1}+\sum_{i=0}^{k-1}\left(-F_{i}+\phi_{i}^{\mathrm{T}}\Gamma _{i}^{-1}\phi_{i}\right). \tag{3}\]
_For all \(k\geq 0\), define \(J_{k}\colon\mathbb{R}^{n}\to\mathbb{R}\) by_
\[J_{k}(\hat{\theta})\triangleq J_{k,\mathrm{loss}}(\hat{\theta})-J_{k,\mathrm{ forget}}(\hat{\theta})+J_{k,\mathrm{reg}}(\hat{\theta}), \tag{4}\]
_where_
\[J_{k,\mathrm{loss}}(\hat{\theta}) \triangleq\sum_{i=0}^{k}\|y_{i}-\phi_{i}\hat{\theta}\|_{\Gamma_{i} ^{-1}}^{2}, \tag{5}\] \[J_{k,\mathrm{forget}}(\hat{\theta}) \triangleq\sum_{i=0}^{k}\|\hat{\theta}-\theta_{i}\|_{F_{i}}^{2},\] (6) \[J_{k,\mathrm{reg}}(\hat{\theta}) \triangleq\|\hat{\theta}-\theta_{0}\|_{P_{0}^{-1}}^{2}. \tag{7}\]
_Then, \(J_{k}\) has a unique global minimizer, denoted_
\[\theta_{k+1}\triangleq\operatorname*{arg\,min}_{\hat{\theta}\in\mathbb{R}^{n}}J _{k}(\hat{\theta}), \tag{8}\]
_which, for all \(k\geq 0\), is given by_
\[P_{k+1}^{-1} =P_{k}^{-1}-F_{k}+\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k}, \tag{9}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}(y_{k}- \phi_{k}\theta_{k}). \tag{10}\]
Proof: See Appendix C.
For all \(k\geq 0\), we call \(y_{k}\in\mathbb{R}^{p}\) the measurement, \(\phi_{k}\in\mathbb{R}^{p\times n}\) the regressor, and \(\theta_{k}\in\mathbb{R}^{n}\) the parameter estimate. Moreover, we call \(\Gamma_{k}\in\mathbb{R}^{p\times p}\) the weighting matrix, \(F_{k}\in\mathbb{R}^{n\times n}\) the forgetting matrix, and \(P_{k}\in\mathbb{R}^{n\times n}\) the covariance matrix. Furthermore, (9) and (10) are the GF-RLS update equations.
Notice that condition (3) guarantees that \(J_{k}\) has a unique global minimizer, as shown in the proof of Theorem 1. Corollary 1 gives an important interpretation to (3).
**Corollary 1**: _Consider the notation and assumptions of Theorem 1. Then, for all \(k\geq 0\),_
\[P_{k}^{-1}=P_{0}^{-1}+\sum_{i=0}^{k-1}\left(-F_{i}+\phi_{i}^{\mathrm{T}}\Gamma_{ i}^{-1}\phi_{i}\right). \tag{11}\]
_Hence, for all \(k\geq 0\), (3) holds if and only if_
\[P_{k}^{-1}-F_{k}\succ 0. \tag{12}\]
Proof: Equation (11) follows directly from repeated substitution of (9). Next, (12) follows from substituting (11) into (3).
Corollary 1 shows that to ensure that the GF-RLS cost (4) has a unique global minimizer (i.e. (3) is satisfied), it suffices to, for all \(k\geq 0\), choose \(F_{k}\) such that \(P_{k}^{-1}-F_{k}\succ 0\).
**Definition 3**: _GF-RLS is proper if, for all \(k\geq 0\), \(F_{k}\in\mathbb{R}^{n\times n}\) is positive semidefinite. GF-RLS is improper if it is not proper._
Note that the GF-RLS cost \(J_{k}(\hat{\theta})\) is composed as the sum of three terms, namely, the _loss term_ \(J_{k,\mathrm{loss}}(\hat{\theta})\), the _forgetting term_ \(-J_{k,\mathrm{forget}}(\hat{\theta})\), and the _regularization term_ \(J_{k,\mathrm{reg}}(\hat{\theta})\). Note that, if GF-RLS is proper, then, for all \(\hat{\theta}\in\mathbb{R}^{n}\), the forgetting term \(-J_{k,\mathrm{forget}}(\hat{\theta})\) is nonpositive. In practice, if GF-RLS is proper, then the forgetting term rewards the difference between the estimate \(\theta_{k+1}\) and \(\theta_{i}\) for previous steps \(0\leq i\leq k\). This reward is weighted by the forgetting matrix \(F_{k}\in\mathbb{R}^{n\times n}\). It is shown in Section V that for particular choices of the forgetting matrix, we recover extensions of RLS with forgetting from GF-RLS.
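The update (9)-(10) translates directly into code. The following sketch is our own; it propagates explicit inverses for clarity, which a numerically careful implementation would avoid.

```python
import numpy as np

def gfrls_step(theta, P, phi, y, Gamma, F):
    """One step of GF-RLS, Eqs. (9)-(10).

    theta: estimate (n,); P: covariance (n, n); phi: regressor (p, n);
    y: measurement (p,); Gamma: weighting (p, p);
    F: symmetric forgetting matrix with P^{-1} - F positive definite, cf. (12).
    """
    Ginv = np.linalg.inv(Gamma)
    P_next = np.linalg.inv(np.linalg.inv(P) - F + phi.T @ Ginv @ phi)   # Eq. (9)
    theta_next = theta + P_next @ phi.T @ Ginv @ (y - phi @ theta)      # Eq. (10)
    return theta_next, P_next
```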
## 3 Stability of Fixed Parameter Estimation
For the analysis of this section, we make the assumption that there exist fixed parameters \(\theta\in\mathbb{R}^{n}\) such that, for all \(k\geq 0\),
\[y_{k}=\phi_{k}\theta. \tag{13}\]
Furthermore, for all \(k\geq 0\), we define the parameter estimation error \(\tilde{\theta}_{k}\in\mathbb{R}^{n}\) by
\[\tilde{\theta}_{k}\triangleq\theta_{k}-\theta. \tag{14}\]
Substituting (13) and (14) into (10), it then follows that
\[\tilde{\theta}_{k+1}=M_{k}\tilde{\theta}_{k}, \tag{15}\]
where, for all \(k\geq 0\), \(M_{k}\in\mathbb{R}^{n\times n}\) is defined
\[M_{k}\triangleq I_{n}-P_{k+1}\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k}. \tag{16}\]
Hence, (15) is a linear time-varying system with an equilibrium \(\tilde{\theta}_{k}\equiv 0\).
Next, for all \(k\geq 0\), let \(\Gamma_{k}^{-\frac{1}{2}}\in\mathbb{R}^{p\times p}\) be the unique positive-semidefinite matrix such that
\[\Gamma_{k}^{-1}=\Gamma_{k}^{-\frac{1}{2}\mathrm{T}}\Gamma_{k}^{-\frac{1}{2}}. \tag{17}\]
Furthermore, define the _weighted regressor_ \(\bar{\phi}_{k}\in\mathbb{R}^{p\times n}\) by

\[\bar{\phi}_{k}\triangleq\Gamma_{k}^{-\frac{1}{2}}\phi_{k}. \tag{18}\]

Substituting (18) into (16), it follows that, for all \(k\geq 0\),

\[M_{k}=I_{n}-P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}. \tag{19}\]
Finally, let \(k_{0}\geq 0\) and consider the following conditions:
A1) For all \(k\geq k_{0}\), \(F_{k}\succeq 0\).
A2) There exists \(b\in(0,\infty)\) such that, for all \(k\geq k_{0}\), \((P_{k}^{-1}-F_{k})^{-1}\preceq bI_{n}\).
A3) There exists \(a>0\) such that, for all \(k\geq k_{0}\), \(aI_{n}\preceq P_{k}\).
A4) The sequence of weighted regressors \((\bar{\phi}_{k})_{k=k_{0}}^{\infty}\) is persistently exciting with lower bound \(\bar{\alpha}>0\) and persistency window \(N\geq 1\) and bounded with upper bound \(\bar{\beta}\in(0,\infty)\).
We now present Theorem 2, which gives sufficient conditions for the stability of the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15).
**Theorem 2**: _For all \(k\geq 0\), let \(\Gamma_{k}\in\mathbb{R}^{p\times p}\) be positive definite, let \(\phi_{k}\in\mathbb{R}^{p\times n}\), let \(y_{k}\in\mathbb{R}^{p}\), and let \(F_{k}\in\mathbb{R}^{n\times n}\) be symmetric and satisfy (3). Let \(P_{0}\in\mathbb{R}^{n\times n}\) be positive definite, and let \(\theta_{0}\in\mathbb{R}^{n}\). For all \(k\geq 1\), let \(P_{k}\in\mathbb{R}^{n\times n}\) and \(\theta_{k}\in\mathbb{R}^{n}\) be recursively updated by (9) and (10). Furthermore, assume there exists \(\theta\in\mathbb{R}^{n}\) such that, for all \(k\geq 0\), (13) holds. Then the following statements hold:_
1. _If there exists \(k_{0}\geq 0\) such that conditions A1) and A2) hold, then the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is Lyapunov stable._
2. _If there exists \(k_{0}\geq 0\) such that conditions A1), A2), and A3) hold, then the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is uniformly Lyapunov stable._
3. _If there exists \(k_{0}\geq 0\) such that conditions A1), A2), and A4) hold, then the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is globally asymptotically stable._
4. _If there exists \(k_{0}\geq 0\) such that conditions A1), A2), A3), and A4) hold, then the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is globally uniformly exponentially stable._
In the case \(k_{0}=0\), see Appendix D for a proof of statements 1) and 2), and Appendix E for a proof of statements 3) and 4). The case \(k_{0}\geq 1\) can be shown similarly.
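As an illustration of statement 4), the following Python simulation (our own illustrative sketch, not an example from the paper) runs GF-RLS with the exponential-forgetting matrix \(F_{k}=(1-\lambda)P_{k}^{-1}\) (see Section V) on the noise-free measurement model (13) with a persistently exciting sinusoidal regressor; the parameter estimation error decays to zero.

```python
import numpy as np

n, lam = 2, 0.9
theta_star = np.array([1.0, -2.0])         # fixed true parameters, eq. (13)
theta, P = np.zeros(n), np.eye(n)

for k in range(200):
    # sinusoidal regressor, persistently exciting for n = 2
    phi = np.array([[np.sin(0.5 * k), np.cos(0.5 * k)]])
    y = phi @ theta_star                    # noise-free measurement (13)
    P_inv = np.linalg.inv(P)
    F = (1 - lam) * P_inv                   # proper forgetting matrix, A1)
    P = np.linalg.inv(P_inv - F + phi.T @ phi)      # update (9), Gamma_k = I_p
    theta = theta + P @ phi.T @ (y - phi @ theta)   # update (10)

print(np.linalg.norm(theta - theta_star))   # ~0: the error converges
```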
### Discussion of Conditions A1) through A4)
This subsection gives a brief discussion of conditions A1) through A4) used in Theorem 2.
#### 3.1.1 Condition A1)
Note that, by Definition 3, this condition is equivalent to GF-RLS being proper. Furthermore, whether or not GF-RLS is proper is a direct consequence of the algorithm design. We show in Section V that ten different extensions of RLS are all proper (some requiring minor assumptions), and hence satisfy condition A1).
#### 3.1.2 Conditions A2) and A3)
In 1988, [7] qualitatively proposed that RLS extensions should guarantee a (nonzero) lower bound and a (noninfinite) upper bound on the covariance matrix \(P_{k}\) for good performance; that is, there should exist \(a>0\) and \(b\in(0,\infty)\) such that, for all \(k\geq 0\), \(aI_{n}\preceq P_{k}\preceq bI_{n}\). Many RLS extensions since have provided analyses that guarantee upper and lower bounds on the covariance matrix [5, 15, 17, 19]. A lower bound on \(P_{k}\) is equivalent to condition A3). However, an upper bound on \(P_{k}\) does not guarantee condition A2).
Nevertheless, for many choices of \(F_{k}\) from the literature, condition A2) follows easily from an upper bound on the covariance matrix. A future area of interest is whether similar stability and robustness guarantees exist if condition A2) is replaced with an upper bound on \(P_{k}\).
#### 3.1.3 Condition A4)
Persistent excitation and boundedness of the sequence of regressors \((\phi_{k})_{k=k_{0}}^{\infty}\) is an important requirement for convergence in RLS extensions [7, 8]. While work has been done to relax the persistent excitation condition [32, 33, 34], it has been shown that _weak persistent excitation_ is necessary for the global asymptotic stability of RLS [35].
Note that condition A4) requires persistent excitation and boundedness of the sequence of weighted regressors \((\bar{\phi}_{k})_{k=k_{0}}^{\infty}\), rather than of the sequence of regressors \((\phi_{k})_{k=k_{0}}^{\infty}\). Corollary 2 gives a sufficient condition under which persistent excitation and boundedness of the sequence of regressors \((\phi_{k})_{k=k_{0}}^{\infty}\) imply persistent excitation and boundedness of the sequence of weighted regressors \((\bar{\phi}_{k})_{k=k_{0}}^{\infty}\). Note, however, that the bounds guaranteed by Corollary 2 are often loose, and it is preferable in practice to directly analyze the sequence of weighted regressors.
**Corollary 2**: _Assume there exists \(k_{0}\geq 0\) and \(0<\gamma_{\min}<\gamma_{\max}\) such that, for all \(k\geq k_{0}\),_
\[\gamma_{\min}I_{p}\preceq\Gamma_{k}\preceq\gamma_{\max}I_{p}. \tag{20}\]
_Furthermore, let \((\phi_{k})_{k=k_{0}}^{\infty}\) be persistently exciting with lower bound \(\alpha>0\) and persistency window \(N\) and bounded with upper bound \(\beta\in(0,\infty)\). Then, \((\bar{\phi}_{k})_{k=k_{0}}^{\infty}\) is persistently exciting with lower bound \(\frac{\alpha}{\gamma_{\max}}\) and persistency window \(N\) and bounded with upper bound \(\frac{\beta}{\gamma_{\min}}\)._
Note that, for all \(k\geq k_{0}\), \(\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}=\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k}\succeq\frac{1}{\gamma_{\max}}\phi_{k}^{\mathrm{T}}\phi_{k}\), and hence \(\sum_{i=k}^{k+N-1}\bar{\phi}_{i}^{\mathrm{T}}\bar{\phi}_{i}\succeq\frac{1}{\gamma_{\max}}\sum_{i=k}^{k+N-1}\phi_{i}^{\mathrm{T}}\phi_{i}\succeq\frac{\alpha}{\gamma_{\max}}I_{n}\). Similarly, \(\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}\preceq\frac{1}{\gamma_{\min}}\phi_{k}^{\mathrm{T}}\phi_{k}\preceq\frac{\beta}{\gamma_{\min}}I_{n}\).
## 4 Robustness to Time-Varying Parameters, Measurement Noise, and Regressor Noise
For the analysis of this section, we make the assumption that, for all \(k\geq 0\), the parameters \(\theta_{\mathrm{true},k}\in\mathbb{R}^{n}\) are time-varying and satisfy
\[\theta_{\mathrm{true},k+1}=\theta_{\mathrm{true},k}+\delta_{\theta,k}, \tag{21}\]
where, for all \(k\geq 0\), \(\delta_{\theta,k}\in\mathbb{R}^{n}\) is the _change in the parameters_. Note that no model of how the parameters evolve is known. Furthermore, assume that, for all \(k\geq 0\),
\[y_{k}=(\phi_{k}+\delta_{\phi,k})\theta_{\mathrm{true},k}+\delta_{y,k}, \tag{22}\]
where \(\delta_{y,k}\in\mathbb{R}^{p}\) is the _measurement noise_ and \(\delta_{\phi,k}\in\mathbb{R}^{p\times n}\) is the _regressor noise_. Furthermore, for all \(k\geq 0\), we define the _weighted measurement noise_ \(\bar{\delta}_{y,k}\in\mathbb{R}^{p}\) and the _weighted regressor noise_ \(\bar{\delta}_{\phi,k}\in\mathbb{R}^{p\times n}\) by
\[\bar{\delta}_{y,k} \triangleq\Gamma_{k}^{-\frac{1}{2}}\delta_{y,k}, \tag{23}\] \[\bar{\delta}_{\phi,k} \triangleq\Gamma_{k}^{-\frac{1}{2}}\delta_{\phi,k}. \tag{24}\]
Note that, for all \(k\geq 1\), the parameter estimate \(\theta_{k}\) is based on measurements up to step \(k-1\), that is, \(\{y_{0},\ldots,y_{k-1}\}\). To compensate for this one-step delay, we define, for all \(k\geq 0\), the parameter estimation error \(\tilde{\theta}_{k}\in\mathbb{R}^{n}\) by
\[\tilde{\theta}_{k}\triangleq\theta_{k}-\theta_{\mathrm{true},k-1}. \tag{25}\]
Substituting (22) and (25) into (10) implies that, for all \(k\geq 0\),
\[\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\delta_{\theta,k-1})+P_{k+1} \phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}(\delta_{\phi,k}\theta_{\mathrm{true},k}+ \delta_{y,k}),\]
and then substituting (18), (23), and (24), it follows that
\[\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\delta_{\theta,k-1})+P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}(\bar{\delta}_{\phi,k}\theta_{\mathrm{true},k}+\bar{\delta}_{y,k}). \tag{26}\]

Hence, (26) is a nonlinear system \(\tilde{\theta}_{k+1}=\tilde{f}(k,\tilde{\theta}_{k})\).
Finally, let \(k_{0}\geq 0\) and consider the following conditions:
A5) There exists \(\delta_{\theta}\geq 0\) such that, for all \(k\geq k_{0}\), \(\|\delta_{\theta,k}\|\leq\delta_{\theta}\).
A6) There exists \(\bar{\delta}_{y}\geq 0\) such that, for all \(k\geq k_{0}\), \(\|\bar{\delta}_{y,k}\|\leq\bar{\delta}_{y}\).
A7) There exists \(\bar{\delta}_{\phi}\geq 0\) such that the sequence \((\bar{\delta}_{\phi,k})_{k=k_{0}}^{\infty}\) is bounded with upper bound \(\bar{\delta}_{\phi}\).
A8) There exists \(\theta_{\max}\geq 0\) such that, for all \(k\geq k_{0}\), \(\|\theta_{\mathrm{true},k}\|\leq\theta_{\max}\).
We now present Theorem 3, which gives sufficient conditions for the global uniform ultimate boundedness of (26).
**Theorem 3**: _For all \(k\geq 0\), let \(\Gamma_{k}\in\mathbb{R}^{p\times p}\) be positive definite, let \(\phi_{k}\in\mathbb{R}^{p\times n}\), let \(y_{k}\in\mathbb{R}^{p}\), and let \(F_{k}\in\mathbb{R}^{n\times n}\) be symmetric and satisfy (3). Let \(P_{0}\in\mathbb{R}^{n\times n}\) be positive definite, and let \(\theta_{0}\in\mathbb{R}^{n}\). For all \(k\geq 1\), let \(P_{k}\in\mathbb{R}^{n\times n}\) and \(\theta_{k}\in\mathbb{R}^{n}\) be recursively updated by (9) and (10). Furthermore, for all \(k\geq 0\), let \(\theta_{\mathrm{true},k}\in\mathbb{R}^{n}\), \(\delta_{\theta,k}\in\mathbb{R}^{n}\), \(\delta_{y,k}\in\mathbb{R}^{p}\), and \(\delta_{\phi,k}\in\mathbb{R}^{p\times n}\) satisfy (21) and (22). Finally, let \(k_{0}\geq 0\) be such that conditions A1), A2), A3), A4), A5), A6), A7), and A8) hold. Then, the system (26) is globally uniformly ultimately bounded with bound \(\varepsilon\) given by_

\[\varepsilon=\varepsilon^{*}\left[\delta_{\theta}+b\bar{\beta}^{\frac{1}{2}}\left(\bar{\delta}_{\phi}^{\frac{1}{2}}\theta_{\max}+\bar{\delta}_{y}\right)\right], \tag{27}\]
where
\[\varepsilon^{*}\triangleq\max\left\{1,\frac{1}{\sqrt{a}}\right\}\left(\Delta_{N }+\sqrt{\Delta_{N}+\Delta_{N}^{2}}\right)N, \tag{28}\]
\[\Delta_{N}\triangleq\frac{N}{a\bar{\alpha}}\left(1+b\bar{\beta}\right)\left[1+ \frac{N-1}{2}\left(b\bar{\beta}\right)^{2}\right]-1. \tag{29}\]
We prove the case \(k_{0}=0\). The case \(k_{0}\geq 1\) can be shown similarly. Note that, for all \(k\geq 0\), (26) can be written as
\[\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\delta_{\theta,k-1} +M_{k}^{-1}P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}(\bar{\delta}_{\phi,k}\theta_{ \mathrm{true},k}+\bar{\delta}_{y,k})). \tag{30}\]
Moreover, it follows from (9) and (16) that, for all \(k\geq 0\), \(M_{k}=P_{k+1}(P_{k}^{-1}-F_{k})\). It follows from Corollary 1 that, for all \(k\geq 0\), \((P_{k}^{-1}-F_{k})\) is nonsingular, and hence
\[M_{k}^{-1}=(P_{k}^{-1}-F_{k})^{-1}P_{k+1}^{-1}. \tag{31}\]
Substituting (31) into (30) then gives, for all \(k\geq 0\),
\[\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\zeta_{k}),\]
where \(\zeta_{k}\in\mathbb{R}^{n}\) is defined
\[\zeta_{k}\triangleq\delta_{\theta,k-1}-(P_{k}^{-1}-F_{k})^{-1}\bar{\phi}_{k}^{ \mathrm{T}}(\bar{\delta}_{\phi,k}\theta_{\mathrm{true},k}+\bar{\delta}_{y,k}). \tag{32}\]
Next, it follows from applying the triangle inequality and norm submultiplicativity to (32), and using the bounds in conditions A2), A4), A5), A6), A7), and A8), that, for all \(k\geq 0\),
\[\|\zeta_{k}\|\leq\delta_{\theta}+b\bar{\beta}^{\frac{1}{2}}\left(\bar{\delta}_{ \phi}^{\frac{1}{2}}\theta_{\max}+\bar{\delta}_{y}\right)\triangleq\zeta.\]
Finally, it follows from Lemma 17 in Appendix F that (26) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta=\varepsilon\).

### Discussion of Conditions A5) through A8)
#### 4.1.1 Condition A5)
Condition _A5)_ is a bound on how quickly the parameters being estimated, \(\theta_{\mathrm{true},k}\), can change. While these parameters are not known, in practice, this bound can be estimated from data.
#### 4.1.2 Conditions A6) and A7)
Conditions _A6)_ and _A7)_ are, respectively, bounds on the weighted measurement noise and weighted regressor noise. While noise from certain distributions has no guaranteed bound (e.g. Gaussian noise), in practice these bounds can be approximated from data.
Corollary 3 gives a sufficient condition under which bounded measurement noise and bounded regressor noise imply, respectively, bounded weighted measurement noise and bounded weighted regressor noise. Note, however, that the bounds guaranteed by Corollary 3 are often loose, and it is preferable in practice to directly analyze the weighted measurement noise and weighted regressor noise.
**Corollary 3**: _Assume there exists \(k_{0}\geq 0\) and \(0<\gamma_{\min}<\gamma_{\max}\) such that, for all \(k\geq k_{0}\), (20) holds. Then, the following statements hold:_
1. _If there exists_ \(\delta_{y}\geq 0\) _such that, for all_ \(k\geq k_{0}\)_,_ \(\|\delta_{y,k}\|\leq\delta_{y}\)_, then, for all_ \(k\geq k_{0}\)_,_ \(\|\bar{\delta}_{y,k}\|\leq\nicefrac{{\delta_{y}}}{{\sqrt{\gamma_{\min}}}}\)_._
2. _If there exists_ \(\delta_{\phi}\geq 0\) _such that_ \((\delta_{\phi,k})_{k=k_{0}}^{\infty}\) _is bounded with upper bound_ \(\delta_{\phi}\)_, then_ \((\bar{\delta}_{\phi,k})_{k=k_{0}}^{\infty}\) _is bounded with upper bound_ \(\frac{\delta_{\phi}}{\gamma_{\min}}\)_._
To show 1), note that, for all \(k\geq k_{0}\), \(\|\bar{\delta}_{y,k}\|=\|\Gamma_{k}^{-\frac{1}{2}}\delta_{y,k}\|\leq\boldsymbol{\sigma_{\max}}(\Gamma_{k}^{-\frac{1}{2}})\|\delta_{y,k}\|\leq\frac{\delta_{y}}{\sqrt{\gamma_{\min}}}\). Lastly, to show 2), note that, for all \(k\geq k_{0}\), \(\bar{\delta}_{\phi,k}^{\mathrm{T}}\bar{\delta}_{\phi,k}=\delta_{\phi,k}^{\mathrm{T}}\Gamma_{k}^{-1}\delta_{\phi,k}\preceq\frac{1}{\gamma_{\min}}\delta_{\phi,k}^{\mathrm{T}}\delta_{\phi,k}\preceq\frac{\delta_{\phi}}{\gamma_{\min}}I_{n}\).
#### 4.1.3 Condition A8)
Condition _A8)_ is a bound on the magnitude of the parameters being estimated. While the parameters \(\theta_{\mathrm{true},k}\) are not known, this bound can also be approximated in practice.
### Specialization to Errors-in-Variables
An important specialization of Theorem 3 is the case of fixed parameters (i.e. \(\delta_{\theta}=0\)). In this case, only the effect of measurement noise and regressor noise is considered, a problem known as errors-in-variables [26]. Note that the measurement noise and regressor noise may be correlated, resulting in an asymptotically biased least squares estimator [29, p. 205]. Corollary 4 gives sufficient conditions for an explicit bound on the asymptotic bias.
**Corollary 4**: _Consider the assumptions and notation of Theorem 3. Furthermore, assume that, for all \(k\geq 0\), \(\delta_{\theta,k}=0_{n\times 1}\). Then, the system (26) is globally uniformly ultimately bounded with bound \(\varepsilon\) given by_
\[\varepsilon=\varepsilon^{*}b\bar{\beta}^{\frac{1}{2}}\left(\bar{\delta}_{\phi}^{\frac{1}{2}}\theta_{\max}+\bar{\delta}_{y}\right), \tag{33}\]
_and where \(\varepsilon^{*}>0\) is defined in (28)._
Corollary 4 follows as the special case of Theorem 3 with \(\delta_{\theta}=0\).
### Other Specializations of Theorem 3
More generally, Theorem 3 can be specialized to assume fixed parameters by setting \(\delta_{\theta}=0\) and/or to assume no measurement noise by setting \(\bar{\delta}_{y}=0\) and/or to assume no regressor noise by setting \(\bar{\delta}_{\phi}=0\). As a sanity check, note that if \(\delta_{\theta}=\bar{\delta}_{y}=\bar{\delta}_{\phi}=0\), then (27) simplifies to \(\varepsilon=0\).
As an additional extension, note that in the case of no regressor noise (i.e., \(\bar{\delta}_{\phi}=0\)), the parameters \(\theta_{\mathrm{true},k}\) need not be bounded for (26) to be globally uniformly ultimately bounded. In other words, unbounded parameters can be tracked in the absence of regressor noise. Similarly, if the parameters \(\theta_{\mathrm{true},k}\) are fixed at zero, then the regressor noise need not be bounded for (26) to be globally uniformly ultimately bounded. These two extensions are given in Corollary 5.
**Corollary 5**: _Consider the assumptions and notation of (21) through (25). Furthermore, assume there exists \(k_{0}\geq 0\) such that conditions A1), A2), A3), and A4) of Theorem 2 and conditions A5) and A6) of Theorem 3 are met. Then, the following two statements hold:_
1. _If, for all_ \(k\geq 0\)_,_ \(\bar{\delta}_{\phi,k}=0_{p\times n}\)_, then the system (_26_) is globally uniformly ultimately bounded with bound_ \(\varepsilon\) _given by_ \[\varepsilon=\varepsilon^{*}\left(\delta_{\theta}+b\bar{\beta}^{\frac{1}{2}} \bar{\delta}_{y}\right),\] (34) _and where_ \(\varepsilon^{*}>0\) _is defined in (_28_)._
2. _If, for all_ \(k\geq 0\)_,_ \(\theta_{\mathrm{true},k}=0_{n\times 1}\)_, then the system (_26_) is globally uniformly ultimately bounded with bound_ \(\varepsilon\) _given by (_34_)._
First, we prove statement 1). Note that, for all \(k\geq 0\), (26) simplifies to \(\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\delta_{\theta,k-1})+P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\delta}_{y,k}\). By similar reasoning to the proof of Theorem 3, it follows that, for all \(k\geq 0\), \(\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\zeta_{k})\), where \(\zeta_{k}\in\mathbb{R}^{n}\) is defined by \(\zeta_{k}\triangleq\delta_{\theta,k-1}-(P_{k}^{-1}-F_{k})^{-1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\delta}_{y,k}\).
Also by similar reasoning to the proof of Theorem 3, it follows that, for all \(k\geq 0\), \(\|\zeta_{k}\|\leq\delta_{\theta}+b\bar{\beta}^{\frac{1}{2}}\bar{\delta}_{y}\triangleq\zeta\). Finally, it follows from Lemma 17 in Appendix F that (26) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta\).
The proof of 2) is identical after noting that, for all \(k\geq 0\), (26) again simplifies to \(\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\delta_{\theta,k-1})+P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\delta}_{y,k}\).
## 5 RLS Extensions as Special Cases of GF-RLS
This section shows how several extensions of recursive least squares with forgetting are special cases of generalized forgetting recursive least squares. For simplicity, we assume that, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) in GF-RLS. This uniform weighting is in accordance with the RLS extensions we present as originally published. However, these methods can easily be extended to nonuniform weighting by, for all \(k\geq 0\), selecting positive-definite \(\Gamma_{k}\in\mathbb{R}^{p\times p}\). Thereafter, only the forgetting matrix \(F_{k}\) needs to be specified for all \(k\geq 0\). Furthermore, the stability results presented in Theorem 2 and robustness results presented in Theorem 3 apply to any algorithm that is a special case of GF-RLS.
For all the following methods, for all \(k\geq 0\), let \(\phi_{k}\in\mathbb{R}^{p\times n}\) and \(y_{k}\in\mathbb{R}^{p}\). Furthermore let \(P_{0}\in\mathbb{R}^{n\times n}\) be positive definite and \(\theta_{0}\in\mathbb{R}^{n}\). If an extension of RLS is a special case of proper GF-RLS, we say that extension is proper. Note that we have made minor notation changes to some RLS extensions in order to present all algorithms with the same notation. Otherwise,
we have done our best to present all algorithms as originally published. A flowchart summary of this section is given in Figure 1.
### Recursive Least Squares
Recursive least squares [3] is derived by denoting the minimizer of the cost function
\[J_{k}(\hat{\theta})=\sum_{i=0}^{k}\|y_{i}-\phi_{i}\hat{\theta}\|^{2}+\|\hat{\theta}-\theta_{0}\|_{P_{0}^{-1}}^{2} \tag{35}\]
by \(\theta_{k+1}\triangleq\arg\min_{\hat{\theta}\in\mathbb{R}^{n}}J_{k}(\hat{ \theta})\). It follows that, for all \(k\geq 0\), \(\theta_{k+1}\) is given by
\[P_{k+1}^{-1} =P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{36}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k }). \tag{37}\]
Comparing (36) and (37) to (9) and (10), it follows that recursive least squares is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=0_{n\times n}. \tag{38}\]
Note that, for all \(k\geq 0\), \(P_{k}^{-1}\succ 0\), hence \(P_{k}^{-1}-F_{k}\succ 0\) and \(F_{k}\succeq 0\). Therefore recursive least squares is proper.
### Exponential Forgetting
A classical method to introduce forgetting in RLS is called _exponential forgetting_, where a forgetting factor \(0<\lambda\leq 1\) is introduced which provides exponentially higher weighting to more recent measurements and data [3, 5]. Exponential forgetting RLS is derived by denoting the minimizer of the cost function
\[J_{k}(\hat{\theta})=\sum_{i=0}^{k}\lambda^{k-i}\|y_{i}-\phi_{i}\hat{\theta}\|^{2}+\lambda^{k+1}\|\hat{\theta}-\theta_{0}\|_{P_{0}^{-1}}^{2} \tag{39}\]
by \(\theta_{k+1}\triangleq\arg\min_{\hat{\theta}\in\mathbb{R}^{n}}J_{k}(\hat{ \theta})\). It follows that, for all \(k\geq 0\), \(\theta_{k+1}\) is given by
\[P_{k+1}^{-1} =\lambda P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{40}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k }). \tag{41}\]
Comparing (40) and (41) to (9) and (10), it follows that exponential forgetting is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=(1-\lambda)P_{k}^{-1}. \tag{42}\]
Note that, for all \(k\geq 0\), \(P_{k}^{-1}\succ 0\), hence \(P_{k}^{-1}-F_{k}=\lambda P_{k}^{-1}\succ 0\) and \(F_{k}\succeq 0\). Therefore exponential forgetting is proper.
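As a quick numerical check (an illustrative sketch of ours, not code from the paper), the following Python snippet verifies that GF-RLS with the forgetting matrix (42) reproduces the classical exponential-forgetting recursion (40)-(41) step for step.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 3, 1, 0.95
theta_true = rng.standard_normal(n)
theta_g, P_g = np.zeros(n), np.eye(n)      # GF-RLS state
theta_e, P_e = np.zeros(n), np.eye(n)      # exponential-forgetting state

for k in range(50):
    phi = rng.standard_normal((p, n))
    y = phi @ theta_true
    # GF-RLS with F_k = (1 - lam) P_k^{-1} and Gamma_k = I_p, eq. (42)
    P_g_inv = np.linalg.inv(P_g)
    P_g = np.linalg.inv(P_g_inv - (1 - lam) * P_g_inv + phi.T @ phi)
    theta_g = theta_g + P_g @ phi.T @ (y - phi @ theta_g)
    # classical exponential forgetting, eqs. (40)-(41)
    P_e = np.linalg.inv(lam * np.linalg.inv(P_e) + phi.T @ phi)
    theta_e = theta_e + P_e @ phi.T @ (y - phi @ theta_e)

assert np.allclose(theta_g, theta_e) and np.allclose(P_g, P_e)
```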
### Variable-Rate Forgetting
An extension of exponential forgetting is _variable-rate forgetting_, in which a time-varying forgetting factor, \(0<\lambda_{k}\leq 1\), is selected at each step \(k\geq 0\), in place of the constant forgetting factor of exponential forgetting. Variable-rate forgetting is derived in [9] by defining the cost function
\[J_{k}(\hat{\theta})=\sum_{i=0}^{k}\frac{\rho_{k}}{\rho_{i}}\|y_{i}-\phi_{i}\hat{\theta}\|^{2}+\rho_{k}\|\hat{\theta}-\theta_{0}\|_{P_{0}^{-1}}^{2}, \tag{43}\]
where, for all \(k\geq 0\), \(\rho_{k}\triangleq\prod_{i=0}^{k}\lambda_{i}\). If, for all \(k\geq 0\), the minimizer of \(J_{k}(\hat{\theta})\) is denoted by \(\theta_{k+1}\triangleq\arg\min_{\hat{\theta}\in\mathbb{R}^{n}}J_{k}(\hat{ \theta})\), it follows that, for all \(k\geq 0\), \(\theta_{k+1}\) is given by
\[P_{k+1}^{-1} =\lambda_{k}P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{44}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k }). \tag{45}\]
Comparing (44) and (45) to (9) and (10), it follows that variable-rate forgetting is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=(1-\lambda_{k})P_{k}^{-1}. \tag{46}\]
Note that, for all \(k\geq 0\), \(P_{k}^{-1}\succ 0\), hence \(P_{k}^{-1}-F_{k}=\lambda_{k}P_{k}^{-1}\succ 0\) and \(F_{k}\succeq 0\). Therefore, variable-rate forgetting is proper.
Many methods exist to design this time-varying forgetting factor, including methods assuming known noise variance [10], online estimation of noise power [12], gradient-based methods [13], and statistical methods [14].
### Data-Dependent Updating
Data-dependent updating was developed in [11], motivated as a way to prevent instabilities in the presence of bounded output disturbances. Data-dependent updating can be summarized by the update equations
\[P_{k+1}^{-1} =(1-\mu_{k})P_{k}^{-1}+\mu_{k}\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{47}\] \[\theta_{k+1} =\theta_{k}+\mu_{k}P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k} \theta_{k}), \tag{48}\]
where, for all \(k\geq 0\), \(0\leq\mu_{k}<1\). Next, for all \(k\geq 0\), define \(\bar{P}_{k}\in\mathbb{R}^{n\times n}\) by
\[\bar{P}_{k}\triangleq\mu_{k-1}P_{k}, \tag{49}\]
where \(\mu_{-1}\triangleq 1\). It then follows that, for all \(k\geq 0\), (47) and (48) can be written as
\[\bar{P}_{k+1}^{-1} =\frac{(1-\mu_{k})\mu_{k-1}}{\mu_{k}}\bar{P}_{k}^{-1}+\phi_{k}^{ \mathrm{T}}\phi_{k}, \tag{50}\] \[\theta_{k+1} =\theta_{k}+\bar{P}_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k} \theta_{k}). \tag{51}\]
Comparing (50) and (51) to (44) and (45), it follows that data-dependent updating is simply a special case of variable-rate forgetting where, for all \(k\geq 0\),
\[\lambda_{k}=\frac{(1-\mu_{k})\mu_{k-1}}{\mu_{k}}. \tag{52}\]
For connections to GF-RLS, see subsection V-C on variable-rate forgetting.
### Exponential Resetting
Exponential resetting was developed in [19] and can be summarized by the update equations
\[P_{k+1}^{-1} =\lambda P_{k}^{-1}+(1-\lambda)R_{\infty}+\phi_{k}^{\mathrm{T}} \phi_{k}, \tag{53}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}), \tag{54}\]
where \(R_{\infty}\in\mathbb{R}^{n\times n}\) is positive semidefinite. Note that while [19] assumes that \(R_{\infty}\) is positive definite, it is simple to extend the results of [19] to positive-semidefinite \(R_{\infty}\). It is shown in
[19] that, for all \(k\geq 0\), \(P_{k}\) is positive definite. Furthermore, [19] shows that the exponential resetting property is satisfied, namely that if there exists \(M\geq 0\) such that, for all \(k\geq M\), \(\phi_{k}=0_{p\times n}\), then \(\lim_{k\to\infty}P_{k}^{-1}=R_{\infty}\).
Comparing (53) and (54) to (9) and (10), it follows that exponential resetting is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=(1-\lambda)(P_{k}^{-1}-R_{\infty}). \tag{55}\]
Note that, for all \(k\geq 0\), \(P_{k}^{-1}-F_{k}=\lambda P_{k}^{-1}+(1-\lambda)R_{\infty}\succ 0\). Furthermore, Proposition 6 of [19] shows that, for all \(k\geq 0\), \(P_{k}^{-1}\succeq\lambda^{k}P_{0}^{-1}+(1-\lambda^{k})R_{\infty}\). Note that if \(P_{0}^{-1}\succeq R_{\infty}\), then, for all \(k\geq 0\), \(P_{k}^{-1}\succeq\lambda^{k}R_{\infty}+(1-\lambda^{k})R_{\infty}=R_{\infty}\), implying that \(F_{k}\succeq 0\). Therefore, if \(P_{0}^{-1}\succeq R_{\infty}\), then exponential resetting is proper.
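The algebra behind (55) is immediate; the following Python snippet (an illustrative check of ours) verifies numerically that substituting (55) into the GF-RLS update (9) recovers the exponential-resetting update (53).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 3, 1, 0.9
R_inf = np.diag([1.0, 2.0, 0.5])        # positive-semidefinite R_infty
P_inv = 2.0 * np.eye(n) + R_inf         # a P_0^{-1} satisfying P_0^{-1} >= R_inf
phi = rng.standard_normal((p, n))

F = (1 - lam) * (P_inv - R_inf)                        # eq. (55)
lhs = P_inv - F + phi.T @ phi                          # GF-RLS update (9)
rhs = lam * P_inv + (1 - lam) * R_inf + phi.T @ phi    # resetting update (53)
assert np.allclose(lhs, rhs)
```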
### Covariance Resetting
A simple ad-hoc extension of RLS is _covariance resetting_[36] where, if a criterion for resetting is met at step \(k\), then the covariance matrix \(P_{k}\) is reset to a desired positive-definite matrix, \(P_{\infty,k}\in\mathbb{R}^{n\times n}\). Covariance resetting gives, for all \(k\geq 0\), the update equations
\[P_{k+1}^{-1} =\begin{cases}P_{\infty,k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}& \text{criterion is met},\\ P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}&\text{otherwise},\end{cases} \tag{56}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}). \tag{57}\]
Comparing (56) and (57) to (9) and (10), it follows that covariance resetting is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=\begin{cases}P_{k}^{-1}-P_{\infty,k}^{-1}&\text{criterion is met},\\ 0_{n\times n}&\text{otherwise}.\end{cases} \tag{58}\]
Note that, for all \(k\geq 0\),
\[P_{k}^{-1}-F_{k}=\begin{cases}P_{\infty,k}^{-1}&\text{criterion is met},\\ P_{k}^{-1}&\text{otherwise},\end{cases} \tag{59}\]
and hence \(P_{k}^{-1}-F_{k}\succ 0\). Moreover, note that when a criterion for resetting is met, \(F_{k}\succeq 0\) if and only if \(P_{k}\preceq P_{\infty,k}\). Thus, if \(P_{k}\preceq P_{\infty,k}\) whenever a criterion for resetting is met, then covariance resetting is proper.
Covariance resetting can similarly be applied to any RLS extension, resetting the covariance when a criterion is met and following the nominal algorithm otherwise. Such an algorithm would also be a special case of GF-RLS.
### Directional Forgetting by Information Matrix Decomposition
A directional forgetting algorithm based on the decomposition of the information matrix (i.e. inverse covariance matrix) is presented in [15]. This method was developed in the special case of scalar measurements (\(p=1\)) and can be summarized by the update equations
\[R_{k+1} =\bar{R}_{k}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{60}\] \[P_{k+1} =\bar{P}_{k}-\frac{\bar{P}_{k}\phi_{k}^{\mathrm{T}}\phi_{k}\bar{P }_{k}}{1+\phi_{k}\bar{P}_{k}\phi_{k}^{\mathrm{T}}},\] (61) \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}), \tag{62}\]
where
\[\bar{R}_{k} \triangleq\begin{cases}R_{k}-(1-\lambda)\frac{R_{k}\phi_{k}^{ \mathrm{T}}\phi_{k}R_{k}}{\phi_{k}R_{k}\phi_{k}^{\mathrm{T}}}&\|\phi_{k}\|> \varepsilon,\\ R_{k}&\|\phi_{k}\|\leq\varepsilon,\end{cases} \tag{63}\] \[\bar{P}_{k} \triangleq\begin{cases}P_{k}+\frac{1-\lambda}{\lambda}\frac{\phi_{ k}^{\mathrm{T}}\phi_{k}}{\phi_{k}R_{k}\phi_{k}^{\mathrm{T}}}&\|\phi_{k}\|> \varepsilon,\\ P_{k}&\|\phi_{k}\|\leq\varepsilon,\end{cases} \tag{64}\]
and where \(\varepsilon>0\), \(0<\lambda\leq 1\), and, for all \(k\geq 0\), \(R_{k}=P_{k}^{-1}\) and \(\bar{R}_{k}=\bar{P}_{k}^{-1}\).

Fig. 1: This flowchart summarizes how different extensions of RLS can be derived as special cases of GF-RLS (red). Furthermore, this chart summarizes how certain RLS extensions are special cases of other RLS extensions (black).
Comparing (60) and (62) to (9) and (10), it follows that directional forgetting by information matrix decomposition is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and \(F_{k}=R_{k}-\bar{R}_{k}\). If \(\|\phi_{k}\|>\varepsilon\), then \(F_{k}\) can be expressed as
\[F_{k}=(1-\lambda)\frac{P_{k}^{-1}\phi_{k}^{\mathrm{T}}\phi_{k}P_{k}^{-1}}{\phi _{k}P_{k}^{-1}\phi_{k}^{\mathrm{T}}}, \tag{65}\]
otherwise, \(F_{k}=0_{n\times n}\). It is shown in [15] that, for all \(k\geq 0\), \(\bar{R}_{k}=P_{k}^{-1}-F_{k}\succ 0\) and \(F_{k}\succeq 0\). Therefore, directional forgetting by information matrix decomposition is proper.
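The positive semidefiniteness of (65) can also be seen numerically. The following Python sketch (ours, for illustration) draws a random positive-definite \(R_{k}\) and regressor, forms \(F_{k}\) from (65), and checks that both \(F_{k}\succeq 0\) and \(\bar{R}_{k}=R_{k}-F_{k}\succ 0\), as shown in [15].

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 0.8, 3
X = rng.standard_normal((n, n))
R = X @ X.T + np.eye(n)                 # R_k = P_k^{-1}, positive definite
phi = rng.standard_normal((1, n))       # scalar-measurement case, p = 1

u = R @ phi.T                                  # R_k phi_k^T
F = (1 - lam) * (u @ u.T) / (phi @ u).item()   # eq. (65)
R_bar = R - F                                  # eq. (63) when ||phi_k|| > eps
assert np.all(np.linalg.eigvalsh(F) >= -1e-10)   # F_k is PSD
assert np.all(np.linalg.eigvalsh(R_bar) > 0)     # R_bar = P_k^{-1} - F_k > 0
```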
### Variable-Direction Forgetting
Variable-direction forgetting was developed in [5] and is based on the singular value decomposition of the inverse covariance matrix \(P_{k}^{-1}\). For all \(k\geq 0\), a positive-definite \(\Lambda_{k}\in\mathbb{R}^{n\times n}\) is constructed for the update equations
\[P_{k+1}^{-1} =\Lambda_{k}P_{k}^{-1}\Lambda_{k}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{66}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}). \tag{67}\]
Details on constructing \(\Lambda_{k}\) can be found in equations (67) and (68) of [5]. Comparing (66) and (67) to (9) and (10), it follows that variable-direction forgetting is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=P_{k}^{-1}-\Lambda_{k}P_{k}^{-1}\Lambda_{k}. \tag{68}\]
Note that, for all \(k\geq 0\), \(P_{k}^{-1}-F_{k}=\Lambda_{k}P_{k}^{-1}\Lambda_{k}\succ 0\). Moreover, it is shown in the proof of Proposition 9 of [5] that, for all \(k\geq 0\), \(P_{k}^{-1}-\Lambda_{k}P_{k}^{-1}\Lambda_{k}\succeq 0\). Therefore, variable-direction forgetting is proper.
### Tracking of Slowly Varying Parameters by Directional Forgetting
Another directional forgetting method, developed in [16] and analyzed in [17], was designed to track slowly varying parameters. A simulation study of this method can also be found in [37]. This method was developed in the special case of scalar measurements (\(p=1\)) and can be summarized by the update equations
\[P_{k+1}^{-1} =P_{k}^{-1}+\beta_{k}\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{69}\] \[\theta_{k+1} =\theta_{k}+\frac{1}{1+\phi_{k}P_{k}\phi_{k}^{\mathrm{T}}}P_{k} \phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}), \tag{70}\]
where, for all \(k\geq 0\),
\[\beta_{k}\triangleq\begin{cases}\mu-\frac{1-\mu}{\phi_{k}P_{k}\phi_{k}^{ \mathrm{T}}}&\phi_{k}P_{k}\phi_{k}^{\mathrm{T}}>0,\\ 1&\phi_{k}P_{k}\phi_{k}^{\mathrm{T}}=0,\end{cases} \tag{71}\]
and \(0<\mu\leq 1\) is the forgetting factor. To show this method is a special case of GF-RLS, first note that, for all \(k\geq 0\), (89) of Lemma 2 can be used to rewrite (70) as
\[\theta_{k+1}=\theta_{k}+(P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k})^{-1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}). \tag{72}\]
Next, define \(\bar{P}_{0}\triangleq P_{0}\) and, for all \(k\geq 0\), \(\bar{P}_{k+1}^{-1}\triangleq P_{k}^{-1}+\phi_{k}^{\mathrm{T}}\phi_{k}\). It then follows that, for all \(k\geq 0\), (72) and (69) can be rewritten as
\[\bar{P}_{k+1}^{-1} =\bar{P}_{k}^{-1}-(1-\beta_{k-1})\phi_{k-1}^{\mathrm{T}}\phi_{k-1 }+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{73}\] \[\theta_{k+1} =\theta_{k}+\bar{P}_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k} \theta_{k}), \tag{74}\]
where \(\beta_{-1}\triangleq 0\) and \(\phi_{-1}\triangleq 0_{1\times n}\).
Comparing (73) and (74) to (9) and (10), it follows that this directional forgetting method is a special case of GF-RLS where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and \(F_{k}=(1-\beta_{k-1})\phi_{k-1}^{\mathrm{T}}\phi_{k-1}\). If \(\phi_{k-1}P_{k-1}\phi_{k-1}^{\mathrm{T}}>0\), then \(F_{k}\) simplifies to
\[F_{k}=(1-\mu)\Big{(}\phi_{k-1}^{\mathrm{T}}\phi_{k-1}+\frac{\phi_{k-1}^{ \mathrm{T}}\phi_{k-1}}{\phi_{k-1}P_{k-1}\phi_{k-1}^{\mathrm{T}}}\Big{)}, \tag{75}\]
otherwise, \(F_{k}=0_{n\times n}\). It is shown in [17] that, for all \(k\geq 0\), \(P_{k}\succ 0\). Therefore, for all \(k\geq 0\), \(\bar{P}_{k}^{-1}-F_{k}=\bar{P}_{k+1}^{-1}-\phi_{k}^{\mathrm{T}}\phi_{k}=P_{k}^{-1}\succ 0\). Furthermore, for all \(k\geq 0\), \(\beta_{k}\leq 1\), and hence \(F_{k}\succeq 0\). Therefore, this directional forgetting method is proper.
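The step from (70) to (72) is an application of (89); the following Python snippet (our illustrative check) confirms that the two gain expressions agree for a random positive-definite \(P_{k}\) and a random regressor.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((n, n))
P = X @ X.T + np.eye(n)                 # P_k, positive definite
phi = rng.standard_normal((1, n))       # scalar measurement, p = 1
e = rng.standard_normal(1)              # innovation y_k - phi_k theta_k

gain_70 = (P @ phi.T @ e) / (1.0 + (phi @ P @ phi.T).item())          # eq. (70)
gain_72 = np.linalg.inv(np.linalg.inv(P) + phi.T @ phi) @ phi.T @ e   # eq. (72)
assert np.allclose(gain_70, gain_72)
```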
### Multiple Forgetting
Multiple forgetting was developed in [20] for the special case \(n=2\) and \(p=1\) to allow for different forgetting factors for the two parameters being estimated. To introduce multiple forgetting, we write, for all \(k\geq 0\), \(\phi_{k}\in\mathbb{R}^{1\times 2}\) as
\[\phi_{k}=\left[\phi_{1,k}\quad\phi_{2,k}\right]. \tag{76}\]
Then, multiple forgetting can be summarized by the update equations
\[R_{1,k+1} =\lambda_{1,k}R_{1,k}+\phi_{1,k}^{2}, \tag{77}\] \[R_{2,k+1} =\lambda_{2,k}R_{2,k}+\phi_{2,k}^{2},\] (78) \[\theta_{k+1} =\theta_{k}+L_{\mathrm{new},k}(y_{k}-\phi_{k}\theta_{k}), \tag{79}\]
where, for all \(k\geq 0\), \(\lambda_{1,k},\lambda_{2,k}\in(0,1]\), \(R_{1,k},R_{2,k}\in(0,\infty)\), and
\[L_{\mathrm{new},k}\triangleq\frac{1}{1+\frac{\phi_{1,k}^{2}}{\lambda_{1,k}R_{1,k}}+\frac{\phi_{2,k}^{2}}{\lambda_{2,k}R_{2,k}}}\begin{bmatrix}\frac{\phi_{1,k}}{\lambda_{1,k}R_{1,k}}\\ \frac{\phi_{2,k}}{\lambda_{2,k}R_{2,k}}\end{bmatrix}. \tag{80}\]
It was further shown in [38] that (77) through (80) are equivalent to the update equations
\[R_{k+1} =\begin{bmatrix}\lambda_{1,k}R_{1,k}&0\\ 0&\lambda_{2,k}R_{2,k}\end{bmatrix}+\phi_{k}^{\mathrm{T}}\phi_{k}, \tag{81}\] \[\theta_{k+1} =\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}(y_{k}-\phi_{k}\theta_{k}), \tag{82}\]
where, for all \(k\geq 0\), \(R_{k}\in\mathbb{R}^{2\times 2}\) is positive definite and \(P_{k}\triangleq R_{k}^{-1}\in\mathbb{R}^{2\times 2}\). Furthermore, for all \(k\geq 0\), denote \(R_{k}\) as
\[R_{k}\triangleq\begin{bmatrix}R_{1,k}&R_{12,k}\\ R_{12,k}&R_{2,k}\end{bmatrix}. \tag{83}\]
Note that (81) and (82) are equivalent to the GF-RLS update equations (9) and (10) where, for all \(k\geq 0\), \(\Gamma_{k}=I_{p}\) and
\[F_{k}=\begin{bmatrix}(1-\lambda_{1,k})R_{1,k}&R_{12,k}\\ R_{12,k}&(1-\lambda_{2,k})R_{2,k}\end{bmatrix}. \tag{84}\]
Furthermore, note that, for all \(k\geq 0\),
\[P_{k}^{-1}-F_{k}=\begin{bmatrix}\lambda_{1,k}R_{1,k}&0\\ 0&\lambda_{2,k}R_{2,k}\end{bmatrix}\succ 0, \tag{85}\]
since the diagonal elements of positive-definite \(R_{k}\) are positive. Note that, for all \(k\geq 0\), \(F_{k}\) is not necessarily positive semidefinite. However, for all \(k\geq 0\), since \(R_{k}\) is positive definite, there exist \(\lambda_{1,k},\lambda_{2,k}\in(0,1]\) small enough such that \(F_{k}\) is positive semidefinite. Hence, if, for all \(k\geq 0\), \(\lambda_{1,k},\lambda_{2,k}\) are chosen sufficiently small, then multiple forgetting is proper.
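The identity (85) and the possible indefiniteness of (84) are easy to check numerically; the following Python snippet (ours, illustrative) does so for a random positive-definite \(R_{k}\).

```python
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2 = 0.9, 0.7
X = rng.standard_normal((2, 2))
R = X @ X.T + np.eye(2)                           # positive-definite R_k
F = np.array([[(1 - lam1) * R[0, 0], R[0, 1]],
              [R[1, 0], (1 - lam2) * R[1, 1]]])   # eq. (84)
D = R - F                                         # P_k^{-1} - F_k
assert np.allclose(D, np.diag([lam1 * R[0, 0], lam2 * R[1, 1]]))  # eq. (85)
assert np.all(np.linalg.eigvalsh(D) > 0)
# F_k need not be PSD; its smallest eigenvalue can be negative
print(np.linalg.eigvalsh(F))
```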
## 6 Conclusion
This article develops GF-RLS, a general framework for RLS extensions derived from minimizing a least-squares cost function. Several RLS extensions are shown to be special cases of GF-RLS, and hence can be derived from the GF-RLS cost function. It is important to note that while the update equations of an RLS extension may not, at face value, seem to be a special case of the GF-RLS update equations, they may still be a special case with some re-definitions. For example, see subsections V-D, V-I, and V-J. This connects a cost function to many RLS extensions that were originally developed as ad hoc modifications to the RLS update equations (e.g., [7, 8, 11, 12, 15, 16, 17, 19, 20]).
Further, stability and robustness guarantees are presented for GF-RLS. These guarantees facilitate stability and robustness analysis for the various RLS extensions that are special cases of GF-RLS. Furthermore, a specialization of the robustness result gives a bound on the asymptotic bias of the least squares estimator in the errors-in-variables problem. Applications of this analysis include RLS-based adaptive control [39, 40] and online transfer function identification using least squares [41, 42]. We believe similar analysis may be used to derive tighter bounds if specialized to a single extension of RLS. Moreover, if statistical properties of the change in parameters, of the measurement noise, and/or of the regressor noise are known, such analysis may also allow for more informed tuning of different RLS extensions by tuning forgetting factors to minimize an ultimate bound on parameter estimation error.
## Appendix A: Useful Lemmas
**Lemma 1**: _Let \(A\in\mathbb{R}^{n\times n}\) be positive definite, let \(b\in\mathbb{R}^{n}\) and \(c\in\mathbb{R}\), and define \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) by_
\[f(x)\triangleq x^{\mathrm{T}}Ax+2b^{\mathrm{T}}x+c. \tag{86}\]
_Then, \(f\) has a unique stationary point, which is the global minimizer given by_
\[\operatorname*{arg\,min}_{x\in\mathbb{R}^{n}}f(x)=-A^{-1}b. \tag{87}\]
**Lemma 2** (Matrix Inversion Lemma): _Let \(A\in\mathbb{R}^{n\times n}\), \(U\in\mathbb{R}^{n\times p}\), \(C\in\mathbb{R}^{p\times p}\), and \(V\in\mathbb{R}^{p\times n}\). If \(A\), \(C\), and \(A+UCV\) are nonsingular, then \(C^{-1}+VA^{-1}U\) is nonsingular, and_
\[(A+UCV)^{-1}=A^{-1}-A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1}, \tag{88}\]
\[(A+UCV)^{-1}UC=A^{-1}U(C^{-1}+VA^{-1}U)^{-1}. \tag{89}\]
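As a quick sanity check of Lemma 2 (our illustrative snippet, not part of the paper), the following Python code verifies (88) and (89) on random matrices that are nonsingular for the chosen draw.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 4, 2
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # nonsingular for this draw
U = rng.standard_normal((n, p))
C = rng.standard_normal((p, p)) + 5.0 * np.eye(p)   # nonsingular for this draw
V = rng.standard_normal((p, n))

Ai = np.linalg.inv(A)
inner = np.linalg.inv(np.linalg.inv(C) + V @ Ai @ U)
lhs = np.linalg.inv(A + U @ C @ V)
assert np.allclose(lhs, Ai - Ai @ U @ inner @ V @ Ai)   # eq. (88)
assert np.allclose(lhs @ U @ C, Ai @ U @ inner)         # eq. (89)
```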
**Lemma 3**: _Let \(A\in\mathbb{R}^{n\times m}\) be the partitioned matrix_
\[A\triangleq\begin{bmatrix}A_{11}&\ldots&A_{1l}\\ \vdots&\ddots&\vdots\\ A_{k1}&\ldots&A_{kl}\end{bmatrix}, \tag{90}\]
_where, for all \(i\in\{1,\ldots,k\}\) and \(j\in\{1,\ldots,l\}\), \(A_{ij}\in\mathbb{R}^{n_{i}\times m_{j}}\). Then,_
\[\boldsymbol{\sigma_{\max}}(A)^{2}\leq\sum_{i=1}^{k}\sum_{j=1}^{l}\boldsymbol{ \sigma_{\max}}(A_{ij})^{2}. \tag{91}\]
See Theorem 1 of [43].
## Appendix B: Discrete-Time Stability Theory
Let \(f\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) and consider the system
\[x_{k+1}=f(k,x_{k}), \tag{92}\]
where, for all \(k\geq 0\), \(x_{k}\in\mathbb{R}^{n}\), and \(f(k,\cdot)\) is continuous.
**Definition 4**: _For \(x_{\mathrm{eq}}\in\mathbb{R}^{n}\), \(x_{k}\equiv x_{\mathrm{eq}}\) is an equilibrium of system (92) if, for all \(k\geq 0\), \(f(k,x_{\mathrm{eq}})=x_{\mathrm{eq}}\)._
The following definition is given by Definition 13.7 in [44, pp. 783, 784].
**Definition 5**: _If \(x_{k}\equiv 0\) is an equilibrium of (92), then define the following:_
1. _The equilibrium \(x_{k}\equiv 0\) of (92) is Lyapunov stable if, for all \(\varepsilon>0\) and \(k_{0}\geq 0\), there exists \(\delta>0\) such that, for all \(x_{k_{0}}\in\mathbb{R}^{n}\), if \(\|x_{k_{0}}\|<\delta\), then, for all \(k\geq k_{0}\), \(\|x_{k}\|<\varepsilon\)._
2. _The equilibrium_ \(x_{k}\equiv 0\) _of (_92_) is_ globally asymptotically stable_ _if it is Lyapunov stable and, for all_ \(k_{0}\geq 0\) _and_ \(x_{k_{0}}\in\mathbb{R}^{n}\)_,_ \(\lim_{k\to\infty}x_{k}=0\)_._
3. _The equilibrium \(x_{k}\equiv 0\) of (92) is_ globally uniformly exponentially stable _if there exist \(\alpha>0\) and \(\beta>1\) such that, for all \(k_{0}\geq 0\), \(x_{k_{0}}\in\mathbb{R}^{n}\), and \(k\geq k_{0}\), \(\|x_{k}\|\leq\alpha\|x_{k_{0}}\|\beta^{-(k-k_{0})}\)._
The following result is a specialization of Theorem 13.11 given in [44, pp. 784, 785].
**Theorem 4**: _Let \(\mathcal{D}\subset\mathbb{R}^{n}\) be an open set such that \(0\in\mathcal{D}\) and let \(x_{k}\equiv 0\) be an equilibrium of (92). Then the following statements hold:_
1. _Let_ \(V\colon\mathbb{N}_{0}\times\mathcal{D}\to\mathbb{R}\) _and assume that, for all_ \(k\in\mathbb{N}_{0}\)_,_ \(V(k,\cdot)\) _is continuous. Furthermore, assume there exists_ \(\alpha>0\) _such that, for all_ \(k\geq 0\) _and_ \(x\in\mathcal{D}\)_,_ \[V(k,0)=0,\] (93) \[\alpha\|x\|^{2}\leq V(k,x),\] (94) \[V(k+1,f(k,x))-V(k,x)\leq 0.\] (95) _Then, the equilibrium_ \(x_{k}\equiv 0\) _of (_92_) is Lyapunov stable._
2. _Let \(V\colon\mathbb{N}_{0}\times\mathcal{D}\to\mathbb{R}\) and assume that, for all \(k\in\mathbb{N}_{0}\), \(V(k,\cdot)\) is continuous. Furthermore, assume there exist \(\alpha>0\) and \(\beta>0\) such that, for all \(k\geq 0\) and \(x\in\mathcal{D}\), (94), (95), and_ \[V(k,x)\leq\beta\|x\|^{2}.\] (96)
Then, the equilibrium \(x_{k}\equiv 0\) of (92) is uniformly Lyapunov stable.
3. Let \(V\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) and assume that, for all \(k\in\mathbb{N}_{0}\), \(V(k,\cdot)\) is continuous. Furthermore, assume there exist \(\alpha>0\), and \(\gamma>0\) such that, for all \(k\geq 0\) and \(x\in\mathbb{R}^{n}\), (93), (94) and \[V(k+1,f(k,x))-V(k,x)\leq-\gamma\|x\|^{2}.\] (97) Then, the equilibrium \(x_{k}\equiv 0\) of (92) is globally asymptotically stable.
4. Let \(V\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) and assume that, for all \(k\in\mathbb{N}_{0}\), \(V(k,\cdot)\) is continuous. Furthermore, assume there exist \(\alpha>0\), \(\beta>0\), \(\gamma>0\) such that, for all \(k\geq 0\) and \(x\in\mathbb{R}^{n}\), (94), (96), and (97). Then, the equilibrium \(x_{k}\equiv 0\) of (92) is globally uniformly exponentially stable.
The following definition is given by Definition 13.9 in [44, pp. 789, 790].
**Definition 6**: _The system (92) is globally uniformly ultimately bounded with bound \(\varepsilon\) if, for all \(\delta\in(0,\infty)\), there exists \(K>0\) such that, for all \(k_{0}\geq 0\) and \(x_{k_{0}}\in\mathbb{R}^{n}\), if \(\|x_{k_{0}}\|<\delta\), then, for all \(k\geq k_{0}+K\), \(\|x_{k}\|<\varepsilon\)._
The following result is a specialization of Corollary 13.5 given in [44, pp. 790, 791].
**Theorem 5**: _Let \(V\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) and assume that, for all \(k\in\mathbb{N}_{0}\), \(V(k,\cdot)\) is continuous. Furthermore, assume there exist \(\alpha>0\) and \(\beta>0\) such that, for all \(k\geq 0\) and \(x\in\mathbb{R}^{n}\),_

\[\alpha\|x\|^{2}\leq V(k,x)\leq\beta\|x\|^{2}. \tag{98}\]
_Furthermore, assume there exist \(\mu>0\) and a continuous function \(W\colon\mathbb{R}^{n}\to\mathbb{R}\) such that, for all \(k\geq 0\) and \(\|x\|>\mu\), \(W(x)>0\) and_
\[V(k+1,f(k,x))-V(k,x)\leq-W(x). \tag{99}\]
_Finally, assume that \(\sup_{(k,x)\in\mathbb{N}_{0}\times\overline{\mathcal{B}}_{\mu}(0)}V(k+1,f(k,x))\) exists, where \(\overline{\mathcal{B}}_{\mu}(0)\triangleq\{x\in\mathbb{R}^{n}\colon\|x\|\leq\mu\}\). Then, for all \(\varepsilon\) such that1_
Footnote 1: Note that Corollary 13.5 of [44] writes \(\sup_{(k,x)\in\mathbb{N}_{0}}V(k,f(k,x))\) which is a typo that has been verified with the author W. M. Haddad of [44].
\[\varepsilon\geq\max\Big{\{}\mu,\sqrt{\sup_{(k,x)\in\mathbb{N}_{0}\times\overline{\mathcal{B}}_{\mu}(0)}V(k+1,f(k,x))}\Big{\}}, \tag{100}\]
_the system (92) is globally uniformly ultimately bounded with bound \(\varepsilon\)._
_Next, for all \(k\geq 0\), define \(f_{k}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) by, for all \(x\in\mathbb{R}^{n}\),_
\[f_{k}(x)=f(k,x). \tag{101}\]
_Furthermore, let \(N\geq 1\) and, for all \(l=0,1,\ldots,N-1\), define \(f_{l}^{N}\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) by, for all \(j\geq 0\) and \(x\in\mathbb{R}^{n}\),_
\[f_{l}^{N}(j,x)\triangleq(f_{jN+l+N-1}\circ\cdots\circ f_{jN+l+1}\circ f_{jN+l} )(x), \tag{102}\]
_and note that, for all \(j\geq 0\), \(f_{l}^{N}(j,\cdot)\) is continuous. Also note that, for all \(j\geq 0\),_
\[x_{(j+1)N+l}=f_{l}^{N}(j,x_{jN+l}). \tag{103}\]
_In other words, \(f_{l}^{N}\) can be used to evolve the states of (92) at time steps \(\{l,N+l,2N+l,\ldots\}\). Finally, for all \(l=0,1,\ldots,N-1\) and \(j\geq 0\), define \(x_{l,j}^{N}\in\mathbb{R}^{n}\) by_
\[x_{l,j}^{N}\triangleq x_{jN+l}, \tag{104}\]
_which gives the system_
\[x_{l,j+1}^{N}=f_{l}^{N}(j,x_{l,j}^{N}). \tag{105}\]
**Lemma 4**: _Let \(N\geq 1\), and assume that, for all \(l=0,\ldots,N-1\), the system (105) is globally uniformly ultimately bounded with bound \(\varepsilon\). Then, (92) is globally uniformly ultimately bounded with bound \(\varepsilon\)._
Let \(\delta_{0}\in(0,\infty)\), let \(k_{0}\geq 0\), and let \(x_{k_{0}}\in\mathbb{R}^{n}\). Assume that \(\|x_{k_{0}}\|<\delta_{0}\). Note that there exist \(j_{0}\geq 0\) and \(l_{0}\in\{0,\ldots,N-1\}\) such that \(k_{0}=j_{0}N+l_{0}\), and it follows by assumption that the system \(x_{l_{0},j+1}^{N}=f_{l_{0}}^{N}(j,x_{l_{0},j}^{N})\) is globally uniformly ultimately bounded with bound \(\varepsilon\). Hence, there exists \(J_{0}\geq 0\) such that, for all \(j\geq j_{0}+J_{0}\), \(\|x_{jN+l_{0}}\|<\varepsilon\). Equivalently, for all \(j\geq J_{0}\), \(\|x_{k_{0}+jN}\|<\varepsilon\).
Next, for all \(i=1,2,\ldots,N-1\), note that there exists \(\delta_{i}\in(0,\infty)\) such that \(\|x_{k_{0}+i}\|<\delta_{i}\). By similar reasoning as before, there exists \(J_{i}\geq 0\) such that, for all \(j\geq J_{i}\), \(\|x_{k_{0}+i+jN}\|<\varepsilon\).
Finally, let \(K\triangleq(\max\{J_{0},\ldots,J_{N-1}\}+1)N\). Then, for all \(k\geq k_{0}+K\), there exist \(i\in\{0,1,\ldots,N-1\}\) and \(j\geq J_{i}\) such that \(k=k_{0}+i+jN\), and hence \(\|x_{k}\|<\varepsilon\).
## Appendix C: Proof of Theorem 1
[Proof of Theorem 1] First note that it follows from (4) that \(J_{0}(\hat{\theta})\) can be written as \(J_{0}(\hat{\theta})=\hat{\theta}^{\mathrm{T}}H_{0}\hat{\theta}+2b_{0}^{\mathrm{T}} \hat{\theta}+c_{0}\), where
\[H_{0} \triangleq\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}\phi_{0}+P_{0}^{-1}-F_{ 0},\] \[b_{0} \triangleq-\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}y_{0}-(P_{0}^{-1}-F_ {0})\theta_{0},\] \[c_{0} \triangleq y_{0}^{\mathrm{T}}\Gamma_{0}^{-1}y_{0}+\theta_{0}^{ \mathrm{T}}(P_{0}^{-1}-F_{0})\theta_{0}.\]
Defining \(P_{1}\triangleq H_{0}^{-1}\), it follows that (9) holds for \(k=0\). Furthermore, it follows from (3) with \(k=0\) that \(P_{0}^{-1}-F_{0}\succ 0\), and hence \(H_{0}\succeq P_{0}^{-1}-F_{0}\succ 0\). Therefore, Lemma 1 implies that \(J_{0}\) has the unique minimizer \(\theta_{1}\in\mathbb{R}^{n}\) given by
\[\theta_{1}=-H_{0}^{-1}b_{0}=P_{1}[\phi_{0}^{\mathrm{T}}\Gamma_{0}^{ -1}y_{0}+(P_{0}^{-1}-F_{0})\theta_{0}]\] \[=P_{1}[\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}y_{0}+(P_{0}^{-1}-F_ {0}+\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}\phi_{0})\theta_{0}-\phi_{0}^{\mathrm{T}} \Gamma_{0}^{-1}\phi_{0}\theta_{0}]\] \[=P_{1}[\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}y_{0}+P_{1}^{-1}\theta_ {0}-\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}\phi_{0}\theta_{0}]\] \[=\theta_{0}+P_{1}\phi_{0}^{\mathrm{T}}\Gamma_{0}^{-1}(y_{0}-\phi_{0} \theta_{0}).\]
Hence, (10) is satisfied for \(k=0\).
Now, let \(k\geq 1\). It follows from (4) that \(J_{k}(\hat{\theta})\) can be written as \(J_{k}(\hat{\theta})=\hat{\theta}^{\mathrm{T}}H_{k}\hat{\theta}+2b_{k}^{\mathrm{T}}\hat{\theta}+c_{k}\) for some \(H_{k}\in\mathbb{R}^{n\times n}\), \(b_{k}\in\mathbb{R}^{n}\), and \(c_{k}\in\mathbb{R}\).
Furthermore, \(H_{k}\) and \(b_{k}\) can be written recursively as
\[H_{k}=H_{k-1}-F_{k}+\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k},\] \[b_{k}=b_{k-1}-\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}y_{k}+F_{k}\theta_{k}.\]
Defining \(P_{k+1}\triangleq H_{k}^{-1}\), it follows that (9) is satisfied. Furthermore, it follows from (3) that \(H_{k}\) is positive definite. Therefore, Lemma 1 implies that \(J_{k}\) has the unique minimizer \(\theta_{k+1}\) given by
\[\theta_{k+1}=-H_{k}^{-1}b_{k}=-P_{k+1}b_{k}\] \[=-P_{k+1}(b_{k-1}-\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}y_{k}+F_{k}\theta_{k})\] \[=P_{k+1}(P_{k}^{-1}\theta_{k}+\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}y_{k}-F_{k}\theta_{k})\] \[=P_{k+1}\big{[}(P_{k+1}^{-1}-\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k}+F_{k})\theta_{k}+\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}y_{k}-F_{k}\theta_{k}\big{]}\] \[=P_{k+1}\big{[}P_{k+1}^{-1}\theta_{k}-\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}\phi_{k}\theta_{k}+\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}y_{k}\big{]}\] \[=\theta_{k}+P_{k+1}\phi_{k}^{\mathrm{T}}\Gamma_{k}^{-1}(y_{k}-\phi_{k}\theta_{k}).\]
Hence, (10) is satisfied.
## Appendix D: Proof of 1) and 2) of Theorem 2
**Lemma 5**: _For all \(k\geq 0\), let \(F_{k}\succeq 0\) and assume there exists \(b\in(0,\infty)\) such that \((P_{k}^{-1}-F_{k})^{-1}\preceq bI_{n}\). Then, \(P_{k}\preceq bI_{n}\)._
Proof:: It follows from \((P_{k}^{-1}-F_{k})^{-1}\preceq bI_{n}\) that \(P_{k}^{-1}-F_{k}\succeq\frac{1}{b}I_{n}\), and hence \(P_{k}^{-1}\succeq\frac{1}{b}I_{n}+F_{k}\succeq\frac{1}{b}I_{n}\). Therefore, \(P_{k}\preceq bI_{n}\).
**Lemma 6**: _For all \(k\geq 0\), define \(G_{k}\in\mathbb{R}^{p\times p}\) by_
\[G_{k}\triangleq I_{p}+\bar{\phi}_{k}(P_{k}^{-1}-F_{k})^{-1}\bar{\phi}_{k}^{ \mathrm{T}}. \tag{106}\]
_Then, \(G_{k}\) is nonsingular, and_
\[G_{k}^{-1}=I_{p}-\bar{\phi}_{k}P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}. \tag{107}\]
Proof:: Let \(k\geq 0\). It follows from (3) and (11) that \(P_{k}^{-1}-F_{k}\succ 0_{n\times n}\). Therefore, \(\bar{\phi}_{k}(P_{k}^{-1}-F_{k})^{-1}\bar{\phi}_{k}^{\mathrm{T}}\succeq 0\). It then follows that \(G_{k}\succeq I_{p}\), and hence \(G_{k}\) is nonsingular. Next, it follows from substituting (9) into (106) that
\[G_{k}=I_{p}+\bar{\phi}_{k}[P_{k+1}^{-1}-\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi} _{k}]^{-1}\bar{\phi}_{k}^{\mathrm{T}}. \tag{108}\]
Finally, applying (88) of Lemma 2 to (108) gives (107).
For all \(k\geq 0\), define \(\Delta V_{k}\in\mathbb{R}^{n\times n}\) by
\[\Delta V_{k}\triangleq-M_{k}^{\mathrm{T}}P_{k+1}^{-1}M_{k}+P_{k}^{-1}, \tag{109}\]
where, for all \(k\geq 0\), \(M_{k}\) is defined in (16).
**Lemma 7**: _For all \(k\geq 0\),_
\[\Delta V_{k}\succeq F_{k}+\frac{\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}}{1+ \mathbf{\lambda_{\max}}(\bar{\phi}_{k}\bar{\phi}_{k}^{\mathrm{T}})\mathbf{\lambda_{ \max}}((P_{k}^{-1}-F_{k})^{-1})}. \tag{110}\]
Proof:: Let \(k\geq 0\). It follows from substituting (19) into (109) that \(\Delta V_{k}\) can be expanded as
\[\Delta V_{k}=-P_{k+1}^{-1}+2\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}-\bar{ \phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi }_{k}+P_{k}^{-1}. \tag{111}\]
It then follows from substituting (9) into (111) that
\[\Delta V_{k}=F_{k}+\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}-\bar{\phi}_{k}^{ \mathrm{T}}\bar{\phi}_{k}P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k},\] \[=F_{k}+\bar{\phi}_{k}^{\mathrm{T}}(I_{p}-\bar{\phi}_{k}P_{k+1}\bar{ \phi}_{k}^{\mathrm{T}})\bar{\phi}_{k}.\]
It then follows from Lemma 6 that
\[\Delta V_{k}= F_{k}+\bar{\phi}_{k}^{\mathrm{T}}G_{k}^{-1}\bar{\phi}_{k}, \tag{112}\]
where \(G_{k}\) is defined in (106). Finally, it follows from (106) that
\[G_{k}\preceq\big{[}1+\mathbf{\lambda_{\max}}(\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi} _{k})\mathbf{\lambda_{\max}}\left((P_{k}^{-1}-F_{k})^{-1}\right)\big{]}\,I_{p}. \tag{113}\]
Combining (112) and (113) yields (110).
Proof:: _Proof of statements 1) and 2) of Theorem 2._ Define \(V\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) by
\[V(k,\tilde{\theta})\triangleq\tilde{\theta}^{\mathrm{T}}P_{k}^{-1}\tilde{\theta}.\]
Note that, for all \(k\geq 0\),
\[V(k,0)=0. \tag{114}\]
Next, from (15), we define \(f\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) by
\[f(k,\tilde{\theta})\triangleq M_{k}\tilde{\theta}.\]
Note that, for all \(k\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[V(k+1,f(k,\tilde{\theta}))-V(k,\tilde{\theta})=-\tilde{\theta}^{\mathrm{T}}\Delta V_{k}\tilde{\theta}.\]
Then, for all \(k\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\), it follows from Lemma 7 and condition _A1)_ that
\[V(k+1,f(k,\tilde{\theta}))-V(k,\tilde{\theta})\leq-\tilde{\theta}^{\mathrm{T}}F_{ k}\tilde{\theta}\leq 0. \tag{115}\]
We now prove statements 1) and 2):
1. By Lemma 5, conditions A1) and A2) imply that, for all \(k\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\), \[\frac{1}{b}\|\tilde{\theta}\|^{2}\leq V(k,\tilde{\theta}).\] (116) Equations (114), (115), and (116) imply that (93), (94), and (95) are satisfied. It then follows from part _i)_ of Theorem 4 that the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is Lyapunov stable.
2. Condition _A3)_ further implies that, for all \(k\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\), \[V(k,\tilde{\theta})\leq\frac{1}{a}\|\tilde{\theta}\|^{2}.\] (117) Equations (115), (116), and (117) imply that (94), (95), and (96) are satisfied. By part _ii)_ of Theorem 4, it follows that the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is uniformly Lyapunov stable.
## Appendix E: Proof of 3) and 4) of Theorem 2
For all \(k\geq 0\), \(i\geq 1\), define \(W_{k,i}\in\mathbb{R}^{p\times p}\) by
\[W_{k,i}\triangleq\bar{\phi}_{k+i}P_{k+1}\bar{\phi}_{k}^{\mathrm{T}}. \tag{118}\]
For all \(k\geq 0\), define \(\Phi_{k,1}\triangleq\bar{\phi}_{k}\), \(\Psi_{k,1}\triangleq\bar{\phi}_{k}\), and \(\mathcal{W}_{k,1}\triangleq I_{p}\). Furthermore, for all \(k\geq 0\) and \(N\geq 2\), define \(\Phi_{k,N}\in\mathbb{R}^{Np\times n}\), \(\Psi_{k,N}\in\mathbb{R}^{Np\times n}\), and \(\mathcal{W}_{k,N}\in\mathbb{R}^{Np\times Np}\) by
\[\Phi_{k,N}\triangleq\begin{bmatrix}\bar{\phi}_{k}\\ \vdots\\ \bar{\phi}_{k+N-1}\end{bmatrix}, \tag{119}\] \[\Psi_{k,N}\triangleq\begin{bmatrix}\bar{\phi}_{k}\\ \bar{\phi}_{k+1}M_{k}\\ \vdots\\ \bar{\phi}_{k+N-1}M_{k+N-2}\cdots M_{k+1}M_{k}\end{bmatrix},\] (120) \[\mathcal{W}_{k,N}\triangleq\begin{bmatrix}I_{p}&0&0&\cdots&0\\ W_{k,1}&I_{p}&0&\cdots&0\\ W_{k,2}&W_{k+1,1}&I_{p}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ W_{k,N-1}&W_{k+1,N-2}&W_{k+2,N-3}&\cdots&I_{p}\end{bmatrix}. \tag{121}\]
**Lemma 8**: _For all \(k\geq 0\) and \(N\geq 2\),_
\[\begin{bmatrix}W_{k,N-1}&W_{k+1,N-2}&\cdots&W_{k+N-2,1}&I_{p} \end{bmatrix}\Psi_{k,N}\\ =\bar{\phi}_{k+N-1}. \tag{122}\]
Let \(k\geq 0\) and \(N\geq 2\). We first show that
\[\begin{bmatrix}W_{k,N-1}&\cdots&W_{k+N-2,1}\end{bmatrix}\Psi_{k,N-1}\\ =\bar{\phi}_{k+N-1}(I_{n}-M_{k+N-2}\cdots M_{k+1}M_{k}). \tag{123}\]
It follows from (118) that, for all \(0\leq i\leq N-2\),
\[W_{k+i,N-1-i}\bar{\phi}_{k+i} =\bar{\phi}_{k+N-1}P_{k+i+1}\bar{\phi}_{k+i}^{\mathrm{T}}\bar{\phi}_{k+i}\] \[=\bar{\phi}_{k+N-1}(I_{n}-M_{k+i}).\]
Substituting this identity into the left-hand side of (123), it follows that
\[\begin{bmatrix}W_{k,N-1}&\cdots&W_{k+N-2,1}\end{bmatrix}\Psi_{k,N-1}\\ =W_{k,N-1}\bar{\phi}_{k}+\sum_{i=1}^{N-2}W_{k+i,N-1-i}\bar{\phi}_{k+i}M_{k+i-1}\cdots M_{k}\\ =\bar{\phi}_{k+N-1}\Big{(}I_{n}-M_{k}+\sum_{i=1}^{N-2}(I_{n}-M_{k+i})M_{k+i-1}\cdots M_{k}\Big{)}.\]

Note that this forms a telescoping series, which, by cancellation of successive terms, simplifies to the right-hand side of (123). Hence, (123) is proven. Next, adding \(\bar{\phi}_{k+N-1}M_{k+N-2}\cdots M_{k+1}M_{k}\) to both sides of (123) gives (122).
**Lemma 9**: _For all \(k\geq 0\) and \(N\geq 1\),_
\[\mathcal{W}_{k,N}\Psi_{k,N}=\Phi_{k,N}. \tag{124}\]
For all \(k\geq 0\) and \(N=1\), (124) simplifies to \(\bar{\phi}_{k}=\bar{\phi}_{k}\). Next, note that, for all \(k\geq 0\) and \(N\geq 2\), \(\mathcal{W}_{k,N}\Psi_{k,N}\) can be written as

\[\mathcal{W}_{k,N}\Psi_{k,N}=\begin{bmatrix}\bar{\phi}_{k}\\ \begin{bmatrix}W_{k,1}&I_{p}\end{bmatrix}\Psi_{k,2}\\ \begin{bmatrix}W_{k,2}&W_{k+1,1}&I_{p}\end{bmatrix}\Psi_{k,3}\\ \vdots\\ \begin{bmatrix}W_{k,N-1}&\cdots&W_{k+N-2,1}&I_{p}\end{bmatrix}\Psi_{k,N}\end{bmatrix}.\]
Hence, (124) follows from repeated application of Lemma 8 to the row partitions of \(\mathcal{W}_{k,N}\Psi_{k,N}\).
**Lemma 10**: _Assume there exists \(b\in(0,\infty)\) such that, for all \(k\geq 0\), \(P_{k}\preceq bI_{n}\). Furthermore, assume \((\bar{\phi}_{k})_{k=0}^{\infty}\) is bounded with upper bound \(\bar{\beta}\in(0,\infty)\). Then, for all \(k\geq 0\) and \(N\geq 1\),_
\[\boldsymbol{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}\leq N\left[1+\frac{N-1}{2} \left(b\bar{\beta}\right)^{2}\right]. \tag{125}\]
If \(N=1\), (125) simplifies to \(\boldsymbol{\sigma_{\max}}(I_{p})^{2}\leq 1\), which holds. Next, let \(k\geq 0\) and \(N\geq 2\). It follows from Lemma 3 that \(\boldsymbol{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}\leq N\boldsymbol{\sigma_{\max}}(I_{p})^{2}+\sum_{i=1}^{N-1}\sum_{j=1}^{N-i}\boldsymbol{\sigma_{\max}}(W_{k-1+i,j})^{2}\). Note that, for all \(1\leq i\leq N-1\) and \(1\leq j\leq N-i\), it follows from (118) that \(\boldsymbol{\sigma_{\max}}(W_{k-1+i,j})\leq b\bar{\beta}\). Using this inequality, it follows that \(\boldsymbol{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}\leq N+\sum_{i=1}^{N-1}\sum_{j=1}^{N-i}\left(b\bar{\beta}\right)^{2}\), which simplifies to \(\boldsymbol{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}\leq N+\frac{N(N-1)}{2}\left(b\bar{\beta}\right)^{2}\), and (125) follows.
For all \(k\geq 0\) and \(N\geq 1\), define \(\Delta^{N}V_{k}\in\mathbb{R}^{n\times n}\) by
\[\Delta^{N}V_{k}\triangleq-M_{k}^{\mathrm{T}}\cdots M_{k+N-1}^{\mathrm{T}}P_{k+N }^{-1}M_{k+N-1}\cdots M_{k}+P_{k}^{-1}. \tag{126}\]
**Lemma 11**: _Assume that, for all \(k\geq 0\), \(F_{k}\succeq 0_{n\times n}\). Assume there exists \(b\in(0,\infty)\) such that, for all \(k\geq 0\), \(P_{k}\preceq bI_{n}\). Furthermore, assume \((\bar{\phi}_{k})_{k=0}^{\infty}\) is bounded with upper bound \(\bar{\beta}\in(0,\infty)\). Then, for all \(k\geq 0\) and \(N\geq 1\),_
\[\Delta^{N}V_{k}\succeq(1+b\bar{\beta})^{-1}\Psi_{k,N}^{\mathrm{T}}\Psi_{k,N}. \tag{127}\]
For brevity, we define \(\nu\triangleq(1+b\bar{\beta})^{-1}\). The proof follows by induction on \(N\). First, let \(k\geq 0\) and consider the base case \(N=1\). Note that \(\Delta^{1}V_{k}=\Delta V_{k}\) and \(\Psi_{k,1}=\bar{\phi}_{k}\). Hence, it follows from Lemma 7 that \(\Delta^{1}V_{k}\succeq F_{k}+\nu\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}\succeq\nu\Psi_{k,1}^{\mathrm{T}}\Psi_{k,1}\). Next, let \(N\geq 2\). Note that \(\Delta^{N}V_{k}\), given by (126), can be expressed recursively as
\[\Delta^{N}V_{k} =M_{k}^{\mathrm{T}}(\Delta^{N-1}V_{k+1}-P_{k+1}^{-1})M_{k}+P_{k}^{-1}\] \[=M_{k}^{\mathrm{T}}\Delta^{N-1}V_{k+1}M_{k}+\Delta V_{k}.\]
It follows from the inductive hypothesis that \(\Delta^{N-1}V_{k+1}\succeq\nu\Psi_{k+1,N-1}^{\mathrm{T}}\Psi_{k+1,N-1}\). Substituting into the previous equation gives
\[\Delta^{N}V_{k} \succeq\nu M_{k}^{\mathrm{T}}\Psi_{k+1,N-1}^{\mathrm{T}}\Psi_{k +1,N-1}M_{k}+\nu\bar{\phi}_{k}^{\mathrm{T}}\bar{\phi}_{k}\] \[=\nu\left[\bar{\phi}_{k}^{\mathrm{T}}\quad M_{k}^{\mathrm{T}} \Psi_{k+1,N-1}^{\mathrm{T}}\right]\left[\begin{matrix}\bar{\phi}_{k}\\ \Psi_{k+1,N-1}M_{k}\end{matrix}\right].\]
Note, from (120), that \(\left[\bar{\phi}_{k}^{\mathrm{T}}\quad M_{k}^{\mathrm{T}}\Psi_{k+1,N-1}^{ \mathrm{T}}\right]=\Psi_{k,N}^{\mathrm{T}}\), and (127) follows.
**Lemma 12**: _Assume that, for all \(k\geq 0\), \(F_{k}\succeq 0_{n\times n}\). Also assume there exists \(b\in(0,\infty)\) such that, for all \(k\geq 0\), \(P_{k}\preceq bI_{n}\). Furthermore, assume \((\bar{\phi}_{k})_{k=0}^{\infty}\) is persistently exciting with lower bound \(\bar{\alpha}>0\) and persistency window \(N\) and bounded with upper bound \(\bar{\beta}\in(0,\infty)\). Then, for all \(k\geq 0\) and \(N\geq 1\),_
\[\Delta^{N}V_{k}\succeq c_{N}I_{n}, \tag{128}\]
_where_
\[c_{N}\triangleq\frac{\bar{\alpha}}{N(1+b\bar{\beta})\left[1+\frac{N-1}{2}\left(b\bar{\beta}\right)^{2}\right]}. \tag{129}\]
Let \(k\geq 0\). Note that \(\mathcal{W}_{k,N}\) is lower triangular with identity diagonal blocks and is hence nonsingular. It then follows from Lemma 9 that \(\Psi_{k,N}=\mathcal{W}_{k,N}^{-1}\Phi_{k,N}\) and thus
\[\Psi_{k,N}^{\rm T}\Psi_{k,N}\succeq\frac{\Phi_{k,N}^{\rm T}\Phi_{k,N}}{\mathbf{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}}. \tag{130}\]
Moreover, by Lemma 10,
\[\frac{1}{\mathbf{\sigma_{\max}}(\mathcal{W}_{k,N})^{2}}\geq\frac{1}{N}\left[1+ \frac{N-1}{2}\left(b\bar{\beta}\right)^{2}\right]^{-1}. \tag{131}\]
Furthermore, persistent excitation of \((\bar{\phi}_{k})_{k=0}^{\infty}\) implies that
\[\Phi_{k,N}^{\rm T}\Phi_{k,N}\succeq\bar{\alpha}I_{n}. \tag{132}\]
Finally, substituting (130), (131), and (132) into (127) yields (128).
**Lemma 13**: _Assume, for all \(k\geq 0\), \(F_{k}\succeq 0\). Moreover, assume there exists \(b\in(0,\infty)\) such that, for all \(k\geq 0\), \((P_{k}^{-1}-F_{k})^{-1}\preceq bI_{n}\). Furthermore, assume \((\bar{\phi}_{k})_{k=0}^{\infty}\) is bounded with upper bound \(\bar{\beta}\in(0,\infty)\). Then, for all \(k\geq 0\),_
\[\|\tilde{\theta}_{k+1}\|\leq(1+b\bar{\beta})\|\tilde{\theta}_{k}\|. \tag{133}\]
Proof: Let \(k\geq 0\). It follows from Lemma 5 that, for all \(k\geq 0\), \(P_{k}\preceq bI_{n}\). Next, it follows from (15) and (19) that
\[\|\tilde{\theta}_{k+1}\| \leq(\mathbf{\sigma_{\max}}(I_{n})+\mathbf{\sigma_{\max}}(P_{k+1})\mathbf{ \sigma_{\max}}(\bar{\phi}_{k}^{\rm T}\bar{\phi}_{k}))\|\tilde{\theta}_{k}\|\] \[\leq(1+b\bar{\beta})\|\tilde{\theta}_{k}\|.\]
Proof of statements 3) and 4) of Theorem 2: Note that repeated substitution of (15) gives, for all \(j\geq 0\),
\[\tilde{\theta}_{(j+1)N}=M_{(j+1)N-1}\cdots M_{jN}\tilde{\theta}_{jN}.\]
Hence, we define \(f^{N}\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) by, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[f^{N}(j,\tilde{\theta})\triangleq M_{jN+N-1}\cdots M_{jN+1}M_{jN}\tilde{ \theta}.\]
Further, for all \(j\geq 0\), define \(\tilde{\theta}_{j}^{N}\in\mathbb{R}^{n}\) by \(\tilde{\theta}_{j}^{N}\triangleq\tilde{\theta}_{jN}\), which yields the system
\[\tilde{\theta}_{j+1}^{N}=f^{N}(j,\tilde{\theta}_{j}^{N}). \tag{134}\]
Next, define \(V^{N}\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) by
\[V^{N}(j,\tilde{\theta})\triangleq\tilde{\theta}^{\rm T}P_{jN}^{-1}\tilde{ \theta},\]
and note that, for all \(j\geq 0\),
\[V^{N}(j,0)=0. \tag{135}\]
Also, note that, for all \(j\geq 0\),
\[V^{N}\big{(}j+1,f^{N}(j,\tilde{\theta})\big{)}-V^{N}\big{(}j,\tilde{\theta} \big{)}=-\tilde{\theta}^{\rm T}\Delta^{N}V_{jN}\tilde{\theta},\]
where \(\Delta^{N}V_{jN}\in\mathbb{R}^{n\times n}\) is defined in (126). It then follows from Lemma 12 that, for all \(j\geq 0\),
\[V^{N}\big{(}j+1,f^{N}(j,\tilde{\theta})\big{)}-V^{N}\big{(}j,\tilde{\theta} \big{)}\leq-c_{N}\|\tilde{\theta}\|^{2}, \tag{136}\]
where \(c_{N}>0\) is defined in (129). Furthermore, it follows from Lemma 13 that, for all \(j\geq 0\) and \(l=1,\ldots,N-1\),
\[\|\tilde{\theta}_{jN+l}\|\leq\big{(}1+b\bar{\beta}\big{)}^{N-1}\|\tilde{ \theta}_{jN}\|. \tag{137}\]
We now prove statements 3) and 4):
* By Lemma 5, conditions _A1)_ and _A2)_ imply that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\), \[\frac{1}{b}\|\tilde{\theta}\|^{2}\leq V^{N}(j,\tilde{\theta}).\] (138) Equations (135), (136), and (138) imply that (93), (94), and (97) are satisfied. Hence, by part _iii)_ of Theorem 4, the equilibrium \(\tilde{\theta}_{j}^{N}\equiv 0\) of (134) is globally asymptotically stable. We now show that the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is also globally asymptotically stable. Let \(\varepsilon>0\) and \(k_{0}\geq 0\). Write \(k_{0}=j_{0}N+l_{0}\), where \(j_{0}\geq 0\) and \(0\leq l_{0}\leq N-1\). Since the equilibrium \(\tilde{\theta}_{j}^{N}\equiv 0\) of (134) is Lyapunov stable, we can choose \(\delta\) such that \(\|\tilde{\theta}_{j_{0}N}\|<\delta\) implies that, for all \(j\geq j_{0}\), \[\|\tilde{\theta}_{jN}\|<\varepsilon(1+b\bar{\beta})^{1-N}.\] (139) Let \(\|\tilde{\theta}_{k_{0}}\|<\delta\). For all \(k^{*}\geq k_{0}\), write \(k^{*}=j^{*}N+l^{*}\) and note that \(j^{*}\geq j_{0}\). It then follows from (137) and (139) that, for all \(k^{*}\geq k_{0}\), \[\|\tilde{\theta}_{k^{*}}\| \leq(1+b\bar{\beta})^{N-1}\|\tilde{\theta}_{j^{*}N}\|\] \[<(1+b\bar{\beta})^{N-1}\varepsilon(1+b\bar{\beta})^{1-N}=\varepsilon.\] Hence, the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is Lyapunov stable. A similar argument using (137) can be used to show that \(\lim_{j\to\infty}\tilde{\theta}_{jN}=0\) implies that \(\lim_{k\to\infty}\tilde{\theta}_{k}=0\). Therefore, the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is globally asymptotically stable.
* Condition _A3)_ further implies that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\), \[V^{N}(j,\tilde{\theta})\leq\frac{1}{a}\|\tilde{\theta}\|^{2}.\] (140) Equations (136), (138), and (140) imply that (94), (96), and (97) are satisfied. Hence, by part _iv)_ of Theorem 4, the equilibrium \(\tilde{\theta}_{j}^{N}\equiv 0\) of (134) is globally uniformly exponentially stable. Using (137) in an argument similar to the proof of statement 3), it can be shown that the equilibrium \(\tilde{\theta}_{k}\equiv 0\) of (15) is also globally uniformly exponentially stable.
## Appendix F Lemmas used in the proof of Theorem 3
Suppose, for all \(k\geq 0\), there exists \(\zeta_{k}\in\mathbb{R}^{n}\) such that
\[\tilde{\theta}_{k+1}=M_{k}(\tilde{\theta}_{k}-\zeta_{k}), \tag{141}\]
where \(\tilde{\theta}_{k}\) is defined in (25) and \(M_{k}\) is defined in (16). Next, for all \(k\geq 0\) and \(i\geq 1\), define \(\mathcal{M}_{k,i}\in\mathbb{R}^{n\times n}\) by
\[\mathcal{M}_{k,i}\triangleq\begin{cases}M_{k}&i=1,\\ M_{k+i-1}\cdots M_{k+1}M_{k}&i\geq 2.\end{cases} \tag{142}\]
Moreover, for all \(k\geq 0\), let \(P_{k}^{-\frac{1}{2}}\in\mathbb{R}^{n\times n}\) be the unique positive-semidefinite matrix such that \(P_{k}^{-1}=(P_{k}^{-\frac{1}{2}})^{\mathrm{T}}P_{k}^{-\frac{1}{2}}\).
**Lemma 14**: _Assume conditions _A1)_, _A2)_, _A3)_, and _A4)_ hold. Then, for all \(k\geq 0\) and \(N\geq 1\),
\[\mathbf{\sigma_{\max}}(P_{k+N}^{-\frac{1}{2}}\mathcal{M}_{k,N})\leq\sqrt{\frac{1}{a }-c_{N}}, \tag{143}\]
where \(c_{N}\) is defined in (129).
It follows from Lemma 12 that \(-\Delta^{N}V_{k}=\mathcal{M}_{k,N}^{\rm T}P_{k+N}^{-1}\mathcal{M}_{k,N}-P_{k}^{-1}\preceq-c_{N}I_{n}\). Hence,
\[\mathcal{M}_{k,N}^{\rm T}P_{k+N}^{-1}\mathcal{M}_{k,N} =(P_{k+N}^{-\frac{1}{2}}\mathcal{M}_{k,N})^{\rm T}(P_{k+N}^{-\frac {1}{2}}\mathcal{M}_{k,N})\] \[\preceq P_{k}^{-1}-c_{N}I_{n}\preceq(\frac{1}{a}-c_{N})I_{n}\]
and (143) follows.
**Lemma 15**: _Assume, for all \(k\geq 0\), there exists \(\zeta_{k}\in\mathbb{R}^{n}\) such that (141) holds. Then, for all \(N\geq 1\) and \(k\geq 0\),_
\[\tilde{\theta}_{k+N}=\mathcal{M}_{k,N}\tilde{\theta}_{k}-\overline{\mathcal{ M}}_{k,N}\bar{\zeta}_{k,N}, \tag{144}\]
_where \(\overline{\mathcal{M}}_{k,N}\in\mathbb{R}^{n\times Nn}\) and \(\bar{\zeta}_{k,N}\in\mathbb{R}^{Nn\times 1}\) are defined by_
\[\overline{\mathcal{M}}_{k,N}\triangleq\begin{bmatrix}\mathcal{M}_{k,N}& \mathcal{M}_{k+1,N-1}&\cdots&\mathcal{M}_{k+N-1,1}\end{bmatrix}, \tag{145}\]
_\[\bar{\zeta}_{k,N}\triangleq\begin{bmatrix}\zeta_{k}^{\rm T}&\zeta_{k+1}^{\rm T}&\cdots&\zeta_{k+N-1}^{\rm T}\end{bmatrix}^{\rm T}.\] (146)_
Let \(k\geq 0\); the proof follows by induction on \(N\geq 1\). First, consider the base case \(N=1\). Note that \(\overline{\mathcal{M}}_{k,1}=\mathcal{M}_{k,1}=M_{k}\) and \(\bar{\zeta}_{k,1}=\zeta_{k}\). Hence, (144) follows immediately from (141). Next, let \(N\geq 2\). By the inductive hypothesis, \(\tilde{\theta}_{k+N-1}=\mathcal{M}_{k,N-1}\tilde{\theta}_{k}-\overline{\mathcal{M}}_{k,N-1}\bar{\zeta}_{k,N-1}\). Furthermore, it follows from (141) that \(\tilde{\theta}_{k+N}=M_{k+N-1}(\tilde{\theta}_{k+N-1}-\zeta_{k+N-1})\). Combining these two equalities gives
\[\tilde{\theta}_{k+N} =M_{k+N-1}\mathcal{M}_{k,N-1}\tilde{\theta}_{k}\] \[-\begin{bmatrix}M_{k+N-1}\overline{\mathcal{M}}_{k,N-1}&M_{k+N-1}\end{bmatrix}\begin{bmatrix}\bar{\zeta}_{k,N-1}\\ \zeta_{k+N-1}\end{bmatrix},\]
which can be rewritten as (144).
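Since (144) is a purely algebraic consequence of (141), it can be checked with arbitrary matrices \(M_{k}\) and vectors \(\zeta_{k}\), randomly generated here for illustration; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, k = 3, 4, 0
M = [rng.standard_normal((n, n)) for _ in range(k + N)]
zeta = [rng.standard_normal(n) for _ in range(k + N)]
theta = rng.standard_normal(n)

# Direct propagation of (141)
direct = theta.copy()
for j in range(k, k + N):
    direct = M[j] @ (direct - zeta[j])

# Closed form (144) with the products (142): M_{start+length-1} ... M_{start}
def Mcal(start, length):
    out = np.eye(n)
    for j in range(start, start + length):
        out = M[j] @ out
    return out

closed = Mcal(k, N) @ theta
for i in range(N):
    closed -= Mcal(k + i, N - i) @ zeta[k + i]
print(np.allclose(direct, closed))       # True
```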
Note that from (141) and Lemma 15, it follows that, for all \(l=0,1,\ldots,N-1\) and \(j\geq 0\),
\[\tilde{\theta}_{(j+1)N+l}=\mathcal{M}_{jN+l,N}\tilde{\theta}_{jN+l}-\overline{ \mathcal{M}}_{jN+l,N}\bar{\zeta}_{jN+l,N}. \tag{147}\]
Next, for all \(l=0,1,\ldots,N-1\) and \(j\geq 0\), define \(\tilde{\theta}_{l,j}^{N}\in\mathbb{R}^{n}\) by
\[\tilde{\theta}_{l,j}^{N}\triangleq\tilde{\theta}_{jN+l}, \tag{148}\]
which, for all \(l=0,1,\ldots,N-1\), gives the system
\[\tilde{\theta}_{l,j+1}^{N}=\mathcal{M}_{jN+l,N}\tilde{\theta}_{l,j}^{N}- \overline{\mathcal{M}}_{jN+l,N}\bar{\zeta}_{jN+l,N}. \tag{149}\]
**Lemma 16**: _Assume, for all \(k\geq 0\), there exists \(\zeta_{k}\in\mathbb{R}^{n}\) such that (141) holds. Furthermore, assume conditions A1), A2), A3), and A4) hold and assume there exists \(\zeta\geq 0\) such that, for all \(k\geq 0\),_
\[\|\zeta_{k}\|\leq\zeta. \tag{150}\]
_Then, for all \(l=0,1,\ldots,N-1\), the system (149) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta\), where \(\varepsilon^{*}\) is given by (28)._
We prove the case \(l=0\). The cases \(l=1,\ldots,N-1\) can be shown similarly.
Define \(\hat{f}^{N}\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) by
\[\hat{f}^{N}(j,\tilde{\theta})\triangleq\mathcal{M}_{jN,N}\tilde{\theta}- \overline{\mathcal{M}}_{jN,N}\bar{\zeta}_{jN,N}, \tag{151}\]
and note that, for all \(j\geq 0\), \(\tilde{\theta}_{0,j+1}^{N}=\hat{f}^{N}(j,\tilde{\theta}_{0,j}^{N})\). Furthermore, define \(V^{N}\colon\mathbb{N}_{0}\times\mathbb{R}^{n}\to\mathbb{R}\) by
\[V^{N}(j,\tilde{\theta})\triangleq\tilde{\theta}^{\rm T}P_{jN}^{-1}\tilde{\theta}. \tag{152}\]
Note that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[\frac{1}{b}\|\tilde{\theta}\|^{2}\leq V^{N}(j,\tilde{\theta})\leq\frac{1}{a}\|\tilde{\theta}\|^{2}, \tag{153}\]
where the lower bound follows from Lemma 5 and conditions A1) and A2) and where the upper bound follows from condition A3).
Next, by substituting (151) into (152), it follows that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))-V^{N}(j,\tilde{\theta})\] \[\quad=-\tilde{\theta}^{\rm T}\Delta^{N}V_{jN}\tilde{\theta}-2\tilde{\theta}^{\rm T}\mathcal{M}_{jN,N}^{\rm T}P_{(j+1)N}^{-1}\overline{\mathcal{M}}_{jN,N}\bar{\zeta}_{jN,N}\] \[\quad\quad\quad+\bar{\zeta}_{jN,N}^{\rm T}\overline{\mathcal{M}}_{jN,N}^{\rm T}P_{(j+1)N}^{-1}\overline{\mathcal{M}}_{jN,N}\bar{\zeta}_{jN,N}, \tag{154}\]
where \(\Delta^{N}V_{jN}\in\mathbb{R}^{n\times n}\) is defined in (126). Since conditions A1), A2), A3), and A4) hold, it then follows from Lemma 12 that, for all \(j\geq 0\),
\[-\Delta^{N}V_{jN}\preceq-c_{N}I_{n}, \tag{155}\]
where \(c_{N}>0\) is defined in (129). Next, it follows from Lemma 14 that, for all \(j\geq 0\), \(\boldsymbol{\sigma_{\max}}(P_{(j+1)N}^{-\frac{1}{2}}\mathcal{M}_{jN,N})\leq \sqrt{\frac{1}{a}-c_{N}}\). Furthermore, it follows from (145) that, for all \(j\geq 0\),
\[P_{(j+1)N}^{-\frac{1}{2}}\overline{\mathcal{M}}_{jN,N}\] \[\quad=\begin{bmatrix}P_{(j+1)N}^{-\frac{1}{2}}\mathcal{M}_{jN,N}& \cdots&P_{(j+1)N}^{-\frac{1}{2}}\mathcal{M}_{jN+N-1,1}\end{bmatrix}. \tag{156}\]
It similarly follows from Lemma 14 that, for all \(i=1,\ldots,N\), \(\boldsymbol{\sigma_{\max}}(P_{(j+1)N}^{-\frac{1}{2}}\mathcal{M}_{(j+1)N-i,i})\leq\sqrt{\frac{1}{a}-c_{i}}\). Hence, applying Lemma 3 to (156) gives
\[\boldsymbol{\sigma_{\max}}(P_{(j+1)N}^{-\frac{1}{2}}\overline{\mathcal{M}}_{jN,N})\leq\sqrt{\sum\nolimits_{i=1}^{N}(\tfrac{1}{a}-c_{i})}. \tag{157}\]
Furthermore, note that, for all \(i\geq 1\), \(c_{i+1}<c_{i}\), and hence, for all \(j\geq 0\),
\[\boldsymbol{\sigma_{\max}}(P_{(j+1)N}^{-\frac{1}{2}}\overline{\mathcal{M}}_{jN,N })\leq\sqrt{N(\frac{1}{a}-c_{N})}. \tag{158}\]
Applying Lemma 14 and (158), it follows that, for all \(j\geq 0\),
\[\boldsymbol{\sigma_{\max}}(\mathcal{M}_{jN,N}^{\rm T}P_{(j+1)N}^{-1}\overline{\mathcal{M}}_{jN,N})\leq\sqrt{N}(\tfrac{1}{a}-c_{N}), \tag{159}\]
\[\overline{\mathcal{M}}_{jN,N}^{\rm T}P_{(j+1)N}^{-1}\overline{\mathcal{M}}_{jN,N}\preceq N(\tfrac{1}{a}-c_{N})I_{Nn}. \tag{160}\]
Finally, it follows from (146) and (150) that, for all \(j\geq 0\),
\[\|\bar{\zeta}_{jN,N}\|\leq\sqrt{N}\zeta. \tag{161}\]
Substituting (155), (159), (160), and (161) into (154) gives, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))-V^{N}(j,\tilde{\theta})\leq-c_{N}\|\tilde{\theta}\|^{2}+2N\zeta(\tfrac{1}{a}-c_{N})\|\tilde{\theta}\|+N^{2}\zeta^{2}(\tfrac{1}{a}-c_{N}). \tag{162}\]
For brevity, define \(\Delta_{N}\in\mathbb{R}\) by
\[\Delta_{N}\triangleq\frac{1}{ac_{N}}-1. \tag{163}\]
It then follows from (162) that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[\frac{1}{c_{N}}\big{[}V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))-V^{N}(j,\tilde{\theta})\big{]}\] \[\leq-\|\tilde{\theta}\|^{2}+2N\zeta(\tfrac{1}{ac_{N}}-1)\|\tilde{\theta}\|+N^{2}\zeta^{2}(\tfrac{1}{ac_{N}}-1)\] \[=-\|\tilde{\theta}\|^{2}+2N\zeta\Delta_{N}\|\tilde{\theta}\|+N^{2}\zeta^{2}\Delta_{N}. \tag{164}\]
Next, we define \(\mu_{N}\in\mathbb{R}\) by
\[\mu_{N}\triangleq\big{(}\Delta_{N}+\sqrt{\Delta_{N}+\Delta_{N}^{2}}\,\big{)}N\zeta. \tag{165}\]
It follows from solving the quadratic inequality (164) that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\) such that \(\|\tilde{\theta}\|>\mu_{N}\),
\[V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))-V^{N}(j,\tilde{\theta})<0. \tag{166}\]
Next, it follows from substituting (151) into (152) that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[\sqrt{V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))}\] \[= \|\mathcal{M}_{jN,N}\tilde{\theta}-\overline{\mathcal{M}}_{jN,N}\bar{\zeta}_{jN,N}\|_{P^{-1}_{(j+1)N}}\] \[= \|P^{-\frac{1}{2}}_{(j+1)N}\mathcal{M}_{jN,N}\tilde{\theta}-P^{-\frac{1}{2}}_{(j+1)N}\overline{\mathcal{M}}_{jN,N}\bar{\zeta}_{jN,N}\|\] \[\leq \boldsymbol{\sigma_{\max}}(P^{-\frac{1}{2}}_{(j+1)N}\mathcal{M}_{jN,N})\|\tilde{\theta}\|\] \[+\boldsymbol{\sigma_{\max}}(P^{-\frac{1}{2}}_{(j+1)N}\overline{\mathcal{M}}_{jN,N})\|\bar{\zeta}_{jN,N}\|,\]
where the last inequality follows from the triangle inequality. Applying Lemma 14 and inequalities (158) and (161) to the previous equation, it then follows that, for all \(j\geq 0\) and \(\tilde{\theta}\in\mathbb{R}^{n}\),
\[\sqrt{V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))}\leq\sqrt{\frac{1}{a}-c_{N}}\|\tilde{\theta}\|+\sqrt{\frac{1}{a}-c_{N}}N\zeta. \tag{167}\]
For brevity, define
\[\sqrt{\sup V}\triangleq\sqrt{\sup_{(j,\tilde{\theta})\in\mathbb{N}_{0}\times\overline{\mathcal{B}}_{\mu_{N}}(0)}V^{N}(j+1,\hat{f}^{N}(j,\tilde{\theta}))}.\]
It then follows from (167) that
\[\sqrt{\sup V}\leq\sqrt{\frac{1}{a}-c_{N}}(\mu_{N}+N\zeta).\]
Substituting (165), it follows that
\[\sqrt{\sup V}\leq\sqrt{\frac{1}{a}-c_{N}}\left(1+\Delta_{N}+\sqrt{\Delta_{N}+ \Delta_{N}^{2}}\right)N\zeta. \tag{168}\]
Next, note that (163) implies that
\[\frac{1}{a}-c_{N}=\frac{1}{a}\frac{\Delta_{N}}{1+\Delta_{N}}. \tag{169}\]
Substituting (169) into (168) and simplifying, it follows that
\[\sqrt{\sup V}\leq\frac{1}{\sqrt{a}}\Big{(}\Delta_{N}+\sqrt{\Delta_{N}+\Delta _{N}^{2}}\Big{)}N\zeta. \tag{170}\]
Applying Theorem 5, it follows from (153), (165), (166), and (170) that the system (149) with \(l=0\) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta\), where \(\varepsilon^{*}\) is given by (28).
**Lemma 17**: _Assume, for all \(k\geq 0\), there exists \(\zeta_{k}\in\mathbb{R}^{n}\) such that (141) holds. Furthermore, assume conditions A1), A2), A3), and A4) hold. Finally, assume there exists \(\zeta\geq 0\) such that, for all \(k\geq 0\), \(\|\zeta_{k}\|\leq\zeta\). Then, the system (141) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta\), where \(\varepsilon^{*}\) is given by (28)._
It follows from Lemma 16 that, for all \(l=0,1,\ldots,N-1\), the system (149) is globally uniformly ultimately bounded with bound \(\varepsilon^{*}\zeta\), where \(\varepsilon^{*}\) is given by (28).
|
2303.11240 | Truth Social Dataset | Formally announced to the public following former President Donald Trump's
bans and suspensions from mainstream social networks in early 2022 after his
role in the January 6 Capitol Riots, Truth Social was launched as an
"alternative" social media platform that claims to be a refuge for free speech,
offering a platform for those disaffected by the content moderation policies of
the existing, mainstream social networks. The subsequent rise of Truth Social
has been driven largely by hard-line supporters of the former president as well
as those affected by the content moderation of other social networks. These
distinct qualities combined with its status as the main mouthpiece of the
former president position Truth Social as a particularly influential social
media platform and give rise to several research questions. However, outside of
a handful of news reports, little is known about the new social media platform
partially due to a lack of well-curated data. In the current work, we describe
a dataset of over 823,000 posts to Truth Social and a social network with
over 454,000 distinct users. In addition to the dataset itself, we also present
some basic analysis of its content, certain temporal features, and its network. | Patrick Gerard, Nicholas Botzer, Tim Weninger | 2023-03-20T16:26:24Z | http://arxiv.org/abs/2303.11240v1 | # Truth Social Dataset
###### Abstract
Formally announced to the public following former President Donald Trump's bans and suspensions from mainstream social networks in early 2022 after his role in the January 6 Capitol Riots, Truth Social was launched as an "alternative" social media platform that claims to be a refuge for free speech, offering a platform for those disaffected by the content moderation policies of the existing, mainstream social networks. The subsequent rise of Truth Social has been driven largely by hard-line supporters of the former president as well as those affected by the content moderation of other social networks. These distinct qualities combined with its status as the main mouthpiece of the former president position Truth Social as a particularly influential social media platform and give rise to several research questions. However, outside of a handful of news reports, little is known about the new social media platform partially due to a lack of well-curated data. In the current work, we describe a dataset of over 823,000 posts to Truth Social and a social network with over 454,000 distinct users. In addition to the dataset itself, we also present some basic analysis of its content, certain temporal features, and its network.
## Introduction
The social media platform _Truth Social_ was launched in February of 2022 about a year after the suspension of former United States President Donald Trump from Twitter, Facebook, and other social media platforms. Truth Social is largely stylized after Twitter, where Tweets are instead called _Truths_ and ReTweets are instead called _ReTruths_. Note that throughout the remainder of this paper we most commonly refer to _Truths_ as _posts_ in order to avoid confusion with the epistemic use of truth (as in true/false). Due to the political and social circumstances surrounding its creation and launch, Truth Social has positioned itself as a hub for right-wing social media users disgruntled by mainstream platforms' attempts to root out hateful and harmful communities and content.
Since its inception, and largely due to the influence of the former president's use of the platform, Truth Social has increasingly dominated a space of social media platforms that cater to users affiliated with the alt-right political movement--a technology space sometimes referred to as _alt-tech_ that also includes Parler, Gab, Rumble and others. Truth Social, in effect, functions as a kind of right-wing Twitter, but without the content regulation that is typically found in mainstream social media platforms. As a result, Truth Social, along with other alt-tech platforms, is a potential hotbed for misinformation, conspiracy theories, and other malign social media activity.
Despite its scope and influence, little is known about the posts and content that is shared on the Truth Social network. The dearth of research involving Truth Social is partially due to how new it is, but also due to the lack of a publicly available API. To ameliorate these issues, the current work presents a large dataset of Truths, ReTruths, users, and other data collected from a broad crawl over the Truth Social platform from its launch on February 21, 2022 until October 15, 2022.
Figure 1: Annotated illustration of Truth Social Web Interface. The Web scraper extracted user data and post information, including time, content (with links or other media), quotes, ReTruths, and likes.
In total, this dataset contains the content of 823,927 Truths posted by 454,458 users, including the full history of the 65,536 most active users. Truth Social does not publish its usage statistics, but we estimate that this dataset contains user data of about 20% of the total registered users and an unknown, but larger, proportion of the total number of posts.
The dataset represents the first of its kind and can be used to ask and answer numerous research questions. For starters, social media's effects on people's consumption of information have become a topic of increasing importance [14]. Providing users more direct agency over the information they consume, social media--while transformative--may limit exposure to diverse perspectives and cause the formation of like-minded users reinforcing shared narratives [13]. This lack of exposure to diverse information and differing perspectives has been found to provide the scaffolding upon which conspiracy theories [13] and misinformation [12] may grow. Truth Social is but the latest example of the formation of a self-referential, insular community--commonly called an echo chamber--that is known to lead to increased political polarization [1]. Thus, because Truth Social is itself a right-wing echo chamber, catering to politically polarized defectors of mainstream social media, it provides a fertile ground for the spread of misinformation and conspiracy theories. Therefore, with the increased understanding of conspiracy theories' potentially damaging effects on democracy [13], understanding Truth Social's interaction with echo chambers and misinformation presents an important topic for continued research.
### A Brief Overview of Truth Social
Truth Social's announcement and ultimate launch as a social media platform can be traced to former U.S. President Donald Trump's ban from several major social media platforms following his role in the January 6 United States Capitol attack1. As it currently stands, Truth Social is occupied largely by both users disaffected by mainstream platforms' moderation policies and enthusiastic followers of Donald J. Trump, and appears generally similar to other "alt-tech" platforms. However, Truth Social's status as the main mouthpiece for a former President whose influence remains momentous in the United States Republican Party positions it as a remarkably influential _alt-tech_ platform.
Footnote 1: [https://blog.twitter.com/en_us/topics/company/2020/](https://blog.twitter.com/en_us/topics/company/2020/) suspension
Platforms with natures similar to Truth Social--catering largely to users disaffected by the content moderation of mainstream social networks--have been found to be decidedly successful in drawing users over from the original platform [15], specifically in the case of followers of Donald Trump [16]. Moreover, these types of platforms have been shown to both harbor and instigate dangerous conspiracy theories [17], which, despite being birthed on seemingly fringe platforms, have been found to ultimately jump to mainstream platforms [18], nevertheless advancing the same dangerous misinformation that mainstream social networks attempted to thwart in the first place [19].
## Dataset Collection Methodology
Collecting the posts and other activity data from Truth Social is not straightforward because the site does not provide a public API. Instead, we implemented a custom Web scraper and programmatically extracted the relevant content from Truth Social's Web interface directly. This Web interface did not impose any crawling restrictions nor did it disallow any crawling with the robots.txt standard. Because no API was available, the most straightforward way to collect posts was from each specific account.
We crawled Truth Social one account at a time, starting with @realDonaldTrump and then iteratively in a breadth-first manner over the followers of each account. Specifically, the crawling methodology proceeded as follows:
1. Collect information about the user's follower count, following count, creation date.
2. Iterate through users following the user and users that the user follows, create an edge for each follower-followee relationship, and add that user to the breadth-first queue if that user has not been scraped in the past 14 days.
3. Extract the full set of available Truths posted by the user.
This crawl began on September 4, 2022 and continued until October 14, 2022. In that time, all content posted by 65,536 users was collected. The dataset, therefore, has the complete set of all posts for the visited users before September of 2022.
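A minimal sketch of this breadth-first procedure is given below. The fetch and store functions are hypothetical stand-ins for HTTP requests against the Web interface and for database writes, since Truth Social exposes no public API; the stubs here simply make the sketch runnable.

```python
from collections import deque
from datetime import datetime, timedelta

# Stubs: in the real crawler these issue HTTP requests and database writes
def fetch_profile(user): return {"user": user, "following": []}
def fetch_following(user): return fetch_profile(user)["following"]
def fetch_truths(user): return []
def store_user(profile): pass
def store_truths(truths): pass

def crawl(seed="realDonaldTrump", refresh_days=14):
    queue, last_seen, edges = deque([seed]), {}, []
    while queue:
        user = queue.popleft()
        stamp = last_seen.get(user)
        if stamp and datetime.utcnow() - stamp < timedelta(days=refresh_days):
            continue                          # scraped within the last 14 days
        last_seen[user] = datetime.utcnow()
        store_user(fetch_profile(user))       # follower/following counts, created_at
        for other in fetch_following(user):
            edges.append((user, other))       # follower-followee edge
            queue.append(other)               # breadth-first expansion
        store_truths(fetch_truths(user))      # full set of available Truths
    return edges

edges = crawl()
```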
One particular complication of the crawling methodology was the extraction of secondary-posts, _i.e._, ReTruths, Quotes, and Replies. During the initial crawl, we collected these secondary-posts but not the original post itself. So, at the end of the initial crawl, we additionally collected all of the original posts on which the initially collected ReTruths were based. Due to Truth Social's HTTP request limitations, we were only able to connect the Quotes and Replies to the user to whom they were directed. Although these efforts resulted in a full accounting of the originating user account for all Quotes and Replies as well as the post content for all ReTruths, the opposite is not true. In other words, although we have the original user for all collected Quotes and Replies and the original post for all collected ReTruths, we may not have collected all of the ReTruths, Quotes, or Replies for a given post. Nevertheless, as a result of our efforts the dataset is internally consistent.
### Data Model
During this crawl, data elements were stored in a local database system. The relational schema of this database is illustrated in Fig. 2. This dataset contains various kinds of related data and is therefore modeled as a relational database with foreign key dependencies. For example, follower/followee relationships are enumerated in the follows table,
with each entry referencing the following and the followed user via a foreign key to the corresponding user record. Likewise, truth entries are linked to the corresponding user, and quotes, replies, media, hashtags, external_urls, and their respective edge tables are all linked with foreign key relationships to further contextualize the content of the Truths.
Tables from the database system were exported to text files in a tab-delimited format and are available on the Zenodo data service at [https://doi.org/10.5281/zenodo.7531625](https://doi.org/10.5281/zenodo.7531625) and a sample containing the first 10,000 records from each file are included as Supplemental Material. Table 1 lists the files available in the dataset and the number of records in each file.
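Because the tables are plain tab-delimited files linked by foreign keys, they can be loaded and joined with standard tools. The sketch below uses pandas; the column names are illustrative assumptions, and the actual headers are given in the files themselves and the accompanying Readme.

```python
import pandas as pd

# Run in the directory containing the exported TSV files
users = pd.read_csv("users.tsv", sep="\t")
truths = pd.read_csv("truths.tsv", sep="\t")

# Resolve the foreign key from each Truth back to its author
# ("author_id" and "id" are assumed column names for illustration)
posts = truths.merge(users, left_on="author_id", right_on="id",
                     suffixes=("_truth", "_user"))
print(posts.head())
```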
### FAIR Principles
The dataset presented by the present work conforms to the FAIR principles [20] and is therefore findable, accessible, interoperable, and re-usable:
#### Findable:
We provide the dataset publicly using the Zenodo data service and give it a permanent digital object identifier (DOI) [https://doi.org/10.5281/zenodo.7531625](https://doi.org/10.5281/zenodo.7531625).
#### Accessible:
The dataset is freely available on the Internet and can be accessed by anyone with an internet connection. All of the data is provided as tab separated value (TSV) files, a standard format for handling tabular data.
#### Interoperable:
The dataset is easily loaded and viewed with most current database management systems or spreadsheet systems.
#### Re-usable:
Metadata is also included in a Readme file and is linked to the DOI of this paper for further reference.
#### Limitations
Although we endeavoured to capture a complete and holistic dataset, it is important to be aware of certain methodological and technological limitations and possible sampling biases that may be present in the dataset. Some of the key limitations are as follows:
1. **Access to a user's followers is limited.** While Truth Social permits clients on its Web application to scroll through the entirety of a user's following list, it limits clients' access to a user's followers list to only 50 followers. It is unclear why this limitation exists and how these 50 followers are selected. It may be possible to estimate who follows whom by analyzing a complete following lists of all users and/or the users who frequently ReTruth, Quote or like another user. However, that analysis is not present in the current dataset.
2. **Web Request Limits.** Although scraping limits are not published by Truth Social, we did endeavor to be responsible with the number of HTTP requests that were issued to Truth Social. This restricted our ability to capture more data.
3. **Sampling Bias.** Recall that the crawling methodology proceeded in a breadth-first manner from a popular user, @realDonaldTrump, and then continued to other highly active users. As a result, this dataset likely contains the most active users of the platform. The choice of @realDonaldTrump as the seed-user may have also nudged the data collection towards more political users and posts. However, the average path length of the user network was relatively small and the post content is quite diverse, so we are confident that the sample is moderately representative of the whole platform.
### Ethical Considerations
The dataset is collected from a publicly available resource. Users who submit content do so with the explicit intent of making their activity publicly and widely visible. This research was observational only. No intervention or treatment
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{l}{Table} & Number of Records \\ \hline users.tsv & 454,458 \\ follows.tsv & 4,002,115 \\ truths.tsv & 823,927 \\ quotes.tsv & 10,508 \\ replies.tsv & 506,276 \\ media.tsv & 184,884 \\ hashtags.tsv & 21,599 \\ external\_urls.tsv & 173,947 \\ truth\_hashtag\_edges.tsv & 213,295 \\ truth\_media\_edges.tsv & 257,500 \\ truth\_external\_url\_edges.tsv & 252,877 \\ truth\_user\_tag\_edges.tsv & 145,234 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of the Truth Social Dataset
Figure 2: Relationship diagram between data elements in the collected schema. Arrows represent foreign key relationships among the tables.
was made to the population; therefore, this research was determined to be exempt from full ethical review by the University of Notre Dame institutional review board.
## Content Analysis
To get a better understanding of the collected dataset we performed some rudimentary analysis on the content of the dataset. This analysis includes (1) an investigation into the top-linked Web sites, (2) a look at certain temporal artifacts of the text content of the posts, and (3) an analysis of the follower network of the platform.
### External Link Analysis
Research has shown that misinformation campaigns often utilize and share external links to proliferate information across and between multiple platforms [14, 15]. For this reason, we extracted the external Web links that users posted. We found 173,947 links in total. The domain of each link was extracted and aggregated for further analysis.
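The domain aggregation itself is straightforward; a minimal sketch of the kind of tally underlying Fig. 3:

```python
from collections import Counter
from urllib.parse import urlparse

def top_domains(urls, k=10):
    """Aggregate external links by domain."""
    counts = Counter(urlparse(u).netloc.removeprefix("www.") for u in urls)
    return counts.most_common(k)

# Illustrative URLs only
print(top_domains(["https://rumble.com/v1", "https://www.breitbart.com/x",
                   "https://rumble.com/v2"]))   # [('rumble.com', 2), ('breitbart.com', 1)]
```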
Figure 3 illustrates the top linked domains in terms of the number of Truths in which they appear (x-axis) and the number of ReTruths in which they appear (y-axis).
The top linked external Web site in terms of frequency in both Truths and ReTruths is Rumble, a video sharing platform. This site is known for having looser content moderation restrictions compared to many other video sharing platforms like YouTube and has become a haven for controversial figures that have been banned from mainstream platforms. Rumble has grown in popularity in recent years at least partially due to its use by former President Donald Trump to stream his political rallies and post news clips.
Other popular sites include right-wing political networks like OANN, Breitbart, FoxNews, etc. Curiously, a large number of external links point to the Telegram social media platform. Popular in Russia, Telegram is frequently used by supporters of Russia in the Russia-Ukraine War [13]. It has also been investigated as a platform that acts as a safe haven for those that have been deplatformed from other social media sites [12]. Because Telegram has this association with deplatformed individuals, we further investigated the five most shared Telegram channels to better understand the type of content that was shared. In order of decreasing popularity, the five most shared Telegram channels are:
1. **RealKarliBonne**. This channel shares a large variety of memes, Truths, and messages supporting Donald Trump. The channel also shares a large amount of video content featuring the Biden administration. Although some video messages are supportive of Trump, the preponderance is negative toward the Biden administration and may serve as an example of affective polarization [12].
2. **LauraAbolichannel**. This channel does not appear to have a clear focus. Its content contains commentary and clips from right-wing media, anti-vaccination messages, and uplifting memes.
3. **freedomforcebattalion**. This channel presents former President Donald Trump's messaging and other current events with a biblical perspective.
4. **realx22report**. This is the official Telegram channel of X22report, a daily show that covers financial and political issues.
5. **drawandstrikechannel**. This is the channel of right-wing political columnist Brian Cates. Cates is an active user on Truth Social and draws a major following as well in his Telegram channel. Content in the channel focuses around various political topics and current events.
The combination of Truth Social's politically charged user-base and external links' role in narrative-building and misinformation campaigns creates a significant space for further analysis in both the information contained in the external links and the temporal aspect of the external links.
Examinations into the information contained in the external links may provide major insights into the spread of conspiracy theories and misinformation on Truth Social and across the social Web. For example, to better understand the popularization of conspiracy theories on Truth Social, one may trace the earliest external domains that reference a certain conspiracy theory and analyze their role in the narrative spread through Truth Social.
Overall, external links in Truth Social remarkably reflect the politically charged user-base, and therefore external agents' influence on the user-base cannot be overlooked. Thus, we believe that further analysis of these external links' role in the Truth Social network is critical to understanding how certain narratives and conspiracy theories propagate across the network.
Figure 3: Top 10 linked domains appearing in Truths (red, x-axis) and top 10 linked domains appearing in ReTruths (blue, y-axis). Purple marks indicate that the domain is in the top 10 in both Truths and ReTruths.
### Text Analysis
To better understand both the temporal nature of the dataset and the possible interaction between external events and popular topics on Truth Social, we next performed a text analysis on the 823,927 posts in the dataset.
Figure 4 provides an example illustration of our findings. Max-normalized by posts per day, the frequency of all posts provides a backdrop for our further evaluation into posts containing certain keywords. Overall, we found that the daily frequency of posts was highest near the official launch of Truth Social, before falling slightly and stabilizing.
Because of its affiliation with the former president, posts on Truth Social largely revolve around activities related to conservative politics and news stories. To illustrate this more concretely we considered two events that occurred during dates covered by the data collection methodology involving the former president: (1) the public hearings from the United States' House of Representative's Select Committee on the January 6 riots at the Capitol Building, and (2) the FBI raid of the former president's residence commonly called Mar-a-Lago.
Our first text analysis centers on the January 6 United States Capitol attack. Utilizing the keywords "January 6", "January Six", "Jan 6", and "Jan Six", we identified posts containing any of these phrases (with case-insensitive criteria). As shown in solid blue in Fig. 4, immediately following the start of the January 6 Committee's Public Hearings, posts containing these phrases spiked and continued to be elevated for approximately ten weeks. We believe that further evaluation may find correlation between these repetitive spikes and the ensuing broadcasts of the January 6 Committee's public hearings, but a thorough analysis is outside the scope of the current paper.
Next, we analyzed posts related to the FBI raid of Mar-a-Lago, utilizing the keywords "Mar-a-Lago" and "Mar a Lago". Illustrated in solid red in Fig. 4, we again identified posts containing any of these phrases (with case-insensitive criteria). Like in the January 6 example above, in this case we also found that Truths containing these phrases spiked following the events.
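A minimal sketch of the keyword matching used for this analysis is shown below; the column names (`timestamp`, `text`) are illustrative assumptions rather than the dataset's actual headers.

```python
import pandas as pd

def keyword_series(truths, patterns):
    """Daily max-normalized count of posts whose text matches any
    pattern (case-insensitive), as plotted in Fig. 4."""
    day = pd.to_datetime(truths["timestamp"]).dt.date
    hit = truths["text"].str.contains("|".join(patterns), case=False, na=False)
    daily = truths[hit].groupby(day[hit]).size()
    return daily / daily.max()

# e.g. keyword_series(truths, ["January 6", "January Six", "Jan 6", "Jan Six"])
```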
Dotted blue and red lines in Fig. 4 illustrate the data from Google Trends for the January 6 and Mar-a-Lago searches, respectively. We see that the attention of these two events are symmetrical across Truth Social and the Web in general. This symmetry shows that the topics discussed on Truth Social appears to be largely representative of activity on the Web.
When evaluating these spikes, we found that they were often driven largely by a small number of original posts that were ReTruthed many times. Some of the most popular posts originated from the former president himself, but were elevated and expanded by popular commentators, whose roles in narrative-building on Truth Social are a clear topic for further inspection. For example, following the FBI search of Mar-a-Lago, the former president posted "A horrible thing that took place yesterday at Mar-a-Lago. We are no better than a third world country, a banana republic. It is a continuation of Russia, Russia, Russia, Impeachment Hoax #1, Impeachment Hoax # 2, the no collusion Mueller Report, and more. To make matters worse it is all, in my opinion, a coordinated attack with Radical Left Democrat state and
Figure 4: Daily max-normalized frequency of Truths posted in the dataset. Because Truth Social has a strong affiliation with former United States President Donald Trump, political activity and news stories involving the former president are commonly discussed on the site. In this figure, posts mentioning political events like the US House of Representative’s Special Committee on the January 6 riots at the US Capitol (blue) and the FBI search at the former president’s residence (Mar-a-Lago) spike surrounding these events. Dotted blue and red lines illustrate the data from Google Trends corresponding to the event.
local D.A.'s and A.G.'s". This set off a conspiracy-laden narrative of the events that echoed throughout the platform. These posts typically gathered tens of thousands of ReTruths and likes--a large number for the relatively small platform. These findings and the dataset as a whole may provide much needed insight into the cohesion of the Truth Social platform as well as a better understanding of the role that popular commentators play in driving the narrative for the majority of users.
Ultimately, external events appear to have significant influence on Truth Social's user network. Moreover, the near-immediate rise in posts following certain events may point to both the interests of users and the cohesiveness of the Truth Social network. Further work lies in both the examination of narratives and sentiments accompanying apparent reactions to external events, as well as the examination of the platform-wide influence of a small group of popular commentators and if the narratives and stories that percolate within Truth Social eventually make their way onto mainstream social media platforms.
### Network Analysis
Like Facebook, Instagram, and Twitter, the friendship network (via followers or ReTruths) of the Truth Social platform forms a social network that can be analyzed to find social roles, and network-based artifacts like centrality, betweenness, cliques, and other interesting network-based phenomena.
Because the follower graph could not be reliably extracted from the Web interface, we instead used ReTruths to construct the network. Although not the same as the follower graph, ReTweets (the Twitter-analog to ReTruths) are known to "more closely mirror real-world relationships and trust" [11].
We used two different criteria to analyze the network. First, we found the number of unique, fully-scraped users who ReTruthed a user. Figure 5 (left) illustrates the degree distribution of the ReTruth graph, that is, the number of distinct users who ReTruthed another user. This distribution appears to follow the typical power-law degree distribution found on Twitter. The user with the highest number of distinct ReTruthing users was ReTruthed by 1,448 users.
Rather than looking at distinct users, our next analysis looked at the total number of ReTruths for each user. Figure 5 (right) shows that the ReTruth multi-graph, where multiple ReTruths from the same user count as multiple edges, has a slightly different degree distribution that more closely resembles a log-normal distribution. The user with the highest number of ReTruths was ReTruthed 1,0176,833 times.
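Both degree notions can be computed directly from the ReTruth records; the sketch below uses illustrative column names rather than the dataset's actual headers.

```python
import pandas as pd

# One row per ReTruth: the ReTruthing user and the original author
retruths = pd.DataFrame({"retruther": ["u1", "u2", "u1"],
                         "original_author": ["a", "a", "a"]})

# Simple graph (Fig. 5, left): number of distinct users who ReTruthed each author
distinct_deg = retruths.groupby("original_author")["retruther"].nunique()

# Multigraph (Fig. 5, right): total ReTruth count per author, parallel edges kept
multi_deg = retruths.groupby("original_author").size()

print(distinct_deg["a"], multi_deg["a"])   # 2 distinct users, 3 ReTruths
```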
## Conclusion
This paper presents a large dataset, which contains information on over 454,000 users and over 823,000 posts, including the complete history of the 65,536 most active users from the Truth Social platform. In addition, this dataset covers the ReTruths, quotes, text, media, and other information from the platform. We also perform a preliminary analysis of this dataset.
Our preliminary analysis shows that a handful of external Web sites dominate Truth Social posts, with Rumble appearing most frequently. Moreover, a brief look at the most commonly linked Telegram channels finds that right-wing and Trump-focused Telegram channels appear most often. We also analyzed the temporal and structural nature of the posts and ReTruths in the dataset. We have uncovered several interesting findings and avenues for further study.
Overall, Truth Social is an emerging alt-tech platform. Targeted at hard-right users disaffected from mainstream social media platforms and working as the main mouthpiece of former President Donald Trump, Truth Social's unique position in the information ecosystem cannot be overlooked. This dataset provides researchers a means to study the Truth Social platform, permitting research on both Truth Social itself and important socio-technical issues including the cultivation and spread of information and narratives.
|
2310.09537 | Microscopic derivation of transition-state theory for complex quantum
systems | The decay of quantum complex systems through a potential barrier is often
described with transition-state theory, also known as RRKM theory in chemistry.
Here we derive the basic formula for transition-state theory based on a generic
Hamiltonian as might be constructed in a configuration-interaction basis. Two
reservoirs of random Hamiltonians from Gaussian orthogonal ensembles are
coupled to intermediate states representing the transition states at a barrier.
Under the condition that the decay of the reservoirs to open channels is large,
an analytic formula for reaction rates is derived. The transition states act as
independent Breit-Wigner resonances which contribute additively to the total
transition probability, as is well known for electronic conductance through
resonant tunneling states. It is also found that the transition probability is
independent of the decay properties of the states in the second reservoir over
a wide range of decay widths. | K. Hagino, G. F. Bertsch | 2023-10-14T08:55:22Z | http://arxiv.org/abs/2310.09537v3 | # Microscopic derivation of transition-state theory for complex quantum systems
###### Abstract
The decay of quantum complex systems through a potential barrier is often described with transition-state theory, which is also known as RRKM theory in chemistry. Here we derive the basic formula for transition-state theory based on a generic configuration-interaction Hamiltonian. To this end, we consider two random Hamiltonians, which are coupled to intermediate configurations at a barrier. Under a condition that the total decay probability of the post-barrier configurations to open channels is large, we show that the transmission coefficient from the first random Hamiltonian to the second is given as a factorized form of the formation and the decay probabilities of transition states. In that limit the transmission coefficient is found to be independent of the decay widths of the configurations in the random Hamiltonians. We also show that the transmission coefficient is reduced to a Breit-Wigner form, which is well known for electronic conductance through resonant tunneling states.
## I Introduction
Transition-state theory is ubiquitous in physics and chemistry to calculate reaction and decay rates for many-particle systems in the presence of a barrier [1; 2; 3; 4; 5]. The assumptions in the theory are clear in classical dynamics but less so in the quantum regime. For fermionic systems of equal-mass particles, the Hamiltonian is often formulated in a configuration-interaction (CI) representation. This motivates considering models that exhibit the barrier dynamics in the CI framework to understand conditions to support transition-state approximations.
## II Model Hamiltonian
Following previous recent work, we consider here a Hamiltonian composed of three parts: the barrier Hamiltonian \(H_{2}\) and two statistical reservoirs \(H_{1},H_{3}\) for the states on either side, and the matrix \(V_{12}\) and \(V_{23}\) coupling them together,
\[H=\begin{pmatrix}H_{1}&V_{12}&0\\ V_{12}^{T}&H_{2}&V_{32}^{T}\\ 0&V_{32}&H_{3}\end{pmatrix}. \tag{1}\]
Here, \(H_{k}\) (\(k\) = 1, 2, and 3) is an \(N_{k}\times N_{k}\) Hermitian matrix, while \(V_{kl}\) is an \(N_{k}\times N_{l}\) real matrix. The statistical reservoirs are described by Hamiltonians of the Gaussian Orthogonal Ensemble (GOE). The barrier Hamiltonian \(H_{2}\) is composed of an arbitrary set of configurations, each connected to states in the two reservoirs by the submatrices \(V_{12}\) and \(V_{23}\). In this respect the models discussed here are generalizations of the model in Ref. [6], which has a single state at the barrier top.
To complete a model for reactions, one also needs the coupling matrix elements between the CI Hamiltonian and the external reaction channels. With those ingredients the \(S\)-matrix for transitions from one channel to another can be computed by standard linear algebra manipulations. If one is only interested in reaction probabilities, the linear algebra can be collapsed to Datta's formula [7; 8; 9] for inelastic reactions, given by
\[|S_{ab}(E)|^{2}=\mathrm{Tr}\left(\tilde{\Gamma}_{a}G(E)\tilde{\Gamma}_{b}G^{ \dagger}(E)\right) \tag{2}\]
Here \(S_{ab}\) is the scattering \(S\)-matrix element connecting channels \(a\) and \(b\) and \(G\) is the Green's function of the Hamiltonian in the presence of decay channels \(c\) at reaction energy \(E\),
Footnote 1: It is implicitly assumed in Eq. (3) that the \(\tilde{\Gamma}_{c}\) are independent of \(E\).
\[G(E)=\left(H-i\sum_{c}\tilde{\Gamma}_{c}/2-E\right)^{-1}. \tag{3}\]
The \(\tilde{\Gamma}_{c}\) are matrices (of the same dimension as \(H\)) of decay widths to external channels \(c\). They have a diagonal block structure given by
\[\begin{pmatrix}\tilde{\Gamma}_{c,\mathrm{in}}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\ \mathrm{or}\ \begin{pmatrix}\tilde{\Gamma}_{c,1}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\ \mathrm{or}\ \begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\tilde{\Gamma}_{c,3}\end{pmatrix} \tag{4}\]
depending on which subblock the channel \(c\) connects to.
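Equations (2)-(4) are straightforward to evaluate numerically. The toy sketch below applies Datta's formula to a three-state chain with an entrance width on the first state and an exit width on the last; the matrices are illustrative and are not taken from the model defined below.

```python
import numpy as np

def transmission(H, Gam_a, Gam_b, Gam_all, E=0.0):
    """|S_ab|^2 from Eqs. (2)-(3): Tr[Gam_a G Gam_b G^dagger]."""
    G = np.linalg.inv(H - 0.5j * Gam_all - E * np.eye(len(H)))
    return np.trace(Gam_a @ G @ Gam_b @ G.conj().T).real

# Toy three-state chain: entrance width on state 0, exit width on state 2
H = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.5, 0.1],
              [0.0, 0.1, 0.0]])
Gam_in = np.diag([0.2, 0.0, 0.0])
Gam_out = np.diag([0.0, 0.0, 0.2])
print(transmission(H, Gam_in, Gam_out, Gam_in + Gam_out))
```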
In general, one is interested in the reaction probability \(T_{a3}\) from an entrance channel \(a\) = in to all possible decay channels in the second reservoir,
\[T_{a3}=\sum_{b\in 3}|S_{ab}|^{2}. \tag{5}\]
Due to the block structures of \(H\) and \(\tilde{\Gamma}\) we only need the \(G_{13}\) block of the Green's function
\[G=\begin{pmatrix}G_{11}&G_{12}&G_{13}\\ G_{21}&G_{22}&G_{23}\\ G_{31}&G_{32}&G_{33}\end{pmatrix} \tag{6}\]
in Eq. (2). As derived in Appendix A, the submatrix \(G_{13}\) reduces to
\[G_{13}=G_{1}V_{12}G_{2}V_{32}^{T}G_{3}, \tag{7}\]
where \(G_{1}\), \(G_{3}\), and \(G_{2}\) are given by
\[G_{1} = \left(H_{1}-i\tilde{\Gamma}_{\rm c,in}/2-i\tilde{\Gamma}_{c,1}/2- E\right)^{-1}, \tag{8}\] \[G_{3} = \left(H_{3}-i\tilde{\Gamma}_{c,3}/2-E\right)^{-1},\] (9) \[G_{2} = \left(H_{2}-V_{12}^{T}G_{1}V_{12}-V_{32}^{T}G_{3}V_{32}-E\right) ^{-1}. \tag{10}\]
Substituting Eq. (7) into Eq. (2), the transmission coefficient is obtained as
\[T_{\rm in,3}(E) = {\rm Tr}[\tilde{\Gamma}_{\rm c,in}(G_{1}V_{12}G_{2}V_{32}^{T}G_{3})\tilde{\Gamma}_{\rm c,3}(G_{3}^{\dagger}V_{32}G_{2}^{\dagger}V_{12}^{T}G_{1}^{\dagger})], \tag{11}\] \[= {\rm Tr}[(V_{12}^{T}G_{1}^{\dagger}\tilde{\Gamma}_{c,{\rm in}}G_{1}V_{12})G_{2}(V_{32}^{T}G_{3}\tilde{\Gamma}_{c,3}G_{3}^{\dagger}V_{32})G_{2}^{\dagger}]. \tag{12}\]
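The block reduction (7)-(10) can be checked against a direct inversion of Eq. (3); a minimal sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
N1, N2, N3 = 5, 2, 5
sym = lambda n: (lambda A: (A + A.T) / 2)(rng.standard_normal((n, n)))
H1, H2, H3 = sym(N1), sym(N2), sym(N3)
V12, V32 = rng.standard_normal((N1, N2)), rng.standard_normal((N3, N2))
g_in = np.zeros((N1, N1)); g_in[0, 0] = 0.3          # entrance width, Eq. (17)
g1, g3 = 0.2 * np.eye(N1), 0.2 * np.eye(N3)          # reservoir widths, Eq. (16)

# Full Green's function, Eq. (3), for the block Hamiltonian (1) at E = 0
H = np.block([[H1, V12, np.zeros((N1, N3))],
              [V12.T, H2, V32.T],
              [np.zeros((N3, N1)), V32, H3]])
Gam = np.zeros_like(H)
Gam[:N1, :N1] = g_in + g1
Gam[N1 + N2:, N1 + N2:] = g3
G = np.linalg.inv(H - 0.5j * Gam)

# Reduced expressions, Eqs. (8)-(10), and the factorization (7)
G1 = np.linalg.inv(H1 - 0.5j * (g_in + g1))
G3 = np.linalg.inv(H3 - 0.5j * g3)
G2 = np.linalg.inv(H2 - V12.T @ G1 @ V12 - V32.T @ G3 @ V32)
print(np.allclose(G[:N1, N1 + N2:], G1 @ V12 @ G2 @ V32.T @ G3))  # True
```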
We write the elements of the two GOE Hamiltonians as
\[(H_{i})_{jk}=(H_{i})_{kj}=v_{i}\sqrt{1+\delta_{kj}}\,r_{ijk}, \tag{13}\]
where \(r_{ijk}\) is a random number from a Gaussian distribution of unit dispersion, \(\langle r_{ijk}^{2}\rangle=1\). Then the average level density \(\rho_{i}\) at \(E=0\) at the centers of the GOE Hamiltonians is given by
\[\rho_{i}=\frac{N_{i}^{1/2}}{\pi v_{i}}. \tag{14}\]
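Equation (14) can be checked by sampling a GOE matrix with the convention of Eq. (13) and counting eigenvalues near \(E=0\); a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
N, v = 1000, 1.0

# GOE sample with the convention (13): off-diagonal variance v^2,
# diagonal variance 2 v^2
A = rng.standard_normal((N, N))
H = v * (A + A.T) / np.sqrt(2)
eig = np.linalg.eigvalsh(H)

# Empirical level density near E = 0 vs. rho = sqrt(N)/(pi v), Eq. (14)
half_window = v
rho_emp = np.sum(np.abs(eig) < half_window) / (2 * half_window)
print(rho_emp, np.sqrt(N) / (np.pi * v))   # approximately equal
```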
We set \(E=0\) for the rest of this paper. Each state in \(H_{2}\) is assumed to couple to specific states in the GOE reservoirs. We parameterize the couplings as
Footnote 2: We implicitly assume \(N_{1}>N_{2}\) and \(N_{3}>N_{2}\).
\[(V_{12})_{jk}=v_{12}N_{1}^{1/2}\delta_{jk},\quad(V_{32})_{jk}=v_{32}N_{3}^{1/2 }\delta_{jk}. \tag{15}\]
This parameterization is not as restrictive as it may seem. Due to the GOE invariance, the couplings can be to arbitrary orthogonal vectors in the GOE spaces. The specific form of the coupling is such that the average matrix element is the same as in \(H_{i}\) if \(v_{i2}=v_{i}\).
The matrices for the decay widths are assumed to be diagonal with elements
\[(\tilde{\Gamma}_{1})_{jk}=\gamma_{1}\,\delta_{j,k},\quad(\tilde{\Gamma}_{3})_ {kj}=\gamma_{3}\,\delta_{j,k}, \tag{16}\]
except for the entrance channel \(a=\) in, which couples to a single state in the first reservoir,
\[(\tilde{\Gamma}_{\rm in})_{jk}=\gamma_{\rm in}\,\delta_{j,{\rm in}}\delta_{k, {\rm in}}. \tag{17}\]
## III Transition-state theory
We now examine how the average reaction probability depends on the parameters of the model. Since transition-state theory deals with fluxes into or from a statistical reservoir, it is convenient to define a transmission coefficient \({\cal T}_{ci}\) of a channel \(c\) into the reservoir \(i\)
\[{\cal T}_{ci}=2\pi\rho_{i}\gamma_{c} \tag{18}\]
and its sum over channels,
\[{\cal T}_{i}=\sum_{c\in i}{\cal T}_{ci}. \tag{19}\]
For small values of \({\cal T}_{i}\) it has the physical significance of the transmission factor from an external channel into the statistical reservoir. As shown in Ref. [6], it is straightforward to carry out the statistical averaging for Eq. (2) in the limit that \({\cal T}_{i}\gg 1\) for both reservoirs.
We first examine the Green's function \(G_{2}\) and the coupling terms \(V_{i2}^{T}G_{i}V_{i2}\) in it. The average and standard deviation of the GOE Green's functions3 including the decay-width matrices are given by [6; 10; 11; 12]
Footnote 3: We note that there are mild restrictions on the ranges of the parameters in Eq. (20). In practice, the widths associated with the individual channels should be large compared to the level spacing in the GOE but small with respect to the boundaries of its eigenspectrum.
\[\langle G_{1}\rangle_{jk}=\frac{\pi\rho_{1}}{N_{1}}\left[i\delta_{jk}\pm\left(\frac{2(1+\delta_{jk})N_{1}}{\mathcal{T}_{1}}\right)^{1/2}\pm i\left(\frac{2(1+\delta_{jk})N_{1}}{\mathcal{T}_{1}}\right)^{1/2}\right] \tag{20}\]
and similarly for \(G_{3}\). The fluctuations go to zero in the limit \(\mathcal{T}_{i}\gg 1\) and these terms in \(G_{2}\) can be replaced by \(i\pi v_{i2}^{2}\rho_{i}\) times the unit matrix. Thus the correlations between \(G_{2}\) and the other terms in Eq. (12) vanish, allowing it to be evaluated as
\[\bar{G}_{2}=(H_{2}-V_{12}^{T}\langle G_{1}\rangle V_{12}-V_{32}^{T}\langle G_{3}\rangle V_{32}-E)^{-1}. \tag{21}\]
The two terms in parentheses in Eq. (12) are independent of each other so can also be replaced by their ensemble averages. As is shown in Appendix B, they are given by
\[\langle(V_{12}^{T}G_{1}\tilde{\Gamma}_{\rm in}G_{1}^{\dagger}V_{12})_{jk}\rangle=\frac{\gamma_{\rm in}}{N_{1}\gamma_{1}}2\pi v_{12}^{2}\rho_{1}\delta_{jk} \tag{22}\]
\[\langle(V_{32}^{T}G_{3}\tilde{\Gamma}_{3}G_{3}^{\dagger}V_{32})_{jk}\rangle=2\pi v_{32}^{2}\rho_{3}\delta_{jk}. \tag{23}\]
in the limit \({\cal T}_{i}\gg 1\). A computer program comparing these formulas with the full numerical evaluation of Eq. (2) is provided in the Supplementary Material.
We can cast the formulas in a more transparent notation by defining decay widths of the transition states to the right-hand and left-hand reservoirs as
\[\Gamma_{R}=2\pi v_{32}^{2}\rho_{3} \tag{24}\]
and
\[\Gamma_{L}=2\pi v_{12}^{2}\rho_{1}. \tag{25}\]
Using Eq. (107) in Appendix B, one obtains
\[\langle T_{\text{in,3}}\rangle = \frac{\mathcal{T}_{\text{in}}}{\mathcal{T}_{1}}\,\Gamma_{L} \Gamma_{R}\sum_{i,j}\langle|(G_{2})_{ij}|^{2}\rangle\,\sum_{b\in 3}\left(\frac{T_{b}}{ \sum_{b^{\prime}\in 3}T_{b^{\prime}}}\right), \tag{26}\] \[= \frac{\mathcal{T}_{\text{in}}}{\mathcal{T}_{1}}\,\Gamma_{L} \Gamma_{R}\sum_{i,j}\langle|(G_{2})_{ij}|^{2}\rangle \tag{27}\]
where \(\mathcal{T}_{\text{in}}=2\pi\gamma_{\text{in}}\rho_{1}/N_{1}\). It is remarkable that the ensemble average of the transmission coefficient is independent of \(\gamma_{3}\), and thus the insensitivity property [10; 13; 14] is realized.
Notice that \(G_{2}\) in Eq. (10) can be written
\[G_{2}=(H_{2}-i(\Gamma_{L}/2+\Gamma_{R}/2)\mathbb{1})^{-1}. \tag{28}\]
Then if \(H_{2}\) is diagonal with matrix elements \((H_{2})_{ij}=E_{i}\delta_{i,j}\), the transmission coefficient becomes
\[\langle T_{\text{in,3}}\rangle=\frac{\mathcal{T}_{\text{in}}}{\mathcal{T}_{1}} \sum_{i}\frac{\Gamma_{L}\Gamma_{R}}{E_{i}^{2}+(\Gamma_{L}+\Gamma_{R})^{2}/4}. \tag{29}\]
This is a well-known formula for electron transport through intermediate resonances [16; 17]. It agrees with an underlying assumption in transition-state models, that the contributions of the individual transition states are additive to the total transmission probability, provided flux conservation is maintained through the calculated decay rates4\(\Gamma_{i}\).
Footnote 4: This is not the case for the probability fluxes through the individual transition states.
The formula also shows that the contribution to the transmission coefficient is suppressed when the energy of a bridge state is outside the range of \(\pm(\Gamma_{L}+\Gamma_{R})/2\) around the incident energy. This is in marked contrast to models in which the transition state is an internal channel that remains open at all energies above the threshold. To maintain the correspondence to the CI formulation, one would have to include highly excited configurations that carry momentum along a collective coordinate. At some point the model would break down because the coupling matrix element would become small compared to \(v_{i}\), the coupling strength of the configurations within the GOE's.
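As a concrete illustration of Eq. (29), the following minimal sketch (ours, not the program from the Supplementary Material) evaluates the transmission both directly from Eqs. (27)-(28) and as a sum of Breit-Wigner resonances; the level energies and widths are arbitrary illustrative values, and the prefactor \(\mathcal{T}_{\text{in}}/\mathcal{T}_{1}\) is omitted.

```python
import numpy as np

# Check Eq. (29) against Eqs. (27)-(28) for a diagonal bridge Hamiltonian H_2.
# All numerical values below are illustrative; the prefactor T_in/T_1 is dropped.
rng = np.random.default_rng(0)
N2 = 8                                   # number of bridge configurations
E = rng.uniform(-5.0, 5.0, N2)           # diagonal elements of H_2
Gamma_L, Gamma_R = 0.3, 0.5

# Eq. (28): G_2 = (H_2 - i(Gamma_L + Gamma_R)/2)^(-1) with diagonal H_2
G2 = np.linalg.inv(np.diag(E) - 0.5j * (Gamma_L + Gamma_R) * np.eye(N2))

# Eq. (27): sum over all matrix elements of |G_2|^2
T_direct = Gamma_L * Gamma_R * np.sum(np.abs(G2) ** 2)

# Eq. (29): additive Breit-Wigner resonances
T_bw = np.sum(Gamma_L * Gamma_R / (E**2 + 0.25 * (Gamma_L + Gamma_R) ** 2))

print(T_direct, T_bw)                    # identical when H_2 is diagonal
```

For a diagonal \(H_{2}\) the two numbers agree exactly, which is the additivity of the individual transition-state contributions noted above.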
## IV Summary
While transition-state theory for decay of quantum complex systems is usually derived with a statistical approach, we have successfully derived it starting from a matrix Hamiltonian as is commonly used in configuration-interaction formulations. To this end, we considered two reservoirs described by random matrices. One of the configurations in the first reservoir undergoes transitions to configurations in the second reservoir through bridge configurations between them. A potential barrier may exist for the bridge configurations. This generalizes a model with a single barrier configuration that was discussed by Weidenmüller [6].
As in Ref. [6], we have shown that the average transmission coefficient from the entrance configuration to configurations in the second reservoir can be factorized into a product form of the formation and the decay probabilities of transition channels, in the limit of \(\mathcal{T}_{3}\gg 1\). This is also a consequence of the usual starting point of transition state theory, that once the system passes the barrier, it never comes back.
If the condition \(\mathcal{T}_{1}\gg 1\) is also satisfied, the transmission coefficient is further simplified to a product of the population probability of the first reservoir, the transmission coefficient over the barrier, and the decay probability of the configurations in the second reservoir. In that case the transmission coefficient can be expressed in terms of Breit-Wigner resonance decays, as has been long known in the field of electron transport.
Transition-state theory is a landmark framework for decays of quantum complex systems, but the conditions for transition-state theory to work have not yet been well clarified. The microscopic derivation based on the random matrix approach shown in this paper provides a necessary condition for transition-state theory to work. Such considerations would be important in the decay of complex systems at energies close to the barrier top.
###### Acknowledgements.
We thank Hans Weidenmüller for useful discussions. This work was supported in part by JSPS KAKENHI Grant Numbers JP19K03861 and JP23K03414.
## Appendix A The Green's function for a block-tridiagonal Hamiltonian
We invert the matrix
\[H-i\tilde{\Gamma}_{\text{in}}/2-i\tilde{\Gamma}_{1}/2-i\tilde{\Gamma}_{3}/2-E=\begin{pmatrix}\tilde{H}_{1}&V_{12}&0\\ V_{12}^{T}&\tilde{H}_{2}&V_{32}^{T}\\ 0&V_{32}&\tilde{H}_{3}\end{pmatrix}, \tag{A1}\]
where \(\tilde{H}_{i}\) are defined as
\[\tilde{H}_{1}\equiv H_{1}-i\Gamma_{\rm in}/2-i\Gamma_{1}/2-E, \tag{A2}\]
\[\tilde{H}_{2}\equiv H_{2}-E, \tag{A3}\]
\[\tilde{H}_{3}\equiv H_{3}-i\Gamma_{3}/2-E. \tag{A4}\]
The Green's function (6) satisfies the relation,
\[\begin{pmatrix}\tilde{H}_{1}&V_{12}&0\\ V_{12}^{T}&\tilde{H}_{2}&V_{32}^{T}\\ 0&V_{32}&\tilde{H}_{3}\end{pmatrix}\begin{pmatrix}G_{11}&G_{12}&G_{13}\\ G_{21}&G_{22}&G_{23}\\ G_{31}&G_{32}&G_{33}\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}, \tag{A5}\]
from which one finds
\[\tilde{H}_{1}G_{13}+V_{12}G_{23}=0, \tag{A6}\]
\[V_{12}^{T}G_{13}+\tilde{H}_{2}G_{23}+V_{32}^{T}G_{33}=0, \tag{A7}\]
\[V_{32}G_{23}+\tilde{H}_{3}G_{33}=1. \tag{A8}\]
From Eqs. (A6) and (A8), \(G_{13}\) and \(G_{33}\) read
\[G_{13}=-\tilde{H}_{1}^{-1}V_{12}G_{23}, \tag{A9}\]
and
\[G_{33}=\tilde{H}_{3}^{-1}-\tilde{H}_{3}^{-1}V_{32}G_{23}, \tag{A10}\]
respectively. Substituting these into Eq. (A7), one obtains
\[G_{23}=-(\tilde{H}_{2}-V_{12}^{T}\tilde{H}_{1}^{-1}V_{12}-V_{32}^{T}\tilde{H}_{3}^{-1}V_{32})^{-1}V_{32}^{T}\tilde{H}_{3}^{-1}. \tag{A11}\]
Combining Eqs. (A9) and (A11), one finally obtains Eq. (7).
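The block algebra above is easy to spot-check numerically. The following sketch (ours; the block sizes and entries are arbitrary stand-ins for \(\tilde{H}_{i}\), \(V_{12}\), and \(V_{32}\)) verifies Eq. (A11) against a direct inversion of the matrix in Eq. (A5):

```python
import numpy as np

# Verify the block formula (A11) for G_23 against direct matrix inversion.
rng = np.random.default_rng(1)
n1, n2, n3 = 4, 3, 5
H1t = rng.normal(size=(n1, n1)) - 0.2j * np.eye(n1)   # stand-in for H~_1
H2t = rng.normal(size=(n2, n2))                        # stand-in for H~_2
H3t = rng.normal(size=(n3, n3)) - 0.4j * np.eye(n3)   # stand-in for H~_3
V12 = rng.normal(size=(n1, n2))
V32 = rng.normal(size=(n3, n2))

M = np.block([
    [H1t,                V12,  np.zeros((n1, n3))],
    [V12.T,              H2t,  V32.T             ],
    [np.zeros((n3, n1)), V32,  H3t               ],
])
G23_direct = np.linalg.inv(M)[n1:n1 + n2, n1 + n2:]

G2 = np.linalg.inv(H2t - V12.T @ np.linalg.inv(H1t) @ V12
                       - V32.T @ np.linalg.inv(H3t) @ V32)
G23_formula = -G2 @ V32.T @ np.linalg.inv(H3t)

print(np.allclose(G23_direct, G23_formula))   # True
```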
Following a similar procedure, one can also derive
\[G_{11}=G_{1}+G_{1}V_{12}G_{2}V_{12}^{T}G_{1}. \tag{A12}\]
## Appendix B Ensemble average of \(VG\tilde{\Gamma}G^{\dagger}V^{T}\)
In this Appendix, we evaluate the ensemble average of a matrix \(VG\tilde{\Gamma}G^{\dagger}V^{T}\), where the elements of \(V\) are Gaussian-distributed random numbers with \(\langle V_{jk}^{2}\rangle=v^{2}\) and the Green's function \(G\) is given by \(G=(H-i\tilde{\Gamma}/2)^{-1}\). Here \(H\) is an element of the \(N\times N\) GOE, and \(\tilde{\Gamma}\) is a constant times the unit matrix, \(\tilde{\Gamma}_{jk}=\gamma\delta_{k,j}\). We closely follow Refs. [10; 13] to carry out the ensemble averaging. We first write the elements of the Green's function as
\[G_{jk}=\sum_{\lambda}\frac{\phi_{\lambda,j}\phi_{\lambda,k}}{E_{\lambda}-i\gamma/2}, \tag{B1}\]
where \(E_{\lambda}\) are the eigenvalues of the Hamiltonian \(H\) and \(\phi_{\lambda}\) are the corresponding eigenfunctions. The ensemble average of \(VG\tilde{\Gamma}G^{\dagger}V^{T}\) then reads,
\[\langle(VG\tilde{\Gamma}G^{\dagger}V^{T})_{jk}\rangle=\gamma\sum_{m}\sum_{\lambda,\lambda^{\prime}}\left\langle\frac{(\mathbf{V}_{j}\cdot\mathbf{\phi}_{\lambda})(\mathbf{V}_{k}\cdot\mathbf{\phi}_{\lambda^{\prime}})\phi_{\lambda,m}\phi_{\lambda^{\prime},m}}{(E_{\lambda}-i\gamma/2)(E_{\lambda^{\prime}}+i\gamma/2)}\right\rangle, \tag{B2}\]
where \(\mathbf{V}_{i}\cdot\mathbf{\phi}_{\lambda}\) is defined as \(\mathbf{V}_{i}\cdot\mathbf{\phi}_{\lambda}=\sum_{k}V_{ik}\phi_{\lambda,k}\). We now assert that in the limit \(N\gg 1\) the ensemble average can be factored into a product of three averages:
\[\langle\sum_{m}\phi_{\lambda,m}\phi_{\lambda^{\prime},m}\rangle=\delta_{\lambda,\lambda^{\prime}}, \tag{B3}\]
\[\langle(\mathbf{V}_{i}\cdot\mathbf{\phi}_{\lambda})(\mathbf{V}_{j}\cdot\mathbf{\phi}_{\lambda})\rangle=v^{2}\delta_{i,j}, \tag{B4}\]
and
\[\left\langle\sum_{\lambda}\frac{1}{E_{\lambda}^{2}+\gamma^{2}/4}\right\rangle=\frac{2\pi\rho_{0}}{\gamma}. \tag{B5}\]
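A quick way to see Eq. (B5) (an added intermediate step; it assumes the GOE level density \(\rho_{0}\) near the band center is approximately constant over the width \(\gamma\)) is to replace the sum over eigenvalues by a density-of-states integral,

\[\left\langle\sum_{\lambda}\frac{1}{E_{\lambda}^{2}+\gamma^{2}/4}\right\rangle\approx\rho_{0}\int_{-\infty}^{\infty}\frac{dE}{E^{2}+\gamma^{2}/4}=\rho_{0}\,\frac{\pi}{\gamma/2}=\frac{2\pi\rho_{0}}{\gamma},\]

which also fixes the factor of \(\rho_{0}\) needed for the final result (B6) below.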
The last two equations are derived in more detail in Refs. [10; 13]. The final result is
\[\left\langle(VG\tilde{\Gamma}G^{\dagger}V^{T})_{jk}\right\rangle=2\pi v^{2}\rho_{0}\,\delta_{j,k}. \tag{B6}\]
We also need the ensemble average of \(VG\tilde{\Gamma}_{\rm in}G^{\dagger}V^{T}\) with \(G=(H-i\tilde{\Gamma}/2-i\tilde{\Gamma}_{\rm in}/2)^{-1}\), where \(\tilde{\Gamma}_{\rm in}\) is given by Eq. (17). In this case, the sum over \(m\) is restricted to \(m=\mathrm{in}\) in Eq. (B2). Due to the invariance of the averages under unitary transformations, Eq. (B3) becomes \(\langle\phi_{\lambda,{\rm in}}\phi_{\lambda^{\prime},{\rm in}}\rangle=\delta_{\lambda,\lambda^{\prime}}/N\). The final result is
\[\left\langle(VG\tilde{\Gamma}_{\rm in}G^{\dagger}V^{T})_{jk}\right\rangle=\frac{\Gamma_{\rm in}}{N\gamma}2\pi v^{2}\rho_{0}\,\delta_{j,k}. \tag{B7}\]
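The appendix result lends itself to a direct Monte Carlo check. The sketch below (ours; the parameter values are illustrative and respect the restrictions of footnote 3, level spacing \(\ll\gamma\ll\) bandwidth) samples GOE members and compares the sample average of the diagonal of \(VG\tilde{\Gamma}G^{\dagger}V^{T}\) with \(2\pi v^{2}\rho_{0}\) from Eq. (B6):

```python
import numpy as np

# Monte Carlo check of Eq. (B6). GOE normalization: <H_jk^2> = w^2 (j != k),
# <H_jj^2> = 2 w^2, so the level density at the band center is
# rho_0 = sqrt(N) / (pi * w). Parameter values are illustrative only.
rng = np.random.default_rng(2)
N, M, w, v, gamma = 60, 40, 1.0, 0.7, 3.0
rho0 = np.sqrt(N) / (np.pi * w)
target = 2 * np.pi * v**2 * rho0          # right-hand side of Eq. (B6)

acc, n_samples = 0.0, 200
for _ in range(n_samples):
    A = rng.normal(size=(N, N)) * w
    H = (A + A.T) / np.sqrt(2)            # GOE member
    G = np.linalg.inv(H - 0.5j * gamma * np.eye(N))
    V = rng.normal(size=(M, N)) * v
    X = V @ G @ (gamma * np.eye(N)) @ G.conj().T @ V.T
    acc += np.mean(np.diag(X)).real
print(acc / n_samples, target)            # agree at the few-percent level
```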
---
id: 2310.16417
title: Enhanced Simultaneous Machine Translation with Word-level Policies
authors: Kang Kim, Hankyu Cho
published_date: 2023-10-25T07:10:42Z
link: http://arxiv.org/abs/2310.16417v1
abstract: Recent years have seen remarkable advances in the field of Simultaneous Machine Translation (SiMT) due to the introduction of innovative policies that dictate whether to READ or WRITE at each step of the translation process. However, a common assumption in many existing studies is that operations are carried out at the subword level, even though the standard unit for input and output in most practical scenarios is typically at the word level. This paper demonstrates that policies devised and validated at the subword level are surpassed by those operating at the word level, which process multiple subwords to form a complete word in a single step. Additionally, we suggest a method to boost SiMT models using language models (LMs), wherein the proposed word-level policy plays a vital role in addressing the subword disparity between LMs and SiMT models. Code is available at https://github.com/xl8-ai/WordSiMT.
---

# Enhanced Simultaneous Machine Translation with Word-level Policies
###### Abstract
Recent years have seen remarkable advances in the field of Simultaneous Machine Translation (SiMT) due to the introduction of innovative policies that dictate whether to READ or WRITE at each step of the translation process. However, a common assumption in many existing studies is that operations are carried out at the subword level, even though the standard unit for input and output in most practical scenarios is typically at the word level. This paper demonstrates that policies devised and validated at the subword level are surpassed by those operating at the word level, which process multiple subwords to form a complete word in a single step. Additionally, we suggest a method to boost SiMT models using language models (LMs), wherein the proposed word-level policy plays a vital role in addressing the subword disparity between LMs and SiMT models. Code is available at [https://github.com/xl8-ai/WordSiMT](https://github.com/xl8-ai/WordSiMT).
## 1 Introduction
Simultaneous Machine Translation (SiMT) commences the translation process while simultaneously receiving the input, making it an effective approach for applications that require minimal latency such as simultaneous interpretation or live broadcast. The development of a novel policy is central to research efforts in SiMT. This policy dictates the translation process by determining whether to execute a READ or WRITE action at each step of the process.
Neural SiMT models, like offline Neural Machine Translation (NMT) models, commonly employ Byte Pair Encoding (BPE) (Sennrich et al., 2016) or similar techniques to encode an input sentence into a sequence of tokens. Typically, a single READ or WRITE action of a SiMT policy is responsible for handling an encoded token, which may sometimes be a word but often a subword.
The development of BPE-based SiMT models and their policies has resulted in researchers focusing on working at the subword level. The performance analysis and implementation of many SiMT systems, to the best of our knowledge, have been carried out on encoded sequences of source and target subwords, rather than on the original source and target sentences1. This has led to two critical issues that need to be addressed.
Footnote 1: We provide a list of reference works pertaining to this case in A.1.
The first issue is the lack of a standardized tokenization and encoding scheme, meaning that different implementations may employ varying token sequences to encode identical text. This variability can impact latency evaluation results and complicate score comparisons across different systems.
The second issue is the missed opportunity to process more source tokens before writing each target token without added latency. For a BPE-based SiMT model, the input must be received on a word-by-word basis to ensure proper encoding of each word into a sequence of subwords. Consequently, when the model encodes a word and performs a READ to process only a subword, it delays the reading of the remaining subwords without any benefit in actual latency2, and may adversely impact
translation quality by causing the model to rely on incomplete representations extracted from partially-read words. Similarly, performing a WRITE to generate a subword earlier than the remaining subwords does not necessarily reduce latency, as the subword must wait for a complete word to be generated before it can be displayed.

Figure 1: Illustration of both token-level Wait-1 (red arrow lines) and word-level Wait-1 (green arrow lines) policies. The symbol ” ” at the end of a token indicates a word boundary.
In this paper, we show that establishing the unit of measuring and operating SiMT systems at the word level, rather than the subword level, offers a viable solution to these issues. Specifically, to tackle the first issue, we propose word-level latency metric calculation for measuring the latency of SiMT systems. This not only enables consistent comparisons between different systems but also provides a more accurate reflection of the actual latency experienced in SiMT applications that display translation results word by word.
To address the second issue, we illustrate that an existing token-level policy can be transformed into a word-level policy that inherently overcomes the second issue, resulting in improved performance. Word-level policies take into consideration word boundaries and perform a READ or WRITE action to sequentially process a sequence of tokens that form a complete word. This conversion process can be applied to any token-level policies, and our experiments reveal that state-of-the-art fixed and adaptive policies exhibit significantly better performance when transformed into their word-level counterparts. Notably, these word-level policies often outperform token-level policies, even when evaluated using a token-level latency metric, due to their enhanced utilization of input and output tokens.
Additionally, to boost translation accuracy, we suggest incorporating a pre-trained language model (LM) into a SiMT model, where the word-level policy plays a crucial role as a pivotal component. One of the major hurdles in utilizing an LM for SiMT is the vocabulary mismatch between the SiMT model and the LM. The difficulty of handling subword disparities when utilizing an LM for a downstream task is widely acknowledged (Liu et al., 2021; Wang et al., 2022), and it becomes particularly problematic in SiMT, as inconsistent subwords between the LM and SiMT model make processing the same source or target prefix challenging. Our study demonstrates that our proposed word-level policy effectively tackles this challenge, enabling a successful integration of LMs into SiMT systems.
## 2 Related Work
### Simultaneous Machine Translation
SiMT systems that employ a fixed policy utilize a pre-defined sequence of READ and WRITE operations for each source sentence. STATIC-RW (Dalvi et al., 2018) and Wait-k (Ma et al., 2019) policies first read k source tokens, then alternate between reading and writing a single token. Elbayad et al. (2020) propose the multi-path training of a Wait-k model to train a single model that supports different k values at test time. Zhang et al. (2021) improve Wait-k policy using knowledge distillation from an offline MT model, while Zhang and Feng (2021) suggest a Mixture-of-Experts Wait-k Policy where predictions from multiple k values are combined inside a single model.
In contrast, research efforts on adaptive policies focus on the development of dynamic decision-making processes for READ/WRITE actions. Cho and Esipova (2016) first introduce model-based adaptive criteria for Neural SiMT. Gu et al. (2017) propose to learn a policy by using reinforcement learning. Raffel et al. (2017) introduce Monotonic Attention that ensures monotonic alignment in the attention mechanism. Succeeding works improve it by extending the alignment window (Chiu and Raffel, 2017; Arivazhagan et al., 2019), extending it as monotonic multi-head attention (MMA) (Ma et al., 2019), or learning transposed policies between the forward and backward models (Zhang and Feng, 2022c).
Figure 2: An exemplary case depicting the difference between token-level and word-level Wait-k policies and their Average Lagging (AL) scores. The token-level model begins translating in the middle of reading “Tremendously” and fails to recover from the incorrectly translated target prefix. On the other hand, the word-level model processes “Tremendously” in a single READ action and produces a correct translation.
Zheng et al. (2020) derive a policy by composing Wait-k models trained with different values of k. Zhang and Feng (2022a) learn to predict the alignment between each target token and the source token. Zhang and Feng (2022b) measure accumulated information from source tokens to decide whether to write the current target token.
Despite significant advancements, the impact of operating policies at the word level has not been thoroughly explored in existing works, which have mainly focused on developing and evaluating systems at the token level. In this paper, we address this gap by demonstrating that implementing various types of policies at the word level consistently outperforms their token-level counterparts.
### Utilizing pre-trained LM for MT
Since the successful utilization of Transformer-based LMs pre-trained on large text corpora for downstream NLP tasks Devlin et al. (2019); Liu et al. (2019); Lample and Conneau (2019), the utilization of these models for MT has become a significant research area. Several studies have demonstrated the effectiveness of incorporating encoder-only LMs into NMT models. Weng et al. (2020); Yang et al. (2020) combine the LM representations with the encoder's representation using a gating mechanism. Zhu et al. (2020) propose attention between BERT and both the encoder and decoder. Weng et al. (2022) leverage mBERT as an encoder and introduce a decoder that attends to grouped representations of the encoder output.
Another research direction focuses on developing LMs with the encoder-decoder architecture designed for NMT as the target downstream task Lewis et al. (2020); Liu et al. (2020). These models show improvements particularly for low-resource language pairs. To enhance their adaptation for MT, various methods have been proposed, including fine-tuning specific parts of the LMs Cooper Stickland et al. (2021), reducing domain mismatch and overestimation Wang et al. (2022) and mitigating the copying behavior Liu et al. (2021).
The integration of pre-trained LMs into SiMT remains an underexplored area of research. To date, Indurthi et al. (2022) is the only related study we are aware of. It improves MMA by integrating the LM's prediction of the next target token, which is encoded using their model's vocabulary before being inputted into the model. However, this approach sacrifices the semantic coherence of the original tokens due to token fragmentation. Additionally, their approach falls under target-side LM integration, overlooking the potential advantages of source-side LM integration.
In this paper, we demonstrate an effective way of integrating a source-side LM into SiMT systems, offering a more versatile solution that can be integrated into most existing neural SiMT models. Building upon previous research conducted in offline MT Zhu et al. (2020), we introduce essential modifications, with a particular focus on word-level policies as a pivotal component. The effective management of vocabulary mismatches between the LM and the SiMT model is contingent upon the successful implementation of a word-level SiMT policy, a key aspect that we address in our study.
## 3 Proposed Methods
In this section, we propose the concept of employing a word-level latency metric and outline our conversion process for translating token-level policies into their word-level equivalents. Additionally, we present an integration of LM into SiMT, highlighting the advantages of utilizing word-level policies.
### Preliminaries
Given a source sentence \(\textbf{x}=(x_{1},x_{2},...x_{n})\), the goal of a SiMT model is to generate a target sentence of \(\textbf{y}=(y_{1},y_{2},...y_{m})\) while minimizing latency metrics. A SiMT model's policy, represented by the variable \(g_{i}\), determines the number of source tokens to process before predicting target token \(y_{i}\). Then the probability of generating **y** given **x** is formulated as follows:
\[p(\textbf{y}|\textbf{x})=\prod_{i}^{|\textbf{y}|}p(y_{i}|\textbf{x}_{\leq g_{i }},\textbf{y}_{<i};\theta) \tag{1}\]
where \(\theta\) is the model's parameters which are commonly optimized with a cross-entropy loss.
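To make the role of the policy concrete, the following schematic (ours; `src_stream` and `model.predict_next` are placeholder names rather than an actual library interface) shows how \(g_{i}\) drives a greedy simultaneous decoding loop:

```python
# Schematic greedy SiMT inference driven by a policy g (cf. Eq. (1)).
# `src_stream` and `model.predict_next` are hypothetical placeholders.
def simultaneous_decode(model, src_stream, g, max_len=200):
    src, tgt = [], []
    for i in range(1, max_len + 1):
        # READ source tokens until g_i of them are available (or input ends)
        while len(src) < g(i) and not src_stream.exhausted():
            src.append(src_stream.read())
        y = model.predict_next(src, tgt)   # WRITE target token y_i
        tgt.append(y)
        if y == "</s>":
            break
    return tgt
```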
Transformer encoder-decoder model Vaswani et al. (2017) is currently the most widely used architecture for SiMT. To avoid redundant encoding of the input sequence after each READ operation, the encoder is typically modified to encode the source tokens unidirectionally Elbayad et al. (2020). Alternatively, more advanced techniques like the recurrent Linear Transformer Khardipraja et al. (2021) or Partial Bidirectional Encoding Iranzo Sanchez et al. (2022) can be adopted to enhance the encoding capabilities further.
During the evaluation of SiMT systems, translation quality is commonly assessed in conjunction with the latency required for generating translations. Various metrics have been proposed to calculate latency scores, with Average Lagging (AL) (Ma et al., 2019) being the most commonly used metric.
### Measuring latency based on the word level
As detailed in A.1, a substantial body of prior research work assesses the performance of SiMT systems by utilizing a latency metric on encoded source tokens under different tokenization and encoding schemes. This practice results in each system being evaluated on non-identical token sequences for the same dataset, thereby making it challenging to accurately compare scores across different systems.
To address this, we propose word-level latency score calculation by considering the word boundaries in token-level sequences. Specifically, when the first token of a source word is processed through a READ operation, we consider it as reading the corresponding word. Similarly, when the last token of a target word is written via a WRITE operation, we consider it as writing that word. By doing so, the latency scores are calculated consistently, regardless of the tokenization and encoding of the input. This ensures that results from different systems can be compared fairly.
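As a concrete sketch (ours; the function and flag names are illustrative), a word-level Average Lagging score can be computed from a token-level action sequence together with word-boundary flags:

```python
# Word-level AL in the style of Ma et al. (2019): a source word counts as
# read when its first token is read; a target word counts as written when
# its last token is written. `actions` is a sequence of "R"/"W".
def word_level_AL(actions, src_starts_word, tgt_ends_word):
    n_src_words = sum(src_starts_word)
    delays, words_read, s, t = [], 0, 0, 0
    for a in actions:
        if a == "R":
            if src_starts_word[s]:
                words_read += 1
            s += 1
        else:
            if tgt_ends_word[t]:
                delays.append(words_read)   # g(i) for target word i
            t += 1
    n_tgt_words = len(delays)
    r = n_tgt_words / n_src_words
    # tau: first target word written after the full source has been read
    tau = next((i + 1 for i, g in enumerate(delays) if g >= n_src_words),
               n_tgt_words)
    return sum(delays[i] - i / r for i in range(tau)) / tau
```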
### Word-level SiMT policies
The proposed word-level policy restricts a SiMT policy's transition from READ to WRITE or vice versa to occur exclusively at the boundaries of words. Any token-level policy can be transformed to operate at the word-level by following the conversion process we outline below.
Concretely, we ensure that a word-level policy does not write a target token in the middle of reading a sequence of source tokens that make up a word. To accomplish this word-level READ, we delay \(g_{i}\) until it reaches the nearest source word boundary. We formally define \(r_{i}\) that has a refined value of \(g_{i}\) based on the word boundaries in **x** as follows:
\[r_{i}:=\min\{j|j\geq g_{i}\wedge j\in B_{S}\} \tag{2}\]
Here, \(B_{S}\) denotes the indices of the source words' last tokens. Substituting \(r_{i}\) for \(g_{i}\) as a policy transforms it into another policy that upholds the same decision-making criterion while ensuring uninterrupted reading of an entire word when initiating the reading of a token.
Similarly to the word-level READ, we design a word-level WRITE to balance READ and WRITE actions throughout the translation. To achieve this, we can modify \(r_{i}\) such that it writes until it produces a token that ends with an end-of-word symbol. We define \(w_{i}\) that satisfies this as follows:
\[b_{i}:=\min\{j|j\geq i\wedge j\in B_{T}\} \tag{3}\]
\[w_{i}:=\begin{cases}r_{i},&\text{if }i=1\lor b_{i-1}\neq b_{i}\\ w_{i-1},&\text{otherwise}\end{cases} \tag{4}\]
where \(B_{T}\) denotes the indices of the target words' last tokens and \(b_{i}\) represents the index of the final token in the word that includes \(y_{i}\). By employing \(w_{i}\) in place of \(r_{i}\) (or \(g_{i}\)), we ensure that the policy consistently composes entire words without interruptions from any READ actions. This approach effectively reduces latency by facilitating faster writing of certain tokens compared to the original policy, thereby compensating for the increased latency resulting from word-level READ operations. Figure 1 provides a visual comparison between word-level and token-level policies in the context of Wait-1, with the word-level policy encompassing both word-level READ and WRITE operations.
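The conversion is mechanical. A direct transcription of Eqs. (2)-(4) follows (our sketch; positions are 1-indexed, and the subword segmentations in the example are hypothetical):

```python
# Token-to-word policy conversion following Eqs. (2)-(4). g[i] is the number
# of source tokens read before writing target token i; B_S / B_T hold the
# (1-indexed) positions of the last token of each source / target word.
def to_word_level(g, B_S, B_T):
    r = [min(j for j in B_S if j >= gi) for gi in g]                # Eq. (2)
    b = [min(j for j in B_T if j >= i + 1) for i in range(len(g))]  # Eq. (3)
    w = []
    for i in range(len(g)):                                         # Eq. (4)
        if i == 0 or b[i - 1] != b[i]:
            w.append(r[i])     # first token of a new word: re-anchor to r_i
        else:
            w.append(w[-1])    # same word: finish it without further READs
    return w

# Hypothetical segmentation of "Tremendously good": source tokens
# [_Tre, mendous, ly, _good] give B_S = {3, 4}; a two-word target whose
# words end at token positions 2 and 3 gives B_T = {2, 3}.
print(to_word_level([1, 2, 3], {3, 4}, {2, 3}))   # [3, 3, 3]
```

In the example, token-level Wait-1 would start writing after a single subword of "Tremendously", whereas the converted policy reads the whole word before writing and then completes each target word without interleaved READs.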
### Intra-word bidirectional encoding
Unidirectional encoding in SiMT is vital for managing computational complexity and training efficiency. However, it has an inevitable consequence of weakening the source sequence representations compared to bidirectional encoding. This is an additional factor contributing to the lower translation
quality of SiMT compared to offline models, along with the early translation from partial inputs.

Figure 3: Comparison of masking in the token-level unidirectional attention (left) and the intra-word bidirectional encoding (right). Word boundaries are represented by vertical/horizontal bars on each axis.
To mitigate this issue, we utilize a technique called _intra-word bidirectional encoding_. At the word level, this approach involves unidirectional encoding for each word in the input sentence, meaning past words cannot attend to future words. However, at the subword level, subwords within the same word can attend to each other, allowing past subwords to attend to future subwords within the same word. Since READ operates at the word level in word-level policies, this encoding does not require recomputation during each WRITE operation. It only necessitates a single forward pass, similar to token-level unidirectional encoding. However, it can produce a better encoded representation by enabling attention to more tokens. An example of masking to enable intra-word bidirectional encoding is depicted in Figure 3.
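A minimal sketch of such a mask (ours; it assumes a `word_ids` list mapping each source token to the index of its word):

```python
import torch

# Attention mask for intra-word bidirectional encoding (cf. Figure 3):
# query position q may attend to key position k iff k's word does not come
# after q's word, i.e. causal across words, bidirectional within a word.
def intra_word_mask(word_ids):
    w = torch.tensor(word_ids)
    return w.unsqueeze(0) <= w.unsqueeze(1)   # [query, key], True = visible

# tokens [_Tre, mendous, ly, _good] -> word ids [0, 0, 0, 1]
print(intra_word_mask([0, 0, 0, 1]).int())
```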
### Integration of LM into SiMT through word-level policies
In this subsection, we showcase an additional benefit of word-level policies when integrating an LM into a SiMT system. One of the key challenges in this integration is the vocabulary mismatch between the LM and the SiMT model, which hinders ensuring that both models process an equal amount of input prefix at each translation step.
One possible solution is to use the LM's vocabulary for the SiMT model. However, the LM's training data may not align well with the specific domain targeted by the SiMT system Wang et al. (2022). This can result in suboptimal vocabulary for the SiMT model compared to a vocabulary obtained from in-domain data Liu et al. (2021). Another option is to explore methods to bridge vocabulary gaps Kim et al. (2019); Sato et al. (2020); Liu et al. (2021), but they are either validated only in certain transfer learning scenarios or require an additional training phase to train adapters or fine-tuning the entire LM using pre-training data.
In this paper, we introduce a method for leveraging an LM in a manner that facilitates the integration of an off-the-shelf LM into a SiMT model, utilizing a word-level policy, regardless of vocabulary mismatches and the internal structure of the SiMT model. Specifically, we employ an LM fused attention for both the encoder and decoder, following the approach outlined in Zhu et al. (2020), but with two notable modifications.
Firstly, we replace BERT with a decoder-only auto-regressive LM Radford et al. (2019); Lin et al. (2022) for unidirectional encoding of the input, aligning with SiMT models for efficient training and inference. Secondly, the attention between the SiMT model and the LM occurs when both models execute a word-level READ for an input word. This ensures they interact only when they process an equal amount of input prefix, naturally resolving the synchronization issue. Additionally, as they align at every word boundary, the SiMT model can operate independently with a vocabulary derived from in-domain data, while the LM continues to use its original vocabulary. Unlike methods targeting specific SiMT models Indurthi et al. (2022), our approach can benefit any Neural SiMT model with any decoder-only LM. Figure 4 illustrates the proposed integration of the LM with word-level Wait-1.
Figure 4: Illustration of LM-fused attention with the word-level Wait-1. Word and model activations processed at a specific stage are highlighted in red. When a source word is received, it is independently encoded by the LM’s vocabulary and the in-domain vocabulary. The hidden activations of the word from the LM is then utilized in the encoder. The decoder generates a sequence of tokens for a word by using both the LM and encoder activations.
## 4 Experiments
### Datasets
We use the following two datasets for experiments:
**IWSLT 17 English to French (En-Fr)** (Cettolo et al., 2017) consists of 242k pairs split into 233k training, 890 validation, and 8,597 test pairs.
**WMT 15 German to English (De-En)** comprises 4.5 million pairs. We use newstest2013 (3,000 pairs) for validation and newstest2015 (2,169 pairs) for testing.
### Experimental settings
Our methods are evaluated on the following systems, all of which are based on Transformer Vaswani et al. (2017) with unidirectional encoding.
**Wait-k** (Ma et al., 2019): A model operating under Wait-k policy. A single model is trained for all k values by randomly sampling k during training (Elbayad et al., 2020). For the word-level Wait-k policy, we define it as reading the first k words and then alternating between reading one source word and writing one target word. 3
Footnote 3: Technically, the word-level policy derived from the token-level Wait-k through the conversion process in Section 3.3 can wait between 1 and 4 tokens, depending on the input encoding. Therefore, it is not equivalent to the word-level Wait-k policy we define here, which always waits for k words.
**MoE Wait-k** (Zhang and Feng, 2021): A Mixture-of-Experts model initially trained with fixed Experts weights and then fine-tuned with dynamic Experts weights. The word-level Wait-k policy of different k is applied to each expert.
**ITST** (Zhang and Feng, 2022): A SoTA SiMT model equipped with Information-Transport-based policy that quantifies information weights from each source to the current target token. To implement word-level ITST, we convert the number of source tokens required for the first target token of each target word into the corresponding number of source words using Equation 2. Once the required number of source words is read, we complete the translation of the word. Additionally, we calculate the latency cost at the word level.
We compare each system by training both token-level and word-level models, with and without an LM. For models with an LM, we use XGLM-564M (Lin et al., 2022) and employ the two-stage training in which we first train the SiMT model without the LM and then initialize the encoder and decoder of the LM-fused model with the pre-trained weights (Zhu et al., 2020). We also tested the single-stage training where all trainable parameters are trained jointly with the LM from scratch. The differences between these strategies are discussed in Section 5.3. We tokenized and encoded the input using sentencepiece (Kudo and Richardson, 2018) and applied BPE with a vocabulary size of 32k. We use sacreBLEU for BLEU calculation (Post, 2018). For models with token-level policies, we trained
models with the official implementations456, and implemented word-level policies based on these implementations. More training details are described in A.2.

Figure 5: Results of translation quality v.s. word-level AL.
Footnote 4: Efficient Wait-k: [https://github.com/elbayadm/attn2d](https://github.com/elbayadm/attn2d)
Footnote 5: MoE Wait-k: [https://github.com/ictnlp/MoE-Waitk](https://github.com/ictnlp/MoE-Waitk)
### Main results
The performance of each system is compared in Figure 5. Notably, when measuring latency at the word level, the word-level policy proves to be highly effective for all three systems across different latency levels and datasets, resulting in superior performance. The only exception is observed in the En-Fr ITST models that demonstrate similar levels of performance. The incorporation of an LM using the proposed LM-fused attention further enhances the performance for all word-level configurations. This observation highlights the suitability of word-level policies for the LM-fused attention approach and underscores the effectiveness of leveraging an LM to enhance SiMT systems.
Notably, as depicted in Figure 11, the word-level policies also outperform or compete with the token-level policies in token-level latency. This can be attributed to the enhanced token representations under the word-level policy, thanks to the contextual information provided by all other tokens belonging to the same word for each token.
## 5 Analysis
To validate the effectiveness of word-level policies from multiple angles, we conduct several analyses on various settings. All the experiments were conducted on WMT De-En with transformer-big unless specified otherwise.
### Ablation study
#### 5.1.1 Effects of Word-level READ and WRITE
To gain insights into the functionality of word-level READ and WRITE actions, we trained Wait-k models with various policy settings and conducted a performance comparison. Specifically, we examined models with the following policy settings:
**WW**: word-level READ and WRITE.
**TW**: token-level READ and word-level WRITE.
**WT**: word-level READ and token-level WRITE.
**TkTk**: a simpler baseline policy which involves alternating reading k source tokens and writing k target tokens without considering word boundaries.
The results are presented in Figure 6 (a). The word-level policy (**WW**) consistently outperforms **TW** across all latency settings, which is attributed to **TW**'s imbalance between the number of source and target prefixes processed in each step. Additionally, **WT** achieves a minimal AL of approximately 3.6, indicating that it is not well-suited for scenarios that require low latency. Lastly, **TkTk** shows significantly worse performance than **WW**, suggesting that reading or writing a few consecutive tokens without considering semantic boundaries offers no benefits, unlike word-level policies.
#### 5.1.2 Effects of intra-word bidirectional encoding
In order to assess the impact of the proposed intra-word bidirectional encoding, we trained word-level Wait-K models with and without it and compared the accuracy of the two models across different AL settings. The results are presented in Figure 6 (b).
Remarkably, the model equipped with the intra-word bidirectional encoding consistently achieved higher BLEU scores compared to the model without it, across all tested latency settings. This provides strong evidence of the effectiveness of the intra-word bidirectional encoding in enhancing SiMT performance.
### Effectiveness of word-level policies for LM
In this subsection, we aim to explore the significance of word-level policies in leveraging LM for SiMT. We compare different configurations based on three factors:
**Word vs. Token**: The policy type that the model operates with.
**In-domain vocab vs. LM vocab**: Whether the
model uses an in-domain vocabulary obtained from the in-domain training data or uses the LM's vocabulary for the source language. Note that the use of "in-domain vocab" is specific to the "Word" configuration due to the vocabulary mismatch.

Figure 6: Ablation studies for word-level policies. (a): Comparison of word-level Wait-k policies with different policies. (b): Comparison of word-level Wait-k models with and without the intra-word bidirectional encoding.
**LM-fused attention vs. LM embed**: Whether the model incorporates an LM using the LM-fused attention or replacing the embedding layer of the encoder with the LM's embedding (Xu et al., 2021). The latter approach uses "LM vocab" by design.
Figure 7 showcases the results. The models with word-level policies consistently outperform those with token-level policies by a significant margin in both LM-fused attention and LM embedding settings, underscoring the importance of word-level policy adoption for effective LM integration. The top-performing configuration is the proposed **LMAttn-In-domain vocab-Word**, demonstrating that the highest translation accuracy is achieved when the SiMT model operates with an in-domain vocabulary. Additionally, it is evident that the "LM embed" approach performs notably worse than the proposed LM-fused attention, further affirming the latter's superiority.
### Effects of LM-fused attention with various LMs and training configurations
To assess the effectiveness and broad applicability of our proposed LM integration, we conducted experiments on the IWSLT17 En-Fr dataset with two decoder-only LMs of different sizes: the 137M parameter GPT-2 model (Radford et al., 2019) and the XGLM-564M model. Additionally, we explore the option of training models in a single training stage instead of the two-stage training. The results, presented in Figure 8, demonstrate that the GPT-2 model also exhibits improved performance with LM-fused attention, although the impact is naturally less pronounced compared to XGLM due to the difference in model size. Moreover, although models trained using the single-stage training generally exhibit lower performance compared to those trained using the two-stage training, they still outperform models without the LM for most configurations. This indicates that the LM-fused attention is applicable to various types of LMs and remains effective even when using the single-stage training strategy. This flexibility allows users to choose a training approach and model configuration that aligns best with their desired accuracy goals and computational constraints.
### Policy quality comparison
To assess the accuracy of policies in determining when to read or write, we adopt the methodology of estimating the quality of a policy in prior research (Zhang and Feng, 2022b, c; Guo et al., 2022). We measure the quality of a policy by analyzing the proportion of aligned source words received before translating on RWTH De-En alignment dataset 7.
Footnote 7: [https://www-i6.informatik.rwth-aachen.de/goldAlignment/](https://www-i6.informatik.rwth-aachen.de/goldAlignment/)
To ensure accurate word-level alignment calculation, we consider an aligned source word is read before writing a ground truth (GT) target word if the last token of the source word is read before the first token of the target word is written. The results of this analysis are presented in Figure 9. It is observed that word-level policies, both for ITST and Wait-k, exhibit better alignments across most latency settings. This suggests that word-level policies contribute to revising premature WRITE actions by guiding the model to read the remaining tokens of the aligned word, without negatively impacting the model's latency.
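For reference, a sketch of this proportion (ours; the argument names are illustrative and assume per-token timestamps recorded during decoding):

```python
# Fraction of gold-aligned source words that are fully read before the
# first token of the aligned target word is written. `align_pairs` holds
# (source word, target word) index pairs from the gold alignment.
def aligned_fraction(align_pairs, read_step_of_src_last_tok,
                     write_step_of_tgt_first_tok):
    ok = sum(read_step_of_src_last_tok[s] < write_step_of_tgt_first_tok[t]
             for s, t in align_pairs)
    return ok / len(align_pairs)
```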
Figure 8: Translation accuracy comparison of LM-fused attention models with different training configurations. (a): En-Fr Transformer small (b) De-En Transformer big
Figure 7: Comparison of models with different LM integration.
## 6 Conclusions
This paper explores the potential benefits of word-level operations in SiMT systems. We propose word-level latency calculation to ensure fair and accurate latency comparisons. We introduce a conversion process that transforms token-level policies into word-level policies, enabling the processing of multiple subwords that form a word within a single READ or WRITE action. Additionally, we propose the integration of LM-fused attention, which combines an autoregressive LM model into SiMT models with word-level policies. Experimental results demonstrate the superiority of word-level policies compared to token-level policies, as well as the effectiveness of the LM integration. Our findings highlight the crucial role of word-level policies in the integration process.
## 7 Limitations
While the proposed word-level policy implementation is widely applicable to most existing SiMT systems, it is important to note that systems utilizing languages with a writing style that lacks spaces or other delimiters between words or sentences (e.g., Chinese) are unable to derive benefits from this approach. Furthermore, it is important to consider that while the proposed LM-fused attention proves effective in enhancing translation quality across all latency levels, integrating a large LM may necessitate a faster compute capability to fulfill the low-latency demands of the SiMT task.
---
id: 2306.15820
title: Polyhedra with hexagonal and triangular faces and three faces around each vertex
authors: Linda Green, Stellen Li
published_date: 2023-06-27T22:30:12Z
link: http://arxiv.org/abs/2306.15820v1
abstract: We analyze polyhedra composed of hexagons and triangles with three faces around each vertex, and their 3-regular planar graphs of edges and vertices, which we call "trihexes". Trihexes are analogous to fullerenes, which are 3-regular planar graphs whose faces are all hexagons and pentagons. Every trihex can be represented as the quotient of a hexagonal tiling of the plane under a group of isometries generated by $180^\circ$ rotations. Every trihex can also be described with either one or three "signatures": triples of numbers $(s, b, f)$ that describe the arrangement of the rotocenters of these rotations. Simple arithmetic rules relate the three signatures that describe the same trihex. We obtain a bijection between trihexes and equivalence classes of signatures as defined by these rules. Labeling trihexes with signatures allows us to put bounds on the number of trihexes for a given number of vertices $v$ in terms of the prime factorization of $v$ and to prove a conjecture concerning trihexes that have no "belts" of hexagons.
---

# Polyhedra with hexagonal and triangular faces and three faces around each vertex
###### Abstract
We analyze polyhedra composed of hexagons and triangles with three faces around each vertex, and their \(3\)-regular planar graphs of edges and vertices, which we call "trihexes". Trihexes are analogous to fullerenes, which are \(3\)-regular planar graphs whose faces are all hexagons and pentagons. Every trihex can be represented as the quotient of a hexagonal tiling of the plane under a group of isometries generated by \(180^{\circ}\) rotations. Every trihex can also be described with either one or three "signatures": triples of numbers \((s,b,f)\) that describe the arrangement of the rotocenters of these rotations. Simple arithmetic rules relate the three signatures that describe the same trihex. We obtain a bijection between trihexes and equivalence classes of signatures as defined by these rules. Labeling trihexes with signatures allows us to put bounds on the number of trihexes for a given number of vertices \(v\) in terms of the prime factorization of \(v\) and to prove a conjecture concerning trihexes that have no "belts" of hexagons.
## 1 Introduction
Motivated by the study of polyhedra, this paper analyzes \(3\)-regular planar graphs whose faces all have three or six sides. We call these graphs _trihexes_. Trihexes have been analyzed by Deza and Dutour ([2] and [3]), Grünbaum and Motzkin [4], and others. We refer to faces with three sides as "triangles" and faces with six sides as "hexagons", even though these faces may not have straight edges and may be unbounded.
Trihexes are analogous to fullerenes, which are \(3\)-regular planar graphs whose faces all have five or six sides. Fullerenes have received much attention because when viewed as polyhedra, they have physical manifestations as carbon molecules. Fullerenes have been analyzed by Brinkmann, Goedgebeur, and McKay [1] and others.
In this paper, Section 3 explains how every triple of numbers \((s,b,f)\) with \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\) (a "signature") describes a unique trihex. The number \(s\) gives the number of hexagons that lie in a chain capped by triangles (a "spine"), \(b\) gives the number of rings of hexagons ("belts") that surround and separate the spines, and \(f\) describes the rotation of the two spines relative to each other. Furthermore, every trihex can be described with at least one signature. Every trihex can also be described as the quotient of a hexagonal tiling of the plane under a group generated by \(180^{\circ}\) rotations, as shown in Section 4. In this context, the signatures \((s,b,f)\) describe the arrangement of the rotocenters of these rotations. Although there can be three distinct signatures that describe the same trihex, simple arithmetic rules given in Section 5 relate the signatures that characterize the same trihex. We thus obtain a bijection between trihexes and equivalence classes of signatures as defined by these rules. In Section 7, we use our classification of trihexes in terms of signatures to put bounds on the number of trihexes with \(v\) vertices in terms of the prime factorization of \(\dfrac{v}{4}\).
In Section 8 we prove a conjecture about the "graph of curvatures" from [2].
The results in this paper can be applied to polyhedra whose faces are all triangles and hexagons and have three faces around each vertex, but are not necessarily convex. We will call these polyhedra _trihex polyhedra_. The tetrahedron is a convex trihex polyhedron described by the triple \((0,0,0)\). All other convex trihex polyhedra are described by triples \((s,b,f)\) with \(s>0\), and cannot be described by triples with \(s=0\). Non-convex trihex polyhedra can be described by triples \((s,b,f)\) where \(s=0\) and \(b>0\). The correspondence between trihexes and convex trihex polyhedra follows from Steinitz's theorem [5] or [8], as explained in Section 6.
## 2 Definitions and preliminaries
**Definition 2.1**.: A _trihex_ is a finite, connected, 3-regular planar graph whose faces all have three or six sides.
**Definition 2.2**.: A _polyhedron_ is a union of polygons in \(R^{3}\) which is homeomorphic to a sphere. Any pair of polygons intersect either in the empty set, a vertex, an edge, or a union of vertices and/or edges.
**Definition 2.3**.: A _trihex polyhedron_ is a polyhedron whose faces all are triangles or hexagons and has three faces around each vertex.
**Definition 2.4**.: Two polyhedra are _equivalent_ if there is an orientation preserving homeomorphism of the sphere that takes the faces, edges, and vertices of one polyhedron to the faces, edges, and vertices, respectively, of the other.
The requirement that the homeomorphism be orientation preserving means that left-handed and right-handed versions of chiral polyhedra are not equivalent. We make the same distinction for trihexes. By a theorem of Whitney ([7], or see [6]) two planar graphs are isomorphic if and only if there is a homeomorphism of the sphere whose restriction to the planar graph gives a graph isomorphism. We consider trihexes equivalent if and only if an orientation-preserving homeomorphism can be found.
**Definition 2.5**.: Two trihexes are _equivalent_ if they are not only isomorphic as graphs but if there is also an orientation-preserving homeomorphism of the plane that takes one graph to the other.
Deza and Dutour ([2] and [3]) describe a family of 2-connected trihexes denoted by \(G_{n}\) or \(T_{n}\), where \(n\) is half the number of hexagons. We will refer to these trihexes as godseyes, after the woven yarn craft figure that they resemble.
**Definition 2.6**.: A _godseye_ is a trihex that consists of two adjacent triangles, surrounded by one or more nested pairs of hexagons, with two more adjacent triangles on the outside. The hexagons in each nested pair meet along opposite sides. See Figure 1.
Figure 1: Godseye with three pairs of hexagons.

Godseyes can have any even number of hexagons; however, a standard Euler characteristic argument shows that every trihex has exactly four triangular faces. See, for example, [4]. The argument is as follows. Let \(f_{6}\) be the number of hexagons in the trihex and \(f_{3}\) be the number of triangular faces. The number of faces is \(F=f_{6}+f_{3}\). The number of edges is \(E=\dfrac{6f_{6}+3f_{3}}{2}\), since each hexagonal face has six edges, each triangular face has three edges, and each edge is shared by two faces. The number of vertices is \(V=\dfrac{6f_{6}+3f_{3}}{3}\), since each hexagonal face has six vertices, each triangular face has three vertices, and each vertex is shared by three faces. By Euler's formula, we have \(V-E+F=2\). Therefore, \(\dfrac{6f_{6}+3f_{3}}{3}-\dfrac{6f_{6}+3f_{3}}{2}+f_{6}+f_{3}=2\), which implies \(f_{3}=4\).
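The computation can also be checked symbolically, as in the following short sketch:

```python
from sympy import symbols, solve

# Euler-characteristic check: V - E + F = 2 forces f3 = 4 for any f6.
f6, f3 = symbols("f6 f3")
V = (6 * f6 + 3 * f3) / 3
E = (6 * f6 + 3 * f3) / 2
F = f6 + f3
print(solve(V - E + F - 2, f3))   # [4], independent of f6
```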
Euler's formula places no restrictions on the number of hexagonal faces; however, Grünbaum and Motzkin showed that only even numbers of hexagonal faces can be achieved [4].
## 3 Building trihexes from spines and belts
In this section, we describe ways to construct trihexes out of strings of hexagons capped by triangles ("spines"), possibly with rings of hexagons ("belts") separating the spines. Our construction echoes the construction given by Grünbaum and Motzkin in [4] but adds the consideration of "offset" defined below.
**Definition 3.1**.: A _belt_ is a circuit of distinct hexagonal faces in a trihex such that each hexagon is adjacent to its neighbors on opposite edges [2].
**Definition 3.2**.: A _spine_ is a collection of distinct faces in a trihex \(F_{0},F_{1},\cdots,F_{s+1}\), with \(s\geq 0\), such that
1. \(F_{0}\) and \(F_{s+1}\) are triangles,
2. \(F_{1},F_{2},\cdots,F_{s}\) are hexagons, and
3. For each hexagon \(F_{i}\), \(1\leq i\leq s\), \(F_{i}\) is adjacent to \(F_{i-1}\) and to \(F_{i+1}\) along opposite edges of \(F_{i}\).
The _internal edges_ of the spine are the edges shared by \(F_{i}\) and \(F_{i+1}\) for \(0\leq i\leq s\) and the _external edges_ of the spine are all the other edges. The _length_ of the spine is the number \(s\) of hexagonal faces between the triangular faces. Note that a spine of length \(0\) is a pair of triangles that share an edge. We refer to the triangle \(F_{0}\) as the _head triangle_ of the spine and the triangle \(F_{s+1}\) as the _tail triangle_. Note that which triangle is considered the head triangle and which is considered the tail triangle depends only on the choice of numbering. The _head vertex_ of the spine is the "tip" vertex of the head triangle, that is, the vertex that is not on an internal edge. The _tail vertex_ of the spine is the "tip" vertex of the tail triangle.
Figure 2: Spine of length \(4\).
A trihex can be created from two spines of length \(s\) by attaching them along the \(4s+4\) external edges in each of their boundaries. This can be done in multiple ways. See Figure 3 for examples with \(s=5\).
**Definition 3.3**.: Suppose that two spines of length \(s\) are identified along their \(4s+4\) external edges. Starting with the head or tail vertex of one spine, travel counterclockwise around the boundary edges of this spine, until either a head or a tail vertex of the other spine is encountered, and count the number of edges traversed. We say that the two spines are attached with _offset_\(i\mod(s+1)\) if the number of edges traversed is \(2i+1\).
To see that offset is well-defined, first note that the head vertex (or tail vertex) of the second spine must be identified to a vertex of the first spine where two faces of the first spine already meet. Otherwise, the trihex would not be 3-regular. Therefore, the number of edges traversed between the head or tail vertex of the first spine and a head or tail vertex of the second spine must be an odd number, which has the form \(2i+1\) for some number \(i\). In addition, since each spine has head and tail vertices that are \(2s+2\) edges apart, the number of edges traversed, going counterclockwise, to get from the _head_ vertex of the first spine to any head or tail vertex of the second spine will be the same number \(\mod(2s+2)\). It will also be the same number \(\mod(2s+2)\) as the number of edges traversed to get from the _tail_ vertex of the first spine to any head or tail vertex of the second spine. Since \(2i+1\equiv 2j+1\mod(2s+2)\), if and only if \(i\equiv j\mod(s+1)\), the offset is well-defined \(\mod(s+1)\), no matter which head and tail vertices are used. It makes no difference which spine is considered the first spine and which is considered the second, since the same paths of edges are traversed whether traveling counterclockwise around one spine or the other, when going between a head or tail vertex of one spine and a head or tail vertex of the other.
For integers \(s\geq 0\) and \(b>0\), we can also build a trihex out of a pair of spines of length \(s\) together with \(b\) belts of \(2s+2\) hexagons, where the belts lie in between the two spines and encircle each spine. See Figure 4. As before, there are a variety of ways to attach the second spine onto the outermost belt, depending on where the head triangle of the second spine is inserted. Again, these different insertion points can be characterized by offsets.
Suppose first that we have only one belt. Suppose we delete the belt and slide the two spines towards each other to fill in the space. If we slide them straight towards each other, along the edges of the belt that previously separated them, and then shift each spine slightly, either clockwise or counterclockwise around the other spine, then we form a new trihex with no belts between the spines. See Figure 5. Using the convention that we always shift clockwise, we can define the offset of the original trihex to be the offset of the new trihex with no belts between the spines (after shifting clockwise).
To find the offset when there are additional belts between the spines, we repeat the process of deleting belts and shifting the remaining pieces, starting from the outermost belt and working in.
**Definition 3.4**.: When there are one or more belts between the spines, successively delete the belts, starting from the outermost belt, each time shifting one spine clockwise around the other spine. The offset of the
Figure 3: Attaching spines
Figure 4: Spines with belts between them.
Figure 5: Shifting clockwise vs. counterclockwise.
original trihex is defined to be the offset of the resulting trihex that has no belts between the spines.
Note that shifting one spine clockwise around the other spine gives the same configuration as shifting the other spine clockwise around the first spine. Therefore, offset is well-defined irrespective of which spine is shifted with respect to the other and which belt is considered outermost vs. innermost.
For example, the original trihex in Figure 6 has offset \(0\).
**Definition 3.5**.: Let \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\). If a trihex can be formed from two spines of length \(s\) with \(b\) belts between them and with offset \(f\), then the _signature_ of the trihex is the ordered triple \((s,b,f)\).
It is clear from the construction that any two trihexes built from two spines of length \(s\), with \(b\) belts between them and offset \(f\) are equivalent. In addition, Grünbaum and Motzkin [4] show that any trihex can be decomposed into spines and surrounding belts. Therefore, any trihex can be described with a signature \((s,b,f)\) for some \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\).
We summarize these facts in the following:
Figure 6: Shifting to find offset.
**Theorem 3.6**.:
1. Given \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), there exists a trihex with signature \((s,b,f)\).
2. Any two trihexes with the same signature are equivalent.
3. Any trihex can be described with a signature \((s,b,f)\) for some \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\).
The signature for a trihex is not unique: decomposing a trihex in different ways can produce three different signatures, as detailed in Section 5.
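Counting vertices makes the construction easy to tabulate: the two spines contribute \(2s\) hexagons and the \(b\) belts contribute \(b(2s+2)\), so \(f_{6}=2s+b(2s+2)\), and the Euler count above gives \(V=2f_{6}+4=4(s+1)(b+1)\) vertices. The following sketch (ours) enumerates the raw signatures for a given vertex count; by the preceding remark, distinct signatures in this list may still describe equivalent trihexes, so it over-counts:

```python
# Enumerate raw signatures (s, b, f) with v = 4(s + 1)(b + 1) vertices.
def raw_signatures(v):
    assert v % 4 == 0
    n = v // 4
    sigs = []
    for s1 in range(1, n + 1):       # s1 = s + 1 ranges over divisors of n
        if n % s1 == 0:
            s, b = s1 - 1, n // s1 - 1
            sigs.extend((s, b, f) for f in range(s + 1))
    return sigs

print(raw_signatures(16))
# [(0, 3, 0), (1, 1, 0), (1, 1, 1), (3, 0, 0), (3, 0, 1), (3, 0, 2), (3, 0, 3)]
```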
## 4 Trihexes and hexagonal grid coverings
In this section, we create a hexagonal tiling of the plane that covers a given trihex. Doing so allows us to develop rules for finding alternative signatures for a trihex in Section 5.
Consider a hexagonal tiling of the plane, made up of regular hexagons arranged in vertical strips such that two sides of each hexagon are horizontal. Superimpose a grid of parallelograms on the hexagonal tiling, such that each vertex of each parallelogram lies in the center of a hexagon, and each parallelogram has two vertical sides. See Figure 7.
Consider the group of transformations of the plane generated by \(180^{\circ}\) rotations around each vertex of the parallelogram grid. Form a quotient space (an orbifold) by identifying all points in the same orbit of this transformation group. A fundamental domain for this transformation group can be given by a pair of adjacent parallelograms, like the two parallelograms shaded blue in Figure 7, as explained below.
The orbits of points in this pair of parallelograms cover the entire plane, since a rotation around point C and then around the upper point marked A translates the pair of parallelograms up, and a rotation around C and then around the lower point marked A translates the parallelograms down. Repeating these pairs of rotations translates the parallelograms over an entire vertical strip. A rotation around C moves and inverts this vertical strip to cover the strip to the left, while a rotation around D covers the strip to the right. Repeated alternating rotations around points C and D covers all additional vertical strips to the left and the right.
No smaller subset of this double parallelogram region has orbits that cover the entire plane, by the following reasoning. A product of two \(180^{\circ}\) rotations is a translation through a vector twice the length of the vector connecting the rotocenters. A product of translations through two vectors is a translation through the sum of the vectors. Therefore, a product of an even number of \(180^{\circ}\) rotations around parallelogram grid vertices is a translation through a vector that is some sum \(2m\vec{AB}+2n\vec{AC}\) for some integers \(m\) and \(n\). A product of a \(180^{\circ}\) rotation and a translation is a \(180^{\circ}\) rotation whose rotocenter is the original rotocenter shifted by half the translation vector. Therefore, a product of an odd number of \(180^{\circ}\) rotations around parallelogram vertices is a \(180^{\circ}\) rotation around a rotocenter that is a parallelogram grid vertex shifted by \(\frac{1}{2}(2m\vec{AB}+2n\vec{AC})\) for some integers \(m\) and \(n\), which is just another parallelogram grid vertex. The points interior to the double parallelogram region cannot be transformed onto each other by \(180^{\circ}\) rotations around parallelogram grid vertices or translations by linear combinations of \(2\vec{AB}\) and \(2\vec{AC}\). Therefore, the double parallelogram region is a minimal size region whose orbits cover the plane, i.e. a fundamental domain.
Although no points in the interior of the fundamental domain are identified under the transformation group, many pairs of points on the edges of the fundamental domain are identified with each other. This is indicated by the arrows in Figure 7: a rotation around point \(D\) identifies the edges above and below \(D\), a rotation around \(C\) identifies the edges above and below \(C\), and rotation around \(A\) followed by rotation around \(C\) identifies the top and bottom edges between the points marked \(A\) and \(B\). After identifying edges, the resulting quotient is a topological sphere. See Figure 8.
Note that the hexagonal tiling is preserved by all of the \(180^{\circ}\) rotations. Therefore, it can be projected via the quotient map down to the quotient sphere. Each hexagon that lies entirely inside the fundamental domain will project onto a hexagon on the sphere. Also, the partial hexagons that are cut off by edges of the fundamental domain, but do not contain the vertices marked \(A\), \(B\), \(C\), and \(D\), attach up in pairs and
Figure 8: Identifying edges creates a topological sphere.
Figure 7: Hexagonal grid with fundamental domain.
therefore also project onto hexagons in the quotient sphere. The partial hexagons that contain the vertices marked \(A\), \(B\), \(C\), and \(D\) get identified in such a way that only half a hexagon appears in the quotient sphere. This half hexagon has its diameter identified by a \(180^{\circ}\) rotation, so that it forms a triangle in the quotient sphere. See Figure 9. Therefore, the hexagonal tiling naturally forms a pattern of hexagons and triangles on the quotient sphere, with three faces around each vertex. So all the requirements for a trihex are satisfied.
A signature for a trihex described as the quotient of a hexagonal tiling can be read off directly from the tiling, as illustrated in Figures 10 and 11. Each vertical string of hexagons between vertices labeled \(A\) and \(C\), together with the hexagons centered at \(A\) and \(C\), projects to one spine, and each vertical string of hexagons between vertices labeled \(B\) and \(D\), together with the hexagons centered at \(B\) and \(D\), projects to a second spine. The hexagons that lie in vertical strips between the vertical sides of a fundamental domain project to belts between the two spines. Therefore the number \(s\) in the signature \((s,b,f)\) is the number of hexagons that lie in a vertical strip strictly between the hexagons at vertex \(A\) and vertex \(B\). The number \(b\) is the number of vertical columns of hexagons in the tiling that lie entirely between the two vertical edges of the fundamental domain.
To find the offset for the trihex, choose a hexagon centered at a vertex \(A\) and translate it along the diagonal strip of hexagons in the approximately southwest (SW) to northeast (NE) direction, until it coincides with a hexagon in the vertical column of hexagons containing vertices \(B\) and \(D\). If we hit a hexagon that is \(k\) hexagons below a vertex \(B\) or \(D\), then our trihex will have offset \(k\). This is because a hexagon at vertex \(A\) projects to a head triangle in the trihex, and translating this hexagon one column to the right in the SW to NE direction corresponds to deleting one belt and shifting the head triangle clockwise in the trihex. The ultimate position of the hexagon at vertex \(A\) after translating all the way to the right edge of the fundamental domain corresponds to the ultimate position that the head vertex of the head triangle is inserted along the second spine in the trihex, which is the offset. For example, the fundamental domain shown in Figure 11 covers the trihex with signature \((4,3,3)\).
It is now possible to conclude the following:
**Theorem 4.1**.: Every trihex can be produced as the quotient of a hexagonal tiling of the plane under a group of transformations generated by \(180^{\circ}\) rotations around the vertices of a superimposed parallelogram grid.
Proof.: Recall from Theorem 3.6 that any trihex can be described with a signature \((s,b,f)\) with \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\). For any such triple of integers \((s,b,f)\), build a hexagonal tiling with a superimposed parallelogram grid as follows. Start with a tiling of the plane by regular hexagons in which two sides of each hexagon are horizontal. Put two vertices of a parallelogram on the centers of two hexagons in the same vertical column that are separated by \(s\) hexagons strictly between them. Put the other two vertices of the parallelogram on the centers of hexagons in another vertical column, \(b+1\) columns to the right of the first column. This second pair of vertices should also be separated by \(s\) hexagons strictly between them. Shift the second pair of vertices up or down as needed, so that when the hexagons containing the first pair of vertices are translated along a SW to NE diagonal, through \(b+1\) columns of hexagons, they end up \(f\) hexagons below the hexagons occupied by the second pair of vertices. We now have one parallelogram whose vertices lie on
Figure 9: The quotient of a hexagon under a \(180^{\circ}\) rotation around its center point.
Figure 11: The fundamental domain that covers the trihex with signature \((4,3,3)\).
Figure 10: Calculating offset.
the centers of hexagons. Tile the plane with translated copies of this parallelogram to create a parallelogram grid. The quotient of the hexagonal tiling by the group generated by \(180^{\circ}\) rotations around parallelogram vertices is a trihex with signature \((s,b,f)\).
The process of creating a hexagonal tiling that covers a given trihex can be thought of as "unwrapping" the trihex around each triangle. Figure 12 shows the unwrapping of the trihex \((4,1,2)\). Numbered hexagons in the trihex correspond to numbered hexagons in the hexagonal tiling.
## 5 Equivalent signatures
The signature \((s,b,f)\) for a trihex is not necessarily unique. This section develops rules for finding alternative signatures for a trihex based on a given signature and shows that there is a one-to-one correspondence between equivalence classes of signatures, as defined in this section, and equivalence classes of trihexes, as defined in Section 2.
Start with a trihex with signature \((s_{1},b_{1},f_{1})\). As described in Theorem 4.1, this trihex is the quotient of a hexagonal tiling of the plane under a group of transformations generated by \(180^{\circ}\) rotations around vertices of a superimposed parallelogram grid. We will refer to the hexagons centered at vertices of the parallelogram grid as "special hexagons". These hexagons cover triangles in the trihex. The quotients of vertical columns of hexagons that contain special hexagons are spines of the trihex. We also use the phrase "vertical spine" to refer to the strips of vertical hexagons between pairs of special hexagons. For example, a hexagonal tiling for trihex \((5,2,2)\) is shown in Figure 13. Vertical spines are shaded blue. Special hexagons are shaded pink and yellow. The superimposed parallelogram grid is not drawn.
There are two additional ways to describe the trihex. Instead of using vertical strings of hexagons to make spines, we could build spines by setting off from a special hexagon at an angle \(60^{\circ}\) clockwise from due north or at an angle \(120^{\circ}\) clockwise from due north. We will refer to these directions as the southwest (SW) to northeast (NE) direction and the northwest (NW) to southeast (SE) direction. See Figures 14 and 15.
Figure 12: The trihex with signature \((4,1,2)\) and the hexagonal tiling that covers it.
We will use the notation \((s_{2},b_{2},f_{2})\) to refer to the signature when we build spines in the SW to NE direction and \((s_{3},b_{3},f_{3})\) to refer to the signature when we go in the NW to SE direction.
Suppose we go from SW to NE. See Figure 14, where four special hexagons are labelled \(T,B,L,R\) (top, bottom, left, and right). Since the original offset was \(f_{1}\), this means that if we start at a special hexagon, say \(L\), and translate it through \(b_{1}+1\) vertical columns of hexagons, always along a SW to NE diagonal of hexagons, thereby arriving in another vertical column with special hexagons, we land \(f_{1}\) hexagons below a special hexagon. For each additional \(b_{1}+1\) vertical columns of hexagons we go through in the SW to NE direction, we land an additional \(f_{1}\) hexagons below a special hexagon. If at any moment we land a multiple of \(s_{1}+1\) hexagons below a special hexagon, then we are directly on a special hexagon, since special hexagons appear every \(s_{1}+1\) hexagons in the vertical column. Let \(j_{2}\) be the smallest integer \(\geq 1\) such that \(j_{2}\cdot f_{1}\) is a multiple of \(s_{1}+1\) (i.e. \(j_{2}\) is the order of \(f_{1}\) in \(Z_{s_{1}+1}\)). Then the first time that we land directly on a special hexagon is when we have traveled through \(j_{2}\cdot(b_{1}+1)\) vertical columns. The string of hexagons in the SW to NE diagonal that connects the original special hexagon to this final special hexagon projects to a spine in the quotient trihex. This spine will contain \(j_{2}(b_{1}+1)-1\) hexagons, since the final hexagon projects to a triangle. So this spine has length \(s_{2}=j_{2}(b_{1}+1)-1\). For example, if we start at a special hexagon of a \((5,2,2)\) hexagonal grid and head northeast, we will create spines of length \(8\), because \(b_{1}+1=3\), \(s_{1}+1=6\), \(f_{1}=2\), the order of \(2\) in \(Z_{6}\) is \(j_{2}=3\), and \(j_{2}\cdot(b_{1}+1)-1=3\cdot 3-1=8\). See Figure 14.
Suppose instead that we translate a special hexagon in the NW to SE direction. See Figure 15. If we travel through \(b_{1}+1\) vertical columns of hexagons, thereby arriving in another vertical column with special hexagons, we land \(f_{1}+b_{1}+1\) hexagons below a special hexagon, since each translation through a single column in the NW to SE direction puts us one hexagon below where we would move to when translating in the SW to NE direction. For each additional \(b_{1}+1\) vertical columns of hexagons we go through in the NW to SE direction, we land an additional \(f_{1}+b_{1}+1\) hexagons below a special hexagon. So the first time we hit a special hexagon is when we have traveled through \(j_{3}(b_{1}+1)\) hexagons, where \(j_{3}\) is the smallest integer \(\geq 1\) such that \(j_{3}(f_{1}+b_{1}+1)\) is a multiple of \(s_{1}+1\), i.e. \(j_{3}\) is the order of \(f_{1}+b_{1}+1\) in \(Z_{s_{1}+1}\). At this point we will have created a spine of length \(j_{3}(b_{1}+1)-1\). So \(s_{3}=j_{3}(b_{1}+1)-1\). For example, if we start with \((s_{1},b_{1},f_{1})=(5,2,2)\), then \(f_{1}+b_{1}+1=5\) and \(s_{1}+1=6\), and \(5\) has order \(j_{3}=6\) in \(Z_{6}\). Since \(j_{3}(b_{1}+1)-1=6\cdot 3-1=17\), we will have a spine of length \(17\). See Figure 15.
To find the number of belts in the SW to NE decomposition, note that the total number \(h\) of hexagons, based on the original signature of \((s_{1},b_{1},f_{1})\), is given by \(h=2s_{1}+b_{1}(2s_{1}+2)=2s_{1}b_{1}+2s_{1}+2b_{1}\)
Figure 13: A hexagonal tiling for the trihex \((5,2,2)\).
Figure 14: An alternative spine of (5,2,2) with length 8.
Figure 15: An alternative spine for (5,2,2) with length 17.
since each of the two spines contains \(s_{1}\) hexagons and each of the \(b_{1}\) surrounding belts contains \(2s_{1}+2\) hexagons. If \((s_{2},b_{2},f_{2})\) is the new signature based on spines in the SW to NE direction, then \(h\) must also equal \(2s_{2}b_{2}+2s_{2}+2b_{2}\). So \(b_{2}=\dfrac{h-2s_{2}}{2s_{2}+2}\). Similarly, the number of belts for the NW to SE decomposition with signature \((s_{3},b_{3},f_{3})\) is given by \(b_{3}=\dfrac{h-2s_{3}}{2s_{3}+2}\). For example, for the trihex \((5,2,2)\), we have \(h=2\cdot 5\cdot 2+2\cdot 5+2\cdot 2=34\). We saw that \(s_{2}=8\) and \(s_{3}=17\). So the number of belts using the SW to NE spines is \(b_{2}=\dfrac{34-2\cdot 8}{2\cdot 8+2}=\dfrac{18}{18}=1\). The number of belts using the NW to SE spines is \(b_{3}=\dfrac{34-2\cdot 17}{2\cdot 17+2}=\dfrac{0}{36}=0\). Note that for the SW to NE decomposition, the belts in the trihex are covered by diagonal strips of hexagons in the SW to NE direction that lie between the diagonal strips containing special hexagons. Similarly, for the NW to SE decomposition, the belts in the trihex are covered by diagonal strips of hexagons in the NW to SE direction. We will call these diagonal strips of hexagons "belt strips".
To find the offset for the SW to NE signature, we first need to find a special hexagon that is adjacent to the belt strips around a SW to NE spine. Label the special hexagons on the left and right ends of a fixed diagonal spine \(L\) and \(R\), respectively. Label the special hexagons that are adjacent to the belt strips \(T\) and \(B\), where \(T\) is adjacent on top and \(B\) is adjacent on bottom. See Figure 14. Hexagons \(T\) and \(B\) project to the head and tail triangles of a second SW to NE spine whose position relative to the first SW to NE spine will give the offset. Hexagon \(T\) will be a special hexagon that is \(b_{2}+1\) hexagons above the original diagonal spine, or equivalently, the diagonal spine is \(b_{2}+1\) hexagons below \(T\). Recall that each time we travel \(b_{1}+1\) columns along the diagonal spine in the SW to NE direction, we land an additional \(f_{1}\) hexagons below a special hexagon. Therefore, we need to find a number \(p_{2}\) such that \(p_{2}\cdot f_{1}\equiv(b_{2}+1)\mod(s_{1}+1)\), and travel \(p_{2}(b_{1}+1)\) columns to the right, in order to land in the same column as \(T\), but \(b_{2}+1\) hexagons below it. For simplicity, pick \(p_{2}\) to be the smallest number \(\geq 1\) such that \(p_{2}\cdot f_{1}\equiv(b_{2}+1)\mod(s_{1}+1)\).
To find the corresponding offset, notice that since \(T\) is \(p_{2}(b_{1}+1)\) columns to the right of \(L\)'s column, it will be \((s_{2}+1)-p_{2}(b_{1}+1)\) columns to the left of \(R\)'s column. If \(b_{2}=0\), the number of columns to the left of \(R\) is one more than the offset, so \(f_{2}=(s_{2}+1)-p_{2}(b_{1}+1)-1=s_{2}-p_{2}(b_{1}+1)\). If \(b_{2}>0\), then calculating offset involves deleting \(b_{2}\) diagonal belts in the quotient trihex and moving clockwise, which is equivalent to moving \(b_{2}\) columns to the right in the **NW to SE** direction in the hexagonal tiling of the plane. Therefore, the offset will be \(b_{2}\) smaller, that is, \(f_{2}=s_{2}-p_{2}(b_{1}+1)-b_{2}\mod(s_{2}+1)\). Note that offset is defined mod \((s_{2}+1)\) since \(s_{2}\) is the length of the diagonal spine. See Figure 14.
For the \((5,2,2)\) trihex, \(s_{1}=5\), \(b_{1}=2\), \(f_{1}=2\), \(s_{2}=8\), and \(b_{2}=1\). The number \(p_{2}\) is defined as the smallest number \(\geq 1\) such that \(p_{2}\cdot f_{1}\equiv(b_{2}+1)\mod(s_{1}+1)\), i.e. such that \(p_{2}\cdot 2\equiv 2\mod 6\). Therefore, \(p_{2}=1\), and \(f_{2}=8-1\cdot 3-1\mod 9=4\). The \((5,2,2)\) trihex has an alternative signature of \((8,1,4)\).
To find the offset for the NW to SE signature, label the special hexagons on the left and right ends of the diagonal spine in the NW to SE direction with \(L\) and \(R\), respectively, and the special hexagons that are adjacent to the surrounding belt strips \(T\) and \(B\), where \(T\) is adjacent on top and \(B\) is adjacent on bottom. See Figure 15. Hexagon \(T\) will be \(b_{3}+1\) hexagons above the diagonal spine, or equivalently, the diagonal spine is \(b_{3}+1\) hexagons below \(T\). Recall that each time we travel \(b_{1}+1\) columns along the diagonal spine in the NW to SE direction, we land an additional \(f_{1}+b_{1}+1\) hexagons below a special hexagon. Therefore, we need to find a number \(p_{3}\) such that \(p_{3}\cdot(f_{1}+b_{1}+1)\equiv(b_{3}+1)\mod(s_{1}+1)\), and travel \(p_{3}(b_{1}+1)\) columns to the right, in order to land in the same column as \(T\), but \(b_{3}+1\) hexagons below it. For simplicity, pick \(p_{3}\) to be the smallest number \(\geq 1\) such that \(p_{3}\cdot(f_{1}+b_{1}+1)\equiv(b_{3}+1)\mod(s_{1}+1)\).
To find the corresponding offset, notice that since \(T\) is \(p_{3}(b_{1}+1)\) columns to the right of \(L\)'s column, it will be \((s_{3}+1)-p_{3}(b_{1}+1)\) columns to the left of \(R\)'s column. If \(b_{3}=0\), the number of columns to the left of \(R\) is equal to the offset, instead of one more than the offset, like it was for the SW to NE spine. So \(f_{3}=(s_{3}+1)-p_{3}(b_{1}+1)\). If \(b_{3}>0\), then calculating offset in the quotient trihex involves deleting \(b_{3}\) diagonal belts and moving clockwise, which is equivalent to simply moving the hexagon \(T\) straight down in its column in the hexagonal grid covering. Therefore, the offset will be \(f_{3}=(s_{3}+1)-p_{3}(b_{1}+1)\mod(s_{3}+1)\). Again, the offset is defined mod \(s_{3}+1\) since \(s_{3}\) is the length of the diagonal spine. See Figure 15.
For the \((5,2,2)\) trihex, \(s_{1}=5\), \(b_{1}=2\), \(f_{1}=2\), \(s_{3}=17\), and \(b_{3}=0\), so \(f_{1}+b_{1}+1=5\). The number \(p_{3}\) is defined as the smallest number \(\geq 1\) such that \(p_{3}\cdot(f_{1}+b_{1}+1)\equiv b_{3}+1\mod(s_{1}+1)\), i.e. such that
\(p_{3}\cdot 5\equiv 1\mod 6\). Therefore, \(p_{3}=5\), and \(f_{3}=18-5\cdot 3\mod 18=3\). The \((5,2,2)\) trihex has an alternative signature of \((17,0,3)\).
**Definition 5.1**.: Given a trihex with signature \((s_{1},b_{1},f_{1})\), the _equivalent signatures_ for this trihex are the original signature \((s_{1},b_{1},f_{1})\) along with signatures \((s_{2},b_{2},f_{2})\) and \((s_{3},b_{3},f_{3})\) found using the following algorithms.
Using the SW to NE spine:
1. Find the smallest number \(j_{2}\geq 1\) such that \(j_{2}\cdot f_{1}\equiv 0\mod(s_{1}+1)\).
2. \(s_{2}=j_{2}(b_{1}+1)-1\).
3. Compute the total number of hexagons in the original trihex: \(h=2s_{1}\cdot b_{1}+2s_{1}+2b_{1}\).
4. \(b_{2}=\dfrac{h-2s_{2}}{2s_{2}+2}\).
5. Find the smallest number \(p_{2}\geq 1\) such that \(p_{2}\cdot f_{1}\equiv(b_{2}+1)\mod(s_{1}+1)\).
6. \(f_{2}=s_{2}-p_{2}(b_{1}+1)-b_{2}\mod(s_{2}+1)\).
Using the NW to SE spine (only steps 1, 5, and 6 are different):
1. Find the smallest number \(j_{3}\geq 1\) such that \(j_{3}(f_{1}+b_{1}+1)\equiv 0\mod(s_{1}+1)\).
2. \(s_{3}=j_{3}(b_{1}+1)-1\).
3. Compute the total number of hexagons in the original trihex: \(h=2s_{1}\cdot b_{1}+2s_{1}+2b_{1}\).
4. \(b_{3}=\dfrac{h-2s_{3}}{2s_{3}+2}\).
5. Find the smallest number \(p_{3}\geq 1\) such that \(p_{3}\cdot(f_{1}+b_{1}+1)\equiv(b_{3}+1)\mod(s_{1}+1)\).
6. \(f_{3}=s_{3}+1-p_{3}(b_{1}+1)\mod(s_{3}+1)\).
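For concreteness, the two algorithms of Definition 5.1 can be written out as a short Python sketch (the function names and the direct-search strategy are ours, not part of the definition):

```python
def equivalent_signatures(s1, b1, f1):
    """Return the three equivalent signatures of Definition 5.1."""
    m = s1 + 1
    h = 2 * s1 * b1 + 2 * s1 + 2 * b1   # total number of hexagons (step 3)

    def spine(step):
        # step is f1 for the SW to NE spine and f1 + b1 + 1 for NW to SE
        j = next(j for j in range(1, m + 1) if (j * step) % m == 0)  # step 1
        s = j * (b1 + 1) - 1                                         # step 2
        b = (h - 2 * s) // (2 * s + 2)                               # step 4
        p = next(p for p in range(1, m + 1)
                 if (p * step) % m == (b + 1) % m)                   # step 5
        return s, b, p

    s2, b2, p2 = spine(f1)
    f2 = (s2 - p2 * (b1 + 1) - b2) % (s2 + 1)                        # step 6
    s3, b3, p3 = spine(f1 + b1 + 1)
    f3 = (s3 + 1 - p3 * (b1 + 1)) % (s3 + 1)                         # step 6
    return (s1, b1, f1), (s2, b2, f2), (s3, b3, f3)

# The worked example from the text:
print(equivalent_signatures(5, 2, 2))  # ((5, 2, 2), (8, 1, 4), (17, 0, 3))
```

Searching up to \(m=s_{1}+1\) suffices in steps 1 and 5 because the order of any element of \(Z_{m}\) divides \(m\), and the smallest solution of a solvable linear congruence mod \(m\) is at most \(m\).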
The signatures \((s_{1},b_{1},f_{1})\), \((s_{2},b_{2},f_{2})\), and \((s_{3},b_{3},f_{3})\) give the three alternative descriptions of the same arrangement of special hexagons on a hexagonal tiling of the plane, found by rotating the "vertical" direction by 0 or 180 degrees, 60 or 240 degrees, and 120 or 300 degrees clockwise, respectively. Therefore, this definition of equivalent signatures does in fact describe an equivalence relationship. Although the three signatures are usually distinct, it is possible for all three to be the same. See Table 1. It is not possible for two of the three signatures to be the same and the third signature different: if two signatures are the same, say \((s_{1},b_{1},f_{1})=(s_{2},b_{2},f_{2})\), then rotating the hexagonally tiled plane by 60 degrees will produce the same configuration of special hexagons. Therefore, rotating a second time by 60 degrees will again produce the same configuration of special hexagons, so \((s_{3},b_{3},f_{3})\) will also be the same.
Recall that two trihexes are considered equivalent if they are not only isomorphic as graphs, but if there is also an orientation-preserving homeomorphism of the plane that takes one graph to the other. Chiral trihexes that are mirror images of each other are not considered equivalent. Figure 16 illustrates the following relationship:
**Proposition 5.2**.: A trihex with signature \((s,b,f)\) has a mirror image trihex with signature \((s,b,s-f-b\mod(s+1))\).
Proof.: The mirror image of trihex \((s,b,f)\) will still have the same spine lengths as the original \((s)\) and the same number of belts in between them \((b)\). See Figure 16.
Suppose that \(b=0\). If the original trihex has offset \(f\), then its mirror image will have offset \(s-f\). Suppose \(b>0\). Since the offset of the original trihex is \(f\), if we delete the \(b\) belts one at a time and shift
the head vertex clockwise each time, then the head vertex lands at offset \(f\). Therefore, in the mirror image trihex, if we delete belts one at a time and shift the head vertex _counterclockwise_ each time, then the head vertex will land at the mirror image position offset \(s-f\). Since moving counterclockwise instead of clockwise increases offset by \(1\) for each belt that is removed, the actual offset for the mirror image found by shifting clockwise will be \(b\) less than \(s-f\), i.e. \(s-f-b\mod(s+1)\).
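In code, the mirror rule is a one-liner under the same conventions as the sketch above:

```python
def mirror_signature(s, b, f):
    """Signature of the mirror-image trihex (Proposition 5.2)."""
    return (s, b, (s - f - b) % (s + 1))

# The example of Figure 16:
print(mirror_signature(3, 1, 2))  # (3, 1, 0)
```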
We can now state our classification of trihexes: trihexes are precisely indexed by the equivalence classes of triples \((s,b,f)\) under the relations for triples stated in Definition 5.1.
**Theorem 5.3**.:
1. Suppose \(T_{0}\) and \(T_{1}\) are trihexes with signatures \((s_{0},b_{0},f_{0})\) and \((s_{1},b_{1},f_{1})\). Then \(T_{0}\) and \(T_{1}\) are equivalent trihexes if and only if \((s_{0},b_{0},f_{0})\) and \((s_{1},b_{1},f_{1})\) are equivalent signatures. Therefore, there is a bijection between equivalence classes of trihexes and equivalence classes of signatures.
2. Suppose \(T_{0}\) and \(T_{1}\) are trihexes with signatures \((s_{0},b_{0},f_{0})\) and \((s_{1},b_{1},f_{1})\). Then \(T_{0}\) and \(T_{1}\) are isomorphic as graphs but not equivalent (i.e. they are mirror images of each other), if and only if \((s_{1},b_{1},f_{1})\) is equivalent to \((s_{0},b_{0},s_{0}-f_{0}-b_{0}\mod(s_{0}+1))\). Therefore there is a bijection between graph isomorphism classes of trihexes and sets of signatures that are either equivalent or mirror equivalent.
Proof.: Suppose the two trihexes \(T_{0}\) and \(T_{1}\) are isomorphic as graphs. By a theorem of Whitney ([7], or see [6]), there is a homeomorphism of the sphere whose restriction to \(T_{0}\) gives a graph isomorphism to \(T_{1}\). Lift this homeomorphism to a map from the hexagonal tiling that covers \(T_{0}\) to the hexagonal tiling that covers \(T_{1}\). This lifted map is a homeomorphism of the hexagonally tiled plane that takes hexagons to hexagons and special hexagons to special hexagons. There is a unique isometry of the plane that agrees with the homeomorphism on all the vertices of the hexagonal tiling. The isometry is orientation preserving if and only if the original homeomorphism is.
If the isometry is orientation preserving, then it must be either a rotation by a multiple of \(60^{\circ}\), or a translation, since these are the only isometries of the plane that preserve the hexagonal grid. A rotation by \(180^{\circ}\) or \(0^{\circ}\) or a translation takes vertical spines to vertical spines. A rotation by \(60^{\circ}\) or \(240^{\circ}\) counterclockwise takes vertical spines to NW to SE spines, and a rotation by \(120^{\circ}\) or \(300^{\circ}\) counterclockwise takes vertical spines to SW to NE spines. Therefore, \((s_{0},b_{0},f_{0})\) must be equivalent to \((s_{1},b_{1},f_{1})\).
If the isometry is orientation reversing, then reflect the first hexagonal tiling through a vertical line centered at special hexagons. Consider the isometry from this reflected hexagonal tiling to the second hexagonal tiling formed as the composition of the reflection followed by the original isometry. This composition gives an orientation preserving isometry from the mirror image of the first hexagonal tiling to the second hexagonal tiling. Therefore, the reflected hexagonal tiling, whose signature is \((s_{0},b_{0},s_{0}-f_{0}-b_{0}\mod(s_{0}+1))\), has signature equivalent to \((s_{1},b_{1},f_{1})\).
Figure 16: The trihex \((3,1,2)\) and its mirror image \((3,1,0)\)
Conversely, suppose the signature \((s_{1},b_{1},f_{1})\) is equivalent to the signature \((s_{0},b_{0},f_{0})\). By Theorem 4.1, both trihexes \(T_{0}\) and \(T_{1}\) arise as quotients of hexagonal tilings under groups of isometries generated by \(180^{\circ}\) rotations. Since \((s_{0},b_{0},f_{0})\) and \((s_{1},b_{1},f_{1})\) are equivalent signatures, the rotocenters of these rotations are the same, and so these isometry groups are the same. Therefore, the trihexes must be equivalent. If \((s_{1},b_{1},f_{1})\) is equivalent to \((s_{0},b_{0},s_{0}-f_{0}-b_{0}\mod(s_{0}+1))\), then the grids of rotocenters for these rotations are mirror images of each other. Therefore, there is a reflection that takes one hexagonal tiling to the other, takes special hexagons to special hexagons, and is preserved by the action of the rotation groups. This reflection projects to an orientation-reversing homeomorphism between quotient spheres that is a graph isomorphism between \(T_{0}\) and \(T_{1}\).
The existence of bijections now follows from Theorem 3.6, which says that every signature is realized by a trihex and every trihex has a signature.
Table 1 gives the signatures for all trihexes with 20 hexagons or fewer (44 vertices or fewer). Each row gives the three equivalent signatures for a trihex. The three signatures are ordered so that the signature with the smallest value of \(b\) is on the left, with preference given to the signature with smaller value of \(f\) in case of a tie. In some cases, the alternative signatures are repetitions. For example, the alternative signatures for \((6,0,2)\) are \((6,0,2)\) and \((6,0,2)\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \((s_{1},b_{1},f_{1})\) & \((s_{2},b_{2},f_{2})\) & \((s_{3},b_{3},f_{3})\) & hexagons & vertices \\ \hline \((0,\,0,\,0)\) & \((0,\,0,\,0)\) & \((0,\,0,\,0)\) & \(0\) & \(4\) \\ \((1,\,0,\,0)\) & \((1,\,0,\,1)\) & \((0,\,1,\,0)\) & \(2\) & \(8\) \\ \((2,\,0,\,0)\) & \((2,\,0,\,2)\) & \((0,\,2,\,0)\) & \(4\) & \(12\) \\ \((2,\,0,\,1)\) & \((2,\,0,\,1)\) & \((2,\,0,\,1)\) & \(4\) & \(12\) \\ \((3,\,0,\,0)\) & \((3,\,0,\,3)\) & \((0,\,3,\,0)\) & \(6\) & \(16\) \\ \((3,\,0,\,1)\) & \((3,\,0,\,2)\) & \((1,\,1,\,1)\) & \(6\) & \(16\) \\ \((1,\,1,\,0)\) & \((1,\,1,\,0)\) & \((1,\,1,\,0)\) & \(6\) & \(16\) \\ \((4,\,0,\,0)\) & \((4,\,0,\,4)\) & \((0,\,4,\,0)\) & \(8\) & \(20\) \\ \((4,\,0,\,1)\) & \((4,\,0,\,2)\) & \((4,\,0,\,3)\) & \(8\) & \(20\) \\ \((5,\,0,\,0)\) & \((5,\,0,\,5)\) & \((0,\,5,\,0)\) & \(10\) & \(24\) \\ \((5,\,0,\,1)\) & \((5,\,0,\,4)\) & \((2,\,1,\,2)\) & \(10\) & \(24\) \\ \((5,\,0,\,2)\) & \((2,\,1,\,0)\) & \((1,\,2,\,1)\) & \(10\) & \(24\) \\ \((5,\,0,\,3)\) & \((2,\,1,\,1)\) & \((1,\,2,\,0)\) & \(10\) & \(24\) \\ \((6,\,0,\,0)\) & \((6,\,0,\,6)\) & \((0,\,6,\,0)\) & \(12\) & \(28\) \\ \((6,\,0,\,1)\) & \((6,\,0,\,3)\) & \((6,\,0,\,5)\) & \(12\) & \(28\) \\ \((6,\,0,\,2)\) & \((6,\,0,\,2)\) & \((6,\,0,\,2)\) & \(12\) & \(28\) \\ \((6,\,0,\,4)\) & \((6,\,0,\,4)\) & \((6,\,0,\,4)\) & \(12\) & \(28\) \\ \((7,\,0,\,0)\) & \((7,\,0,\,7)\) & \((0,\,7,\,0)\) & \(14\) & \(32\) \\ \((7,\,0,\,1)\) & \((7,\,0,\,6)\) & \((3,\,1,\,3)\) & \(14\) & \(32\) \\ \((7,\,0,\,2)\) & \((7,\,0,\,5)\) & \((3,\,1,\,1)\) & \(14\) & \(32\) \\ \((7,\,0,\,3)\) & \((7,\,0,\,4)\) & \((1,\,3,\,1)\) & \(14\) & \(32\) \\ \((3,\,1,\,0)\) & \((3,\,1,\,2)\) & \((1,\,3,\,0)\) & \(14\) & \(32\) \\ \((8,\,0,\,0)\) & \((8,\,0,\,8)\) & \((0,\,8,\,0)\) & \(16\) & \(36\) \\ \((8,\,0,\,1)\) & \((8,\,0,\,4)\) & \((8,\,0,\,7)\) & \(16\) & \(36\) \\ \((8,\,0,\,2)\) & \((8,\,0,\,3)\) & \((2,\,2,\,2)\) & \(16\) & \(36\) \\ \((8,\,0,\,5)\) & \((8,\,0,\,6)\) & \((2,\,2,\,1)\) & \(16\) & \(36\) \\ \((2,\,2,\,0)\) & \((2,\,2,\,0)\) & \((2,\,2,\,0)\) & \(16\) & \(36\) \\ \((9,\,0,\,0)\) & \((9,\,0,\,9)\) & \((0,\,9,\,0)\) & \(18\) & \(40\) \\ \((9,\,0,\,1)\) & \((9,\,0,\,8)\) & \((4,\,1,\,4)\) & \(18\) & \(40\) \\ \((9,\,0,\,2)\) & \((9,\,0,\,3)\) & \((4,\,1,\,2)\) & \(18\) & \(40\) \\ \((9,\,0,\,4)\) & \((4,\,1,\,0)\) & \((1,\,4,\,1)\) & \(18\) & \(40\) \\ \((9,\,0,\,5)\) & \((4,\,1,\,3)\) & \((1,\,4,\,0)\) & \(18\) & \(40\) \\ \((9,\,0,\,6)\) & \((9,\,0,\,7)\) & \((4,\,1,\,1)\) & \(18\) & \(40\) \\ \((10,\,\,0,0)\) & \((10,\,0,\,10)\) & \((0,\,10,\,0)\) & \(20\) & \(44\) \\ \((10,\,0,\,1)\) & \((10,\,0,\,5)\) & \((10,\,0,\,9)\) & \(20\) & \(44\) \\ \((10,\,0,\,2)\) & \((10,\,0,\,4)\) & \((10,\,0,\,7)\) & \(20\) & \(44\) \\ \((10,\,0,\,3)\) & \((10,\,0,\,6)\) & \((10,\,0,\,8)\) & \(20\) & \(44\) \\ \hline \end{tabular}
\end{table}
Table 1: The three equivalent signatures for trihexes with 20 or fewer hexagons / 44 or fewer vertices.
## 6 Convex vs. non-convex trihexes
A trihex's signature determines whether the trihex can arise from a convex polyhedron. We will first quote some preliminary facts.
**Proposition 6.1**.:
1. Every trihex is \(2\)-connected.
2. If a trihex is not \(3\)-connected, then it is a godseye.
3. A trihex is a simple graph.
Proof.: Parts (i) and (ii) are proved in [2].
Part (iii): If a trihex had an edge loop, then it would have a face with only one edge. If it had a double edge, then it would either have a face with only two edges or else each of the vertices on the double edge would be a separating vertex, contradicting Part (i).
**Theorem 6.2**.: Any trihex with a signature \((0,b,0)\) with \(b>0\) can be represented as the skeleton of a non-convex polyhedron, and it cannot be represented as the skeleton of a convex polyhedron. All other trihexes can be represented as skeletons of convex polyhedra.
Proof.: Steinitz's Theorem ([5] or [8]) says that a graph can be represented as the skeleton of a convex polyhedron if and only if the graph is simple, planar, and \(3\)-connected. Trihexes with signature \((0,b,0)\) with \(b>0\) are godseyes, which are not \(3\)-connected: removing the two vertices shown in red in Figure 1 disconnects the graph. Therefore they are not the skeletons of convex polyhedra. However, they are the skeletons of non-convex polyhedra, as shown in Figure 17. All other trihexes are \(3\)-connected, simple, planar graphs by Proposition 6.1. So by Steinitz's theorem, they are the skeletons of convex polyhedra.
## 7 How many trihexes of each size?
Given \(v\geq 0\), how many trihexes are there with \(v\) vertices? Let \(\alpha(v)\) be the number of equivalence classes of trihexes with \(v\) vertices and let \(\beta(v)\) be the number of graph isomorphism classes of trihexes with \(v\) vertices. These two quantities can be computed by counting signatures and accounting for duplicates, after establishing the following relationships, which are also evident in [4].
**Lemma 7.1**.:
1. The triple \((s,b,f)\) is a signature for a trihex with \(h\) hexagons if and only if \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), and \(\dfrac{h}{2}+1=(s+1)(b+1)\).
2. The triple \((s,b,f)\) is a signature for a trihex with \(v\) vertices if and only if \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), and \(\dfrac{v}{4}=(s+1)(b+1)\).
3. Consequently, the number of hexagons in a trihex is even and the number of vertices of a trihex is divisible by \(4\).
Proof.: A trihex with signature \((s,b,f)\) has \(2s+b(2s+2)=2(s+1)(b+1)-2\) hexagons, since each of the two spines contains \(s\) hexagons and each of the \(b\) belts contains \(2s+2\) hexagons. So for \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), the triple \((s,b,f)\) is a signature for a trihex with \(h\) hexagons if and only if \(\dfrac{h}{2}=(s+1)(b+1)-1\). Since each hexagon has six vertices, each of the four triangles in a trihex has three vertices, and each vertex lies on exactly three faces, the number of vertices in a trihex is \(v=\dfrac{6h+12}{3}=2h+4=4(s+1)(b+1)\). So the triple \((s,b,f)\) with \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\) is a signature for a trihex with \(v\) vertices if and only if \(\dfrac{v}{4}=(s+1)(b+1)\). Since \(\dfrac{h}{2}\) and \(\dfrac{v}{4}\) are integers, \(h\) is divisible by \(2\) and \(v\) is divisible by \(4\).
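These counting formulas translate into two small helpers (a minimal sketch; the function names are ours):

```python
def hexagon_count(s, b):
    """Hexagons in a trihex with signature (s, b, f): h = 2(s+1)(b+1) - 2."""
    return 2 * (s + 1) * (b + 1) - 2

def vertex_count(s, b):
    """Vertices in a trihex with signature (s, b, f): v = 2h + 4."""
    return 4 * (s + 1) * (b + 1)

# The (5, 2, 2) trihex of Section 5 has 34 hexagons and 72 vertices.
print(hexagon_count(5, 2), vertex_count(5, 2))  # 34 72
```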
From this relationship, we can compute \(\alpha(v)\) and \(\beta(v)\) as follows. For each desired value of \(v\), we can find the list of all triples \((s,b,f)\) satisfying the inequalities \(s\geq 0\), \(b\geq 0\), and \(0\leq f\leq s\) and the equation \(\dfrac{v}{4}=(s+1)(b+1)\). Then we can test which triples satisfy the relationships in Definition 5.1 and therefore represent equivalent trihexes. Taking these equivalences into account yields \(\alpha(v)\). Checking for triples that satisfy the mirror equivalence relation in Theorem 5.3 yields \(\beta(v)\). See Table 2 for counts of \(\alpha(v)\) and \(\beta(v)\) for \(v\leq 200\).
Let \(\sigma(v)\) be the number of triples that form a signature for a trihex with \(v\) vertices; that is, the number of triples \((s,b,f)\) with \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), and \(v=4(s+1)(b+1)\).
Figure 17: Godseyes can be realized as non-convex polyhedra.
**Lemma 7.2**.: Let \(p_{1}^{m_{1}}p_{2}^{m_{2}}\cdots p_{k}^{m_{k}}\) be the prime factorization of \(\dfrac{v}{4}\). Then \(\sigma(v)=\prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\).
Proof.: The set of pairs \((s,b)\) that satisfy (i) \(s\geq 0\), (ii) \(b\geq 0\), and (iii) \(v=4(s+1)(b+1)\) is in one-to-one correspondence with the factors of \(\dfrac{v}{4}\), by the correspondence that takes a factor \(d\) to the pair \((s,b)=\left(d-1,\dfrac{v}{4d}-1\right)\). For each such pair \((s,b)\), corresponding to the factor \(d=s+1\) of \(\dfrac{v}{4}\), there are \(s+1\) triples \((s,b,f)\) that satisfy (iv) \(0\leq f\leq s\). Therefore, the number of triples \((s,b,f)\) that satisfy (i), (ii), (iii), and (iv) is equal to the sum of the factors \(d\) of \(\dfrac{v}{4}\). This sum is equal to \(\prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\).
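A sketch of this computation, factoring \(\dfrac{v}{4}\) by trial division and cross-checking against the direct divisor sum:

```python
def sigma(v):
    """Sum of the divisors of v // 4 via Lemma 7.2, i.e. the number of
    signature triples for trihexes with v vertices."""
    n = v // 4
    total, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            m = 0
            while n % p == 0:
                n //= p
                m += 1
            total *= (p ** (m + 1) - 1) // (p - 1)
        p += 1
    if n > 1:          # one remaining prime factor with exponent 1
        total *= n + 1
    return total

# Cross-check: e.g. sigma(40) = 1 + 2 + 5 + 10 = 18.
assert all(sigma(4 * n) == sum(d for d in range(1, n + 1) if n % d == 0)
           for n in range(1, 200))
```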
**Proposition 7.3**.: Let \(p_{1}^{m_{1}}p_{2}^{m_{2}}\cdots p_{k}^{m_{k}}\) be the prime factorization of \(\dfrac{v}{4}\). Then
\[\dfrac{1}{3}\prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\leq\alpha(v) \leq\prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\]
and
\[\dfrac{1}{6}\prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\leq\beta(v)\leq \prod_{i=1}^{k}\dfrac{p_{i}^{m_{i}+1}-1}{p_{i}-1}\]
Proof.: If each of the triples \((s,b,f)\) with \(s\geq 0\), \(b\geq 0\), \(0\leq f\leq s\), and \((s+1)(b+1)=\dfrac{v}{4}\) represented a distinct trihex, there would be \(\sigma(v)\) distinct trihexes with \(v\) vertices. However, some of these triples represent equivalent trihexes, since a trihex can be decomposed in three ways into spines. Therefore \(\dfrac{\sigma(v)}{3}\leq\alpha(v)\leq\sigma(v)\). There could be up to six signatures that represent isomorphic trihexes, since the left-handed and right-handed versions of chiral trihexes each have their own three signatures. So \(\dfrac{\sigma(v)}{6}\leq\beta(v)\leq\sigma(v)\).
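The exact counts can then be obtained by brute force. The sketch below assumes the helpers equivalent_signatures and mirror_signature from the Section 5 sketches are in scope:

```python
def signatures_for_vertices(v):
    """All triples (s, b, f) with v = 4(s+1)(b+1) and 0 <= f <= s (Lemma 7.1)."""
    n = v // 4
    return [(d - 1, n // d - 1, f)
            for d in range(1, n + 1) if n % d == 0
            for f in range(d)]

def alpha_beta(v):
    """alpha(v): classes up to equivalence; beta(v): classes up to graph
    isomorphism, merging each class with its mirror class (Theorem 5.3)."""
    classes = {frozenset(equivalent_signatures(*t))
               for t in signatures_for_vertices(v)}
    merged = {c | frozenset(equivalent_signatures(*mirror_signature(*min(c))))
              for c in classes}
    return len(classes), len(merged)

# alpha(16) = 3, matching the three rows of Table 1 with 16 vertices.
print(alpha_beta(16))  # (3, 3)
```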
Table 2 gives the number \(\alpha(v)\) of equivalence classes of trihexes and the number \(\beta(v)\) of graph isomorphism classes of trihexes for each number \(v\) of vertices for \(0\leq v\leq 200\). Figure 18 presents the same information graphically for \(0\leq v\leq 400\). Note that the counts in the \(\beta(v)\) column of Table 2 are always one greater than the counts in Table 5 of [2], since our counts include non-convex godseyes, and there is exactly one godseye for each possible number of vertices.
For each line in Table 2, the count \(\alpha(v)\) is close to \(\left\lceil\dfrac{\sigma(v)}{3}\right\rceil\) and \(\beta(v)\) is fairly close to \(\left\lceil\dfrac{\sigma(v)}{6}\right\rceil\). For \(200\leq v\leq 4000\), the difference \(\alpha(v)-\left\lceil\dfrac{\sigma(v)}{3}\right\rceil\) is greater than one only \(2.5\%\) of the time and is never more than four. For \(200\leq v\leq 4000\), \(\left(\dfrac{\sigma(v)}{3}\right)\leq\alpha(v)\leq 1.06\left(\dfrac{\sigma(v)}{3}\right)\). For \(200\leq v\leq 4000\), the difference \(\beta(v)-\left\lceil\dfrac{\sigma(v)}{6}\right\rceil\) is greater than one \(36.2\%\) of the time and has a maximum value of \(22\). For \(200\leq v\leq 4000\), \(\left(\dfrac{\sigma(v)}{6}\right)\leq\beta(v)\leq 1.25\left(\dfrac{\sigma(v)}{6}\right)\).
**Conjecture 7.4**.: As \(v\to\infty\), the counts \(\alpha(v)\) and \(\beta(v)\) are respectively asymptotic to \(\dfrac{1}{3}\sigma(v)\) and \(\dfrac{1}{6}\sigma(v)\).
\begin{table}
\begin{tabular}{|c c c c c||c c c c c|} \hline \(v\) & \(\alpha(v)\) & \(\lceil\frac{\sigma(v)}{3}\rceil\) & \(\beta(v)\) & \(\lceil\frac{\sigma(v)}{6}\rceil\) & \(v\) & \(\alpha(v)\) & \(\lceil\frac{\sigma(v)}{3}\rceil\) & \(\beta(v)\) & \(\lceil\frac{\sigma(v)}{6}\rceil\) \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 2: Count \(\alpha(v)\) of equivalence classes of trihexes and count \(\beta(v)\) of graph isomorphism classes of trihexes. Here \(\sigma(v)\) is the sum of the factors of \(\frac{v}{4}\).
Figure 18: The number of distinct trihexes that contain \(v\) vertices.
## 8 Tight trihexes
This section analyzes trihexes that contain no belts. We start with a specialization of a definition from [2].
**Definition 8.1**.: A trihex is _tight_ if it does not contain any belts.
**Proposition 8.2**.: The following are equivalent for a trihex:
1. The trihex is tight.
2. The second coordinate in each of its three signatures is \(0\).
3. For any fixed signature \((s_{1},b_{1},f_{1})\), the three numbers \(f_{1}\), \(f_{1}+1\), and \(s_{1}+1\) are pairwise relatively prime and \(b_{1}=0\).
Proof.: \((i)\iff(ii)\): Suppose the trihex is tight. Then the second coordinates in each of its three signatures must be zero, since the second coordinate indicates the number of belts between spines. Conversely, if the second coordinate in each of its three signatures is \(0\), then in the hexagonal tiling that covers the trihex, there are no infinite strips of hexagons that do not contain special hexagons, either in the vertical direction, the NW to SE direction, or the SW to NE direction. But any belt in the trihex would lift to such a strip, so the trihex must be tight.
\((ii)\implies(iii)\): Consider a fixed signature \((s_{1},b_{1},f_{1})\) and the alternate signatures \((s_{2},b_{2},f_{2})\) and \((s_{3},b_{3},f_{3})\). Since the number of hexagons \(h=2s_{i}b_{i}+2s_{i}+2b_{i}\) for \(i=1,2,3\), and \(b_{i}=0\), we have that \(s_{1}=s_{2}=s_{3}\). In the algorithm given in Definition 5.1, \(s_{2}=j_{2}(b_{1}+1)-1\), where \(j_{2}\) is the order of \(f_{1}\) in \(Z_{s_{1}+1}\). Since \(b_{1}=0\) and \(s_{2}=s_{1}\), this means that \(s_{1}=j_{2}-1\), that is, \(f_{1}\) has order \(s_{1}+1\) in \(Z_{s_{1}+1}\). Equivalently, \(f_{1}\) is relatively prime to \(s_{1}+1\). Similarly, \(s_{3}=j_{3}(b_{1}+1)-1\), so \(s_{1}=s_{3}=j_{3}-1\), where \(j_{3}\) is the order of \(f_{1}+b_{1}+1\) in \(Z_{s_{1}+1}\), that is, the order of \(f_{1}+1\) in \(Z_{s_{1}+1}\). So \(f_{1}+1\) has order \(s_{1}+1\) in \(Z_{s_{1}+1}\) and is therefore relatively prime to \(s_{1}+1\). Since \(f_{1}\) and \(f_{1}+1\) are relatively prime, we have that \(f_{1}\), \(f_{1}+1\), and \(s_{1}+1\) are pairwise relatively prime.
\((iii)\implies(ii)\): Suppose \(b_{1}=0\) and \(f_{1}\), \(f_{1}+1\), and \(s_{1}+1\) are pairwise relatively prime. Since \(b_{1}=0\), \(h=2s_{1}\). In Definition 5.1, since \(f_{1}\) is relatively prime to \(s_{1}+1\), \(j_{2}=s_{1}+1\). Therefore, \(s_{2}=j_{2}(b_{1}+1)-1=(s_{1}+1)(0+1)-1=s_{1}\), so \(b_{2}=\dfrac{h-2s_{2}}{2s_{2}+2}=\dfrac{h-2s_{1}}{2s_{1}+2}=b_{1}=0\).
Furthermore, since \(f_{1}+b_{1}+1=f_{1}+1\) is relatively prime to \(s_{1}+1\), \(j_{3}=s_{1}+1\). So \(s_{3}=j_{3}(b_{1}+1)-1=(s_{1}+1)(0+1)-1=s_{1}\), and \(b_{3}=\dfrac{h-2s_{3}}{2s_{3}+2}=\dfrac{h-2s_{1}}{2s_{1}+2}=b_{1}=0\).
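Criterion (iii) gives a constant-time tightness test; a minimal sketch (note that \(f\) and \(f+1\) are automatically coprime to each other):

```python
from math import gcd

def is_tight(s, b, f):
    """Proposition 8.2(iii): tight iff b = 0 and both f and f + 1 are
    relatively prime to s + 1."""
    return b == 0 and gcd(f, s + 1) == 1 and gcd(f + 1, s + 1) == 1

# (6, 0, 2) is tight: all three of its signatures in Table 1 have b = 0.
# (3, 0, 1) is not, since gcd(1 + 1, 3 + 1) = 2; indeed it is equivalent
# to (1, 1, 1), which has a belt.
print(is_tight(6, 0, 2), is_tight(3, 0, 1))  # True False
```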
Our next result proves Conjecture 5.3 from Deza and Dutour [2]. We first repeat their definition of the graph of curvatures:
**Definition 8.3**.: The _graph of curvatures_ of a trihex is the graph whose vertices are the four triangular faces. Two vertices \(c\) and \(d\) are connected by an edge if there exists a pseudo-road connecting the faces \(c\) and \(d\). A _pseudo-road_ is a sequence of hexagons, say \(a_{1},\dots,a_{\ell}\), such that setting \(a_{0}=c\) and \(a_{\ell+1}=d\), we have that for \(1\leq i\leq\ell\), \(a_{i}\) is adjacent to \(a_{i-1}\) and \(a_{i+1}\) on opposite edges.
**Proposition 8.4**.: The graph of curvatures of any tight trihex is a complete graph on \(4\) vertices.
Proof.: Consider a tight trihex with signature \((s_{1},b_{1},f_{1})\). Recall that the trihex arises as the quotient sphere from a hexagonal tiling of the plane, where the "special hexagons", which correspond to the four triangles, lie on the vertices of a superimposed parallelogram grid. Label these special hexagons \(A\), \(B\), \(C\), and \(D\) as in Figure 19, so that hexagons labeled \(A\) and \(C\) lie in vertical columns, and hexagons labeled \(B\) and \(D\) lie in alternate vertical columns.
By Proposition 8.2, \(b_{1}=0\) and \(f_{1}\) and \(f_{1}+1\) are both relatively prime to \(s_{1}+1\). Therefore, \(s_{1}+1\) must be odd. In Definition 5.1, \(j_{2}\) and \(j_{3}\), which are the orders of \(f_{1}\) and \(f_{1}+1\) in \(Z_{s_{1}+1}\), respectively, must both equal \(s_{1}+1\) and therefore also be odd. Since the columns of hexagons alternate between columns containing \(A\) and \(C\) and columns containing \(B\) and \(D\), if we translate a special hexagon \(A\) in the SW to NE direction by \(j_{2}=s_{1}+1\) columns, it will hit a special hexagon in the \(B/D\) column. Similarly, if we translate \(A\) by \(j_{3}=s_{1}+1\) columns in the NW to SE direction, it will hit a special hexagon in the \(B/D\) column. Form a triangle with vertices at the centers of the original special hexagon \(A\) and these two special hexagons. The sides with a vertex at \(A\) both have the same length and are at a \(60^{\circ}\) angle from each other. Therefore, this triangle must be an equilateral triangle. So the two special hexagons in the \(B/D\) column must be \(s_{1}\) hexagons apart, so one of them is a hexagon \(B\) and the other a hexagon \(D\). Therefore, the graph of curvatures for the trihex must include an edge from \(A\) to \(B\) and an edge from \(A\) to \(D\). Of course, if we translate a special hexagon \(A\) straight north or south, it will hit a special hexagon \(C\), so the graph of curvatures also includes an edge from \(A\) to \(C\). By repeating this argument starting with hexagons \(B\), \(C\), and \(D\), in turn, instead of \(A\), we see that each vertex in the graph of curvatures connects to every other vertex.
## Acknowledgments
We would like to express our appreciation to our colleague Bob Proctor for excellent suggestions and edits to an early version of this manuscript.
## Statements and Declarations
### Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Figure 19: Hexagonal cover for a tight trihex.
### Competing Interests
The authors have no relevant financial or non-financial interests to disclose.
### Author Contributions
Both authors contributed to the research and read and approved the final manuscript.
### Data Availability
All data generated or analyzed during this study are included in this published article. Python code used to generate this data is available from the corresponding author on request.
|
2303.01053 | Distances to nearby molecular clouds traced by young stars | I present a catalog of distances to 63 molecular clouds located within ~2.5
kpc of the Sun. The cloud distances are derived based on utilizing the Gaia DR3
parallaxes of the young stellar objects (YSOs). By identifying AllWISE YSO
candidates (YSOCs) with infrared excesses and combining them with published
YSOC catalogs, I compile an all-sky YSOC sample that is devoid of a significant
proportion of contaminants. Using Gaia DR3 astrometric measurements, I
associate over 3000 YSOCs with 63 local clouds and obtain the average distance
to each cloud by fitting the YSOC parallax distribution within the cloud. I
find good agreements with typical scatter of <10% between my new cloud
distances and previous distance estimates. Unlike cloud distances obtained
using stellar extinction, my catalog provides distances to the relatively dense
areas of local clouds, which makes them more appropriate references for
investigating the physical properties of nearby dense regions. | Miaomiao Zhang | 2023-03-02T08:23:11Z | http://arxiv.org/abs/2303.01053v1 | # Distances to nearby molecular clouds traced by young stars
###### Abstract
I present a catalog of distances to 63 molecular clouds located within \(\sim\)2.5 kpc of the Sun. The cloud distances are derived based on utilizing the _Gaia_ DR3 parallaxes of the young stellar objects (YSOs). By identifying AllWISE YSO candidates (YSOCs) with infrared excesses and combining them with published YSOC catalogs, I compile an all-sky YSOC sample that is devoid of a significant proportion of contaminants. Using _Gaia_ DR3 astrometric measurements, I associate over 3000 YSOCs with 63 local clouds and obtain the average distance to each cloud by fitting the YSOC parallax distribution within the cloud. I find good agreements with typical scatter of \(\lesssim\)10% between my new cloud distances and previous distance estimates. Unlike cloud distances obtained using stellar extinction, my catalog provides distances to the relatively dense areas of local clouds, which makes them more appropriate references for investigating the physical properties of nearby dense regions.
Miaomiao Zhang
## 1 Introduction
In order to understand the process of star formation in galaxies, it is essential to have a comprehensive understanding of how molecular gas is transformed into stars in our local environment. To gain insight into this process within our solar neighborhood, it is necessary to investigate the structures and distributions of local clouds. However, the gas structures within \(\sim\)2 kpc of the Sun are still under debate (Poppel and Marronetti, 2000; Bouy and Alves, 2015; Zari et al., 2018; Zucker et al., 2022), particularly in light of recent discoveries of several new structures (Alves et al., 2020; Bialy et al., 2021; McBride et al., 2021).
Accurately mapping the three-dimensional (3D) gas structures in our solar neighborhood is contingent upon precise distance measurements to nearby molecular clouds. Several methods have been developed to determine the distances of these clouds. For instance, the statistics of star counts on obscured and non-obscured fields can provide insights into cloud distances (Wolf, 1923; Bok, 1931; Lombardi et al., 2010; Kainulainen et al., 2011; Foster et al., 2012). However, this approach often suffers from large uncertainties due to the uneven distribution of stellar density. Photometric distances of cloud tracers, such as OB associations, can also be utilized to estimate cloud distance (Borganan and Blaauw, 1964; Marison, 1967; Brown et al., 1994; Mayne and Naylor, 2008), but the inherent degeneracy between distance and extinction of stars often results in significant inaccuracies.
Another effective method for estimating cloud distances is by using the trigonometric parallaxes of masers or YSOs located within the clouds. For instance, de Zeeuw et al. (1999) examined nearby OB associations based on HIPPARCOS (Perryman et al., 1997) data and obtained distance estimates for tens of these associations. However, the precision of HIPPARCOS parallax measurements only allows distance estimates for clouds located within \(\sim\)600 pc. The Gould's Belt Distances Survey (GOBELINS, Loinard et al., 2011; Loinard, 2013) utilized radio very long baseline interferometry (VLBI) observations to obtain parallaxes for a large sample of young stars in various nearby molecular clouds. These high-precision parallax measurements enable accurate distance determinations for individual sources. By averaging the distances of young stars, GOBELINS derived the distances to clouds such as Ophiuchus, Orion, Serpens, Taurus, and Perseus (Ortiz-Leon et al., 2017; Kounkel et al., 2017; Ortiz-Leon et al., 2017; Galli et al., 2018; Ortiz-Leon et al., 2018). Nevertheless, VLBI observations are time-consuming, and therefore the number of sources with VLBI parallaxes is limited, which restricts the application of GOBELINS's method to a small number of clouds.
The recent release of astrometric data from _Gaia_ has brought about significant changes. _Gaia_ DR2 (Gaia Collaboration et al., 2018) and DR3 (Gaia Collaboration et al., 2022) have provided accurate parallax measurements for more than one billion sources, prompting numerous authors to re-calculate the distances to nearby molecular clouds using _Gaia_ data. One approach involves identifying a stellar extinction "jump" caused by molecular clouds between the unreddened foreground stars and the reddened background stars by examining the variation of optical extinction with respect to distance along the line of sight (Knude and Hog, 1998). Hundreds of molecular clouds have had distances derived using this method (Schlafly et al., 2014; Yan et al., 2019; Zucker et al., 2019, 2020). Another approach involves constructing a 3D extinction map by modeling the dust extinction profiles along different lines of sight (Rezaei Kh. et al., 2018; Chen et al., 2019; Green et al., 2019; Lallement et al., 2019; Leike et al., 2020). Cloud distances can then be obtained by searching for dust structures in the 3D dust cubes (Chen et al., 2020; Guo et al., 2022). These techniques rely on estimates of stellar extinction, but the dynamical range of stellar extinction based on optical surveys such as _Gaia_ is limited, only extending up to a few to \(\sim\)10 magnitudes (\(A_{V}\)). As a result, extinction-based techniques are not applicable for estimating distances of relatively dense regions.
The distances of nearby clouds can also be obtained from _Gaia_ parallaxes of YSOs. Several studies have identified a significant number of YSOCs using _Gaia_ photometry and/or kinematics (Zari et al., 2018; Gagne et al., 2018a,b; Gagne and Faherty, 2018; Kounkel and Covey, 2019; Kounkel et al., 2020; McBride et al., 2021; Prisinzano et al., 2022). While Zari et al. (2018) and McBride et al. (2021) have analyzed the spatial distribution of YSOCs, their focus has mainly been on structures within \(\sim\)500 pc of the Sun. Prisinzano et al. (2022) used the machine learning unsupervised clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise, Ester et al., 1996) to identify 354 star forming regions and stellar clusters with ages of \(\lesssim\) 10 Myr within \(\sim\)1.5 kpc of the Sun based on their _Gaia_ YSO sample. However, the star forming regions identified by Prisinzano et al. (2022) are stellar structures, and the connections between them and gas structures are still uncertain. On the other hand, Dzib et al. (2018) compiled a YSO catalog from the literature for 12 nearby clouds within 500 pc of the Sun. Using _Gaia_ parallax measurements of these YSOs, Dzib et al. (2018) investigated the distances and 3D motions of these 12 local clouds.
In this paper, I expand on the distance estimation of local clouds within \(\sim\)2.5 kpc of the Sun using _Gaia_ DR3 parallax measurements. I compile a comprehensive all-sky sample of YSOCs and use it to precisely determine distances to 63 nearby molecular clouds. In Section 2, I provide a detailed description of the data and catalogs utilized, as well as the quality cuts applied to filter out spurious sources. In Section 3, I explain how I constructed my all-sky YSOC sample, which is free of contamination from other types of sources. In Section 4, I present my method for determining cloud distances and provide the resulting distance catalog. In Section 5, I offer a comparison of my distance estimates with those from previous literature, highlighting the advantages and drawbacks of my approach. Finally, my conclusions and summary are presented in Section 6.
## 2 Data and catalogs
I utilize the archival AllWISE catalog to identify YSOCs, and the kinematic information for these YSOCs is provided by the _Gaia_ DR3. In addition, I incorporate several previously published catalogs to augment my YSOC sample and provide distance estimates for the YSOCs. The nearby molecular clouds are delineated using the Planck dust maps. The succeeding sections provide a detailed account of these data and catalogs.
### AllWISE catalog
The space telescope WISE was launched in December 2009 and scanned the whole sky in four infrared passbands, W1, W2, W3, and W4, centered at 3.4, 4.6, 12, and 22 \(\mu\)m, respectively. The angular resolution is about 6\(\farcs\)1, 6\(\farcs\)4, 6\(\farcs\)5, and 12\(\farcs\)0 in W1-W4 bands and the 5\(\sigma\) sensitivity is better than 0.08, 0.11, 1, and 6 mJy in unconfused regions. I use the source catalog taken from AllWISE data release1(Cutri et al., 2013), which was produced by combining WISE data from cryogenic and post-cryogenic survey phases. The detailed description of WISE data acquisition and reduction can be found in Wright et al. (2010), Jarrett et al. (2011), and Explanatory Supplement to the AllWISE Data Release Products2. The AllWISE catalog also provides \(J\), \(H\), \(K_{s}\) photometry by positionally crossmatching with the 2MASS point source catalog (Skrutskie et al., 2006).
Footnote 1: [https://wise2.ipac.caltech.edu/docs/release/allwise/](https://wise2.ipac.caltech.edu/docs/release/allwise/)
Footnote 2: [https://wise2.ipac.caltech.edu/docs/release/allwise/expsup/](https://wise2.ipac.caltech.edu/docs/release/allwise/expsup/)
### Gaia DR3
The third release of the _Gaia_ data3 (DR3) is based on the data collected during the first 34 months of _Gaia_ mission (Gaia Collaboration et al., 2016). It provides high-precision parallax and proper motion, together with homogeneous multi-band photometry for
about 1.8 billion sources. In fact, _Gaia_ DR3 released a vast array of data products, including spectra, photometric time series, and astrophysical parameters. In this paper, I only use the astrometry, photometry, and astrophysical parameter catalogs in _Gaia_ DR3. I also note that the astrometry and broad band photometry in _Gaia_ DR3 are the same4 as those in _Gaia_ EDR3 (Gaia Collaboration et al., 2021), which was the first instalment of the full _Gaia_ DR3.
Footnote 4: According to Gaia Collaboration et al. (2021), a correction must be made to the G-band magnitudes of some of the sources in _Gaia_ EDR3. This correction is not included in the official EDR3, but it is already incorporated in the official DR3. As a result, technically, the G-band photometry of certain sources in DR3 differs from that in EDR3.
_Gaia_ DR3 has a limiting magnitude of \(G\approx 21\) mag and a typical uncertainty of 0.3 millimagnitude (mmag) (\(G<13\) mag) to 6 mmag (\(G=20\) mag). The typical astrometric uncertainty depends on source brightness: 0.02\(-\)0.04 milliarcseconds (mas) (\(G<15\) mag) to 0.5 mas (\(G=20\) mag) for parallax and 0.02-0.04 mas yr\({}^{-1}\) (\(G<15\) mag) to 0.6 mas yr\({}^{-1}\) (\(G=20\) mag) for proper motion. The _Gaia_ DR3 astrophysical products were produced with 13 different modules using the astrophysical parameters inference system (Apsis, Creevey et al., 2022; Fouesneau et al., 2022). In this paper, I use the astrophysical parameter catalog produced by one of 13 modules in Apsis, i.e., General Stellar Parameterizer from Photometry (GSP-Phot, Andrae et al., 2022), based on the _Gaia_ astrometry, photometry, and low-resolution BP/RP spectra. The GSP-Phot is a Bayesian forward-modelling approach, which provides a homogeneous catalogue of stellar parameters, distances, and extinctions for about 471 million sources with \(G<19\) mag. Andrae et al. (2022) estimated that the typical uncertainty of extinction (\(A_{G}\)) is about 0.06 mag for bright sources. The detailed content of _Gaia_ DR3 can be found in Gaia Collaboration et al. (2022); Babusiaux et al. (2022).
To remove the spurious astrometric solutions from the _Gaia_ DR3 catalog, I use the classifier introduced by Rybizki et al. (2022). Rybizki et al. (2022) constructed "good" and "bad" astrometric solutions as training samples with the _Gaia_ EDR3 data and then devised a single "astrometric fidelity" parameter to identify spurious sources based on machine learning. Compared with quality cuts based on _Gaia_ catalog columns such as ruwe, their astrometric fidelities yield purer and more complete samples of sources with reliable astrometric solutions. Rybizki et al. (2022) also provided a diagnostic of the level of photometric contamination from neighbors (norm_dg). I use the following cuts to filter out spurious sources with unreliable astrometric solutions and/or colors in _Gaia_ DR3, as suggested by Rybizki et al. (2022):
\[\texttt{fidelity\_v2}>0.5,\tag{1}\]
\[\texttt{norm\_dg}=\mathrm{nan}\ \text{ or }\ \texttt{norm\_dg}<-3.\tag{2}\]
To remove the _Gaia_ sources with possible problematic photometries, I also apply a quality cut as suggested by Prisinzano et al. (2022):
\[\sigma([G-R_{p}])=\sqrt{\sigma(G)^{2}+\sigma(R_{p})^{2}}<0.14 \tag{3}\]
where \(\sigma(G)\) and \(\sigma(R_{p})\) are defined as:
\[\sigma(G)=\sqrt{\left(\frac{1.0857}{\texttt{phot\_g\_mean\_flux\_over\_error}}\right)^{2}+\sigma(G_{0})^{2}},\]
\[\sigma(R_{p})=\sqrt{\left(\frac{1.0857}{\texttt{phot\_rp\_mean\_flux\_over\_error}}\right)^{2}+\sigma(R_{p0})^{2}},\]
and \(\sigma(G_{0})\) and \(\sigma(R_{p0})\) are the _Gaia_ DR3 zero-point uncertainties5.
Footnote 5: [https://www.cosmos.esa.int/web/gaia/dr3-passbands](https://www.cosmos.esa.int/web/gaia/dr3-passbands)
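For concreteness, these cuts reduce to a one-line boolean mask. The sketch below is a minimal illustration assuming the _Gaia_ data sit in a pandas DataFrame with the archive column names plus the fidelity_v2 and norm_dg columns of Rybizki et al. (2022); the zero-point uncertainty values are my reading of the DR3 passband page and should be verified there.

```python
import numpy as np
import pandas as pd

# Assumed sigma(G_0) and sigma(Rp_0) values from the Gaia DR3 passband page
SIGMA_G0, SIGMA_RP0 = 0.0027553202, 0.0037793818

def quality_mask(gaia: pd.DataFrame) -> pd.Series:
    """Combine the astrometric (Eqs. 1-2) and photometric (Eq. 3) cuts."""
    astrometric = (gaia["fidelity_v2"] > 0.5) & (
        gaia["norm_dg"].isna() | (gaia["norm_dg"] < -3)
    )
    sigma_g = np.hypot(1.0857 / gaia["phot_g_mean_flux_over_error"], SIGMA_G0)
    sigma_rp = np.hypot(1.0857 / gaia["phot_rp_mean_flux_over_error"], SIGMA_RP0)
    photometric = np.hypot(sigma_g, sigma_rp) < 0.14  # Eq. (3)
    return astrometric & photometric

# gaia_clean = gaia[quality_mask(gaia)]
```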
Finally, I correct the _Gaia_ DR3 parallax using the zero point bias suggested by Lindegren et al. (2021), which is a function of source position, brightness, and color6.
Footnote 6: [https://gitlab.com/icc-ub/public/gaiard3_zeropoint](https://gitlab.com/icc-ub/public/gaiard3_zeropoint)
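As a sketch, the correction can be applied with the gaiadr3-zeropoint package released alongside Lindegren et al. (2021); I assume here that the required archive columns (nu_eff_used_in_astrometry, pseudocolour, ecl_lat, astrometric_params_solved) were downloaded together with the parallaxes.

```python
from zero_point import zpt  # pip install gaiadr3-zeropoint

zpt.load_tables()  # load the coefficient tables shipped with the package

# The zero point is defined only for 5-p/6-p astrometric solutions
# (astrometric_params_solved = 31 or 95)
ok = gaia["astrometric_params_solved"].isin([31, 95])
gaia.loc[ok, "zpt"] = zpt.get_zpt(
    gaia.loc[ok, "phot_g_mean_mag"],
    gaia.loc[ok, "nu_eff_used_in_astrometry"],
    gaia.loc[ok, "pseudocolour"],
    gaia.loc[ok, "ecl_lat"],
    gaia.loc[ok, "astrometric_params_solved"],
)
# The zero point is a bias of the published parallax, so it is subtracted
gaia["parallax_corrected"] = gaia["parallax"] - gaia["zpt"]
```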
### Planck dust map
The Planck mission (Planck Collaboration et al., 2011) mapped the whole sky in nine passbands at frequencies between 25 and 1000 GHz. Based on the 2013 release of data (Planck Collaboration et al., 2014), Planck Collaboration et al. (2014) fit the emission from Planck data at 353, 545, and 857 GHz, together with IRAS 100 \(\mu\)m survey data (Neugebauer et al., 1984), using an all-sky dust model that describes the dust spectral energy distribution (SED) with a modified blackbody. The released dust opacity (\(\tau\)) map has an angular resolution of \(5^{\prime}\) and was calibrated to reddening \(E(B-V)\) units with extragalactic quasars. I fetch the Planck \(E(B-V)\) map using the Python interface of DUSTMAPS (Green, 2018) and transform it to visual extinction units with \(A_{V}=3.1E(B-V)\), assuming \(R_{V}=3.1\).
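As a minimal sketch of this step, the map can be queried through the dustmaps package; the map itself must be downloaded once beforehand, and the example coordinates below are arbitrary.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from dustmaps.planck import PlanckQuery
# import dustmaps.planck; dustmaps.planck.fetch()  # one-time map download

planck = PlanckQuery()  # E(B-V) from the Planck tau_353-based dust model
coords = SkyCoord(l=[210.0, 212.0] * u.deg, b=[-19.5, -18.5] * u.deg,
                  frame="galactic")
ebv = planck(coords)    # reddening in E(B-V) magnitudes
av = 3.1 * ebv          # visual extinction, assuming R_V = 3.1
```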
### Complementary published catalogs
#### 2.4.1 Marton et al. (2016, 2019)
Marton et al. (2016) identified 133 980 Class I/II candidates and 608 606 Class III/evolved YSOCs based on the AllWISE catalog. More specifically, they collected
known sources from SIMBAD7 as the training sample and then classified AllWISE sources into different types with the support vector machine (SVM) technique. After excluding contaminants such as extragalactic sources, main-sequence (MS) stars, and evolved stars, Marton et al. (2016) finally selected 742 586 YSOCs.
Footnote 7: [http://simbad.u-strasbg.fr/simbad/](http://simbad.u-strasbg.fr/simbad/)
Marton et al. (2019) combined _Gaia_ DR2 and AllWISE catalog and identified 1 768 628 potential YSOCs with the supervised machine learning technique. Their training sample was constructed based on SIMBAD and \(\sim\)80 catalogs from the literature. After comparing the results from tens of different machine learning techniques, Marton et al. (2019) finally selected the Random Forests method to classify _Gaia_ DR2\(\times\)AllWISE sources into different object classes, e.g., YSOCs and contamination such as MS stars and extragalactic objects.
#### 2.4.2 Bailer-Jones et al. (2021)
The transformation from _Gaia_ DR3 parallax to distance needs to account for the non-gaussian profile of probability distribution of the inverse of parallax. Bailer-Jones et al. (2021) calculated the Bayesian distances for about 1.3-1.5 billion _Gaia_ sources by assuming a prior constructed from a 3D Galactic model based on _Gaia_ EDR3 data. Their catalog can provide meaningful distance estimates even for the faint _Gaia_ stars with large parallax uncertainties. The distance catalog presented by Bailer-Jones et al. (2021) has been included in the official _Gaia_ data archive8.
Footnote 8: [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/)
Bailer-Jones et al. (2021) provided two types of distance: the geometric distance (r_med_geo) was obtained using parallax with a direction-dependent prior on distance; and the photogeometric distance (r_med_photogeo) was obtained by considering additional stellar photometric information. In this paper, I only use the geometric distance in order to get the distances for more _Gaia_ sources. I also emphasize that the Bailer-Jones et al. (2021)'s geometric distances are only used to estimate the extinctions of some YSOCs (see Appendix A) and to exclude the possible contamination (see Sect. 3.4). I do not use them to calculate the final distances of YSOCs in the nearby molecular clouds.
## 3 All-Sky Ysoc Catalog
In this section, I describe the methodology used to identify YSOCs in Section 3.1, the integration of previously published YSOC catalogs in Section 3.2, the reddening and classification of YSOCs in Section 3.3, and the removal of potential contamination sources in Section 3.4. The resulting clean YSOC catalog, along with estimates of contamination and completeness fractions, are presented in Section 3.5.
### YSOC identification
The excess infrared emission from YSOs can be used to distinguish them from field stars. In this paper, I use the multicolor criteria scheme suggested by Koenig and Leisawitz (2014) to identify YSOCs with infrared excess based on the AllWISE catalog. The details of this multiphase source classification scheme can be found in Koenig and Leisawitz (2014). Here I give only a short description of the process.
Koenig and Leisawitz (2014) found that the spurious detection fraction of AllWISE sources can be up to 30% in the W1 band and \(>\)90% in the W3 and W4 bands. To eliminate the spurious detections, they developed some criteria based on the signal-to-noise and reduced chi-squared parameters given in the AllWISE catalog, which can suppress the contamination rate down to \(<\)7% in any WISE band.
After filtering out the spurious detections, I first remove the contamination of star-forming galaxies and Active Galactic Nuclei (AGNs) based on their photometry in the W1, W2, and W3 bands; the Class I and Class II candidates are then selected using the color criteria shown in Fig. 1a. Second, the YSOCs with \(H\) and \(K_{s}\) detections are identified from the remaining sources with the color criteria shown in Fig. 1b. Third, by introducing W4 photometry, transition disks (TDs) are identified from the remaining sources (see Fig. 1c) while protostars are retrieved from the AGN candidates (see Fig. 1d). Finally, all identified YSOCs mentioned above are reexamined to isolate possible asymptotic giant branch (AGB) stars with the color criteria shown in Fig. 1e and f.
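To make the flow of the scheme explicit, the sketch below shows one phase of such a color-based decision tree. The numeric thresholds are illustrative placeholders only, not the actual loci of Koenig and Leisawitz (2014), which should be taken from that paper.

```python
import numpy as np

def classify_phase1(w1, w2, w3):
    """Schematic one-phase color classification; thresholds are placeholders."""
    cls = np.full(w1.shape, "field", dtype=object)
    extragal = (w2 - w3 > 2.0) & (w1 > 13.0)                # placeholder AGN/galaxy cut
    class1 = ~extragal & (w2 - w3 > 2.0) & (w1 - w2 > 1.0)  # placeholder Class I locus
    class2 = ~extragal & ~class1 & (w1 - w2 > 0.25)         # placeholder Class II locus
    cls[extragal], cls[class1], cls[class2] = "extragalactic", "ClassI", "ClassII"
    return cls
```

The later phases (TDs, protostars, and AGB rejection) follow the same pattern with additional bands.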
I ultimately obtain 107 401 YSOCs over the whole sky, including 20 317 Class I candidates, 59 543 Class II candidates, 1650 TDs, 141 protostars, and 25 750 AGB candidates. I note that the number of YSOCs (107 401) obtained with the above criteria is smaller than that (133 980) presented by Marton et al. (2016), which is also based on AllWISE data but obtained with a machine-learning method (see Sect. 2.4.1). Marton et al. (2016) compared their results with those derived with Koenig and Leisawitz (2014)'s color criteria: their SVM method is more successful in excluding extragalactic contamination and recovers a higher fraction of known YSOs; Koenig and Leisawitz (2014)'s method is more sensitive to fainter sources and thus efficient in identifying Galactic contamination, but retrieves a lower fraction of known YSOs.
Therefore, combining these catalogs obtained with different methods could lead to a more complete YSOC sample.
### Assembling a combined YSOC catalog
I cross-match the YSOCs identified in Sect. 3.1 and the published YSOC catalogs by Marton et al. (2016, 2019) (see Sect. 2.4.1) based on the AllWISE source ID and finally obtain a combined YSOC catalog that includes 2 551 895 YSOCs.
I have also cross-matched the combined YSOC catalog with _Gaia_ DR3 in order to obtain parallax and proper motion information. The cross-matching procedure is described in the _Gaia_ DR3 documentation (Marrese et al., 2022), which includes the matching of _Gaia_ DR3 with various external survey catalogs, such as the AllWISE catalog, using an algorithm that takes into account source position errors, proper motions, and environment (Marrese et al., 2017, 2019). The matched tables that provide the _Gaia_ source IDs and the corresponding external catalog source IDs are included as part of the official _Gaia_ DR3. I first cross-matched my combined YSOC catalog with the BestNeighbour table by Marrese et al. (2022) based on the AllWISE source ID, and then retrieved the _Gaia_ DR3 entries using the obtained _Gaia_ DR3 source IDs. It should be noted that some YSOCs have multiple _Gaia_ DR3 counterparts; for these sources, I only retained the closest match. Additionally, I cross-matched the combined YSOC catalog with the distance catalog by Bailer-Jones et al. (2021) (as described in Sect. 2.4.2) based on the _Gaia_ DR3 source ID.
The final combined catalog includes over two million YSOCs, of which \(\sim\)68% have _Gaia_ DR3 counterparts and \(\sim\)36% have GSP-Phot extinction estimates. I also note that all _Gaia_ DR3 counterparts have Bailer-Jones et al. (2021)'s geometric distance estimates owing to the removal of spurious sources (see Sect. 2.2).
### YSOC de-reddening and classification
To analyze the intrinsic properties of YSOs, it is necessary to correct for extinction in their fluxes. However, estimating the extinction towards individual YSOs is a non-trivial task. Approximately 36% of my YSOCs already have GSP-Phot extinction estimates (\(A_{G}\)), which were obtained by modeling the _Gaia_ BP/RP spectrum, parallax, and apparent \(G\) magnitude using stellar evolutionary models (PARSEC, Bressan et al., 2012) and several synthetic spectra libraries (Andrae et al., 2022). It should be noted that the stellar evolutionary models do not include tracks for YSOs with surrounding disks (Bressan et al., 2012), which means that the \(A_{G}\) values for protostars with significant infrared excess may not be reliable. As a result, I need to classify my YSOCs into different categories based on their infrared excess and recalculate their foreground extinction separately.
The YSO classification is usually based on the spectral index that is defined as
\[\alpha=\frac{d\mathrm{log}(\lambda S_{\lambda})}{d\mathrm{log}(\lambda)}, \tag{4}\]
where \(S_{\lambda}\) is the flux density at wavelength \(\lambda\). By fitting the de-reddened SEDs from 2 to 22 \(\mu\)m, the YSOs can be classified as Class I, Class II, and Class III sources based on the scheme suggested by Lada (1987). Therefore, the reliable YSO classification should be performed after the YSO de-reddening, which results in the coupling of YSO de-reddening and classification.
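In practice, \(\alpha\) is the least-squares slope in log-log space. A minimal sketch, assuming the five fluxes are given as \(F_{\nu}\) in Jy (for which \(\lambda S_{\lambda}=\nu F_{\nu}\propto F_{\nu}/\lambda\)):

```python
import numpy as np

# Effective wavelengths of Ks, W1, W2, W3, W4 in micron (Ks value assumed)
WAVELENGTHS_UM = np.array([2.16, 3.4, 4.6, 12.0, 22.0])

def spectral_index(flux_jy: np.ndarray) -> float:
    """Least-squares slope of log(lambda*S_lambda) vs. log(lambda), Eq. (4)."""
    x = np.log10(WAVELENGTHS_UM)
    # lambda*S_lambda = nu*F_nu is proportional to F_nu/lambda; the constant
    # factors do not change the fitted slope
    y = np.log10(flux_jy / WAVELENGTHS_UM)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```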
I finally decide to use an iterative process to perform the de-reddening and classification of my YSOCs, which yields the foreground extinctions (\(A_{G,\mathrm{final}}\)) and de-reddened spectral indices (\(\alpha_{c}\)) of the YSOCs at once. The idea is to estimate the extinctions of YSOCs based on their classification and then to re-classify the YSOCs using \(\alpha_{c}\), iteratively, until the \(\alpha_{c}\) values converge. The detailed steps of the process are described in Appendix A. Figure 2 shows the distributions of the observed spectral indices (\(\alpha\)) and \(\alpha_{c}\) of the YSOCs. Based on \(\alpha_{c}\), among the over two million YSOCs I finally obtain \(\sim\)3% Class I candidates, \(\sim\)16% Class II candidates, and \(\sim\)81% Class III candidates using the classification scheme of Lada (1987).
### Estimating and excluding possible contamination
My combined YSOC sample is likely contaminated by different kinds of objects, including MS stars, AGBs, red giants, and extragalactic sources.
A large number of extragalactic sources have already been removed during the YSO identification processes described in Sect. 2.4.1 and 3.1. However, some galaxies could remain in the final YSOC catalog. I use the data from the Spitzer Wide-Area Infrared Extragalactic Survey (SWIRE, Lonsdale et al., 2003) to estimate the residual contamination fraction of galaxies and AGNs in my YSOC catalog. SWIRE performed IRAC and MIPS observations of six sky fields that cover about 65.6 deg\({}^{2}\) in total. I download the Spring '05 catalogs for the ELAIS N1, ELAIS N2, Lockman, and XMM_LSS regions, and the Fall '05 catalogs for the CDFS and ELAIS S1 regions (SWIRE Project, 2020). All detected SWIRE sources are cross-matched with my YSOCs using a matching tolerance of 3\({}^{\prime\prime}\). Finally, I obtain 37 SWIRE extragalactic sources in my YSOC catalog, which corresponds to a surface density of \(37/65.6\approx 0.56\) extragalactic sources per deg\({}^{2}\).
Figure 1: Multicolor criteria scheme used to identify YSOCs from the AllWISE catalog, illustrated for an 11\({}^{\circ}\times\)4\({}^{\circ}\) region towards Orion A. The gray dots show the distribution of field stars in the color space. The protostars and Class I candidates are marked with magenta and red circles, respectively. The green dots label Class II candidates while the blue circles mark transition disks. The AGB candidates isolated from the YSOCs are labelled with cyan circles.
Figure 2: Histograms of the observed spectral indices (\(\alpha\), left panel) and the de-reddened spectral indices (\(\alpha_{c}\), right panel) of YSOCs. The red dashed lines mark the criteria of the YSO classification scheme suggested by Lada (1987).
After scaling to the whole sky, there could be 23 261 galaxies in my YSOC catalog, assuming that extragalactic sources are distributed uniformly across the sky. Considering Galactic extinction, this number should be an upper limit. Therefore, the contamination fraction of extragalactic sources is negligible (\(<\)1%) in my YSOC catalog.
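The scaling behind these numbers is simple bookkeeping, reproduced here for transparency:

```python
ALL_SKY_DEG2 = 41_253                        # total sky area in square degrees
surface_density = 37 / 65.6                  # ~0.56 extragalactic sources per deg^2
whole_sky = surface_density * ALL_SKY_DEG2   # ~23 270, matching the quoted 23 261
fraction = whole_sky / 2_551_895             # relative to the combined YSOC catalog
print(f"{surface_density:.2f} deg^-2 -> {whole_sky:.0f} sources ({fraction:.2%})")
```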
In Sect. 3.1, I have already isolated a number of possible AGBs with the multicolor criteria. However, dusty AGBs can also produce infrared excess, which means that it is difficult to distinguish AGBs from YSOs based on color criteria alone. Marton et al. (2016, 2019) classified a large number of AllWISE sources as evolved stars, a grouped type that includes many subtypes such as AGBs, post-AGBs, RGBs, and evolved supergiants. However, their training sample was collected from SIMBAD and the literature, and includes many candidates rather than only bona-fide labeled objects. The noise in the training data would propagate into the predictions. Indeed, McBride et al. (2021) checked the parallaxes of Marton et al. (2019)'s _Gaia_-AllWISE YSOC sample in some star forming regions such as Ophiuchus. They found that although the _Gaia_-AllWISE YSOCs recovered some sources associated with the star-forming regions, the vast majority of YSOCs were just background sources. Therefore, the contamination fraction of MS stars, giants, and AGBs could be very high in my combined YSOC catalog.
To exclude the possible MS stars, giants, and AGBs, I introduce the PARSEC (Bressan et al., 2012) and COLIBRI (Marigo et al., 2013) stellar evolutionary tracks, and the YSO models from Robitaille et al. (2006, 2007). The PARSEC tracks were computed for different chemical compositions and evolutionary phases from the pre-main sequence (PMS) to the onset of the thermally pulsing AGB (TP-AGB), while the COLIBRI tracks extend the evolution to the end of the TP-AGB phase. I use the web interface, CMD9 (version 3.4), to extract the isochrones of MS stars, giants, and AGBs with an age range of 0.01 Myr\(-\)13.5 Gyr and solar metallicity. Robitaille et al. (2006) presented a grid of radiation transfer models of YSOs, including about 200 000 YSO models. The grid covers a wide range of stellar, disk, and envelope masses, and accretion rates. The SEDs of each YSO model were calculated assuming ten different inclination angles and then convolved with many commonly used filters to produce broadband fluxes within 50 apertures of \(\sim\)100\(-\)100 000 AU. I select the YSO models with inclination angles between 30\({}^{\circ}\) and 60\({}^{\circ}\), stellar masses between 0.08 and 10 \(M_{\odot}\), and disk-to-stellar mass ratios between 0 and 1, and then extract the _Gaia_ and WISE fluxes within the aperture of \(\sim\)45 000 AU.
Footnote 9: [http://stev.oapd.inaf.it/cmd](http://stev.oapd.inaf.it/cmd)
Figure 3 (left panels) shows the color-magnitude diagrams (CMDs) of the absolute magnitudes (\(M_{W1}\) and \(M_{G}\)) versus the intrinsic colors (\([W1-W2]_{0}\) and \([G-R_{p}]_{0}\)) for the selected evolutionary tracks and YSO models mentioned above. The choice of the color \([G-R_{p}]\) avoids the overestimation of the mean \(B_{p}\) magnitude for faint sources due to the application of the minimum flux threshold (Riello et al., 2021). I define two polygons in the color spaces of \(M_{W1}\) vs. \([W1-W2]_{0}\) and \(M_{G}\) vs. \([G-R_{p}]_{0}\), respectively. Table 1 gives the vertices of the polygons. The majority of giants and AGBs are situated outside the polygons, indicating that the polygon criteria can effectively eliminate the contamination from these sources. However, there are still some limitations to the use of the polygons. First, many bright PMS stars are excluded because they overlap with giants and AGBs in the color space. Second, although the defined polygons remove most of the MS stars, some MS stars remain inside the polygons (as shown by the green dots in Fig. 3). As a result, using these polygons produces a relatively clean but less complete YSO sample.
To apply the defined polygons to my YSOCs, I need to estimate their absolute magnitudes and intrinsic colors. First, I select about 2.2 million YSOCs that are detected in the WISE W1 and W2 bands with photometric uncertainties of \(\sigma(W1)<\) 0.2 and \(\sigma(W2)<\) 0.2 mag and that have Bailer-Jones et al. (2021)'s geometric distance estimates and positive extinction estimates (\(A_{G,\rm final}\); see details in Appendix A). Figure 3b shows the \(M_{W1}\) vs. \([W1-W2]_{0}\) CMD for the selected YSOCs, of which over 120 thousand YSOCs are located inside the polygon. Second, most of the YSOCs inside the \(M_{W1}\) vs. \([W1-W2]_{0}\) polygon have _Gaia_ photometry (\(G\) and \(R_{p}\)). Figure 3d shows the \(M_{G}\) vs. \([G-R_{p}]_{0}\) CMD for the YSOCs with _Gaia_ photometry, of which over 70 thousand YSOCs are located inside the polygon. Third, adding about 1000 YSOCs that are located inside the \(M_{W1}\) vs. \([W1-W2]_{0}\) polygon but lack _Gaia_ photometry, I obtain about 78 thousand YSOCs as a relatively clean YSOC sample. Considering that the fraction of contamination can be higher than 50% for Class III sources identified on the basis of infrared excess (Oliveira et al., 2009; Romero et al., 2012; Dunham et al., 2015; Manara et al., 2018), I finally select 24 883 Class I/II YSOCs (\(\alpha_{c}>-2\)) to construct the final clean YSOC sample. The polygon membership test itself is straightforward, as sketched below.
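A minimal sketch for the \(M_{W1}\) vs. \([W1-W2]_{0}\) polygon, with the vertices transcribed from Table 1:

```python
import numpy as np
from matplotlib.path import Path

# ([W1-W2]_0, M_W1) vertices transcribed from Table 1
W1_POLYGON = Path([(-0.3, 5.0), (0.1, 5.0), (0.2, -6.0), (0.6, -8.5),
                   (3.0, -4.0), (3.0, 10.0), (-0.3, 10.0)])

def inside_w1_polygon(color0, m_w1):
    """Point-in-polygon test; m_w1 = W1 - A_W1 - 5*log10(d_pc) + 5."""
    return W1_POLYGON.contains_points(np.column_stack([color0, m_w1]))
```

The \(M_{G}\) vs. \([G-R_{p}]_{0}\) polygon is handled in exactly the same way.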
### Output clean YSOC catalog
The final clean YSOC catalog has 24 883 YSOCs. Table 2 shows the entries of the catalog, including the _Gaia_, 2MASS, and WISE photometries, _Gaia_ parallaxes and proper motions, geometric distances from Bailer-Jones et al. (2021), and foreground extinctions and spectral indices estimated in Sect. 3.3. I also calculate the Galactocentric coordinates of YSOCs with the solar motion parameters suggested by Reid et al. (2019), assuming that the sun is located on the x axis of the right-handed system. Figure 4 shows the spatial distribution of this clean YSOC sample. For convenience, I simply use "YSOC catalog" to refer this "clean YSOC catalog" in the subsequent context.
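As a sketch of the coordinate transformation, astropy's Galactocentric frame can be configured with the Reid et al. (2019) parameters; the numbers below are my reading of that paper and should be verified there, and note that astropy places the Sun at negative \(x\), so matching the convention used here may require a sign flip.

```python
import astropy.units as u
import astropy.coordinates as coord

# Assumed Reid et al. (2019) values: R0 = 8.15 kpc, z_sun = 5.5 pc, and a solar
# velocity of (10.6, 236 + 10.7, 7.6) km/s in Galactocentric Cartesian axes
frame = coord.Galactocentric(
    galcen_distance=8.15 * u.kpc,
    z_sun=5.5 * u.pc,
    galcen_v_sun=coord.CartesianDifferential([10.6, 236.0 + 10.7, 7.6]
                                             * u.km / u.s),
)
star = coord.SkyCoord(ra=83.8 * u.deg, dec=-5.4 * u.deg, distance=430 * u.pc)
gc = star.transform_to(frame)  # gc.x, gc.y, gc.z are the Cartesian positions
```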
As mentioned in Sect. 3.4, my method finally results in a relatively clean but less complete YSO sample. Many luminous PMS stars have been removed from my YSOC catalog and there are still MS residuals in the catalog.
Figure 3: Criteria used to isolate YSOs from MS, giants, and AGBs. (a): CMD of the absolute magnitude in W1 band, \(M_{W1}\), versus the intrinsic color of \([W1-W2]_{0}\) for the evolutionary tracks of PMS, MS, giants, AGBs (Bressan et al., 2012; Marigo et al., 2013), and the YSO models (Robitaille et al., 2006); (b): \(M_{W1}\) versus \([W1-W2]_{0}\) CMD for the YSOCs with the photometric uncertainties of \(\sigma(W1)<0.2\) mag and \(\sigma(W2)<0.2\) mag; (c): CMD of the absolute magnitude in _Gaia_ G band, \(M_{G}\), versus the _Gaia_ intrinsic color of \([G-R_{p}]_{0}\) for the evolutionary tracks of PMS, MS, giants, AGBs, and the YSO models; (d): \(M_{G}\) versus \([G-R_{p}]_{0}\) CMD for the selected YSOCs (see text for details). The golden dots and green dots mark the PMS and MS tracks with the mass range of 0.08\(-\)10 \(M_{\odot}\), respectively. The blue dots label the giant tracks, including subgiant branch (SGB) and red giant branch (RGB). The early-AGBs, TP-AGBs, and post-AGBs are all marked with the magenta dots. The red curves show the PMS 0.01 Myr isochrones while the cyan curves are the joint isochrones for PMS and MS tracks with the age of 100 Myr. The YSO models from Robitaille et al. (2006) are shown with background gray density maps. The black solid polygons define the criteria and the YSOCs located outside the polygons are identified as contamination.
I compare my YSOC catalog with the YSOs in Orion A (Großschedl et al., 2019), which can be used to infer the completeness and contamination level of my YSOC catalog.
Großschedl et al. (2019) compiled a list of YSOs in the Orion A molecular cloud by combining the deep near-infrared (NIR) VISTA survey data (VISION, Meingast et al., 2016) and archival mid-infrared (MIR) to far-infrared (FIR) data such as _Spitzer_, _Herschel_, and WISE. They carefully revisited the known YSOs in the literature to evaluate false positives, and then added new YSOs obtained with NIR and MIR color criteria. Großschedl et al. (2019) finally obtained 2 980 YSOs in Orion A. I note that their sample is spatially biased due to the different coverage of the infrared surveys. To get a YSO sample with roughly uniform completeness, I extract 2849 YSOs inside the _Spitzer_/IRAC data coverage. As a comparison, there are 718 sources of my YSOC catalog located in the same _Spitzer_/IRAC coverage. I crossmatch these 718 YSOCs with Großschedl et al. (2019)'s 2849 YSOs and find that there are 482 sources in common.
Assuming that all YSOs presented by Großschedl et al. (2019) are bona-fide young stars, the contamination fraction of my YSOC catalog is about 30% in Orion A. This percentage should be considered only as a rough estimate of the contamination fraction in the entire YSOC catalog, as it does not account for variations in distance and star formation environments across different molecular clouds. Großschedl et al. (2019) also estimated the completeness of their YSO sample in the _Spitzer_/IRAC coverage to be about 49%. Therefore, the completeness of my YSOC catalog is \(\sim\)10% in Orion A, which implies that the completeness of the whole YSOC catalog could be \(<\)10% considering the distance of Orion A (\(\sim\)430 pc, Zucker et al., 2019).
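For transparency, my reading of the arithmetic behind these two fractions is as follows; rounding explains the small differences from the quoted values.

```python
n_mine, n_common, n_g19 = 718, 482, 2849    # counts in the Spitzer/IRAC area
contamination = 1 - n_common / n_mine       # ~0.33, i.e. "about 30%"
true_total = n_g19 / 0.49                   # Grossschedl sample is ~49% complete
completeness = n_common / true_total        # ~0.08, i.e. of order 10%
print(f"contamination ~ {contamination:.0%}, completeness ~ {completeness:.0%}")
```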
My YSOC catalog has potential for use in future follow-up observations and for statistical studies such as investigating star formation in the solar neighborhood. However, there are three caveats that need to be taken into account when using this catalog. First, more than 80% of the sources in my YSOC catalog come from Marton et al. (2019)'s _Gaia_-AllWISE YSOC sample, which was only identified in areas of the sky above a certain dust opacity threshold based on the Planck dust map. Therefore, both the _Gaia_-AllWISE YSOC sample and my YSOC catalog suffer from spatial bias. Second, my method results in the loss of true luminous YSOs, implying that my YSOC catalog is biased towards low-mass young stars. Third, it is important to note that the sources in my YSOC catalog are YSO candidates rather than confirmed young stars. This means that there is a potential for high contamination (e.g., \(\sim\)30% in Orion A). However, without additional spectroscopic information, it is difficult to isolate bona-fide YSOs. Consequently, any statistical analysis based on my YSOC catalog, such as cloud distance estimation (see Sect. 4), is inevitably affected by the potentially high level of contamination.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(M_{W1}\) & \([W1-W2]_{0}\) & \(M_{G}\) & \([G-R_{p}]_{0}\) \\ (mag) & (mag) & (mag) & (mag) \\ \hline
5.0 & \(-\)0.3 & 4.3 & 0.55 \\
5.0 & 0.1 & 4.0 & 0.67 \\
\(-\)6.0 & 0.2 & 2.0 & 0.72 \\
\(-\)8.5 & 0.6 & 0.0 & 0.83 \\
\(-\)4.0 & 3.0 & \(-\)0.9 & 0.95 \\
10.0 & 3.0 & \(-\)1.0 & 1.1 \\
10.0 & \(-\)0.3 & 8.0 & 2.0 \\
\(\cdots\) & \(\cdots\) & 15.0 & 2.0 \\
\(\cdots\) & \(\cdots\) & 16.0 & 1.3 \\
\(\cdots\) & \(\cdots\) & 13.0 & 0.55 \\
\hline \end{tabular}
\end{table}
Table 1: Vertices of the polygons in color space
\begin{table}
\begin{tabular}{l c l} \hline \hline Entry & Units & Description \\ \hline
AllWISE & \(\cdots\) & AllWISE catalog name \\
RAJ2000 & deg & Right ascension (J2000) \\
DEJ2000 & deg & Declination (J2000) \\
Glon & deg & Galactic longitude \\
Glat & deg & Galactic latitude \\
\(X\) & kpc & Galactocentric \(x\) position component \\
\(Y\) & kpc & Galactocentric \(y\) position component \\
\(Z\) & kpc & Galactocentric \(z\) position component \\
\(J\)mag & mag & 2MASS \(J\) band magnitude \\
e\_Jmag & mag & Uncertainty of \(J\) magnitude \\
\(H\)mag & mag & 2MASS \(H\) band magnitude \\
e\_Hmag & mag & Uncertainty of \(H\) magnitude \\
\(K\)mag & mag & 2MASS \(K_{s}\) band magnitude \\
e\_Kmag & mag & Uncertainty of \(K_{s}\) magnitude \\
\(W\)1mag & mag & WISE \(W\)1 band magnitude \\
e\_W1mag & mag & Uncertainty of \(W\)1 magnitude \\
\(W\)2mag & mag & WISE \(W\)2 band magnitude \\
e\_W2mag & mag & Uncertainty of \(W\)2 magnitude \\
\(W\)3mag & mag & WISE \(W\)3 band magnitude \\
e\_W3mag & mag & Uncertainty of \(W\)3 magnitude \\
\(W\)4mag & mag & WISE \(W\)4 band magnitude \\
e\_W4mag & mag & Uncertainty of \(W\)4 magnitude \\
Ref & \(\cdots\) & References \\
GaiaDR3\_source\_id & \(\cdots\) & Unique source identifier in _Gaia_ DR3 \\
fidelity\_v2 & \(\cdots\) & Astrometric fidelity \\
norm\_dg & \(\cdots\) & Diagnostic of contamination from neighbors \\
Plx & mas & Column parallax in _Gaia_ DR3 \\
e\_Plx & mas & Column parallax\_error in _Gaia_ DR3 \\
\(G\)mag & mag & Column phot\_g\_mean\_mag in _Gaia_ DR3 \\
e\_Gmag & mag & Uncertainty of \(G\)mag, see Sect. 2.2 \\
\(B\)pmag & mag & Column phot\_bp\_mean\_mag in _Gaia_ DR3 \\
e\_Bpmag & mag & Uncertainty of \(B\)pmag, see Sect. 2.2 \\
\(R\)pmag & mag & Column phot\_rp\_mean\_mag in _Gaia_ DR3 \\
e\_Rpmag & mag & Uncertainty of \(R\)pmag, see Sect. 2.2 \\
pmRA & mas yr\({}^{-1}\) & Column pmra in _Gaia_ DR3 \\
e\_pmRA & mas yr\({}^{-1}\) & Column pmra\_error in _Gaia_ DR3 \\
pmDE & mas yr\({}^{-1}\) & Column pmdec in _Gaia_ DR3 \\
e\_pmDE & mas yr\({}^{-1}\) & Column pmdec\_error in _Gaia_ DR3 \\
r\_med\_geo & pc & Median geometric distance \\
r\_lo\_geo & pc & 16th percentile of geometric distance \\
r\_hi\_geo & pc & 84th percentile of geometric distance \\
flag & \(\cdots\) & Flag of geometric distance \\
zpt & mas & Zero point of parallax bias \\
\(A_{G}\)\_final & mag & Foreground extinction \\
e\_\(A_{G}\)\_final & mag & Uncertainty of extinction \\
\hline \end{tabular}
\end{table}
Table 2: Entries of the clean YSOC catalog
## 4 Cloud Distance Estimation
In Section 3, I presented an all-sky YSOC catalog. In the following subsections, I utilize this catalog to estimate the distances of several tens of nearby molecular clouds. In Section 4.1, I outline the sample selection process for the nearby molecular clouds and the methodology used to determine their boundaries. Section 4.2 explains how YSOCs are isolated within the local clouds, and Section 4.3 describes the method used to estimate cloud distances. Finally, in Section 4.4, I present the catalog of cloud distances.
### Sample of local clouds
The sample of the local molecular clouds is constructed from the cloud catalogs released by Zucker et al. (2019, 2020) and Spilker et al. (2021). Zucker et al. (2019) obtained accurate distances to 27 nearby molecular clouds, inferred from the distances and extinctions of stars along sightlines towards the clouds based on stellar photometric catalogs and _Gaia_ DR2 parallax measurements. Zucker et al. (2020) applied the method suggested by Zucker et al. (2019) to the star forming regions described in the Star Formation Handbook (Reipurth, 2008, 2019) and obtained accurate distances to \(\sim\)60 local star-forming regions. Spilker et al. (2021) compiled a catalog of 72 nearby molecular clouds and analyzed their column density probability distributions. I combine these three catalogs to construct a sample of about a hundred local molecular clouds.
I use the Planck dust map (as described in Sect. 2.3) and the extinction map10 by Dobashi (2011) to define the boundaries of the local clouds in my sample. To illustrate, Fig. 5a displays the Planck dust map for the Orion A molecular cloud, which provides the total column density along lines of sight. However, because of contamination from the diffuse dust component, I cannot define cloud boundaries directly with the Planck dust map. In contrast, Dobashi (2011) produced an all-sky extinction map based on the 2MASS (Skrutskie et al., 2006) point source catalog that eliminates extinction from the diffuse dust component (Dobashi et al., 2013). Thus, their extinction map can trace the cloud column density. However, due to the limited sensitivity of the 2MASS survey and the technique used for extinction mapping (Dobashi et al., 2008; Kainulainen et al., 2011), it cannot effectively trace the dense structures in molecular clouds. Figure 5b shows the Dobashi (2011) extinction map for the Orion A cloud, which reveals that the dense integral-shaped filament (ISF) of Orion A, clearly visible in the Planck dust map, corresponds to an abnormally low extinction region in the extinction map. As a result, using the extinction map alone to define cloud boundaries could miss the dense regions that are likely closely associated with the YSOs (Gao & Solomon, 2004; Lada et al., 2010; Zhang et al., 2019).
Footnote 10: [http://darkclouds.u-gakugei.ac.jp/](http://darkclouds.u-gakugei.ac.jp/)
To delineate the local clouds reasonably, I devise a method that combines the Planck dust maps with Dobashi (2011)'s extinction maps. This involves subtracting the diffuse dust component from the Planck dust map using the extinction map as a reference. Fig. 5 demonstrates this process for the Orion A cloud. First, I obtain a difference map by subtracting the extinction map from the Planck dust map (Fig. 5c). The difference map highlights the dense regions and the diffuse dust component. Second, I estimate the two-dimensional (2D) background of the difference map using a mode estimator (implemented in Source Extractor, Bertin & Arnouts, 1996) and masking the 5% pixels with the highest \(A_{V}\) values (Fig. 5d). Third, I fit the 2D background map with a 2D polynomial function to obtain a background model (Fig. 5e). Finally, I subtract the background model from the Planck dust map to produce the result shown in Fig. 5f.
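A minimal sketch of the four steps, assuming planck_av and dobashi_av are aligned 2D arrays in \(A_{V}\) units; photutils' SExtractor-style mode estimator stands in for the Source Extractor run described above, and the polynomial degree is my choice for illustration.

```python
import numpy as np
from astropy.modeling import models, fitting
from photutils.background import Background2D, SExtractorBackground

diff = planck_av - dobashi_av                   # step 1: difference map
mask = diff > np.nanpercentile(diff, 95)        # mask the top-5% A_V pixels
bkg = Background2D(diff, 16, mask=mask,
                   bkg_estimator=SExtractorBackground())  # step 2: mode background

yy, xx = np.mgrid[:diff.shape[0], :diff.shape[1]]
poly = fitting.LinearLSQFitter()(models.Polynomial2D(degree=3),
                                 xx, yy, bkg.background)  # step 3: 2D polynomial
cloud_av = planck_av - poly(xx, yy)             # step 4: subtract the model
boundary = cloud_av > 2.0                       # A_V = 2 mag boundary (Sect. 4.1)
```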
\begin{table}
\begin{tabular}{l c l} \hline \hline Entry & Units & Description \\ \hline
alpha & \(\cdots\) & Observed spectral index \\
e\_alpha & \(\cdots\) & Uncertainty of observed spectral index \\
alpha\_c & \(\cdots\) & De-reddened spectral index \\
e\_alpha\_c & \(\cdots\) & Uncertainty of de-reddened spectral index \\
\hline \end{tabular}
Note. – The full catalog can be accessed online in the ChinaVO PaperData repository: doi: 10.12149/101210. (This table is available in its entirety in machine-readable form)
\end{table}
Table 2: _(continued)_
I generate background-subtracted Planck dust maps for all the local clouds in my sample. To define the boundaries of these clouds, I use the extinction contour level of \(A_{V}=2\) mag, which was recommended by both Heiderman et al. (2010) and Evans et al. (2014).
### YSOCs likely associated with the local clouds
In Section 3.5, I estimated that the fraction of contamination, such as MS stars, in my YSOC catalog could be as high as 30%. In this section, I aim to remove this contamination from the YSOC catalog for each local cloud using additional astrometric information from _Gaia_ DR3. The fundamental assumption is that YSOs in the same local cloud should have comparable parallaxes and proper motions. In the subsequent context, I use the astrometric notation \(\alpha\) and \(\delta\) for right ascension and declination, \(\varpi\) for parallax in units of mas, and \(\mu_{\alpha}\cos\delta\) and \(\mu_{\delta}\) for proper motions in units of mas yr\({}^{-1}\). I limit my selection to the YSOCs with _Gaia_ DR3 parallax and proper motion measurements. I also apply a quality cut of \(\varpi>0\), as suggested by Prisinzano et al. (2022). Considering that I only focus on the YSOs in the nearby molecular clouds, this choice does not introduce any bias. Finally, I obtain 23 379 YSOCs from my YSOC catalog as the input sample.
I use the coordinates, parallaxes, and proper motions of YSOCs to filter out contamination from the input sample in each local cloud. To illustrate, I show the detailed filtering process for the Orion A molecular cloud in Fig. 6. First, I select YSOCs that are inside the boundary of each cloud, resulting in approximately 4800 YSOCs in about 70 local clouds, noting that around 30 nearby clouds in my cloud sample (as described in Sect. 4.1) have no YSOCs. These clouds are mostly quiescent and without active star formation, such as Pegasus, Aquila South, and Draco. Some nearby clouds, such as Chamaeleon, Lupus, and Cepheus, have different levels of star formation activity in different parts. For these clouds, I only extract sub-clouds with YSOCs, such as Cham I, Lupus I, and Cepheus-L1251.
Second, I use DBSCAN (Ester et al., 1996) as implemented in scikit-learn (Pedregosa et al., 2011) to remove outliers in the 3D parameter space (\(\varpi\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) of YSOCs in each local cloud. DBSCAN identifies core samples with more than \(minPts\) points within a radius \(\epsilon\) of a given point \(\vec{p}\), and constructs clusters with sets of core samples. Points that are not included in any clusters are treated as outliers. The values of \(\epsilon\) and \(minPts\) are critical in identifying clusters.
The three parameters (\(\varpi\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) of YSOCs in each local cloud are first re-scaled using the scikit-learn tool RobustScaler, which is based on statistics that are robust to outliers. The value of \(minPts\) defines the minimum size of a cluster. I adopt \(minPts=6\), twice the dimensionality of the parameter space (Sander et al., 1998). The value of \(\epsilon\) is determined using the \(k\)-distance method (Rahmah & Sitanggang, 2016).
Figure 4: The spatial distribution of the clean YSOC sample in the Mollweide projection. The Class I and Class II candidates are labeled with red and blue filled circles, respectively.
The \(k\)th nearest neighbor distance (\(k\)-distance) can be calculated for each point in (\(\varpi\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) space. If these \(k\)-distances are plotted in ascending order, a sharp change of slope, i.e., the knee point, can be found along the \(k\)-distance curve. I use the Python code kneed11 (Satopaa et al., 2011) to detect the knee point automatically, and this knee point is adopted as the optimal value of \(\epsilon\). Sander et al. (1998) found that the \(k\) value does not significantly affect the DBSCAN results and is thus not very crucial for the algorithm. I tried several values of \(k\in[1,6]\) and found that \(k=1\) removes the outliers most efficiently. Therefore, I finally use \(k=1\) to calculate the optimal value of \(\epsilon\).
Footnote 11: [https://github.com/arrkevi/kneed](https://github.com/arrkevi/kneed)
The DBSCAN algorithm itself does not consider the uncertainties of the parameters. However, the uncertainties of \(\varpi\), \(\mu_{\alpha}\cos\delta\), and \(\mu_{\delta}\) in my YSOC catalog could be relatively large given that I do not apply any quality cuts on them. To include the effect of the uncertainties, I use a Monte Carlo method to remove the outliers with DBSCAN. Specifically, I generate a random set of (\(\varpi\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) in each local cloud by assuming a gaussian error distribution. The outliers are marked after running DBSCAN. I repeat the above process 1000 times, which yields an outlier probability (\(P_{\rm outlier}\)) for each YSOC. I calculate the median of the \(P_{\rm outlier}\) values of the YSOCs (\(P_{\rm outlier,med}\)) in each local cloud and require \(P_{\rm outlier}<P_{\rm outlier,med}\) to filter out the contamination of YSOCs. Figure 6b, c, and d show the parallax and proper motion distributions of YSOCs in the region of Orion A with \(A_{V}>2\) mag, with the identified outliers marked.
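A minimal sketch of this Monte Carlo loop for one cloud is given below; vals and errs are assumed (n, 3) arrays of (\(\varpi\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) and their 1\(\sigma\) errors.

```python
import numpy as np
from kneed import KneeLocator
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import RobustScaler

def outlier_probability(vals, errs, n_iter=1000, k=1, min_pts=6, seed=0):
    """Fraction of Monte Carlo draws in which each source is a DBSCAN outlier."""
    rng = np.random.default_rng(seed)
    n_out = np.zeros(len(vals))
    for _ in range(n_iter):
        sample = RobustScaler().fit_transform(rng.normal(vals, errs))
        dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(sample).kneighbors(sample)
        kdist = np.sort(dists[:, k])                  # ascending k-distance curve
        knee = KneeLocator(np.arange(kdist.size), kdist,
                           curve="convex", direction="increasing").knee_y
        labels = DBSCAN(eps=knee or kdist[-1],
                        min_samples=min_pts).fit_predict(sample)
        n_out += labels == -1                         # label -1 marks outliers
    return n_out / n_iter

# p = outlier_probability(vals, errs); keep = p < np.median(p)
```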
Finally I obtain 3 144 YSOCs that are likely to be associated with 63 nearby molecular clouds. Table 3 lists their information, including AllWISE names, parent cloud names, distances obtained with _Kalkayotl_ (see Sect. 4.3), and the heliocentric positions. Further YSOC information such as the photometry and _Gaia_ DR3 parameters can be obtained by cross-matching with Table 2 using the AllWISE name.
### Estimation of distances to local clouds
Figure 5: Method used to obtain the background-subtracted Planck dust map for the Orion A molecular cloud. (a): The Planck dust map; (b): the extinction map from Dobashi (2011); (c): difference map between the Planck dust map and extinction map; (d): 2D background of the difference map calculated with a mode estimator using the Source Extractor (Bertin & Arnouts, 1996) algorithm after masking 5% pixels with highest \(A_{V}\) values; (e): the background model obtained by fitting the 2D background with a 2D polynomial function; (f): the modeled-background-subtracted Planck dust map. The green contours in panel c mark the 95% percentile of \(A_{V}\) values in the difference map.
The YSOCs listed in Table 3 are the youngest optically visible sources in the nearby molecular clouds (see Sect. 4.2). Therefore, they are good proxies of the cloud distances.
I use _Kalkayotl12_ to estimate the cloud distances. _Kalkayotl_ is a free and open code developed by Olivares et al. (2020). It is specifically designed to estimate cluster parameters such as size and distance, as well as the distances to individual members based on their _Gaia_ parallax measurements. _Kalkayotl_ employs a Bayesian hierarchical model to obtain the posterior distributions of distances for both the cluster and its members. The code utilizes distance prior families that are optimized for clusters and accounts for the spatial correlations of parallaxes. Olivares et al. (2020) have demonstrated that _Kalkayotl_ can provide high credibility distance estimates for stellar clusters located within 5 kpc and with a size of \(<\)1 kpc.
Footnote 12: [https://github.com/olivares-j/kalkayotl](https://github.com/olivares-j/kalkayotl)
_Kalkayotl_ needs an initial guess of the cluster distance to construct the prior distribution. Therefore, I first derive a median distance for each local cloud by modeling the parallax distribution of YSOCs in each cloud. Specifically, I make the assumption that the parallax distribution of YSOCs in a given local cloud, as shown in Figure 6d, is drawn from a Weibull probability density function (PDF):
\[f(x)=\frac{\beta}{\eta}\left(\frac{x-\gamma}{\eta}\right)^{\beta-1}e^{-\left( \frac{x-\gamma}{\eta}\right)^{\beta}}, \tag{5}\]
where \(\eta\), \(\beta\), and \(\gamma\) are the scale, shape, and location parameters, respectively. The Weibull distribution is highly adaptable: by adjusting the shape parameter it can replicate a diverse range of distributions, and it has therefore been widely used in data analysis. I fit the YSOC parallax distribution with the Weibull PDF using the maximum likelihood estimation technique as implemented in the _reliability13_ Python library (Reid, 2020). Specifically, the parallax distribution of YSOCs is fitted twice in each cloud: first with a three-parameter Weibull PDF, and second with a two-parameter Weibull PDF
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ AllWISE} & cloud & \(X_{\rm H}\)\({}^{\rm a}\) & \(Y_{\rm H}\)\({}^{\rm a}\) & \(Z_{\rm H}\)\({}^{\rm a}\) & \(D_{\rm Kal}\)\({}^{\rm b}\) \\ & & (pc) & (pc) & (pc) & (pc) \\ \hline J000017.17+673045.8 & Cep\_OB4 & -505 & 948 & 96 & \(1078^{+65}_{-61}\) \\ J000040.95+664407.6 & Cep\_OB4 & -535 & 1009 & 87 & \(1145^{+310}_{-247}\) \\ J000100.32+671415.5 & Cep\_OB4 & -703 & 1318 & 127 & \(1499^{+283}_{-225}\) \\ J000112.52+673732.4 & Cep\_OB4 & -640 & 1196 & 124 & \(1362^{+319}_{-268}\) \\ J000129.32+665426.9 & Cep\_OB4 & -693 & 1301 & 116 & \(1479^{+252}_{-205}\) \\ J000145.28+664748.7 & Cep\_OB4 & -537 & 1007 & 88 & \(1144^{+332}_{-280}\) \\ J000148.35+672728.1 & Cep\_OB4 & -492 & 918 & 92 & \(1046^{+60}_{-56}\) \\ J000152.18+672845.7 & Cep\_OB4 & -474 & 884 & 89 & \(1006^{+270}_{-224}\) \\ J000200.37+672356.9 & Cep\_OB4 & -372 & 693 & 69 & \(790^{+62}_{-55}\) \\ J000207.32+672259.6 & Cep\_OB4 & -632 & 1178 & 116 & \(1342^{+326}_{-280}\) \\ \hline \end{tabular} Note. – The machine-readable table can be accessed online in the ChinaVO PaperData repository: doi: 10.12149/101210. A portion is shown here for guidance regarding its form and content.
\end{table}
Table 3: YSOCs likely associated with the local clouds
that forces \(\gamma=0\). I always adopt the fitting result with the lower Bayesian information criterion (BIC). I also require \(\beta>1\) in the whole fitting process to avoid an infinite probability density. The red curve in Fig. 6e shows the Weibull PDF defined by the fitted parameters. I then use a Monte Carlo method to estimate the cloud median parallax and its uncertainty. In each local cloud, I generate 10 000 random Weibull PDFs assuming a gaussian error for \(\eta\), \(\beta\), and \(\gamma\). The mode value is then calculated for each random Weibull PDF. The cloud median parallax (\(\varpi_{rel}\)) and its uncertainty are adopted as the median and standard deviation of the 10 000 mode values. I adopt the inverse of the cloud median parallax as the initial guess of the cloud distance, i.e., \(D_{rel}\).
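A minimal sketch of the parallax fit, with scipy standing in for the reliability library and the 10 000-draw Monte Carlo over the fitted parameters omitted for brevity (the \(\beta>1\) requirement is also not enforced here):

```python
import numpy as np
from scipy import stats

def weibull_mode(beta, eta, gamma=0.0):
    """Mode of the Weibull PDF in Eq. (5), valid for beta > 1."""
    return gamma + eta * ((beta - 1.0) / beta) ** (1.0 / beta)

def cloud_parallax_mode(plx):
    """Fit 3P and 2P (gamma = 0) Weibull models; keep the lower-BIC one."""
    fits = []
    for fixed in ({}, {"floc": 0.0}):                 # 3P first, then 2P
        beta, gamma, eta = stats.weibull_min.fit(plx, **fixed)
        loglik = stats.weibull_min.logpdf(plx, beta, gamma, eta).sum()
        n_par = 3 - len(fixed)                        # free parameters
        fits.append((n_par * np.log(len(plx)) - 2 * loglik, beta, gamma, eta))
    _, beta, gamma, eta = min(fits)                   # lower BIC wins
    return weibull_mode(beta, eta, gamma)             # mode parallax in mas

# D_init_pc = 1000.0 / cloud_parallax_mode(plx)  # initial guess for Kalkayotl
```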
Using the implemented Gaussian prior model with a mean distance of \(D_{rel}\), _Kalkayotl_ (version 1.1) returns samples of the posterior distribution of distance for each local cloud. The final cloud distance (\(D_{\rm{Kal}}\)) and its uncertainty are calculated from the median and central 68% quantiles of the samples. _Kalkayotl_ also provides a distance estimate for each YSOC in the local clouds. As an example, Figure 6f shows the _Kalkayotl_ distance distribution of YSOCs in Orion A.
### Catalog of distances to local clouds
The obtained distances to 63 local clouds are given in Table 4. The solid vertical line in Fig. 6f marks the distance of Orion A while the green shaded area labels the uncertainty of the distance. Figure 7 shows the 3D distribution of these 63 local clouds. I also show the inner surface of the Local Bubble shell modeled by Pelgrims et al. (2020) and the Radcliffe Wave identified by Alves et al. (2020) in Fig. 7. Molecular clouds appear to be concentrated along the Radcliffe Wave, which indicates that my distance catalog can also trace
Figure 6: Method used to isolate the YSOCs in the Orion A molecular cloud and estimate the distance of Orion A. (a): The background-subtracted Planck dust map, overlaid with the YSOCs that are likely to be associated with the Orion A cloud. The green contour labels the level of \(A_{V}=2\) mag. The YSOCs are marked with filled circles, color-coded by their _Gaia_ parallax. The orange dashed circles mark the sightline beams towards which Zucker et al. (2020) obtained Bayesian distances while the orange solid circles label the stellar clusters with distances by Kuhn et al. (2019); (b): proper motion distribution of YSOCs in the region of Orion A with \(A_{V}>2\) mag. The side panels show the KDEs of proper motions in R.A. (top) and decl. (right). The orange dots mark the outliers identified with the DBSCAN technique (Ester et al., 1996); (c and d): proper motion versus parallax of YSOCs in Orion A. The side panels show the KDEs of proper motions in R.A. or decl. (top) and parallax (right). The orange dots mark the outliers identified with the DBSCAN algorithm; (e): parallax distribution of YSOCs in Orion A after outlier removal. The red curve shows the PDF obtained by fitting the YSOC parallaxes with the Weibull model. The vertical solid line marks the mode parallax of Orion A while the green shaded area labels the uncertainty of the parallax. The corresponding distance of the mode parallax is marked on the panel. (f): distance distribution of YSOCs obtained with _Kalkayotl_ in Orion A after outlier removal. The red curve shows the PDF obtained by fitting the YSOC parallaxes with the _Kalkayotl_ program. The vertical solid line marks the distance to Orion A while the green shaded area labels the uncertainty of the distance. The value of the distance is also marked on the panel. The complete figure set (63 images) is available in the online journal.
the Radcliffe Wave. Additionally, it is apparent that local clouds within a distance of about 200 pc from the Sun lie on the surface of the Local Bubble, as suggested by Zucker et al. (2022). In their analysis of the 3D spatial distribution and kinematics of dense gas and young stars in the solar neighborhood, Zucker et al. (2022) propose that the expansion of the Local Bubble has caused the surrounding interstellar medium to be swept up into an extended shell, which fragmented and collapsed to form the local clouds. My distance estimates for these local clouds further support this view.
## 5 Results and Discussion
In Section 4.4, I present a catalog of distances to 63 local clouds, constructed using the parallaxes of YSOCs from _Gaia_ DR3. Additionally, I release an all-sky YSOC sample, consisting of 24 883 YSOCs, in Section 3.5. In this section, I compare my cloud distances with distance estimates from previous literature in Section 5.1, and also discuss the advantages and limitations of my distance estimates in Section 5.2.
### Comparisons with previous cloud distance estimates
First, I compare my catalog of cloud distances (\(D_{\rm{Kal}}\)) presented in Table 4 with the distance estimates provided by Zucker et al. (2020). To ensure a fair comparison, I calculate a median distance (\(D_{\rm{Z20}}\)) for each local cloud by averaging the distances of sightlines located inside: 1) the cloud boundary defined with \(A_{V}=2\) mag in Sect. 4.1 (\(q_{\rm{Z20}}=1\)); or 2) the whole cloud area if there are no sightlines inside the cloud boundary (\(q_{\rm{Z20}}=2\)). The resulting values of \(D_{\rm{Z20}}\) and \(q_{\rm{Z20}}\) for 51 local clouds are listed in Table 5. As an example, the dashed orange circles in Fig. 6a indicate the locations of sightline beams from Zucker et al. (2020) in the Orion A cloud. The relations of \(D_{\rm{Z20}}\) and \(D_{\rm{Z20}}-D_{\rm{Kal}}\) versus \(D_{\rm{Kal}}\) are presented in Fig. 8a and b, respectively. Overall, there is relatively good agreement between \(D_{\rm{Z20}}\) and \(D_{\rm{Kal}}\). The median and standard deviation of \(\frac{D_{\rm{Z20}}-D_{\rm{Kal}}}{D_{\rm{Z20}}}\) are 2% and 11%, respectively. By comparison, the typical errors of \(D_{\rm{Kal}}\) and \(D_{\rm{Z20}}\) are \(\sim\)3% and \(\sim\)7%, respectively. Therefore, I find no systematic difference between \(D_{\rm{Kal}}\) and \(D_{\rm{Z20}}\), and \(D_{\rm{Kal}}\) is consistent with \(D_{\rm{Z20}}\) within a typical scatter of \(\sim\)11%.
Second, I compare \(D_{\rm{Kal}}\) with the distances of young clusters presented by Kuhn et al. (2019). Using _Gaia_ DR2 data, Kuhn et al. (2019) obtained the median distances of 28 young clusters with ages of \(\sim\)1\(-\)5 Myr by averaging the parallax distances of their members. As mentioned above, I also calculate a median distance (\(D_{\rm{K19}}\)) for each local cloud by averaging the distances of young clusters located inside the cloud boundary (\(q_{\rm{K19}}=1\)) or the entire cloud area (\(q_{\rm{K19}}=2\)). Table 5 lists the \(D_{\rm{K19}}\) values for 13 local clouds, and the solid orange circles in Fig. 6a indicate the young clusters used to determine the median distance of Orion A. Figure 8c and d illustrate the relationship between \(D_{\rm{K19}}\) and \(D_{\rm{Kal}}\) for 13 nearby clouds, indicating good agreement between them. The median and standard deviation of \(\frac{D_{\rm{K19}}-D_{\rm{Kal}}}{D_{\rm{K19}}}\) are approximately 3% and 5%, respectively. Considering the typical error of \(D_{\rm{K19}}\) (4%), \(D_{\rm{Kal}}\) is consistent with \(D_{\rm{K19}}\).
Third, I compare \(D_{\rm{Kal}}\) with the cloud distances provided by the GOBELINS project (Loinard et al., 2011; Loinard, 2013). Table 5 shows the GOBELINS distances (\(D_{\rm{GO}}\)) to seven nearby clouds, and Figure 8e and f plot the relation between \(D_{\rm{GO}}\) and \(D_{\rm{Kal}}\) for these clouds. Given the typical uncertainty (\(\sim\)2%) of \(D_{\rm{GO}}\), the comparison reveals good agreement between the two distance estimates, with a median difference of \(<\)1% and a standard deviation of \(<\)3% in \(\frac{D_{\rm{GO}}-D_{\rm{Kal}}}{D_{\rm{GO}}}\).
Finally, I compare \(D_{\rm{Kal}}\) with the _Gaia_ distances (\(D_{Gaia}\)) of 26 nearby clouds that were obtained in previous studies. Table 5 displays \(D_{Gaia}\) and their corresponding references. These distance estimates were obtained using different _Gaia_ data releases and techniques. Some clouds, such as Taurus and Lupus, have several distance estimates from different studies (Luhman, 2018, 2023; Galli et al., 2019, 2020; Luhman, 2020). For these clouds, I generally use the most recent distance estimates with their uncertainties. More studies can be found in Zucker et al. (2022) (and the references therein). Figure 8g and h illustrate the relation between \(D_{Gaia}\) and \(D_{\rm{Kal}}\) for these 26 nearby clouds, indicating good agreement between them. The median and standard deviation of \(\frac{D_{Gaia}-D_{\rm{Kal}}}{D_{\rm{Gaia}}}\) are \(\sim\)1% and \(\sim\)5%, respectively. Considering the typical uncertainty (\(\sim\)2%) of \(D_{Gaia}\), \(D_{\rm{Kal}}\) is in close agreement with \(D_{Gaia}\).
I also note that eight of the local clouds do not have distance estimates available from Zucker et al. (2020), Kuhn et al. (2019), the GOBELINS project, or previous case studies based on _Gaia_ data. Therefore, I have gathered other available distance estimates (\(D_{\rm{lit}}\)) for them from the literature. Table 5 provides a list of \(D_{\rm{lit}}\) values and corresponding references. For most of these eight clouds, \(D_{\rm{Kal}}\) agrees reasonably well with \(D_{\rm{lit}}\), with the exception of Vela A. The \(D_{\rm{Kal}}\) value for Vela A is \(1674.9^{+123.5}_{-113.4}\) pc, which is roughly twice the value of \(D_{\rm{lit}}=700\pm 200\) pc reported by Liseau et al. (1992).
Liseau et al. (1992) estimated the distance of the Vela Molecular Ridge (VMR), which can be divided into four regions, i.e., Vela A\(-\)D (Murphy and May, 1991), based on 1) the photometric distances of some infrared sources; 2) star-count distances; and 3) the distances of reflection nebulae and OB associations from the literature. They finally suggested that Vela A, C, and D have a similar distance of 0.7\(\pm\)0.2 kpc while Vela B lies at a larger distance of \(\sim\)2 kpc. Recently, Hottier et al. (2021) obtained the 3D extinction density of the Vela complex using the Field Extinction-Distance Relation Deconvolver (FEDReD, Babusiaux et al., 2020; Hottier et al., 2020) algorithm based on the 2MASS and _Gaia_ DR2 data. They found that Vela C extends from 0.47 kpc to 1.66 kpc with a barycenter at a distance of 0.90\(\pm\)0.09
kpc, which is consistent with my result of \(D_{\rm{Kal}}=989.6^{+30.5}_{-31.9}\) pc and with \(D_{\rm{Z20}}=\)0.947\(\pm\)0.048 kpc from Zucker et al. (2020). Hottier et al. (2021) also found that Vela D extends from 0.75 kpc to 3.30 kpc with a barycenter distance of 1.90\(\pm\)0.38 kpc, which implies that Liseau et al. (1992) and my work only obtained the distance of the front part of Vela D. Notably, Hottier et al. (2021) suggested that Vela A and B are actually one unique structure that extends from 1.13 kpc to 3.37 kpc with a barycenter distance of 2.17\(\pm\)0.01 kpc, which contradicts Liseau et al. (1992)'s results. However, my results of \(D_{\rm{Kal,VelaA}}=1674.9^{+123.5}_{-113.4}\) pc and \(D_{\rm{Kal,VelaB}}=1794.4^{+137.6}_{-121.5}\) pc are consistent with Hottier et al. (2021)'s result if we only see the front part of Vela A and B.
In general, my results are in good agreement with previous distance estimates from the literature. This provides strong support for the accuracy of my cloud distances, particularly for those clouds that lack _Gaia_-based distance measurements, such as the Cartwheel and Vela AB clouds.
Figure 7: 3D distribution of 63 local clouds. The Sun is at (\(X_{\rm{H}}\), \(Y_{\rm{H}}\), \(Z_{\rm{H}}\)) = (0, 0, 0) position that is marked with the intersection of two green lines. The panel a shows the top-down view while panels c and d show the side views. The panel b is the zoom-in view of the central region within 500 pc of the Sun. The YSOCs in the local clouds are marked with gray dots and the local clouds are labeled with black circles. The light blue lines show the inner surface of the Local Bubble shell (Pelgrims et al., 2020; Zucker et al., 2022) and the red curve represents the model of Radcliffe Wave (Alves et al., 2020; Zucker et al., 2022).
### Advantages and caveats of my distance estimates
### Advantages and caveats of my distance estimates continued: In Section 5.1, I compared my distance estimates for local clouds with those from the literature, including the distance catalog presented by Zucker et al. (2020). While many studies have investigated the distances to nearby molecular clouds (Schlafly et al., 2014; Dzib et al., 2018; Yan et al., 2019, 2020, 2021, and references therein), the YSOC-based distances presented here come with their own advantages and caveats.
Figure 8: Comparisons of my distances (\(D_{\rm{Kal}}\)) derived from YSOCs with the average dust Bayesian distances (\(D_{\rm{Z20}}\)) from Zucker et al. (2020) (panels a and b), the average distances (\(D_{\rm{K19}}\)) of young clusters from Kuhn et al. (2019) (panels c and d), the mean distances presented by the GOBELINS project based on VLBI observations (panels e and f), and the average distances presented by previous studies based on _Gaia_ astrometry (panels g and h) towards the nearby molecular clouds, respectively. The black lines in panels a, c, e, and g mark the one-to-one relation while those in panels b, d, f, and h mark zero difference.
## Appendix A Estimating extinctions and de-reddened spectral indices of YSOCs
I begin by calculating the observed spectral indices (\(\alpha\)) of the YSOCs and classify them into Class I, II, and III candidates using the scheme proposed by Lada (1987). Next, I estimate the foreground extinctions of the Class I, II, and III YSOCs using different methods. Then, I re-classify the YSOCs based on their de-reddened spectral indices and re-calculate the extinctions. I repeat this process several times until the de-reddened spectral indices of the YSOCs no longer vary. The following are the detailed steps of this iterative process:
1. I calculate \(\alpha\) of YSOCs by fitting their observed SEDs from 2 to 22 \(\mu\)m, i.e., fluxes in the \(K_{s}\) and \(W1-W4\) bands.
2. The YSOCs with \(\alpha<-2\) are isolated as Class III candidates. I find that about 40% of the Class III candidates have GSP-Phot extinction estimates (\(A_{G}\)). Because Class III sources usually have little infrared excess, their SEDs can be approximated by the evolutionary tracks of MS stars. Thus the \(A_{G}\) values of the Class III candidates should be reliable.
3. For the Class III candidates without \(A_{G}\) values, I use the PNICER method to estimate their foreground extinctions (\(A_{G,\rm PNICER}\)). PNICER (Meingast et al., 2017) is an unsupervised machine-learning technique that determines the probability distribution of extinction by fitting the features of reference sources in color space. I use the Class III candidates with \(A_{G}\) estimates as reference sources.
Figure 9: The distributions of cloud column densities, i.e., \(A_{V}\)(Cloud), in locations of the sightline beams from Zucker et al. (2020)’s catalog (top) and the YSOCs in the local clouds (bottom). The solid lines are the KDEs calculated from the histograms. I also mark the median value of the distribution in each panel.
I also apply a cut to the de-reddened color of the reference sources, i.e., \([J-H]_{0}<1\) mag. Figure 10(a) shows the \(H-K_{s}\) versus \(J-H\) color-color diagram (CCD) for the Class III candidates while Fig. 10(b) shows the de-reddened \([H-K_{s}]_{0}\) versus \([J-H]_{0}\) CCD for the Class III candidates with \(A_{G}\) estimates. The extinctions towards the Class III candidates without \(A_{G}\) are obtained by comparing their observed colors to the de-reddened colors of the Class III candidates with \(A_{G}\), using the PNICER python package and the extinction law suggested by Wang and Chen (2019). I note that \(\sim\)1% of the Class III candidates have neither \(A_{G}\) nor positive \(A_{G,\text{PNICER}}\).
4. For the Class I/II candidates (\(\alpha\geqslant-2\)) with \(J,H,K_{s}\) detections, the extinctions are obtained by employing the \(JHK_{s}\) CCD. A detailed description of the scheme can be found in Fang et al. (2013) and Zhang et al. (2015); here I only summarize several aspects. The intrinsic color and foreground extinction of a YSO determine its final location in the \(JHK_{s}\) CCD. Figure 10(c) shows the \(H-K_{s}\) versus \(J-H\) CCD of the Class I/II candidates. I split the CCD into three subregions based on the different origins of the YSO intrinsic colors. The intrinsic color of a YSO in region 1 is simply assumed to be \([J-H]_{0}=0.6\); the intrinsic color of a YSO in region 2 is obtained by intersecting the reddening vector with the locus of main-sequence stars (Bessell and Brett, 1988); in region 3, the intrinsic color of a YSO is derived from where the reddening vector and the locus of classical T Tauri stars (CTTS, Meyer et al., 1997) intersect. The extinction of a YSO (\(A_{G,\text{fit}}\)) is calculated by comparing its observed color and intrinsic color with the extinction law suggested by Wang and Chen (2019). To estimate the uncertainties of the extinctions, I generate a random location in the CCD for each Class I/II candidate assuming a normal distribution of its photometric error; the extinction value is then obtained from this location as described above. I repeat this process ten times for each Class I/II candidate and adopt the mean and standard deviation of the ten extinction values as the final value and error of \(A_{G,\text{fit}}\). Figure 10(d) shows the de-reddened \([H-K_{s}]_{0}\) versus \([J-H]_{0}\) CCD for the Class I/II candidates with \(A_{G,\text{fit}}>0\) mag.
5. I combine the remaining sources, including the Class I/II candidates outside subregions 1, 2, and 3 or without detections in any of the \(JHK_{s}\) bands, and the Class III candidates without \(A_{G}\) or \(A_{G,\text{PNICER}}\). The extinctions of the remaining sources without \(A_{G}\), \(A_{G,\text{PNICER}}\), or \(A_{G,\text{fit}}\) are estimated with the average extinction value of the surrounding YSOCs that have extinction measurements from steps 2, 3, and 4. More specifically, the extinctions (\(A_{G,\text{avg}}\)) of the remaining sources with distance estimates (r_med_geo, see Sect. 2.4.2) are obtained by averaging the neighbours within a radius of 30 pc in 3D space (GLON-GLAT-distance) after excluding outliers with the sigma-clipping technique, and otherwise within a radius of 0.3\({}^{\circ}\) in 2D space (GLON-GLAT). Here the values of 30 pc and 0.3\({}^{\circ}\) are the typical radius of the molecular clouds in our Galaxy suggested by Miville-Deschenes et al. (2017).
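A minimal sketch of the neighbour-averaging in step 5 could look as follows (illustrative only: the function and variable names are assumptions, and the 3D positions are assumed to be pre-converted from GLON-GLAT-distance to Cartesian coordinates in pc):

```python
import numpy as np
from astropy.stats import sigma_clip
from scipy.spatial import cKDTree

def avg_extinction_3d(xyz_known, ag_known, xyz_query, radius_pc=30.0):
    """Average A_G of neighbours within radius_pc in 3D space, after
    excluding outliers with sigma-clipping; NaN where no neighbour exists."""
    tree = cKDTree(xyz_known)
    ag_avg = np.full(len(xyz_query), np.nan)
    for i, idx in enumerate(tree.query_ball_point(xyz_query, r=radius_pc)):
        if idx:  # at least one neighbour with an extinction measurement
            clipped = sigma_clip(ag_known[idx], sigma=3, maxiters=5)
            mean = np.ma.mean(clipped)
            if mean is not np.ma.masked:
                ag_avg[i] = float(mean)
    return ag_avg
```

The 2D (GLON-GLAT) fallback with a 0.3\({}^{\circ}\) radius would follow the same pattern with a 2D tree on the sky coordinates.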
6. I de-redden the SEDs of the YSOCs based on their extinction values (\(A_{G,\text{final}}=A_{G}\), \(A_{G,\text{PNICER}}\), \(A_{G,\text{fit}}\), or \(A_{G,\text{avg}}\)) using the extinction law suggested by Wang and Chen (2019). The de-reddened spectral index \(\alpha_{i}\) can then be obtained, where the subscript \(i\) denotes the \(i\)th iterative loop; in particular, \(\alpha_{0}\) denotes the observed spectral index. I then calculate the difference and corresponding uncertainty of \(\alpha_{i}\) and \(A_{G,\text{final},i}\), i.e., \(\Delta\alpha_{i}=|\alpha_{i}-\alpha_{i-1}|\), \(\sigma(\Delta\alpha_{i})=\sqrt{\sigma(\alpha_{i})^{2}+\sigma(\alpha_{i-1})^{2}}\), \(\Delta A_{G,\text{final},i}=|A_{G,\text{final},i}-A_{G,\text{final},i-1}|\), and \(\sigma(\Delta A_{G,\text{final},i})=\sqrt{\sigma(A_{G,\text{final},i})^{2}+\sigma(A_{G,\text{final},i-1})^{2}}\) for each YSOC, where \(\alpha_{i}\) and \(A_{G,\text{final},i}\) represent the \(i\)th iterative spectral index and extinction of the YSOC while \(\sigma(\alpha_{i})\) and \(\sigma(A_{G,\text{final},i})\) are their corresponding uncertainties.
7. I repeat steps 2\(-\)6 until both \(\alpha\) and \(A_{G,\text{final}}\) converge for each YSOC. I consider \(\alpha_{i}\) and \(A_{G,\text{final},i}\) converged when \(\Delta\alpha_{i}<\sigma(\Delta\alpha_{i})\) and \(\Delta A_{G,\text{final},i}<\sigma(\Delta A_{G,\text{final},i})\) in the \(i-2\), \(i-1\), and \(i\) iterations. Figure 11 shows \(\Delta\alpha\) and \(\Delta A_{G,\text{final}}\) as a function of iteration for 26 randomly selected YSOCs. I find that \(\alpha\) and \(A_{G,\text{final}}\) converge for \(>\)90% of the YSOCs after 5\(-\)6 iterations. Finally, I adopt the 7th iterative spectral indices and extinction values of the YSOCs as the de-reddened spectral indices (\(\alpha_{c}\)) and extinction estimates.
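A compact driver for this iteration might look as follows (a sketch: `fit_alpha`, `estimate_extinction`, and `deredden_sed` are hypothetical callables standing in for steps 1\(-\)6 above, injected as arguments so the loop itself is self-contained):

```python
import numpy as np

def deredden_iteratively(sed, fit_alpha, estimate_extinction, deredden_sed,
                         max_iter=7, n_stable=3):
    """Alternate extinction estimation and de-reddening until alpha and
    A_G change by less than their combined 1-sigma uncertainties for
    n_stable consecutive iterations (the convergence criterion of step 7)."""
    alpha, sig_alpha = fit_alpha(sed)                  # alpha_0: observed index
    ag, sig_ag, stable = 0.0, 0.0, 0
    for _ in range(max_iter):
        new_ag, new_sig_ag = estimate_extinction(sed, alpha)             # steps 2-5
        new_alpha, new_sig_alpha = fit_alpha(deredden_sed(sed, new_ag))  # step 6
        converged = (abs(new_alpha - alpha) < np.hypot(new_sig_alpha, sig_alpha)
                     and abs(new_ag - ag) < np.hypot(new_sig_ag, sig_ag))
        stable = stable + 1 if converged else 0
        alpha, sig_alpha, ag, sig_ag = new_alpha, new_sig_alpha, new_ag, new_sig_ag
        if stable >= n_stable:  # stable in iterations i-2, i-1, and i
            break
    return alpha, ag
```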
Figure 10: The de-reddening process of the YSOCs. The left two panels show the \(H-K_{s}\) versus \(J-H\) CCDs for the Class III candidates (a) and the Class I/II candidates (c), respectively. The right two panels show the de-reddened \([H-K_{s}]_{0}\) versus \([J-H]_{0}\) CCDs for (b): the Class III references that serve as input to the PNICER method; and (d): the Class I/II candidates that are located in sub-regions 1, 2, and 3 marked in panel c. The solid curves show the intrinsic colors for the main-sequence stars (orange) and giants (magenta), respectively (Bessell & Brett, 1988). The blue solid lines label the locus of T Tauri stars suggested by Meyer et al. (1997) and the cyan lines are the extrapolation of the T Tauri locus. The red dashed lines show the reddening direction and separate the color plane into three sub-regions (1, 2, and 3). I use different methods to estimate the foreground extinctions of the YSOCs in the different sub-regions (see text for details). The black arrows mark the reddening vectors (Wang & Chen, 2019). |
2310.02462 | Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback | Conversational assistive robots can aid people, especially those with cognitive impairments, to accomplish various tasks such as cooking meals, performing exercises, or operating machines. However, to interact with people effectively, robots must recognize human plans and goals from noisy observations of human actions, even when the user acts sub-optimally. Previous works on Plan and Goal Recognition (PGR) as planning have used hierarchical task networks (HTN) to model the actor/human. However, these techniques are insufficient as they do not have user engagement via natural modes of interaction such as language. Moreover, they have no mechanisms to let users, especially those with cognitive impairments, know of a deviation from their original plan or about any sub-optimal actions taken towards their goal. We propose a novel framework for plan and goal recognition in partially observable domains -- Dialogue for Goal Recognition (D4GR) enabling a robot to rectify its belief in human progress by asking clarification questions about noisy sensor data and sub-optimal human actions. We evaluate the performance of D4GR over two simulated domains -- kitchen and blocks domain. With language feedback and the world state information in a hierarchical task model, we show that D4GR framework for the highest sensor noise performs 1% better than HTN in goal accuracy in both domains. For plan accuracy, D4GR outperforms by 4% in the kitchen domain and 2% in the blocks domain in comparison to HTN. The ALWAYS-ASK oracle outperforms our policy by 3% in goal recognition and 7% in plan recognition. D4GR does so by asking 68% fewer questions than an oracle baseline. We also demonstrate a real-world robot scenario in the kitchen domain, validating the improved plan and goal recognition of D4GR in a realistic setting. | Ifrah Idrees, Tian Yun, Naveen Sharma, Yunxin Deng, Nakul Gopalan, George Konidaris, Stefanie Tellex | 2023-10-03T22:13:29Z | http://arxiv.org/abs/2310.02462v1 |

# Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback
###### Abstract
Conversational assistive robots can aid people, especially those with cognitive impairments, to accomplish various tasks such as cooking meals, performing exercises, or operating machines. However, to interact with people effectively, robots must recognize human plans and goals from noisy observations of human actions, even when the user acts sub-optimally. Previous works on Plan and Goal Recognition (PGR) as planning have used hierarchical task networks (HTN) to model the actor/human. However, these techniques are insufficient as they do not have user engagement via natural modes of interaction such as language. Moreover, they have no mechanisms to let users, especially those with cognitive impairments, know of a deviation from their original plan or about any sub-optimal actions taken towards their goal. We propose a novel framework for plan and goal recognition in partially observable domains--Dialogue for Goal Recognition (D4GR) enabling a robot to rectify its belief in human progress by asking clarification questions about noisy sensor data and sub-optimal human actions. We evaluate the performance of D4GR over two simulated domains--kitchen and blocks domain. With language feedback and the world state information in a hierarchical task model, we show that D4GR framework for the highest sensor noise performs 1% better than HTN in goal accuracy in both domains. For plan accuracy, D4GR outperforms by 4% in the kitchen domain and 2% in the blocks domain in comparison to HTN. The ALWAYS-ASK oracle outperforms our policy by 3% in goal recognition and 7% in plan recognition. D4GR does so by asking 68% fewer questions than an oracle baseline. We also demonstrate a real-world robot scenario in the kitchen domain, validating the improved plan and goal recognition of D4GR in a realistic setting.
## I Introduction
People with cognitive impairments, such as dementia, often struggle with focusing on everyday tasks and have limited attention spans. Efforts to assist people in tracking task progress have involved various approaches, including modeling tasks as a hierarchical task network (HTN), using a Bayesian Hidden Markov Model, or employing a Partially Observable Markov Decision Process (POMDP) [6, 19]. However, previous approaches focus on observing users rather than engaging them in an interaction. Our work aims to develop a model for a robot capable of assisting people in completing tasks through language-based interactions, even when the users perform sub-optimal actions or switch between multiple goals. Our robot tracks task progress using observations and natural-language question-asking. Such a robot can also benefit an operator building a machine, a child with autism doing their homework, or a child learning to do chores.
Inferring the goals and intents of the human requires plan and goal recognition (PGR) using noisy evidence from action execution, which can be done efficiently using planning techniques [11]. One key challenge in human intent recognition as a PGR problem is that the robot has partial observability of human intentions. This is compounded by noisy sensors that also create partial observability of the environment. Modeling human progress during hierarchical tasks has been done using Hierarchical Task Networks (HTNs) [19, 7]. However, these recognition techniques again do not engage with the users and assume that the user acts optimally. Moreover, incorporating clarification questions and language utterances spoken by humans in PGR is challenging because of the huge space of language observations. The existing solution to this problem is heuristics [16, 5, 10], which are prone to failure as the tasks and environment sensors become complex and noisy.
The main contribution of our paper is a novel formulation that combines the expressive, hierarchical task representation of HTNs for the human mental state with the sequential decision-making capabilities of a Partially Observable Markov Decision Process in our Dialogue for Goal Recognition (D4GR) framework. Our method internally keeps track of the environment, the user state, and the dialogue history to perform PGR and to guide the user towards successful task completion.
POMDPs can model the uncertainty the robot faces as it performs intent recognitions and enables the robot to ask information-seeking questions. However, POMDP planners traditionally do not decide the relevance of the state using the task network at each sequential time step. Further, the HTNs have no notion of rewards to generate a sequence of actions to maximize the agent's utility. To solve these challenges, we assume that the user is a planner with goals and subgoals that are represented hierarchically. Moreover, the robot is a POMDP planner performing long-horizon dialogue policy planning. This enables the robot to reason about asking meaningful questions in ambiguous settings, such as the user switching goals during multiple concurrent tasks and also performing sub-optimal actions. Using this information, the robot can better recognize human intents based on the human actions estimated from noisy sensors and their language feedback. This model also explicitly allows for sub-optimal plans by a human user, which D4GR can detect.
We evaluate the usefulness of D4GR by measuring the improved accuracy in PGR and by comparing the planning time and number of questions asked against two state-of-the-art baselines in the simulated domain developed by Wang and Hoey [19]. Our system is able to infer human intents more accurately than these baselines using information gathered from language
without asking unnecessary questions. We run 880 trials for varying sensor noise levels in which the simulated human tries to complete a combination of tasks in two domains - the kitchen and blocks domains. In the kitchen domain, there are three tasks - washing hands, making a cup of tea, and making a cup of coffee - while in the blocks domain, tasks involve stacking letter blocks to make words of 4-7 letters: \(rote\), \(tone\), \(tune\), \(hawk\), and \(capstone\). With language feedback and the world state information in a hierarchical task model, we show that our D4GR framework outperforms HTN by 4% on plan accuracy in the kitchen domain and 2% in the blocks domain. In goal accuracy, for the highest sensor noise, our D4GR performs 1% better than HTN in both the kitchen and blocks domains. We also deployed our algorithm on the social robot Kuri as a demonstration of a socially intelligent robot helping confused users complete tasks. In this demonstration, Kuri performs improved PGR by asking clarification questions and reducing uncertainty in a challenging scenario where the user switches between multiple concurrent goals (e.g., washing hands and making coffee) and acts sub-optimally in the kitchen domain. The demo can be found here: [https://youtu.be/Om91zBiDDEY](https://youtu.be/Om91zBiDDEY). An example sequence of the user and robot interaction can be seen in Fig-1.
## II Related Work
**Plan and Goal Recognition as Planning:** The first work on PGR as planning was introduced by Ramirez and Geffner [15]. This research leverages classical planning systems to solve PGR problems. Wang and Hoey [19] proposed an algorithm for PGR based on hierarchical task networks [4] that handles noisy sensors and sub-optimality in human actions. However, their approach detects mistakes heuristically using a manually defined threshold while performing a one-step look-ahead without long-horizon planning. Additionally, it lacks dialog to improve the belief in the actor's progress in the task. Another relevant work that does not engage with the user is by Zhi-Xuan et al. [22], which performs online Bayesian goal inference by modeling the agent as a boundedly rational planning agent but is not designed and evaluated for multiple concurrent hierarchical goals. Mirsky et al. [12] present favorable results for the hypothesis that feedback from the acting agent can improve plan (goal and step) recognition, but their paper performs goal recognition as reinforcement learning and uses a fixed language policy without exploring the observer's strategy for asking clarification questions. Holler et al. [7] employ HTN planning for PGR but struggle with sub-optimal actions and noisy sensors, focusing on handling missing sensor observations instead.
**Context-aware Social Robotics:** In the past, research in social robotics has focused on developing non-verbal social behaviors for robots to assist the elderly [5, 10] during task completion. These works place less emphasis on incorporating language feedback/observations for user intent inference. Further, the robot dialog policy (if involved) does not account for the environmental context, dialog context, and user modeling. Research in situated human-robot dialog by Bohus and Horvitz [1], Thomason et al. [18], and Idrees et al. [8] grounds speech responses in the environment but asks clarification questions heuristically, using rule-based/greedy approaches and without a decision-theoretic framework. Such heuristics are prone to failure as the tasks get complex and the environment sensors become complex and noisy.
**POMDP-based Collaborative Dialog:** Partially observable Markov decision processes (POMDPs) provide a rich framework for planning under uncertainty. They excel in optimizing agents' actions over long horizons in complex environments despite incomplete state information from noisy sensors [6].
Fig. 1: Kuri robot performing human intent (plan and goal) recognition. The human, while making coffee, starts a new goal of washing hands. However, the human forgets to use soap after turning on the faucet and instead turns off the faucet. The robot (with access to the Hierarchical Task Network representation of goals) observes that the current world observations do not progress the previous goal and also cannot lead to a new goal completion and hence uses D4GR to ask a clarification question. Based on language feedback, the robot reduces its confidence in the goal wash hands and suggests the next action as using soap to complete one of the most plausible goals of washing hands.
Young et al. [21] and Doshi and Roy [3] built POMDP-based dialog systems. However, these only use language observations, not world observations, for belief updates to choose actions with the highest reward accumulation. Whitney et al. [20] fuse language and world observations for object-fetching tasks but do not model the user's mental state during multi-step task completion. The closest work to ours is by Hoey et al. [6], featuring an engaging assistant that incorporates only world observations, not language observations, for multi-step tasks. Their system can infer human actions and psychological states through hand and towel tracking but cannot handle multiple concurrent tasks or backtracking, and is limited to hand washing. Research has recently focused on using reinforcement learning in collaborative dialog for interactive task learning (Chai et al. [2]). However, these works require an existing dataset for offline learning, while our planning approach does not necessitate data collection or learning.
## III Technical Approach
Given the sensor measurements \(O_{w}\) of the world state \(W\), an assistive robot needs to infer the probabilities of the hidden user intent--the person's current goal \(G\) and the current primitive human action \(\alpha\). Partial observability of the latent user intent and noise in the sensor observations impact the accurate inference of \(G\), \(W\), and \(\alpha\) at every step of the human action sequence. We infer the probability distribution of the hidden states \(G\), \(W\), and \(\alpha\) using HTN planning to generate feasible plans for the human, given predefined tasks, similar to the work of Wang and Hoey [19]. With the ability to ask clarification questions, the robot actively improves the inference of the latent states. The user's response, in the form of language observations \(o_{l}\), gives the agent additional information about their task. With our D4GR framework, the robot decides when to ask clarification questions and which user action to inquire about, based on information gathered from both the sensors and language. Our framework avoids asking unnecessary questions and balances between information-gathering actions, like asking questions, and goal-inducing actions, such as providing the correct next step.
### _POMDP Definition_
We model our PGR problem as a POMDP [9] planning problem, generating an approximately optimal action policy for the robot. Formally, a POMDP is defined as a tuple (\(S,A,T,R,\Omega,O,\gamma,b_{0}\)) where \(S\) is the state space, \(A\) is the action space, \(T\) is the transition probability, \(R\) is the reward function, \(\Omega\) is a set of observations, \(O\) defines an observation probability, \(\gamma\) is the discount factor, and \(b_{0}\) is the initial belief. Since \(s_{t}\) is not known exactly, the POMDP model updates, at each timestep, the probability distribution over all possible states (the belief state \(b_{t}\)). The POMDP agent uses a planner to generate an optimal policy for the robot's action, which in this case is the communication with the human user.
### _D4GR Formulation_
We define a novel model, **D**ialogue for **G**oal **R**ecognition (D4GR), a Partially Observable Markov Decision Process that combines the goal and plan recognition components described by Wang and Hoey [19] with the POMDP formalism to allow robots to take action in the environment through dialogue for improved PGR. For efficient human intent recognition and estimation of an optimal action policy, D4GR must handle the large space of world and language observations. Our formulation leverages the hierarchical task structure of the HTN and assumes independence between the state variables, namely the set of goals \(G\), the human user action \(\alpha\), and the world state \(W\), for efficient belief updates. Our D4GR has the following components: (\(S,A,T,R,\Omega,O,\gamma,b_{0}\)).
**State (S):** The state, \(s_{t}\in S\), consists of a tuple of the user's mental state, \(M_{t}\), and the world state, \(W_{t}\), along with information needed to track the dialog state. We represent the user mental state using HTNs. We assume access to the HTN's fixed knowledge graph \(TaskNet\) for the tasks, where the root node(s) represent the high-level tasks/goals \(G\) that the human can do. The internal nodes are sub-tasks that can be decomposed into leaf nodes depicting primitive human actions. \(\alpha\) denotes the current primitive human action. We model the user's mental state \(M\) represented by \(G\) and \(\alpha\). The partial \(TaskNet\) for our kitchen domain is shown in Fig-2.
The world state \(W_{t}\) combines the states of world smart sensors \(ss_{t}\) and the attributes of objects involved in the task \(att_{t}\), such as dryness of the hand, state of the faucet, etc. The
Fig. 3: Influence diagram for D4GR.
Fig. 2: Hierarchical Task Network for two of our tasks: washing hands and making tea
dialog state variable \(q_{t}\) stores the latest primitive human action referenced by the robot in its clarification question. Thus, the state \(s_{t}\) can be factored into the following components: \(s_{t}=(M_{t},q_{t},W_{t})\), where \(W_{t}=(ss_{t},att_{t})\) and \(M_{t}=(G_{t},\alpha_{t})\). Here \(G_{t},\alpha_{t},att_{t},ss_{t}\) are the hidden variables, while \(q_{t}\) is the known variable, hence making this a Mixed Observability Markov Decision Process (MOMDP) [13]. The influence diagram can be seen in Fig-3.
**Actions (\(A\))** include the actions of the agent. The robot in this research can perform the following predefined language-based actions: 1) \(Wait\): does nothing but advances the time step. 2) \(Ask\{argmax(\alpha)\}\): the robot asks a clarification question about the primitive action \(\alpha\) with the highest belief. The question template used is: "I believe that you just did action \((\alpha_{i})\), is this correct?". 3) \(inform\_next\_instruction\): informs the user of the next action that they should perform at the current timestep based on the current belief. This action is chosen by a fixed policy and is executed when the user provides a negative language response to the robot's clarification question.
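A minimal, illustrative encoding of this factored state and action space is given below; all class and field names are assumptions of this sketch, not identifiers from our implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, Optional

@dataclass
class MentalState:
    goal: str     # G_t: a root node (high-level task) of the TaskNet
    action: str   # alpha_t: a primitive (leaf-node) human action

@dataclass
class State:
    mental: MentalState            # hidden: M_t = (G_t, alpha_t)
    world: Dict[str, bool]         # hidden: W_t = (ss_t, att_t), binary flags
    last_question: Optional[str]   # q_t: known dialog variable (None = NULL)

class RobotAction(Enum):
    WAIT = auto()                      # advance the time step only
    ASK = auto()                       # query the highest-belief alpha
    INFORM_NEXT_INSTRUCTION = auto()   # suggest the next step (fixed policy)
```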
**Observations (\(\Omega\))** encompass both the user's language (\(o_{l}\)) and observations about the world state (\(o_{w}\)). \(o_{w}\) includes discrete observations of the world smart sensor's state \(ss_{t}\) and the attributes of the task-related objects \(att_{t}\) such as _hand_dry_ == _true_, _faucet_on_ == _false_, etc. These observations are binary for the states in \(W_{t}\), so the faucet can only be on or off. The language observations are natural language responses.
**Observational Model (\(O\)):** The robot needs a model of \(p(o|s)=p(o_{l},o_{w}|s)\) to update its belief. Most of the complexity of our model is captured in this observation model and in the belief update defined in Sec. III-C.
**Transition Model (\(T\)) : \(T(s,a,s^{\prime})\equiv p(s_{t+1}|s_{t},a_{t})\).** Our stochastic transition function is factorized as shown in Eq-1, following a similar approach to Wang and Hoey [19]. We factor our mental model \(M_{t}\) into \(G_{t}\) and \(\alpha_{t}\). Additionally, we assume that the last question asked, \(q_{t}\), is independent of \(G\), \(\alpha\), and \(W\).
\[p(s_{t+1}|s_{t},a_{t})=p(G_{t+1}|W_{t+1},G_{t})\times p(W_{t+1}|W_{t},\alpha_{t+1})\times p(\alpha_{t+1}|W_{t},G_{t})\times p(q_{t+1}|q_{t},a_{t}). \tag{1}\]
In Eq-1, we assume that \(q_{t}\) changes deterministically from null to \(max(\alpha_{t})\) after the robot asks a clarifying question. Further, \(G\) is deterministically carried forward to the next time step.
\[p(q_{t+1}|q_{t},a_{t})=\begin{cases}1\text{ for }q_{t+1}=\max(\alpha_{t}),\ 0\text{ otherwise},&\text{if }a_{t}\neq NULL,\\ 1\text{ for }q_{t+1}=q_{t},\ 0\text{ otherwise},&\text{if }a_{t}=NULL.\end{cases}\]
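A hedged sketch of how this factored transition could be evaluated, reusing the `State` fields from the earlier sketch; the three stochastic factors are passed in as hypothetical callables (lookup tables or derived models), not our actual implementation:

```python
def transition_prob(s, s_next, asked_about, p_goal, p_world, p_action):
    """Evaluates Eq-1. `asked_about` is the primitive action named in the
    robot's question at time t (None when a_t = NULL)."""
    # Deterministic q-update: after a question, q jumps to the queried
    # action; otherwise it carries over unchanged.
    expected_q = asked_about if asked_about is not None else s.last_question
    if s_next.last_question != expected_q:
        return 0.0
    return (p_goal(s_next.mental.goal, s_next.world, s.mental.goal)
            * p_world(s_next.world, s.world, s_next.mental.action)
            * p_action(s_next.mental.action, s.world, s.mental.goal))
```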
**Reward (\(R\)):** We provide a positive reward (\(+5\)) for asking a clarifying question when the user is performing a wrong or suboptimal primitive step, and a negative reward (\(-5\)) for asking a clarifying question when the human user is performing the correct primitive step or when the agent asks about the wrong primitive action. Thus, doing nothing accumulates zero reward until the right question is asked, while not asking a question or asking a wrong one results in a penalty.
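In code, the reward could be sketched as below; `is_correct_step` is a hypothetical predicate checking the user's current step against the TaskNet, and rewards for the non-question actions are assumed to be zero since the text specifies only the question rewards:

```python
def reward(s, robot_action, queried, is_correct_step):
    """R(s, a): +5 for a question that targets the user's actual action
    while the user errs; -5 for an unnecessary or mistargeted question."""
    if robot_action is RobotAction.ASK:
        user_errs = not is_correct_step(s)           # user off the plan?
        on_target = (queried == s.mental.action)     # asked the right alpha?
        return 5.0 if (user_errs and on_target) else -5.0
    return 0.0                                       # WAIT / INFORM: neutral
```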
### _Belief Update for Goal Recognition and Planning_
Our belief update performs human intent recognition by maintaining a belief over the user's hidden mental state \(M_{t}=(G_{t},\alpha_{t})\) and the world state \(W_{t}\). The actions executed by the user produce an observation of the world state \(o_{w}\) indicating the change in the world state \(W_{t}\). The user can also provide speech/language feedback \(o_{l}\) in response to the clarification question asked. We classify the intent of each sentence into positive or negative feedback using a bag-of-words approach. Negative responses \(r_{n}\) include { 'no', 'nope', 'other', 'not' } while positive responses \(r_{p}\) include the words { 'yes', 'yeah', 'sure', 'yup' }. Further, our world sensor noise model generates the correct sensor state with probability \(sr\) and the incorrect sensor state with probability \(1-sr\). We adopt the sensor noise model described by Wang and Hoey [19].
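A minimal classifier consistent with these word sets might look as follows (a sketch; giving negative cues precedence over positive ones, e.g. for "not sure", is an assumption of this illustration, not stated in the text):

```python
NEGATIVE = {"no", "nope", "other", "not"}
POSITIVE = {"yes", "yeah", "sure", "yup"}

def classify_feedback(utterance):
    """Bag-of-words intent classification of a reply into positive or
    negative feedback; returns None when no cue word is present."""
    words = set(utterance.lower().replace("?", " ").replace(".", " ").split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return None
```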
The observation model can be further expanded and approximated as follows:
\[p(o_{t}|s_{t};a_{t-1})\propto p(s_{t}|o_{t},a_{t-1})*p(o_{t}|a_{t-1}). \tag{2}\]
Overall, the probability of \(s_{t}\) given \(o_{t}\) and \(a_{t-1}\) can be factored into the world observation model and the language observation model in Eq-3. We assume that the world observation \(o_{w,t}\) solely provides information about \(W\) and \(\alpha\). Meanwhile, the language observation is relevant to the human action \(\alpha\) and the last question asked \(q_{t}\), subsequently affecting the goal.
\[p(s_{t}|o_{t},a_{t-1})\propto\underbrace{p(G_{t}|W_{t})*p(W_{t}|o_{w,t})*p(\alpha_{t}|o_{w,t})*p(o_{w,t})}_{\text{world observational model}}*\underbrace{p(\alpha_{t},q_{t}|o_{l,t})*p(o_{l,t})*p(q_{t}|a_{t-1})}_{\text{language observational model}}. \tag{3}\]
For both the world and primitive-action belief updates in Eq-3, the components \(p(W_{t}|o_{w,t})\) and \(p(\alpha_{t}|o_{w,t})\) are derived from Wang and Hoey [19]. The Bayesian update is as follows:
\[p(\alpha_{t}|o_{w,t})\propto\sum_{w_{t-1}\in W_{t-1}}\sum_{w_{t}\in W_{t}}p( \alpha_{t},o_{w,t},w_{t-1},w_{t}), \tag{4}\]
\[p(W_{t}|o_{w,t})\propto\sum_{w_{t-1}\in W_{t-1}}\sum_{\alpha_{i,t}\in\alpha_{t}}p(\alpha_{i,t},o_{w,t},w_{t-1},w_{t}). \tag{5}\]
We adopt the algorithm proposed for goal recognition, \(p(G_{t}|W_{t})\), in Wang and Hoey [19]. The algorithm maintains a goal belief distribution by generating a probabilistic explanation set, \(ExplaSet\). Each \(expla\in ExplaSet\) uses HTN planning to explain the observations so far. The probability of each goal \(g_{i}\) in \(G\) given the world state is the sum of the probabilities of the \(expla\in ExplaSet\) whose \(PredictedGoal\) == \(g_{i}\). Our algorithm reweights primitive-action probabilities based on the language observational model described below, influencing the world belief update according to Eq-5 and, consequently, the goal recognition update.
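The goal marginalization over the explanation set reduces to a few lines; representing each explanation as a `(predicted_goal, prob)` pair is an assumption of this sketch:

```python
from collections import defaultdict

def goal_belief(expla_set):
    """p(G_t | W_t): total probability mass of the explanations that
    predict each goal, normalized over all goals."""
    belief = defaultdict(float)
    for predicted_goal, prob in expla_set:
        belief[predicted_goal] += prob
    total = sum(belief.values())
    return {g: p / total for g, p in belief.items()} if total else {}
```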
The derivation of the language observational model is:
\[p(\alpha_{t},q_{t}|o_{l,t})*p(o_{l,t})\propto p(o_{l,t}|\alpha_{t},q_{t}). \tag{6}\]
We adopt a bag-of-words approach as our POMDP's observational model instead of utilizing a large language model (LLM) like GPT3. LLMs are not inherently grounded. Our model explicitly establishes a connection between sensor information and semantics through a transition model in the POMDP. Although LLMs could be incorporated for intent classification using the right prompt, we did not pursue this direction as it falls outside the focus of our paper.
To estimate the effect of the language observation \(o_{l}\) on \(\alpha\) and \(q\), we calculate \(p(o_{l,t}|\alpha_{t},q_{t})\). For this, we consider three possibilities for the state: if the highest-belief primitive action \(\alpha_{max,t}\) is the same as the question asked, then the user is likely to respond with positive/confirmation feedback. The opposite is true if \(\alpha_{max,t}\neq q_{t}\). If \(q_{t}=Null\), then no question has been asked, so both types of responses are equally likely. The mathematical representation of \(p(o_{l,t}|\alpha_{t},q_{t})\) is a conditional probability table encoding these three cases.
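One way to encode that table (a sketch: the confidence level `c` is an illustrative placeholder, since the table's actual entries are not reproduced here):

```python
def p_language_obs(o_l, alpha_max, q, c=0.9):
    """p(o_l | alpha, q) for o_l in {"positive", "negative"}."""
    if q is None:                   # no question asked: uninformative reply
        return 0.5
    agree = (alpha_max == q)        # question matched the max-belief action
    if o_l == "positive":
        return c if agree else 1.0 - c
    return 1.0 - c if agree else c  # o_l == "negative"
```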
At each time step, as the human performs an action, we solve the MOMDP using the POUCT solver [17] to approximate the optimal policy for the robot's communication with the human. The observational model is then employed to update the robot's belief over the user's mental state \(M=\{G,\alpha\}\).
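Putting the pieces together, the per-timestep control flow can be sketched as below; `env`, `agent`, and `planner` are hypothetical wrappers (with `planner.plan` standing in for the POUCT solver of [17]), not our actual interfaces:

```python
def run_episode(env, agent, planner, horizon=50):
    """Observe the human through noisy sensors (plus any language reply
    to the previous question), update the belief via Eqs. 2-6, then plan
    the next robot action."""
    robot_action = None
    for _ in range(horizon):
        o_w = env.step_human()                        # noisy world reading
        o_l = env.get_reply(robot_action)             # None unless asked
        agent.update_belief(o_w, o_l, robot_action)   # observation model
        robot_action = planner.plan(agent.belief)     # WAIT / ASK / INFORM
        env.execute(robot_action)
```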
## IV Evaluation
Our evaluation aims to test the hypothesis that our hierarchical decision-theoretic framework D4GR improves 1) the accuracy of goal recognition and plan recognition of human activity and 2) the robot's ability to guide the person towards task success. We evaluate the performance of our algorithm by measuring the accuracy of goal recognition and of the prediction of the next human action, also referred to as plan recognition, at every time step. We also measure the planning time, the cumulative expected return, and the number of clarification questions asked for completing the tasks by D4GR, and compare these against the three presented methods in simulation. We also perform a robot demonstration of D4GR for a scenario where the human switches between the washing-hands and making-coffee tasks.
We use the simulation environment introduced by Wang and Hoey [19] for our experiments. The simulator models real environment state changes that result from the primitive actions specified in the HTN for the virtual human. In the simulator, 44 binary virtual sensors observe the world state; examples include sensors for \(hand\_dry\), \(faucet\_on\), and \(block\_picked\_up\). For our experiments, we vary the sensor reliability from 99% to 80%.
### _Domain and Experiment Test Cases_
We test our algorithm in two domains: a blocks domain and a kitchen domain. The knowledge base, \(TaskNet\), of the HTN for the kitchen has three goals: _wash hands_, _make tea_, and _make coffee_, and the blocks domain has five goals of stacking blocks to make words of 4 to 7 letters. The five goals of our blocks domain are \(rote\), \(tone\), \(tune\), \(hawk\), and \(capstone\). The two domains differ in their HTN planning structure, as the blocks domain has more goals (root nodes) but a shallower tree depth than the kitchen domain. Such a setup allows us to explore the effect of the HTN structure on goal and plan recognition performance. We evaluate the performance of our algorithm over four categories of test case scenarios:
**Single Goal & Correct Steps:** Captures scenarios where the human always executes a correct action sequence for achieving a single goal.
Fig. 4: Results for Top1 Goal Accuracy versus Sensor Reliability for the two domains, kitchen and blocks
**Multiple Goals & Correct Steps:** The person works on multiple goals simultaneously by switching back and forth.
**Single Goal & Wrong Steps:** A human has a single goal but can execute wrong steps affecting progress toward the goal.
**Multiple Goals & Wrong Steps:** The human moves back and forth between goals and executes wrong actions.
The easiest case is Single Goal & Correct Steps, and the hardest is Multiple Goals & Wrong Steps.
### _Baselines:_
We compare D4GR's performance in simulation with three other methods. Our first baseline is **HTN**, a previous method of HTN-based goal recognition introduced by Wang and Hoey [19]. This method passively incorporates partially observable world observations for PGR without engaging with the user. Our second baseline is **ALWAYS-ASK**, which acts as an oracle that always asks the correct clarification question and uses the language feedback in the belief update of D4GR. This baseline always has the highest goal and step recognition accuracy but receives a lower reward because it asks unnecessary questions. Our third baseline is **SIPS1**, introduced by Zhi-Xuan et al. [22]. This algorithm is not equipped to handle the hierarchical nature of goals.
Footnote 1: The SIPS baseline is adopted from the code repository cited in Zhi-Xuan et al. [22] and uses their default setting: static goal transition.
### _Metric Definitions:_
We measure 1) the Top-1 accuracy for goal recognition and 2) the accuracy for plan recognition, similar to Wang and Hoey [19], averaged over all timesteps. These metrics measure the accuracy of our belief update. To evaluate our POMDP formulation, we also measure the planning time taken (runtime averaged over the steps), the cumulative reward for the whole human action sequence, and the number of questions asked averaged over trials.
## V Results
Our proposed algorithm aims to improve the capability of the robot for goal and plan recognition. The performance of D4GR depends on how accurately our cognitive assistive robot estimates the belief states for the human mental model \(M_{t}\): the likelihood of the goals \(G\) and the human actions \(\alpha\) at each simulated step. The ground truth of each human action \(\alpha\) given the goal \(G\) can be obtained from the knowledge base.
### _Exp 1 - Goal Accuracy Performance_
In Fig-4, we present results for the average goal accuracies of D4GR and compare them with the baselines over varying sensor reliability and test case categories. The reason for choosing the sensor reliability range [0.8, 0.99] is that most deep-learned vision and human-action detectors have similar average accuracies [14]. Overall, as the sensor reliability decreases, the accuracy of the HTN-based methods (ALWAYS-ASK, D4GR, HTN) suffers. The oracle baseline, ALWAYS-ASK, always has the highest goal accuracy. Even at lower sensor reliabilities (higher sensor noise), D4GR's accuracy remains higher than HTN's in all experiment categories, by 1% on average in both domains. This trend indicates that even when the sensor's observational model fails, D4GR can better predict the belief states than HTN. The SIPS method did not generate functional plans for our kitchen domain even when the input specification was correct; its particle filter algorithm could not find feasible plans for the goals. Hence, we present its results only in the blocks domain. Compared to the SIPS baseline, our method is 30% better in the blocks domain. We significantly outperform the SIPS baseline in the multiple-goals scenario, by almost 43%, because SIPS does not handle the hierarchical nature of goals. The problem categories with single goals (correct steps & wrong steps) have the best performance for the lowest sensor reliability. We see a D4GR performance boost of 2.8% in the kitchen domain and 1.4% in the blocks domain as compared to
Fig. 5: Results for Top1 Plan Accuracy versus Sensor Reliability for the two domains, kitchen and blocks
HTN. The oracle, on average, is 6.3% more accurate than the HTN baseline in this category. Our D4GR improves accuracy by inferring when to ask a question and what to ask, rather than always asking.
### _Exp 2 - Plan Accuracy Performance_
Similar to goal accuracy, we plot the planning accuracy for D4GR and compare it with the baselines for varying sensor noise in Fig-5. Our algorithm overall is 3% more accurate than HTN in both domains. For the lowest sensor reliabilities, D4GR is 4% better than HTN in the kitchen domain and 2% better than HTN in the block domain. For the multiple goal scenarios (correct and wrong steps), D4GR performs the best with an accuracy improvement of 2.7% in the kitchen domain and 2.3% in the block domain.
### _Exp 3 - Trend in Questions Asked, Rewards Accumulated and Runtime:_
Our proposed algorithm aims to improve the PGR capabilities of the agent by enabling the robot to engage with the users and ask for language feedback. One performance measure is the number of helpful clarification questions asked under varying sensor reliabilities, shown in Table-II. When the environment and human actions are unambiguous (sensor reliability is high and/or the human is performing correct actions), D4GR enables the robot to infer that it does not need to ask many questions. At sensor reliability 0.99, D4GR still asks questions because users can perform sub-optimal actions, leading to ambiguity. Overall, the agent asks questions 32.4% and 31% of the time in the blocks and kitchen domains, respectively. When compared over varying sensor noise, the change in the number of questions asked is insignificant; the numbers lie within the same standard deviation.
Further, asking a large number of clarification questions, especially if they are not relevant to the current progress of the task, takes more computing resources, since planning has to be done at every timestep. This effect is measured by the runtime and the reward accumulated by D4GR. Our reward function penalizes asking many questions, especially when they reference an irrelevant human action. Our results show that D4GR takes more planning time per step than ALWAYS-ASK and HTN but is 48% faster than SIPS. All these measurements were done on a machine with 31 GB RAM and an Intel(r) Core(tm) i7-9750H CPU @ 2.60GHz x 12; the code was run single-threaded. D4GR takes more time than HTN but enables the accuracy gains noted in Sections V-A and V-B. Further, D4GR accumulates a 58% higher reward (lower penalty) than ALWAYS-ASK in both domains, highlighting that D4GR does not ask unnecessary questions.
### _Robot Demonstration_
We performed a robot demonstration to highlight the feasibility of D4GR in the real world. The Kuri robot was used primarily for the demo due to its audio transcription capability. The demonstration consisted of a human performing two interleaving tasks in a kitchen while the Kuri robot observed the actions performed and engaged with the user using the D4GR algorithm. The human begins with making coffee and then moves to wash their hands. Unlike HTN, which struggled to recognize the change in goal, D4GR correctly identified both goals. The demonstration simulated sensor reliability at 0.8 using an oracle and sensor noise model. Kuri used D4GR to intelligently infer when to ask a question and what to ask and is able to perform goal recognition and planning correctly. In the case of negative feedback from humans, Kuri offers its predicted correct step to the human.
## VI Discussion and Future Work
Our deployed algorithm D4GR shows improved accuracy for goal and plan recognition compared to the baselines HTN and SIPS. It does so while asking fewer questions than the ALWAYS-ASK oracle policy. Our deployed robot with D4GR performs real-time communication, as demonstrated. The time taken for online planning is influenced by two critical parameters of the POMDP solver: 1) \(d\), the finite depth of the probabilistic decision tree constructed with state-action pairs, and 2) \(n\), the finite number of observations sampled from each node. Increasing \(d\) and \(n\) enhances the solver's accuracy but increases runtime. To achieve real-time communication, we conducted empirical experiments and determined that setting \(d=19\) and \(n=6\) provided appropriate action choices within a reasonable time.
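For reference, these settings amount to a planner configuration along the following lines; the key names are illustrative, not a specific solver's API:

```python
# Empirically tuned POUCT settings reported above.
POUCT_SETTINGS = {
    "max_depth": 19,             # d: depth of the probabilistic decision tree
    "obs_samples_per_node": 6,   # n: observations sampled from each node
}
```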
Our algorithm shows promising improvements in PGR accuracy, although it comes with increased runtime compared to HTN. Our algorithm is designed to assist users with cognitive impairment in their daily tasks, focusing on non-time-critical activities. By providing delayed feedback, our social assistive robot increases the likelihood of users learning from mistakes
and avoiding continuous repetition of errors compared to using HTN. We can reduce the runtime further by retaining only the most probable explanation sets, denoted as \(ExplaSet\), in HTN planning. However, this impacts the PGR accuracy, since \(ExplaSet\) entries with multiple goals during the initial steps can get pruned due to their lower probabilities. In the multi-goal and low-sensor-reliability setting, D4GR shows slightly lower PGR accuracy than HTN. This is due to D4GR's reliance on noisy beliefs and the user switching goals, leading to a higher probability of asking questions about irrelevant actions. The rational language feedback also adversely affects the update of the \(ExplaSet\), potentially diminishing its utility in later timesteps of the episode.
Our work is also limited by the type of clarification questions the robot can ask. We have a fixed template for the question. It will be interesting to see how humans respond to various clarification strategies and how the robot can plan over a space of such categories. This will increase the action space requiring more exploration by the POMDP solver. Further, our language observational model is a bag of words model. It can be more expressive by incorporating inference from LLMs.
Further, our work assumes access to a pre-defined knowledge base for the tasks. One direction we will explore in the future is how to make the knowledge base adaptive to a layman user's needs and preferences as the task progresses, through interactive dialogue. Our research opens avenues for language grounding and human intent recognition in other collaborative tasks, such as humans and robots building machines or complex furniture together. This is an encouraging step toward enhancing the sensory capabilities of home-service robots that can assist people in completing tasks with language-based interactions.
## VII Conclusion
We propose a novel algorithm for robots to interactively keep track of people's ongoing progress in a task using questions. Moreover, our D4GR framework can suggest plan improvements to users solving a task if required. Our work shows that: 1) modeling the user as an HTN and incorporating language feedback improves the robot's belief of the human's progress in simulation; 2) POMDPs are effective methods for tracking a task's progress and asking clarification questions. Our D4GR formulation has a similar goal and step recognition accuracy as the best-performing ALWAYS-ASK baseline while asking 68% fewer questions. In future work, we aim to conduct a user study with the targeted population to measure our approach's usefulness during interaction. D4GR's ability to intelligently balance clarifying uncertainty against the number of questions asked is a step towards realistic interactions between social robots and human users collaborating over tasks.
## VIII Acknowledgement
We thank our labmates at Brown and Rutgers for their valuable insights. This work was supported by NSF under award number IIS-1652561, ONR under award numbers N00014-21-1-2584 and N00014-22-1-2592, and with funding from Echo Labs.
|
2305.18854 | Implications of time-dependent molecular chemistry in metal-poor dwarf stars | Binary molecules such as CO, OH, CH, CN, and C$_2$ are often used as abundance indicators in stars. These species are usually assumed to be formed in chemical equilibrium. The time-dependent effects of hydrodynamics can affect the formation and dissociation of these species and may lead to deviations from chemical equilibrium. We aim to model departures from chemical equilibrium in dwarf stellar atmospheres by considering time-dependent chemical kinetics alongside hydrodynamics and radiation transfer. We examine the effects of a decreasing metallicity and an altered C/O ratio on the chemistry when compared to the equilibrium state. We used the radiation-(magneto)hydrodynamics code CO5BOLD, and its own chemical solver to solve for the chemistry of 15 species and 83 reactions. The species were treated as passive tracers and were advected by the velocity field. The steady-state chemistry was also computed to isolate the effects of hydrodynamics. In most of the photospheres in the models we present, the mean deviations are smaller than $0.2$ dex, and they generally appear above $\log{\tau} = -2$. The deviations increase with height because the chemical timescales become longer with decreasing density and temperature. A reduced metallicity similarly results in longer chemical timescales and in a reduction in yield that is proportional to the drop in metallicity; a decrease by a factor $100$ in metallicity loosely corresponds to an increase by a factor $100$ in chemical timescales. As both CH and OH are formed along reaction pathways to CO, the C/O ratio means that the more abundant element gives faster timescales to the constituent molecular species. Overall, the carbon enhancement phenomenon seen in very metal-poor stars is not a result of an improper treatment of molecular chemistry for stars up to a metallicity as low as [Fe/H] = $-3.0$. | S. A. Deshmukh, H.-G. Ludwig | 2023-05-30T08:49:33Z | http://arxiv.org/abs/2305.18854v1 |

# Implications of time-dependent molecular chemistry in metal-poor dwarf stars
###### Abstract
Context: Binary molecules such as CO, OH, CH, CN, and C\({}_{2}\) are often used as abundance indicators in stars. These species are usually assumed to be formed in chemical equilibrium. The time-dependent effects of hydrodynamics can affect the formation and dissociation of these species and may lead to deviations from chemical equilibrium.
Aims: We aim to model departures from chemical equilibrium in dwarf stellar atmospheres by considering time-dependent chemical kinetics alongside hydrodynamics and radiation transfer. We examine the effects of a decreasing metallicity and an altered C/O ratio on the chemistry when compared to the equilibrium state.
Methods: We used the radiation-(magneto)hydrodynamics code CO\({}^{5}\)BOLD and its own chemical solver to solve for the chemistry of 14 species and 76 reactions. The species were treated as passive tracers and were advected by the velocity field. The steady-state chemistry was also computed to isolate the effects of hydrodynamics.
Results: In most of the photospheres in the models we present, the mean deviations are smaller than 0.2 dex, and they generally appear above \(\log\tau=-2\). The deviations increase with height because the chemical timescales become longer with decreasing density and temperature. A reduced metallicity similarly results in longer chemical timescales and in a reduction in yield that is proportional to the drop in metallicity; a decrease by a factor 100 in metallicity loosely corresponds to an increase by a factor 100 in chemical timescales. As both CH and OH are formed along reaction pathways to CO, the C/O ratio means that the more abundant element gives faster timescales to the constituent molecular species. Overall, the carbon enhancement phenomenon seen in very metal-poor stars is not a result of an improper treatment of molecular chemistry for stars up to a metallicity as low as [Fe/H] = \(-3.0\).
## 1 Introduction
Stellar atmospheres are generally assumed to preserve the makeup of their birth environment. The abundance of elements heavier than helium (known as metals) in a stellar atmosphere is an indication of the stellar age, with older stars being deficient in metals. Spectroscopy is one of the foremost tools in determining the abundances of various elements in stellar atmospheres. From the first studies on solar abundances in the 1920s (Payne, 1925; Unsold, 1928; Russell, 1929) to modern large-scale surveys such as the Gaia-ESO survey (Gilmore et al., 2012), _Gaia_ (Gaia Collaboration et al., 2022), GALAH (Bland-Hawthorn et al., 2016), and _Pristine_ (Starkenburg et al., 2017), to name a few, spectroscopically determined stellar parameters have been a key tool in understanding the composition of stellar atmospheres. Instrumentation and modelling have been refined in tandem, with improvements such as the treatment of departures from local thermodynamic equilibrium (LTE) and advancements in one-dimensional (1D) and three-dimensional (3D) model atmospheres. These directly lead to improvements in the determination of solar and stellar abundances because the methods for doing so often rely on model atmospheres and the assumptions therein. As a core component of Galactic archaeology, abundance determinations of stellar photospheres from spectroscopy often assume chemical equilibrium (implicitly assumed within the LTE assumption). While LTE studies have been used historically to determine stellar abundances (Asplund, 2000; Holweger, 2001; Caffau et al., 2011), the accurate treatment of the departure from LTE of level populations (known as radiative NLTE treatment) has been shown to provide more accurate abundances in both solar and stellar photospheres (Bergemann et al., 2013; Wedemeyer, 2001; Amarsi et al., 2019; Mashonkina, 2020; Magg et al., 2022).
Molecular features are important in metal-poor (MP) stars because atomic lines are comparatively weak (Beers et al., 1992; Aoki et al., 2013; Yong et al., 2013; Koch et al., 2019). In recent years, increasingly metal-poor stars have been discovered (Beers et al., 1992; Beveridge & Sneden, 1994; Aoki et al., 2013; Hughes et al., 2022), with a tendency towards carbon enhancement in their atmospheres (Beers & Christlieb, 2005; Sivarani et al., 2006; Carollo et al., 2014; Cohen et al., 2005; Hansen et al., 2016; Lucey et al., 2022). These carbon-enhanced metal-poor (CEMP) stars comprise a large fraction of the low-metallicity tail of the metallicity distribution function in the Galactic halo (Norris et al., 2007; Susmitha et al., 2020). Although NLTE treatment of spectral lines is becoming more prominent (Bergemann et al., 2013, 2019; Mashonkina, 2020), most of the work concerning these abundance determinations is still done under the assumption of chemical equilibrium, that is, that all chemical species are in equilibrium with one another. Most NLTE studies consider radiative NLTE, meaning that the radiation field is not in equilibrium with the local background temperature. This changes the population of energy levels in an atom or molecule. Radiative NLTE is still considered in a time-independent fashion. We instead model the time-dependent chemical processes for a variety
of species to investigate the effects of hydrodynamics on molecular formation and dissociation to study whether the carbon enhancement seen at very low metallicities is a real effect or is due to a lack of consideration for time-dependent chemistry.
Chemical species will react with one another in such a manner as to approach thermodynamic equilibrium, given enough time. However, as the rates of these reactions depend strongly on temperature and density (Horn & Jackson 1972), there may be regions in the star in which chemical equilibrium conditions are not met. In the deeper, hotter, collision-dominated layers, chemical species evolve to equilibrium on timescales much faster than other physical timescales in the system. The assumption of chemical equilibrium therefore implies that the chemistry evolves to its equilibrium state faster than other processes can significantly perturb it. In this work, the other physical processes are hydrodynamical, and the key question is whether the chemistry reaches its local thermodynamic equilibrium before the species are advected. Convection in a stellar atmosphere can also lead to compression shocks, which quickly heat material. When chemical kinetics are coupled to these processes, the chemistry evolves on a finite timescale, and a prevalence of these hydrodynamical effects can push the overall chemistry out of its local equilibrium state.
Metallicity also has a large impact on both the overall structure of the atmosphere and the number densities of the species. At a cursory glance, reducing the metallicity by a factor of 100 immediately results in a reduction by a factor of 100 in the number densities of metal species, which naturally results in slower mass-action reaction rates. Relative abundances (especially of C and O) also play a large role in determining the final yield as well as the chemical timescales of different species (Hubeny & Mihalas 2015). As a result of the slower mass-action rates alone, the sharp increase in chemical timescales may cause the chemistry to fall out of equilibrium in higher, cooler layers.
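This scaling can be made explicit with a minimal sketch: assume the destruction of a species A is dominated by a single bimolecular reaction with a metal species B. Under mass-action kinetics, the factor of 100 in number density then carries over directly to the chemical timescale,

\[w=k\,n_{\mathrm{A}}n_{\mathrm{B}},\qquad\tau_{\mathrm{A}}\approx\frac{n_{\mathrm{A}}}{w}=\frac{1}{k\,n_{\mathrm{B}}},\qquad n_{\mathrm{B}}\to\frac{n_{\mathrm{B}}}{100}\;\Longrightarrow\;\tau_{\mathrm{A}}\to 100\,\tau_{\mathrm{A}},\]

consistent with the factor of \(\sim\)100 found for the equilibrium times in Sec. 4.4.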
Currently, many different codes exist for modelling stellar atmospheres. While 1D atmospheres have been used to great effect (Gustafsson et al. 2008; Allard & Hauschildt 1995), 3D time-dependent modelling is essential for accurately capturing hydrodynamical effects within an atmosphere (Pereira et al. 2013). Codes such as CO\({}^{5}\)BOLD (Freytag et al. 2012), Stagger (Magic et al. 2013), Bifrost (Gudiksen et al. 2011), MURaM (Vogler et al. 2005), and Mancha (Khomenko et al. 2017) are prominent examples. We used CO\({}^{5}\)BOLD model atmospheres to model hydrodynamics, radiation transfer, and time-dependent chemistry together.
We investigate two distinct methods to treat the chemical evolution in a stellar atmosphere. The first is to evolve the chemistry as a post-processing step, using outputs from model atmospheres (known as snapshots) in order to determine the chemical evolution of various species. This method yields accurate results in regimes where the density-temperature profile is conducive to fast-evolving chemistry (in comparison to advection). The second method is to evolve the chemistry alongside the hydrodynamics, which is usually done after advecting the species. While this is much more expensive computationally, it will yield accurate results even in regimes in which the timescales of the chemistry are comparable to the advection timescale. In principle, both approaches are equivalent given a fine enough cadence because the chemical species are treated as passive scalars. In other words, given a fine enough sampling of snapshots, the post-processing method would tend towards the full time-dependent treatment. The post-processing method of evolving species into equilibrium is hence an approximation; using the final abundances calculated with this method as presented here implicitly assumes the formation of these species in chemical equilibrium. It is precisely this assumption that we investigate by comparing these two methods.
Wedemeyer-Bohm et al. (2005) investigated CO in the solar photosphere and chromosphere in 2D, employing a chemical network with 7 species and 27 reactions. Wedemeyer-Bohm et al. (2006) then expanded this into a 3D analysis, showing the formation of CO clouds at higher layers. We build on this further to include an extended chemical network involving 14 species and 76 reactions, and focus on the photospheres of main-sequence turn-off dwarf stars. We investigate CO, CH, C\({}_{2}\), CN, and OH in detail because these 5 species are spectroscopically interesting for abundance determinations in MP stars.
The numerical methods and chemical network setup are described in Sec. 2. The results of the three-dimensional simulations for the time-dependent and steady-state calculations are presented in Sec. 3 and discussed in Sec. 4. Our conclusions are given in Sec. 5.
## 2 Method
### Chemical reaction network
The chemical reaction network (CRN) that describes the system of differential equations builds on the network presented in Wedemeyer-Bohm et al. (2005), extending it to 14 species and 76 reactions. Table 1 describes these reactions along with the parameters of the rate coefficients. Our network is focused on the evolution of CO, CH, C\({}_{2}\), CN, and OH through reactions with neutral atomic and bimolecular species. Radiative association, species exchange, two- and three-body reactions, and collisional dissociation are included. Each reaction is given in the modified Arrhenius form, parametrised by the pre-exponential factor \(\alpha\), an explicit temperature dependence \(\beta\), and a characterisation of the activation energy \(\gamma\) (see Sec. 2.4 for a more detailed explanation). Some reactions with CO are catalysed reactions and include a characteristic metal M.
The choice of reactions is discussed below. Generally, the CRN was built to analyse the species CO, CH, CN, C\({}_{2}\), and OH. As the network presented in Wedemeyer-Bohm et al. (2005) already includes a good treatment of CO, CH, and OH, we supplement this network with reactions taken from the UMIST Astrochemistry Database (McElroy et al. 2013) to model the other molecular species. Only neutral atomic and bimolecular species were considered, due to their prevalence compared to other trace molecules and the storage limitations imposed by considering a full 3D time-dependent treatment. We neglect photodissociation in this network, but we accept that the effects may not be negligible in very optically thin layers. Additionally, as the reactions used here often come from studies in planetary atmospheres and combustion chemistry, the reactions we present are sometimes defined outside of their temperature limits, especially when considering deep photospheric regions. We chose to focus on higher, cooler layers for this reason.
Reaction 58. We chose to use the rate that includes only \(\alpha\), instead of the rate that includes explicit temperature dependence. This is because the temperature limits of this reaction are 10 - 300 K, and including the temperature-dependent rate would lead to a much greater extrapolation due to the comparatively high temperatures in the model atmospheres.
Reactions 116, 133, and 198. For each of these reactions, two rates are presented in the database for temperature limits of \(10-300\) K, and \(300-3000\) K. We opted to use the latter rate as the temperature limits are closer to our use case.
Reaction 206. The reaction is defined for the temperature limits \(298-3300\) K and \(295-4000\) K. We opted to use the latter rate, which includes a higher upper-temperature limit.
Reaction 236. The reaction is defined for the temperature limits \(10-500\) K and \(158-5000\) K. We opted to use the latter rate, which includes a higher upper-temperature limit.
Reaction 244. The reaction is defined for the temperature limits \(10-294\) K and \(295-4500\) K. We opted to use the latter rate, which includes a higher upper-temperature limit.
A visualisation of the reaction network is shown in Fig. 1. Atomic species are shown in red, key molecular species are shown in blue, and all other molecular species are shown in grey. The full network with all reactions is too complex to show in full detail, so we chose to highlight the important reactions as edges between nodes. The network is connected, meaning that any node can be reached starting from any other node, but it is not fully connected, because not every node shares an edge with every other node. These properties allowed us to find reaction pathways in the reaction network (see Sec. 4.4).
Figure 1: Graph of the chemical reaction network with atoms (red), key molecular species (blue), and remaining molecular species (grey). The connections describe the reaction pathways qualitatively.
\begin{table}
\begin{tabular}{r l l l r r r r} \hline Index & Reactants & Products & \(\alpha\) & \(\beta\) & \(\gamma\) & Reference \\ \hline \multicolumn{6}{c}{Radiative Association} \\ \hline
3681 & C + H & \(\Longrightarrow\) & CH + \(\gamma\) & 1.00e-17 & 0.00 & 0.0 & UMIST \\
3683 & H + O & \(\Longrightarrow\) & OH + \(\gamma\) & 9.90e-19 & -0.38 & 0.0 & UMIST \\
3703 & C + C & \(\Longrightarrow\) & C\({}_{2}\) + \(\gamma\) & 4.36e-18 & 0.35 & 161.3 & UMIST \\
3705 & C + N & \(\Longrightarrow\) & CN + \(\gamma\) & 5.72e-19 & 0.37 & 51.0 & UMIST \\
3707 & C + O & \(\Longrightarrow\) & CO + \(\gamma\) & 1.58e-17 & 0.34 & 1297.0 & UMIST \\
3730 & O + O & \(\Longrightarrow\) & O\({}_{2}\) + \(\gamma\) & 4.90e-20 & 1.58 & 0.0 & UMIST \\ \hline \multicolumn{6}{c}{3-body association} \\ \hline
4079 & H + M + O & \(\Longrightarrow\) & M + OH & 4.33e-32 & -1.00 & 0.0 & UMIST \\
4097 & C + M + O & \(\Longrightarrow\) & CO + M & 2.14e-29 & -3.08 & -2114.0 & BDDG76 \\
5000 & H + H + M & \(\Longrightarrow\) & H\({}_{2}\) + M & 6.43e-33 & -1.00 & 0.0 & KCD \\
5001 & H + H + H\({}_{2}\) & \(\Longrightarrow\) & H\({}_{2}\) + H\({}_{2}\) & 9.00e-33 & -0.60 & 0.0 & KCD \\
5002 & H + H + H & \(\Longrightarrow\) & H + H\({}_{2}\) & 4.43e-28 & -4.00 & 0.0 & BDHL72 \\
7000 & H + H + O & \(\Longrightarrow\) & H + OH & 1.00e-32 & 0.00 & 0.0 & BDHL72 \\
7001 & C + H + O & \(\Longrightarrow\) & CO + H & 2.14e-29 & -3.08 & -2114.0 & BDDG76 \\ \hline \multicolumn{6}{c}{Species Exchange} \\ \hline
1 & CH + H & \(\Longrightarrow\) & C + H\({}_{2}\) & 2.70e-11 & 0.38 & 0.0 & UMIST \\
3 & H + NH & \(\Longrightarrow\) & H\({}_{2}\) + N & 1.73e-11 & 0.50 & 2400.0 & UMIST \\
8 & H + OH & \(\Longrightarrow\) & H\({}_{2}\) + O & 6.99e-14 & 2.80 & 1950.0 & UMIST \\
11 & C\({}_{2}\) + H & \(\Longrightarrow\) & C + CH & 4.67e-10 & 0.50 & 30450.0 & UMIST \\
14 & CO + H & \(\Longrightarrow\) & C + OH & 5.75e-10 & 0.50 & 77755.0 & W80 \\
… \\
116 & C\({}_{2}\) + N & \(\Longrightarrow\) & C + CN & 5.00e-11 & 0.00 & 0.0 & UMIST \\
133 & CN + N & \(\Longrightarrow\) & C + N\({}_{2}\) & 1.00e-10 & 0.40 & 0.0 & UMIST \\
138 & N + NO & \(\Longrightarrow\) & N\({}_{2}\) + O & 3.38e-11 & -0.17 & -2.8 & UMIST \\
144 & N + O\({}_{2}\) & \(\Longrightarrow\) & NO + O & 2.26e-12 & 0.86 & 3134.0 & UMIST \\
195 & NH + NH & \(\Longrightarrow\) & H\({}_{2}\) + N\({}_{2}\) & 1.70e-11 & 0.00 & 0.0 & UMIST \\
197 & NH + O & \(\Longrightarrow\) & N + OH & 1.16e-11 & 0.00 & 0.0 & UMIST \\
198 & NH + O & \(\Longrightarrow\) & H + NO & 1.80e-10 & 0.00 & 300.0 & UMIST \\
206 & NH + NO & \(\Longrightarrow\) & N\({}_{2}\) + OH & 1.46e-11 & -0.58 & 37.0 & UMIST \\
236 & O + OH & \(\Longrightarrow\) & H + O\({}_{2}\) & 1.77e-11 & 0.00 & -178.0 & UMIST \\
240 & C\({}_{2}\) + O & \(\Longrightarrow\) & C + CO & 2.00e-10 & -0.12 & 0.0 & UMIST \\ \hline \end{tabular}
\end{table}
Table 1: Reactions used in this work. “Index” refers to the index in the UMIST astrochemistry database. All reactions are of modified-Arrhenius form with a rate coefficient \(k(T)=\alpha\left(\frac{T}{300}\right)^{\beta}\exp\left(-\frac{\gamma}{T}\right)\); see Eq. (7).
\begin{tabular}{r l r r r r r r}
243 & CN + O & \(\Longrightarrow\) & C + NO & 5.37e-11 & 0.00 & 13800.0 & UMIST \\
244 & CN + O & \(\Longrightarrow\) & CO + N & 5.00e-11 & 0.00 & 200.0 & UMIST \\
251 & N\({}_{2}\) + O & \(\Longrightarrow\) & N + NO & 2.51e-10 & 0.00 & 38602.0 & UMIST \\
261 & NO + O & \(\Longrightarrow\) & N + O\({}_{2}\) & 1.18e-11 & 0.00 & 20413.0 & UMIST \\
377 & C\({}_{2}\) + O\({}_{2}\) & \(\Longrightarrow\) & CO + CO & 1.50e-11 & 0.00 & 4300.0 & UMIST \\
382 & CN + CN & \(\Longrightarrow\) & C\({}_{2}\) + N\({}_{2}\) & 2.66e-09 & 0.00 & 21638.0 & UMIST \\
387 & CN + NO & \(\Longrightarrow\) & CO + N\({}_{2}\) & 1.60e-13 & 0.00 & 0.0 & UMIST \\
392 & CN + O\({}_{2}\) & \(\Longrightarrow\) & CO + NO & 5.12e-12 & -0.49 & -5.2 & UMIST \\
416 & NO + NO & \(\Longrightarrow\) & N\({}_{2}\) + O\({}_{2}\) & 2.51e-11 & 0.00 & 30653.0 & UMIST \\
7601 & NH + O\({}_{2}\) & \(\Longrightarrow\) & NO + OH & 2.54e-14 & 1.18 & 312.0 & UMIST \\ \hline \multicolumn{6}{c}{Collisional Dissociation} \\ \hline
194 & NH + NH & \(\Longrightarrow\) & H + H + N\({}_{2}\) & 1.16e-09 & 0.00 & 0.0 & UMIST \\
205 & NH + NO & \(\Longrightarrow\) & H + N\({}_{2}\) + O & 7.40e-10 & 0.00 & 10540.0 & UMIST \\
4060 & H + H\({}_{2}\) & \(\Longrightarrow\) & H + H + H & 4.67e-07 & -1.00 & 55000.0 & UMIST \\
4061 & CH + H & \(\Longrightarrow\) & C + H + H & 6.00e-09 & 0.00 & 40200.0 & UMIST \\
4062 & H + OH & \(\Longrightarrow\) & H + H + O & 6.00e-09 & 0.00 & 50900.0 & UMIST \\
4067 & H + O\({}_{2}\) & \(\Longrightarrow\) & H + O + O & 6.00e-09 & 0.00 & 52300.0 & UMIST \\
4069 & H + H\({}_{2}\) & \(\Longrightarrow\) & H + H + H\({}_{2}\) & 1.00e-08 & 0.00 & 84100.0 & UMIST \\
4070 & CH + H\({}_{2}\) & \(\Longrightarrow\) & C + H + H\({}_{2}\) & 6.00e-09 & 0.00 & 40200.0 & UMIST \\
4071 & H\({}_{2}\) + OH & \(\Longrightarrow\) & H + H\({}_{2}\) + O & 6.00e-09 & 0.00 & 50900.0 & UMIST \\
4074 & H\({}_{2}\) + O\({}_{2}\) & \(\Longrightarrow\) & H\({}_{2}\) + O + O & 6.00e-09 & 0.00 & 52300.0 & UMIST \\
4076 & CO + M & \(\Longrightarrow\) & C + M + O & 2.79e-03 & -3.52 & 128700.0 & BDDG76 \\
7002 & CO + H & \(\Longrightarrow\) & C + H + O & 2.79e-03 & -3.52 & 128700.0 & BDDG76 \\
7585 & CH + O\({}_{2}\) & \(\Longrightarrow\) & CO + H + O & 1.14e-11 & 0.00 & 0.0 & UMIST \\ \hline \end{tabular}
The carbon enhancement phenomenon is represented by a number of molecular carbon features, including the strong CH G-band feature at 4300 Å (Gray & Corbally 2009), the C\({}_{2}\) feature at 5636 Å (Green 2013), the Swan bands (C\({}_{2}\)) at 5635 Å and 5585 Å, and the 3883 Å CN band (Harmer & Pagel 1973). Koch et al. (2019) also used CN features to identify CN-strong and CN-weak stars in globular clusters. Overall, the spectral synthesis in cool carbon stars from 4000–10000 Å shows that the region harbours many CH, CN, and C\({}_{2}\) lines.
CO has a very high bond-dissociation energy of 11.16 eV (March & Smith 2001). It is the key stable state within the chemical network. In the regions in which molecular features form, it is energetically favourable to form CO over CH and OH, for instance. CO therefore dictates the relative yield of other carbon- and oxygen-bearing molecules. Generally, C and O will be largely consumed to form CO, and any excess then forms other molecular species. With a C/O ratio lower than 1 (e.g. for solar composition) and at temperatures that allow for molecular formation, most of the carbon is locked into CO, leaving very little to form other carbonic molecules. With a C/O ratio greater than 1 (e.g. certain carbon-enhanced stars), oxygen is instead used up first, and more carbonic molecules form (see Fig. 9).
We included OH to investigate the effect of the C/O ratio on molecular species. OH provides an important symmetry to CH when considering the evolution of C, O, CH, OH, and CO (Gallagher et al. 2016, 2017). As the number of non-CO carbon-bearing molecules heavily depends on the C/O ratio, so too does the evolution of OH.
### Numerical method
We used CO\({}^{5}\)BOLD, a conservative finite-volume hydrodynamics solver capable of modelling surface convection, waves, shocks, and other phenomena in stellar objects (Freytag et al. 2012). The hydrodynamics, radiation transfer, and chemistry were treated via operator splitting and were solved on a Cartesian grid in a time-dependent manner. The chemistry was solved after the hydrodynamics and radiative transfer time steps. Standard directional splitting into 1D operators was used. A Roe solver computed all updates in a single step, where higher-order terms in time are provided based on the applied reconstruction scheme.
Radiative transfer was solved frequency dependently (non-grey) under the LTE assumption using a multiple short-characteristics scheme (Steffen 2017). The opacity tables use 12 frequency bins and are consistent with the atomic abundances used for the chemistry input. The model does not treat frequency-dependent photodissociation of chemical species or heating and cooling via reactions. The equation of state is also consistent with the abundances used in the chemistry input and assumes the formation of molecules in instantaneous equilibrium.
All models used in this work were created by taking a thermally relaxed CO\({}^{5}\)BOLD model output and adding quantity-centred (QUC) quantities (Freytag et al. 2012; Wedemeyer-Bohm et al. 2005). These QUC quantities allow the user to arbitrarily add cell-centred quantities to the simulation. Here, each QUC quantity stores the number densities of a single chemical species across all grid cells. The QUC quantities were advected as prescribed by the velocity field. Periodic boundary conditions were implemented on the lateral edges of the computational domain. The lower boundary layer was open with inflowing entropy and pressure adjustment, while the top layer was transmitting. The number densities in ghost cells were copied from the nearest cells in the computational domain, but were scaled to the mass density of these cells. In this way, the chemistry was still consistent across the boundary, and the number densities of the elements were almost perfectly conserved. We only present 3D models in this work because we focus on the stellar photosphere, and it has been shown that 1D models are less sensitive to a change in CNO abundances (Plez & Cohen 2005; Gustafsson et al. 2008; Masseron 2008).
The output of the model atmosphere was stored in a sequence of recorded flow properties, commonly called a sequence of snapshots. Each snapshot also functioned as a start model to restart a simulation, or as a template to start a new simulation. To compare the time-dependent chemistry, the same reaction network was solved on a background static snapshot (i.e. a single snapshot without taking advection into account) until the chemistry reached a steady state. This is similar to the treatment of chemistry in equilibrium, but in this case, we still solved the kinetic system instead of relying on equilibrium constants. The method for solving the chemistry independently of the hydrodynamics in post-processing is described in Sec. 2.5.
We used five different 3D model atmospheres that differed in metallicity and chemical composition. A description of the model parameters is given in Table 2. Each model had an effective temperature of 6250 K, a surface gravity of log(\(g\)) = 4.00, a resolution of 140\(\times\)140\(\times\)150 cells, and an extent of 26\(\times\)26\(\times\)12.7 Mm (\(x\times y\times z\)). We used standard abundances from the CIFIST grid (Caffau et al. 2011) and initialised the molecular species to a number density of 10\({}^{-20}\) cm\({}^{-3}\). The models are referred to in the rest of this work by their ID, namely AM1, AM2, AM3, AC1, and AC2. The models in this study did not use the MHD module and hence represent only quiet stellar atmospheres without magnetic fields.
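For reference, the \(\log\) C/O values in Table 2 follow directly from the abundance columns, since on the standard scale \(A(\mathrm{X})\equiv\log(n_{\mathrm{X}}/n_{\mathrm{H}})+12\):

\[\log\mathrm{C/O}=A(\mathrm{C})-A(\mathrm{O}),\qquad\text{e.g. AM1: }8.41-8.66=-0.25,\qquad\text{AC1: }7.39-7.66=-0.27.\]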
### Time-dependent chemistry
The radiation-(magneto)hydrodynamics code CO\({}^{5}\)BOLD includes a time-dependent chemical kinetics solver (Wedemeyer-Bohm et al. 2005; Freytag et al. 2012) that has so far been used to investigate the solar photosphere and chromosphere in two and three dimensions. The code includes modules to advect passive tracers and to solve a chemical reaction network using these passive tracers. Generally, these passive tracers are added to an already thermally relaxed model atmosphere in order to initialise a model with a time-dependent chemistry. When it is initialised, CO\({}^{5}\)BOLD then solves the chemistry for each cell at each time step (alongside the equations of hydrodynamics and radiation transfer), and the species are advected as prescribed by the velocity
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Model ID & [Fe/H] & A(C) & A(O) & log C / O & Internal ID \\ \hline \multicolumn{7}{c}{(d3t63g40)} \\ \hline AM1 & +0.00 & 8.41 & 8.66 & \(-\)0.25 & mm00 \\ AM2 & \(-\)2.00 & 6.41 & 7.06 & \(-\)0.65 & mm20 \\ AM3 & \(-\)3.00 & 5.41 & 6.06 & \(-\)0.65 & mm30 \\ \hline AC1 & \(-\)3.00 & 7.39 & 7.66 & \(-\)0.27 & mm30c20n20020 \\ AC2 & \(-\)3.00 & 7.39 & 6.06 & \(+\)1.33 & mm30c20n20004 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Model atmosphere parameters for the five models used in the study. Each model has \(T_{\rm eff}=6250\) K and log \(g=4.00\), a resolution of 140 \(\times\) 140 \(\times\) 150 cells, and an extent of 26 \(\times\) 26 \(\times\) 12.7 Mm (\(x\times y\times z\)). The abundances for each model are consistent with those in the respective opacity tables, and we use the internal ID to refer to each model uniquely within this work.
field. That is, the time-dependent chemistry for a species \(n_{i}\) takes the form
\[\frac{\partial n_{i}}{\partial t}+\nabla\cdot(n_{i}\boldsymbol{v})=S, \tag{1}\]

where \(\boldsymbol{v}\) is the velocity field, and \(S\) is a source term. The source term is given by the rate of formation and destruction of each species characterised by the reactions in the network.
Each chemical reaction can be written as a differential equation describing the destruction and formation of species, and together, the reactions form an ordinary differential equation (ODE) system. We considered all reactions in this work to follow mass-action kinetics. The rate of a reaction is then given by
\[w_{r}=k\prod_{j}n_{j}, \tag{2}\]
where \(k\) is the rate coefficient, and the product over \(n_{j}\) includes the stoichiometry of either the reactants (forward reaction) or products (reverse reaction). For a generic reaction \(r\) with a forward rate coefficient \(k_{1}\) and reverse rate coefficient \(k_{2}\),
\[a\mathrm{A}+b\mathrm{B}\;\underset{k_{2}}{\overset{k_{1}}{\rightleftharpoons}}\;c\mathrm{C}+d\mathrm{D}, \tag{3}\]
the rates of change of the generic species A, B, C, and D in the reaction \(r\) are related via
\[-\frac{1}{a}\left(\frac{\partial n_{\mathrm{A}}}{\partial t}\right)_{r}=- \frac{1}{b}\left(\frac{\partial n_{\mathrm{B}}}{\partial t}\right)_{r}=\frac{ 1}{c}\left(\frac{\partial n_{\mathrm{C}}}{\partial t}\right)_{r}=\frac{1}{d} \left(\frac{\partial n_{\mathrm{D}}}{\partial t}\right)_{r}. \tag{4}\]
Eq. (2) then gives the forward and reverse reaction rates \(w_{1}\) and \(w_{2}\) as
\[w_{1}=k_{1}n_{\mathrm{A}}^{a}n_{\mathrm{B}}^{b},\quad w_{2}=k_{2}n_{\mathrm{C}}^{c}n_{\mathrm{D}}^{d},\]
respectively. We can then construct the differential \(\left(\frac{\partial n_{i}}{\partial t}\right)_{r}\) for a species \(n_{i}\) and reaction \(r\). The full time-dependent chemical evolution of species \(n_{i}\) is then given by the sum over the reactions \(r\),
\[\frac{\partial n_{i}}{\partial t}=\sum_{r}\left(\frac{\partial n_{i}}{\partial t }\right)_{r}. \tag{5}\]
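As an illustration of how Eqs. (2)-(5) translate into a solvable system, the following minimal Python sketch evolves the generic reversible reaction (3) under mass-action kinetics. The rate coefficients, initial densities, and scipy's BDF integrator (standing in for the DVODE/BDF solver used in the text) are illustrative assumptions, not values from our network:

```python
# Minimal sketch of Eqs. (2)-(5) for the generic reversible reaction (3):
#   a A + b B  <=>  c C + d D
from scipy.integrate import solve_ivp

a, b, c, d = 1, 1, 1, 1        # stoichiometric coefficients
k1, k2 = 1.0e-12, 1.0e-20      # illustrative rate coefficients

def rhs(t, n):
    nA, nB, nC, nD = n
    w1 = k1 * nA**a * nB**b    # forward mass-action rate, Eq. (2)
    w2 = k2 * nC**c * nD**d    # reverse mass-action rate
    net = w1 - w2
    # Eq. (4): each species changes in proportion to its stoichiometry
    return [-a * net, -b * net, c * net, d * net]

n0 = [1e10, 1e10, 0.0, 0.0]    # illustrative number densities [cm^-3]
# BDF copes with the stiffness typical of kinetic systems; the tolerances
# mirror the GCRN defaults quoted in Sec. 2.5.
sol = solve_ivp(rhs, (0.0, 1e4), n0, method="BDF", rtol=1e-4, atol=1e-30)
print(sol.y[:, -1])            # number densities at the final time
```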
Particle conservation is ensured due to the overall mass continuity equation and the stoichiometry of each chemical reaction. Because only neutral species are considered, no explicit charge conservation is included.
Due to the high computational expense of computing time-dependent chemistry across a large grid, parallelisation is highly recommended. Along with the increased memory load of storing the number densities of QUC species, this limits the size of the network that can be treated time dependently. Even with these steps, solving the chemistry is still the most time-intensive step, taking upwards of 75% of the total runtime.
The DVODE solver (Hindmarsh et al. 2005) was used to solve the system of chemical kinetic equations, making use of the implicit backward differentiation formula (BDF). The solver uses an internally adjusted adaptive time step, which is a requirement when we consider that the system of equations is often very stiff. The solution of the final number densities is provided after the full hydrodynamics time step.
For stability, we used first-order reconstruction schemes for both the hydrodynamics and the advection of QUC quantities. Higher-order schemes were found to cause some grid cells to extrapolate beyond the equation-of-state tables, or to cause low number densities to become negative. This was not a consistently reproducible effect for a given grid cell, meaning that its source could lie in single-precision numerical errors.
### Rate coefficients
The rate coefficient (sometimes rate constant) of a chemical reaction is an often empirically determined quantity that depends on the temperature. Many of the reactions presented in this work are unfortunately applied outside of their stated temperature limits simply because we lack studies of chemical reactions in high-temperature regions such as stellar photospheric layers. An uncertainty is also associated with the rate coefficients themselves. Despite these shortcomings, the chosen reaction rates are thought to describe the evolution of our species reasonably well.
The Arrhenius rate coefficient is commonly written as
\[k=\alpha\exp\left(-\frac{E_{\alpha}}{RT}\right), \tag{6}\]
where \(\alpha\) is a pre-exponential factor representing the fraction of species that would react if the activation energy \(E_{\alpha}\) were zero, \(R\) is the gas constant, and \(T\) is the temperature. We used the modified Arrhenius equation, which explicitly includes the temperature dependence
\[k=\alpha\left(\frac{T}{300\ \mathrm{[K]}}\right)^{\beta}\exp\left(-\frac{ \gamma}{T}\right), \tag{7}\]
where \(\beta\) is a fitted constant for a given reaction, and \(\gamma=\frac{E_{\alpha}}{R}\) characterises the activation energy. For a reversible reaction, the forward and reverse coefficients are related to the dimensional equilibrium constant \(K_{\mathrm{eq}}^{\prime}\) by
\[K_{\mathrm{eq}}^{\prime}=\frac{k_{f}}{k_{r}}, \tag{8}\]
where \(k_{f}\) and \(k_{r}\) are the forward and reverse rate coefficients, respectively. This equilibrium constant can be used to determine the chemical equilibrium of a given composition, defined when all forward and reverse processes are balanced (Blecic et al. 2016; Stock et al. 2018). As our reaction network contains irreversible reactions in the thermodynamic domain under study, equilibrium constants cannot be determined for each chemical pathway. Hence, we studied the equilibrium chemistry by solving the chemical kinetics until the chemistry reached a steady state. In the absence of processes such as advection, this steady state should correspond to chemical equilibrium.
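A minimal sketch of Eq. (7) in Python, evaluated for reaction 3707 from Table 1 (C + O → CO + \(\gamma\)); the temperature grid is an arbitrary choice for illustration:

```python
import numpy as np

def rate_coefficient(alpha, beta, gamma, T):
    """Modified Arrhenius form, Eq. (7): k = alpha * (T/300)^beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

# Reaction 3707 (Table 1): C + O -> CO + photon
# alpha = 1.58e-17, beta = 0.34, gamma = 1297.0
for T in (2000.0, 3500.0, 5000.0):
    k = rate_coefficient(1.58e-17, 0.34, 1297.0, T)
    print(f"T = {T:6.0f} K  ->  k = {k:.2e}")
```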
### Steady-state chemistry
Steady-state chemistry was treated by solving the chemical kinetic system on a background model atmosphere (a single static snapshot), neglecting advection. The chemistry was evolved long enough to reach a steady state in which processes were balanced for each grid cell. The formulation of the final system of equations is the same as that in Eq. (5). In this way, we were able to evaluate the time-dependent effects of advection when compared to the statically post-processed chemistry in steady state.
To solve the chemistry on a background CO\({}^{5}\)BOLD model snapshot, we present the code called graph chemical reaction network (GCRN)1. GCRN strictly handles a chemical kinetics problem and is able to evaluate the solution at arbitrary times, provided the chemical network, initial number densities, and temperature. The chemistry is solved isothermally in each cell. GCRN is able to read and write chemical network files in the format required by CO\({}^{5}\)BOLD as well as that of KROME (Grassi
et al. 2014). The code is written in Python and primarily relies on the numpy, scipy, and networkx libraries.
The numerical solver is the same as was used in the time-dependent case, namely DVODE with the BDF method. By default, the absolute tolerance was set to \(10^{-30}\) and the relative tolerance to \(10^{-4}\). The Jacobian was computed and evaluated within the DVODE solver itself, but GCRN supports a user-supplied Jacobian matrix. GCRN can also automatically compute an analytical Jacobian based on the equation system and pass this to the solver. Supplying a Jacobian to the solver can help improve stability, but it was not necessary in this work.
GCRN first represents the system of chemical reactions as a weighted, directed graph (see e.g. van der Schaft et al. (2015); Horn (1972)). The vertices of the graph are the left and right sides of the chemical reactions, hereafter complexes, while the edges represent the reactions themselves. The weights of the edges are the reaction rates, evaluated for the provided temperature and initial number densities. For a reaction network with \(c\) complexes and \(r\) reactions, its directed (multi)graph\({}^{2}\) \(G\) can be characterised by its \(c\times r\) incidence matrix \(\mathbf{D}\), which represents the connection between vertices and edges, that is, which edges connect which vertices. Each column of \(\mathbf{D}\) corresponds to an edge (a reaction) of \(G\). The (\(i\), \(j\))th element of \(\mathbf{D}\) represents the reaction \(j\) containing complex \(i\). It is \(+1\) if \(i\) is a product, and \(-1\) if \(i\) is a reactant. For \(s\) species, the \(s\times c\) complex composition matrix \(\mathbf{Z}\) describes the mapping from the space of complexes to that of species, that is, it describes which species make up which complexes. Multiplying \(\mathbf{Z}\) and \(\mathbf{D}\) yields the \(s\times r\) stoichiometric matrix \(\mathbf{S}=\mathbf{Z}\mathbf{D}\). Finally, to include the mass-action kinetics, we required a vector of reaction rates \(\boldsymbol{v}(\boldsymbol{x})\) as a function of the species vector \(\boldsymbol{x}\). In general, for a single reaction with a reactant complex \(C\) specified by its corresponding column \(\boldsymbol{z}_{C}=[z_{C,1}\ldots z_{C,s}]^{T}\) of \(\mathbf{Z}\), the mass-action kinetic rate with the rate coefficient \(k\) is given by
Footnote 2: allows for multiple edges between vertices
\[kx_{1}^{z_{C,1}}x_{2}^{z_{C,2}}\ldots x_{s}^{z_{C,s}}, \tag{9}\]
or more concisely,
\[k\exp(\boldsymbol{z}_{C}^{T}\mathbf{ln}(\boldsymbol{x})), \tag{10}\]
where \(\mathbf{ln}(\boldsymbol{x})\) is defined as an element-wise operation producing the vector \([\ln(x_{1})\ldots\ln(x_{s})]^{T}\). Similarly, the element-wise operation \(\exp(\boldsymbol{y})\) produces the vector \([\exp(y_{1})\ldots\exp(y_{s})]^{T}\). With this, the mass-action reaction rates of the total network are given by
\[v_{j}(\boldsymbol{x})=k_{j}\exp\left(\boldsymbol{z}_{j}^{T}\mathbf{ln}( \boldsymbol{x})\right),j=1,\ldots,r. \tag{11}\]
This can be written compactly in matrix form. We defined the \(r\times c\) matrix \(\mathbf{K}\) as the matrix whose \((j,\sigma)\)th element is the rate coefficient \(k_{j}\) if the \(\sigma\)th complex is the reactant complex of the \(j\)th reaction, and zero otherwise. Then,
\[\boldsymbol{v}(\boldsymbol{x})=\mathbf{K}\exp\left(\mathbf{Z}^{T}\mathbf{ln}(\boldsymbol{x})\right), \tag{12}\]
and the mass-action reaction kinetic system can be written as
\[\dot{\boldsymbol{x}}=\mathbf{Z}\,\mathbf{D}\,\mathbf{K}\exp\left(\mathbf{Z}^ {T}\mathbf{ln}\boldsymbol{x}\right). \tag{13}\]
The formulation is equivalent to that in Eq. (5), with the stoichiometric matrix \(\mathbf{S}=\mathbf{Z}\,\mathbf{D}\) supplying the stoichiometric coefficients and the rate vector \(\boldsymbol{v}(\boldsymbol{x})=\mathbf{K}\exp\left(\mathbf{Z}^{T}\mathbf{ln}(\boldsymbol{x})\right)\) supplying the mass-action kinetic rates. A detailed explanation of the graph-theoretical formulation and further analyses can be found in van der Schaft et al. (2016).
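To make the matrix formulation concrete, the following minimal numpy sketch builds \(\mathbf{Z}\), \(\mathbf{D}\), and \(\mathbf{K}\) for an illustrative reversible pair C + O \(\rightleftharpoons\) CO (an assumed toy system, not a literal entry of our network) and evaluates Eq. (13):

```python
import numpy as np

species   = ["C", "O", "CO"]           # s = 3 species
complexes = ["C + O", "CO"]            # c = 2 complexes
k1, k2 = 1.0e-12, 1.0e-20              # illustrative rate coefficients

# Complex composition matrix Z (s x c): which species build which complex
Z = np.array([[1, 0],
              [1, 0],
              [0, 1]], dtype=float)

# Incidence matrix D (c x r): -1 for the reactant complex, +1 for the product
D = np.array([[-1, +1],
              [+1, -1]], dtype=float)

# K (r x c): k_j in the column of the reactant complex of reaction j
K = np.array([[k1, 0.0],
              [0.0, k2]])

def x_dot(x):
    """Eq. (13): x_dot = Z D K exp(Z^T ln(x))."""
    v = K @ np.exp(Z.T @ np.log(x))    # mass-action rates, Eq. (12)
    return Z @ (D @ v)

x = np.array([1e10, 1e10, 1e5])        # number densities of [C, O, CO]
print(x_dot(x))                        # net formation of CO, depletion of C and O
```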
A graph theoretical approach allows us to investigate certain behaviours across chemical pathways, such as the timescales of processes and the importance of certain species. These graph representations are only created and accessed upon request and are not used when solving the kinetic system. The graph representations then allow for the analysis of the network in more depth before and after solving the system. In solving the actual kinetic system, only the rates vector \(\boldsymbol{v}(\boldsymbol{x})\) changes based on the change in number densities. We refer to Sec. 4.4 for an in-depth analysis of the representation of the CRN as a graph.
A drawback of the Python version of GCRN is its low efficiency compared to compiled languages. Although we have implemented a few optimisations, computing the chemical evolution for many snapshots in 3D is still computationally challenging. We therefore used the Julia library Catalyst.jl3 for steady-state calculations across many 3D snapshots. GCRN was used primarily to evaluate 2D slices, 1D averages, and timescales, while large 3D steady-state calculations were performed in Julia. The results are identical between the two.
Footnote 3: [https://catalyst.sciml.ai/dev/](https://catalyst.sciml.ai/dev/)
## 3 Results
### Time-dependent versus steady-state chemistry
We investigated the results of time-dependent (TD) chemistry compared to chemical-equilibrium (CE) chemistry in 3D. For all models, chemical equilibrium generally holds below \(\log\tau=1\). Fig. 2 shows the absolute number densities and mixing ratios of species across the photosphere for both the time-dependent and steady-state chemistry. The mixing ratio is defined as
\[r=\frac{n_{i}}{n_{\text{total}}}, \tag{14}\]
where \(n_{i}\) is the number density of species \(i\), and \(n_{\text{total}}\) is the number density of all species excluding H, H\({}_{2}\), and M. In this way, the mixing ratio describes the relative abundances of important atomic and molecular species in a given volume. H, H\({}_{2}\), and M are much more abundant than other species, and including these species simply scales the relevant quantities down. We characterise deviations by considering the ratio of TD to CE mixing ratios, which is equivalent to considering the ratio of TD to CE number densities.
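A minimal sketch of Eq. (14), with illustrative (assumed) number densities:

```python
# Mixing ratios per Eq. (14): H, H2, and M are excluded from the total
n = {"H": 1e17, "H2": 1e12, "M": 1e13,          # excluded background species
     "C": 1e8, "O": 2e8, "CO": 5e7, "CH": 1e4, "OH": 3e5}

excluded = {"H", "H2", "M"}
n_total = sum(v for s, v in n.items() if s not in excluded)
mixing = {s: v / n_total for s, v in n.items() if s not in excluded}
print(f"r(CO) = {mixing['CO']:.3f}")
```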
Molecular chemistry is clearly in equilibrium in the deeper photospheric layers, generally below \(\log\tau=1\). This is expected because the high temperatures in this collision-dominated regime result in very short timescales (much shorter than characteristic hydrodynamical timescales). In essence, the assumption of chemical equilibrium holds in these regimes. Significant deviations are not present in the AM1 model, but appear above \(\log\tau\approx-2\) in model AM2 and above \(\log\tau\approx-1\) in model AM3. In all cases in which the deviations are non-zero, the time-dependent chemistry is affected by hydrodynamics such that there is insufficient time to reach a local chemical equilibrium.
As expected, a decreasing metallicity decreases the number of molecular species that can be formed. The deviations from equilibrium molecular number densities increase with decreasing metallicity because the chemical timescales are longer. The
largest deviations are seen in C\({}_{2}\) and CN in model AM3, where they reach up to 0.15 dex at \(\log\tau=-4\). The deviations for the other molecules similarly increase with increasing height. These positive deviations are balanced by (smaller) negative deviations in CO. Essentially, there is insufficient time to form the equilibrium yield of CO in these thermodynamic conditions, and the yield of species that would react to form CO is therefore higher.
Differences that are often present around local features such as shocks can be lost in the global picture (averaging over space and time). Even though the chemistry is mostly in equilibrium throughout the atmosphere, investigating cases in which it is out of equilibrium can lead to an understanding of the hydrodynamical effects as well as to insights into where approximations of chemical equilibrium break down. Figs. 3 and 4 show the time-dependent mixing ratios in a horizontal and vertical slice through the AM3 model atmosphere, respectively.
Fig. 3 shows deviations from CE in CN in and around cool features. Mass-action rates depend on temperature, and therefore, cooler cells lead to longer chemical timescales. The instantaneous CE therefore predicts faster dissociation than is possible within the time-dependent scheme. In higher layers, the same reasoning applies, leading to positive deviations in CN, CH, and C\({}_{2}\), offset by negative deviations in CO.
The vertical slice in Fig. 4 shows the evolution of chemistry in various layers and highlights a shock in the upper photosphere. Deviations from CE are seen in all species in higher layers, with the shock being the most prominent example. In CE, all molecular species are immediately dissociated, while the time-dependent treatment shows that even in these higher-temperature regions, CO is not so quickly depleted. While it may seem counter-intuitive that CO then shows a small negative deviation from CE, the mean amount of CO in the time-dependent case is smaller than that in CE. This is reflected in the positive deviations from CE seen in CH and CN, which, due to mass conservation, are offset by the negative deviation in CO. Additionally, the reverse trend also holds, in that the formation of CO after a shock passes is slower than predicted in CE.
### Carbon enhancement
For the models presented thus far, oxygen has been more abundant than carbon. CO, being extremely stable, often dominates the molecular species when it can form. It is possible, however, that this preference towards CO formation is influenced by the enhancement of oxygen relative to carbon present in the atmosphere. We investigated two cases of carbon enhancement in a model atmosphere with metallicity [Fe/H] \(=-3.0\). The first increased both C and O by 2.0 dex (AC1), while the second only increased C by 2.0 dex (AC2). Nitrogen was also increased by 2.0 dex. The increase for all elements included the 0.4 dex enhancement for alpha elements.
Fig. 5 shows the mixing ratios and deviations from equilibrium for the two CEMP model atmospheres presented in this work. In model AC1 (\(\log\mathrm{C/O}=-0.27\)), more CO and CH is formed than in the standard metal-poor case, but OH is still more abundant than CH. Almost all C is locked up into CO, hence the next most-abundant molecular species is OH. This is analogous to models AM2 and AM3 because O is still more abundant than C. Carbon-bearing molecules are more abundant than in AM3, but the mixing ratios of CH to OH, for example, clearly show that the carbon enhancement does not necessarily lead to a large increase in all carbon-bearing molecular abundances. In model AC2 (\(\log\mathrm{C/O}=+1.33\)), CO is still the most abundant species, while CH is more abundant than OH. We observe the opposite effect compared to models AM2, AM3, and AC1: here, O is instead locked up into CO. This results in a significant depletion of OH compared to model AM3 because, with C overabundant, relatively little O is left to form OH. The depletion of O hinders the formation of further CO, and the chemical equilibrium is such that atomic C is the most abundant species. All models hence reinforce the notion that CO is the most stable molecular state in the chemical network.
Figure 2: Mixing ratios and deviations from chemical equilibrium for the AM1, AM2, and AM3 models. In the first and second panels, solid lines show the time-dependent quantities, and the hollow points show the equilibrium quantities. **Left.** [Fe/H] \(=0.0\). **Centre.** [Fe/H] \(=-2.0\). **Right.** [Fe/H] \(=-3.0\).
Figure 3: Mixing ratios of molecular species in a horizontal slice through the photosphere in the AM3 model. **Left.** Time dependent. **Right.** Equilibrium. Molecular formation follows a reversed granulation pattern. The effect of finite chemical timescales is most prominent when contrasting warm and cool regions in CN and CH; CO is seen to be relatively close to CE, as confirmed by Fig. 2c at \(\log\tau=-4\). The white contour traces a temperature of 4500 K.
Figure 4: Time-dependent mixing ratios of molecular species in a vertical slice through the photosphere above \(\log\tau=1\) in the AM3 model atmosphere. The white contour traces a temperature of 4500 K. The colour scale is the same for all molecular species.
Oxygen-bearing species seem to be further out of equilibrium in model AC1, while carbon-bearing species are further out of equilibrium in model AC2. Interestingly, deviations from equilibrium decrease in model AC2, in which \(\log\mathrm{C/O}=+1.33\) means that carbon is more abundant than oxygen. While this favours the formation of carbon-bearing species such as C\({}_{2}\) and CH, the formation of CO is hindered compared to model AC1 by the lack of OH formation, reinforcing the idea that the pathway for CO formation involving OH is important. The significantly smaller deviations in model AC2 suggest that oxygen-bearing molecules might show larger deviations from chemical equilibrium due to hydrodynamical effects. All in all, CEMP atmospheres do not seem to be largely out of chemical equilibrium for the species presented in this work.
## 4 Discussion
### Effects of convection
As material is transported from hotter, deeper photospheric layers to cooler, higher layers, the conditions for chemistry to equilibrate change. It is feasible, then, that material from a lower layer can be carried upwards, reach a new equilibrium state, and later return to a deeper layer. In this process, molecular species will be present in greater numbers in cooler regions than in hotter regions. If chemistry does not equilibrate faster than advection occurs, we observe deviations from chemical equilibrium throughout convection cells. This effect is seen in Fig. 3 for CN and CH, where features are traced much more sharply in the equilibrium case than in the time-dependent one. The finite chemical timescales are responsible for the differences in formation in cool regions, and dissociation in hot ones. In this layer, the chemical equilibrium approximation still holds well for CO.
### Behaviour around shocks
While the overall differences in time-dependent and steady-state chemistry are small when averaged over time and space (horizontally), there can be significant differences in individual instances in time. In addition to the shock seen in Fig. 4, Fig. 6 shows the deviations from equilibrium molecular chemistry in the photospheres of the AM3, AC1, and AC2 models. This histogram shows deviations from CE binned in gas density and temperature across all 20 snapshots. The top panel gives the bin counts, showing the difference between background material (high density of points) and transient states (low density of points).
Although the background material is generally in equilibrium, three interesting regimes emerge where the molecular chemistry is clearly out of equilibrium, labelled R\({}_{1}\), R\({}_{2}\), and R\({}_{3}\). R\({}_{1}\) is the regime of convection in the upper photosphere and chromosphere, where hot material is advected upwards to a new layer faster than the molecular chemistry can reach equilibrium. When this material cools and falls, it can sometimes reach very high velocities (around 10 km s\({}^{-1}\)) exceeding the local sound speed. This supersonic material of the shock front is captured in the regime R\({}_{2}\). Equilibrium chemistry predicts an almost instantaneous dissociation of molecular species, while the time-dependent case models this over a finite timescale. An excess of molecular species is therefore present in the time-dependent case. Finally, the regime R\({}_{3}\) is the wake of the shock, where material has cooled and is subsonic. The slower chemical timescales in this regime lead to a depletion of molecular species in the time-dependent case. CO is an outlier here; it is still present in slight excess in R\({}_{3}\) as it does not dissociate as quickly as the other molecular species in the shock.
Models AC1 and AC2 show opposite trends in regime R\({}_{1}\) when considering CH, CN, C\({}_{2}\), and OH. In model AC1 (\(\log{\rm C/O}=-0.27\)), the carbon-bearing molecules are more abundant in the time-dependent case, and OH is depleted. Model AC2 (\(\log{\rm C/O}=+1.33\)) instead has fewer carbon-bearing molecules, and OH is more abundant. This is due to the relative abundances of C and O. The chemical timescales depend on the abundances of C and O, so that the oxygen-rich atmosphere AC1 has slower dissociation rates for carbon-bearing molecules but a higher yield because the formation rates for OH are faster (and vice versa for the carbon-rich atmosphere AC2). Since CO is a stable end product of most reaction pathways, it is not as strongly affected by this phenomenon.
Overall, the differences between the time-dependent and steady-state treatments in the photosphere are small, meaning that the chemistry in convection cells is likely not far from its equilibrium state. This is especially evident when averaging over space and time. However, it is possible that the effects would become stronger in stars on the red giant branch (RGB stars) due to larger-scale flows, and in M-type dwarfs due to cooler temperatures, although the latter have smaller velocity fields, meaning that the effects of advection on the evolution of chemical species are reduced. Wedemeyer-Bohm et al. (2005) showed that the need for time-dependent chemistry becomes increasingly important in the solar chromosphere due to the higher frequency of shock waves alongside longer chemical timescales, but that the photosphere of the Sun was generally in chemical equilibrium for CO. We find the same trend when considering metal-poor dwarf stars: chemical equilibrium generally holds for the photospheres of these stars when we average over space and time, and deviations are largely present in their chromospheres. This further shows the need to include accurate time-dependent molecular chemistry when modelling stellar chromospheres.
### 1D analysis
A 1D horizontal cut through the atmosphere shows the instantaneous variations in the parameters and can help identify patterns. Due to mass-action kinetics, the chemical timescales depend on the gas density and temperature. Fig. 7 shows profiles of these quantities alongside the time-dependent and equilibrium number densities of CO across a prototypical downflow feature in the chromosphere of model AM3.
The equilibrium CO number density changes much more sharply across the feature than in the time-dependent case. This reflects the finite chemical timescales. The number densities are also more sensitive to fluctuations in temperature, as seen towards the end of the cut, where the gas density changes but the temperature is constant. The equilibrium number densities show sharp discontinuities due to the vastly different chemical timescales around the shock front. While these sharp features are unphysical, the average number densities are very similar (as shown in Fig. 2), showing that the shock here is not disruptive enough for CO chemistry to have a profound impact overall.
### Timescales and pathways
It is perceivable that a metallicity reduction by 2.0 dex leads to timescales that are slower by a factor of \(\sim 100\) due to the mass-action law. Additionally, relative abundances have a strong effect, and the overall yield is lower at lower metallicities. Fig. 8 shows the evolution and equilibrium times for three metallicities, and Fig. 9 shows this for the two CEMP models. The equilibrium times for each model and species of interest are given in Table 3. Because of the time-stepping of the solver, these times are not necessarily exact, but they provide a clear picture of how the species interact. The equilibrium time is defined as the point at which the relative difference in the number densities falls below a threshold \(\epsilon\), that is, \(t_{\text{eqm}}\) is reached when \(\frac{|n_{i+1}-n_{i}|}{n_{i}}\leq\epsilon\). We adopted \(\epsilon=10^{-6}\) for this network. Again, because this definition relies on the solver time-stepping to find \(n_{i}\), \(n_{i+1}\), the times are only as precise as the points at which the solution is evaluated.
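A minimal sketch of this criterion, applied to the output of any of the solvers above (the function name and arguments are our own illustration):

```python
import numpy as np

def time_to_equilibrium(t, n, eps=1e-6):
    """First solver time at which |n_{i+1} - n_i| / n_i <= eps.

    t, n : arrays of solver output times and (positive) number densities
           of one species.
    """
    rel = np.abs(np.diff(n)) / n[:-1]
    idx = np.argmax(rel <= eps)        # first index meeting the criterion
    if rel[idx] > eps:                 # criterion never met
        return np.inf
    return t[idx + 1]                  # exact only up to the solver time-stepping
```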
The time for each species to reach equilibrium increases with decreasing metallicity. This is a direct consequence of the mass-action kinetics we used to determine reaction rates. The carbon-enhanced models show faster timescales for the same reason.
Another interesting investigation involves the pathways in which molecular species are formed (and dissociated), and how these change throughout the atmosphere. To pursue this, we represent the reaction network as a weighted directed graph, as shown in Sec. 2.5. The nodes are the left and right sides of the reactions (hereafter complexes), and the edges represent the reactions themselves, weighted by their corresponding inverse rate. As this graph is often disconnected, and because we are interested in the species, we added nodes for each individual chemical species. To make the graph connected, the individual species nodes have an unweighted edge to each complex that contains them. In this way, we can represent the evolution of one species into another by means of the reaction pathways.
We used pathfinding algorithms to move from a source species to a target species, identifying the chemical pathway and its corresponding timescale (simply the sum of the edge weights). These change not only with the temperature, but also with the number densities of the reactants, meaning that the most frequented pathways for a given source-target pair can change during the chemical evolution.
Figure 5: Mixing ratios and deviations from chemical equilibrium at [Fe/H] \(=-3.0\) and a carbon enhancement of +2.0 dex for two atmospheres with different C/O ratios. In the first and second panels, solid lines show the time-dependent quantities, and the hollow points show the equilibrium quantities. **Left.** Model AC1, \(\log\mathrm{C/O}=-0.27\). **Right.** Model AC2, \(\log\mathrm{C/O}=+1.33\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(t_{\text{eqm}}(\text{C}_{2})\) & \(t_{\text{eqm}}(\text{CH})\) & \(t_{\text{eqm}}(\text{CN})\) & \(t_{\text{eqm}}(\text{CO})\) & \(t_{\text{eqm}}(\text{OH})\) \\ & [s] & [s] & [s] & [s] & [s] \\ \hline AM1 & \(4.5\times 10^{2}\) & \(1.0\times 10^{3}\) & \(2.4\times 10^{3}\) & \(1.7\times 10^{2}\) & \(1.7\times 10^{2}\) \\ AM2 & \(5.7\times 10^{3}\) & \(5.1\times 10^{3}\) & \(6.3\times 10^{4}\) & \(3.9\times 10^{3}\) & \(3.2\times 10^{3}\) \\ AM3 & \(4.9\times 10^{4}\) & \(2.4\times 10^{4}\) & \(2.4\times 10^{5}\) & \(4.0\times 10^{4}\) & \(1.3\times 10^{4}\) \\ \hline AC1 & \(9.0\times 10^{3}\) & \(7.7\times 10^{3}\) & \(1.6\times 10^{4}\) & \(1.6\times 10^{3}\) & \(1.6\times 10^{3}\) \\ AC2 & \(2.2\times 10^{3}\) & \(1.2\times 10^{3}\) & \(2.4\times 10^{3}\) & \(1.8\times 10^{3}\) & \(2.4\times 10^{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Time to equilibrium for all models and key molecular species at a temperature of 3500 K and a gas density of \(10^{-9}\) g cm\({}^{-3}\), corresponding to the upper photospheric convection cells. Due to the time-stepping of the solver, these times are not exact, but they provide a useful picture of how quickly various species set into equilibrium at varying chemical compositions.
Figure 6: Heat maps of binned quantities for models AM3, AC1, and AC2. Each quantity was binned using 20 snapshots of each 3D model. Deviations from equilibrium are seen in three distinct regions, labelled R\({}_{1}\) (convective cells in the upper photosphere), R\({}_{2}\) (shock fronts in the chromosphere), and R\({}_{3}\) (wake of the shock).
The custom pathfinding algorithm (based on Dijkstra's shortest-path algorithm (Dijkstra 1959) and taking inspiration from A* pathfinding (Foead et al. 2021)) is described in the following steps:
1. Start on a species source node.
2. If the current node is the target node, return the path.
3. Otherwise, find all nodes connected to the current node.
4. If the last travelled edge had a weight of zero, omit all edges with weights of zero from the next choice of edges.
5. Pick an edge at random and repeat from 2.
6. Pathfinding will converge once all possible paths from source to target have been explored.
Step 4 is necessary to prevent species-species jumps that are included as a side effect of adding chemical species to the graph. These unweighted edges are represented with a weight of 0, and traversing two of these consecutively is unphysical (e.g. moving from CO -> CO + H -> H) as it represents a species transforming into another (often completely unrelated) species without a reaction. However, these connections are still necessary to fully connect the graph; we removed the ability to travel along these connections consecutively, effectively altering the graph during pathfinding.
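A minimal sketch of such a constrained search (an assumed illustration built on networkx, not the GCRN implementation): it enumerates simple paths, forbids two consecutive zero-weight species-complex hops, and keeps the path with the smallest summed timescale.

```python
import networkx as nx

def best_pathway(G, source, target):
    """Cheapest source->target path; no two consecutive zero-weight edges."""
    best, best_cost = None, float("inf")

    def dfs(node, path, cost, last_w):
        nonlocal best, best_cost
        if node == target:
            if cost < best_cost:
                best, best_cost = list(path), cost
            return
        for nxt in G.successors(node):
            if nxt in path:                    # keep paths simple
                continue
            w = G[node][nxt]["weight"]
            if last_w == 0.0 and w == 0.0:     # step 4: forbid zero-zero jumps
                continue
            path.append(nxt)
            dfs(nxt, path, cost + w, w)
            path.pop()

    dfs(source, [source], 0.0, None)
    return best, best_cost

# Toy fragment of the graph: species <-> complex edges carry weight 0,
# the reaction edge carries the timescale of pathway 1 in Table 4.
G = nx.DiGraph()
G.add_edge("C", "C + OH", weight=0.0)
G.add_edge("C + OH", "CO + H", weight=7.43e-5)
G.add_edge("CO + H", "CO", weight=0.0)
print(best_pathway(G, "C", "CO"))
```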
In our network, we investigated key pathways from C and O to CO, as well as the reverse. In all cases, reducing the metallicity resulted in longer timescales for reactions. Additionally, a single reaction dominates in most pathways and is often referred to as the "rate-limiting step" (Tsai et al. 2017, 2018). Table 4 shows the main reactions involved in the formation and dissociation of CO for the AM3 atmosphere. We qualitatively reproduce the same effects as those explored in Wedemeyer-Bohm et al. (2005), and find that of the three reactions that dissociate CO, the species-exchange reaction CO + H \(\rightarrow\) C + OH is by far the most efficient, even in this extremely metal-poor atmosphere. Additionally, formation via species exchange (especially by OH) is the most preferable set of pathways.
We examined the preferred pathways in the network for OH for three abundance mixtures: AM3, AC1, and AC2. AM3 and AC1 are qualitatively similar, where radiative association of OH via H is a leading timescale. Species exchange with CH and CO
Figure 8: Chemical evolution for three models at differing metallicities at T = 3500 K, \(\rho=10^{-9}\) g cm\({}^{-3}\) (corresponding to the wake of a shock). The vertical dash-dotted lines show the time each species needs to settle into equilibrium. A reduction in metallicity leads to a corresponding increase in the time to equilibrium and a reduction in the overall yield.
Figure 7: Gas density, temperature, and the number density of CO molecules in a slice across the AM3 atmosphere. The left panels show the 2D heat maps of these quantities, and the right panels show a 1D cut across a prototypical downflow feature, depicted by the solid black line in the top panels. The bottom right panel shows the time-dependent number density as a solid black line and the equilibrium number density as red points.
is not as preferable. AC2 shows exactly the opposite trend, with species exchange routes being significantly better travelled than direct radiative association. Again, this is because in both AM3 and AC1, more free O is available after CO has been formed, while in AC2, very little free O is present and OH formation relies on carbonic species.
### Treatment of photochemistry
Our network does not include the effects of photodissociation of species because of the greatly increased complexity required to treat this process properly. In the collision-dominated layers, photochemistry is unlikely to be important, but the situation may be different in higher, optically thin layers, where radiation-driven processes become important. The importance of photochemistry is perhaps traced better by the prominence of radiative NLTE effects. The treatment of neutral C in the Sun (Amarsi et al. 2019) and O in the Sun (Steffen et al. 2015) shows that the abundances are affected by up to 0.1 dex in relevant line-forming regions. It is feasible that photochemistry is then an important consideration in higher layers, but the treatment of the photochemical reactions of all atomic and molecular species is a considerably difficult and time-consuming endeavour. We welcome any further advancements in this direction.
### Complexity reduction
Ideally, we would like to include as many species and reactions as possible in the network to model the chemistry as precisely as possible. Unfortunately, due to the large memory cost of storing 3D arrays as well as the steep scaling of the solution time with the size of the kinetic system, methods that reduce complexity are often required. In this work, we have presented a heavily reduced network that is focused on the formation and dissociation of a
\begin{table}
\begin{tabular}{l c c c c c} \hline Pathway & Step & Reactants & & Products & Timescale [s] \\ \hline C \(\rightarrow\) CO & & & & & \\ \hline
**Pathway 1** & 1. & C + OH & \(\rightarrow\) & CO + H & \(\mathbf{7.43\times 10^{-5}}\) \\ & & & & & Total: \(7.43\times 10^{-5}\) \\
**Pathway 2** & 1. & C + OH & \(\rightarrow\) & CH + O & \(\mathbf{4.10\times 10^{-3}}\) \\ & 2. & CH + O & \(\rightarrow\) & CO + H & \(4.27\times 10^{-4}\) \\ & & & & & Total: \(4.53\times 10^{-3}\) \\
**Pathway 3** & 1. & C + NO & \(\rightarrow\) & CN + O & \(\mathbf{1.44\times 10^{1}}\) \\ & 2. & CN + O & \(\rightarrow\) & CO + N & \(6.46\times 10^{-3}\) \\ & & & & & Total: \(1.44\times 10^{1}\) \\ \hline O \(\rightarrow\) CO & & & & & \\ \hline
**Pathway 1** & 1. & CH + O & \(\rightarrow\) & CO + H & \(\mathbf{4.27\times 10^{-4}}\) \\ & & & & & Total: \(4.27\times 10^{-4}\) \\
**Pathway 2** & 1. & CH + O & \(\rightarrow\) & C + OH & \(\mathbf{2.63\times 10^{-3}}\) \\ & 2. & C + OH & \(\rightarrow\) & CO + H & \(7.43\times 10^{-5}\) \\ & & & & & Total: \(2.70\times 10^{-3}\) \\
**Pathway 3** & 1. & O + C\({}_{2}\) & \(\rightarrow\) & C + CO & \(\mathbf{6.72\times 10^{9}}\) \\ & & & & & Total: \(6.72\times 10^{9}\) \\ \hline CO \(\rightarrow\) C & & & & & \\ \hline
**Pathway 1** & 1. & CO + H & \(\rightarrow\) & C + OH & \(\mathbf{6.26\times 10^{-5}}\) \\ & & & & & Total: \(6.26\times 10^{-5}\) \\
**Pathway 2** & 1. & CO + H & \(\rightarrow\) & C + O + H & \(\mathbf{5.27\times 10^{-1}}\) \\ & & & & & Total: \(5.27\times 10^{-1}\) \\
**Pathway 3** & 1. & CO + M & \(\rightarrow\) & C + O + M & \(\mathbf{5.24\times 10^{9}}\) \\ & & & & & Total: \(5.24\times 10^{9}\) \\ \hline CO \(\rightarrow\) O & & & & & \\ \hline
**Pathway 1** & 1. & CO + H & \(\rightarrow\) & C + OH & \(6.26\times 10^{-5}\) \\ & 2. & C + OH & \(\rightarrow\) & CH + O & \(\mathbf{4.10\times 10^{-3}}\) \\ & & & & & Total: \(4.16\times 10^{-3}\) \\
**Pathway 2** & 1. & CO + H & \(\rightarrow\) & C + O + H & \(\mathbf{5.27\times 10^{-1}}\) \\ & & & & & Total: \(5.27\times 10^{-1}\) \\
**Pathway 3** & & & & & \\ \hline \end{tabular}
\end{table}
Table 4: Step-by-step reactions and rate-limiting steps for the AM3 model atmosphere at a temperature of 3500 [K] and a gas density of \(10^{-9}\) [g cm\({}^{-3}\)]. The rate-limiting step (longest step in a pathway) is highlighted in bold.
Figure 9: Chemical evolution for two models with [Fe/H] = \(-3.0\), but differing C/O ratios at T = 3500 K, \(\rho=10^{-9}\) g cm\({}^{-3}\) (corresponding to chromospheric background layers). The vertical dash-dotted lines show the time a species takes to settle into equilibrium. In the oxygen-dominated atmosphere, CH is depleted compared to OH, while the opposite is true in the carbon-dominated atmosphere.
few key molecular species. However, the existence and addition of other species into the network can alter evolution, pathways, and timescales. It is often the case that only a small subset of reactions controls the vast majority of the evolution. Identifying these reactions can prove challenging, but a few methods exist to reduce the complexity of the kinetics problem (Grassi et al. 2012; Pope 1997). In our case, the network was already heavily reduced to the key reactions, and the chemical pathways investigated by Wedemeyer-Bohm et al. (2005) in part verify this reduction. In the future, we aim to investigate chemical pathways found via a graph theoretical analysis to reduce the number of reactions and species to only those necessary to model significant trends in the regions of interest.
### Potential error from LTE assumptions
In the analysis presented thus far, several assumptions have been made that can introduce small errors into the process. It has been demonstrated that the assumption of chemical equilibrium generally holds well in the photospheres of the stars considered in this work. However, deviations increase in higher layers, and several assumptions made about these higher layers should be addressed. Since chemical feedback is neglected, chemical reactions do not affect the temperature of the model atmosphere directly, despite being exo- or endothermic. Furthermore, the equation of state does not take into account the departures from chemical equilibrium. Finally, radiative NLTE corrections may be significant in the higher layers of the atmospheres we consider.
Firstly, the absence of heating and cooling by chemical reactions contributes only a small error. Here we consider the formation of C and O into CO because CO has the highest bond energy of any species presented here and will naturally dominate the effects of chemical feedback due to this property and its abundance. As an extreme case, we considered the conversion of all C into CO in the AM1 atmosphere. This atmosphere is the most metal-rich and is oxygen rich, so that this conversion provides an upper limit to the expected change in heat and temperature. The specific heat per unit mass is nearly constant in the upper layers of this atmosphere, with \(c_{\rm V}\approx 10^{8}\) erg g\({}^{-1}\) K\({}^{-1}\). The formation of CO releases \(e_{\rm diss}:=11.16\) eV (\(1.79\times 10^{-11}\) erg) of energy per reaction, and the energy released per mole is found by multiplying by Avogadro's number. The number fraction of C (where the total amount of species is normalised to unity) is \(A_{\rm C}\approx 2.4\times 10^{-4}\); dividing by the mean molecular weight \(\mu:=1.26\) of the neutral stellar atmosphere converts this to moles of C per gram. We therefore find the change in temperature
\[\Delta T=\frac{e_{\rm diss}\ N_{\rm A}\ A_{\rm C}}{\mu\ c_{\rm V}}\approx 20 \ {\rm K}, \tag{15}\]
where \(N_{\rm A}\) is Avogadro's number. In the lower-metallicity models presented here, the effect would be significantly smaller. The lack of chemical feedback therefore adds a negligible error to the total energy and temperature.
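For transparency, Eq. (15) can be evaluated directly with the values quoted above (a plain numerical restatement, not part of the simulation code):

```python
e_diss = 1.79e-11   # erg released per CO formation (11.16 eV)
N_A = 6.022e23      # Avogadro's number [mol^-1]
A_C = 2.4e-4        # number fraction of carbon
mu = 1.26           # mean molecular weight
c_V = 1.0e8         # specific heat per unit mass [erg g^-1 K^-1]

dT = e_diss * N_A * A_C / (mu * c_V)
print(f"Delta T = {dT:.1f} K")   # ~ 20 K
```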
Secondly, the effects of deviations from equilibrium number densities could have a small effect on the overall structure, but the deviations shown here are relatively small and are generally confined to trace species. The mean molecular weight is dominated by H and He (metals contribute only \(\sim 1\%\)), the deviations of which are completely negligible. Any deviations on the scale seen here would therefore not have significant effects on the overall structure of the model atmosphere.
Thirdly, NLTE radiative transfer may be important when considering chemical kinetics in highly optically thin layers. Popa et al. (2022) showed the importance of this feature for the CH molecule in metal-poor red giant atmospheres, which suggests an increase in the measured C abundance compared to LTE techniques (increasing with decreasing metallicity). We find a small increase in CH number density compared to equilibrium values, which would offset the large increase in the LTE carbon abundance required to reproduce the NLTE spectra. Additionally, when the collisional dissociation rates of CH are compared (e.g. Reaction 1), the rates are comparable to or higher than the photodissociation rates across the optically thin atmosphere. We estimated the photodissociation rate of CH with the help of the continuous opacity data provided by Kurucz et al. (1987). For the population numbers, LTE was assumed, and the radiation field of the 1D atmosphere was calculated with opacity distribution functions. The obtained rate suggests overall that photodissociation does not completely dominate the process. It is therefore interesting to consider the interplay between these processes, and we look forward to additional efforts in this field.
Finally, it should be stated that the models presented in this work do not include a comprehensive treatment of the stellar chromosphere. Deviations from chemical equilibrium are known to be important in stellar chromospheres, so further work in this area is necessary and welcome.
## 5 Conclusion
We have presented a study of 3D time-dependent molecular formation and dissociation in one solar metallicity and four metal-poor atmospheres. The chemistry was modelled through mass-action kinetics with 76 reactions and 14 species that are advected by the velocity field during the hydrodynamics step. We additionally presented a comparison to the equilibrium abundances, computed with Python or Julia chemical kinetics codes. Deviations from equilibrium are seen primarily in higher photospheric layers, around shocks, and in the temperature differences throughout convection cells.
* In all models presented in this work, molecular species are generally in chemical equilibrium throughout the model photospheres. Molecular species show mean deviations from equilibrium reaching 0.15 in the lower chromosphere, and these deviations increase with decreasing metallicity and increasing height. The largest deviations are in CN, C\({}_{2}\), and CH when C/O \(<1\), and in OH when C/O \(>1\). Above log \(\tau\approx-2\), the less abundant element of C or O becomes locked into CO, inhibiting the formation of other molecular species involving that element. This results in comparatively low amounts of CH, CN, and C\({}_{2}\) in all models except AC2, and comparatively low amounts of OH in model AC2.
* The deviations from equilibrium can also be attributed to the behaviour around chromospheric shock waves. In the equilibrium case, the hot shock front contains very low number densities of molecular species, while the time-dependent treatment has greater number densities as the evolution proceeds with a finite timescale. In the uppermost coolest layers (T \(\lesssim\)\(3500\) [K]), slow chemical timescales result in a depletion of CO as there is insufficient time to form it before material is advected to a significantly different thermodynamic state.
* These deviations are unlikely to contribute significantly to spectroscopic measurements for metal-poor dwarfs because the line cores of key molecular species are generally formed in deeper layers (Gallagher et al. 2017). The largest deviations are mostly outside of the range of the contribution functions for the CH G-band and OH-band, but these deviations could still affect spectral line shapes, which can only
be properly reproduced in 3D models. The perceived trend of increased carbon enhancement with decreasing stellar metallicity is therefore not due to an improper treatment of time-dependent chemistry. An investigation including spectrum synthesis using the time-dependent number densities is however warranted in light of these deviations.
* Relative deviations increase with decreasing metallicity due to slower mass-action reaction rates. The change in metallicity does not lead to a strictly linear increase in chemical timescale or decrease in yield in all layers, but generally, lower metallicities result in longer chemical timescales and lower yields.
* The C/O ratio plays a key role in determining which molecular species are further out of equilibrium. Both CH and OH are formed along reaction pathways to form CO. In the majority of atmospheres we presented, oxygen is present in excess compared to carbon, making OH formation more viable than CH. This leads to faster chemical timescales for reaction pathways involving OH. Changing this ratio so that carbon is in excess likewise changes the pathways to make the formation of carbon-bearing species preferential.
* The lack of chemical feedback contributes a negligible error to the evolution of chemical species, energy, and momentum of the system because these metals are trace species. NLTE radiative transfer has been shown to be of importance in higher layers (Popa et al., 2022), and our deviations point in the opposite direction to those seen through radiative NLTE calculations. Some kinetic rates are as fast or faster than our calculated photodissociation rates across the optically thin atmosphere, suggesting that both radiative NLTE transfer and time-dependent kinetics are important to consider in these layers.
In conclusion, we find molecular species to generally be in a state of chemical equilibrium under photospheric conditions for the models presented in this work. The effect of altering the C/O ratio is directly seen in the final yields of molecular species such as CH and OH. While relative deviations increase with decreasing metallicity because the mass-action kinetic rates are slower, these effects are not large enough to contribute significantly to spectroscopic abundance measurements because the line cores of interest for these stars are generally formed in deeper regions in which chemical equilibrium is well established. The deviations increase with height, and it is likely that there is interesting interplay between radiative and kinetic departures from LTE in the upper photospheres and chromospheres of metal-poor stars.
###### Acknowledgements.
S.A.D. and H.G.L. acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 ("The Milky Way System", subproject A04).
|
2302.02673 | The semiclassical limit of a quantum Zeno dynamics | Motivated by a quantum Zeno dynamics in a cavity quantum electrodynamics
setting, we study the asymptotics of a family of symbols corresponding to a
truncated momentum operator, in the semiclassical limit of vanishing Planck
constant $\hbar\to0$ and large quantum number $N\to\infty$, with $\hbar N$ kept
fixed. In a suitable topology, the limit is the discontinuous symbol
$p\chi_D(x,p)$ where $\chi_D$ is the characteristic function of the classically
permitted region $D$ in phase space. A refined analysis shows that the symbol
is asymptotically close to the function $p\chi_D^{(N)}(x,p)$, where
$\chi_D^{(N)}$ is a smooth version of $\chi_D$ related to the integrated Airy
function. We also discuss the limit from a dynamical point of view. | Fabio Deelan Cunden, Paolo Facchi, Marilena Ligabò | 2023-02-06T10:24:15Z | http://arxiv.org/abs/2302.02673v3 | # The semiclassical limit of a quantum Zeno dynamics
###### Abstract.
Motivated by a quantum Zeno dynamics in a cavity quantum electrodynamics setting, we study the asymptotics of a family of symbols corresponding to a truncated momentum operator, in the semiclassical limit of vanishing Planck constant \(\hbar\to 0\) and large quantum number \(N\to\infty\), with \(\hbar N\) kept fixed. In a suitable topology, the limit is the discontinuous symbol \(p\chi_{D}\left(x,\,p\right)\) where \(\chi_{D}\) is the characteristic function of the classically permitted region \(D\) in phase space. A refined analysis shows that the symbol is asymptotically close to the function \(p\chi_{D}^{\left(N\right)}\left(x,\,p\right)\), where \(\chi_{D}^{\left(N\right)}\) is a smooth version of \(\chi_{D}\) related to the integrated Airy function. We also discuss the limit from a dynamical point of view.
## 1. Introduction
In the _quantum Zeno effect_, frequent projective measurements can slow down the evolution of a quantum system and eventually hinder any transition to states different from the initial one. The situation is much richer when the measurement does not confine the system in a single state, but rather in a multidimensional subspace of its Hilbert space. This gives rise to a _quantum Zeno dynamics_ (QZD): the system evolves in the projected subspace under the action of its projected Hamiltonian. This phenomenon, first considered by Beskow and Nilsson [1] in their study of the decay of unstable systems, was dubbed quantum Zeno effect (QZE) by Misra and Sudarshan [2] who suggested a parallelism with the paradox of the 'flying arrow at rest' by the philosopher Zeno of Elea. Since then, QZE has received constant attention by physicists and mathematicians, who explored different aspects of the phenomenon.
From the mathematical point of view, QZD is related to the limit of a product formula obtained by intertwining the dynamical time evolution group with the orthogonal projection associated with the measurements performed on the system. It can be viewed as a generalization of Trotter-Kato product formulas [3, 4, 5, 6] to more singular objects in which one semigroup is replaced by a projection. The structure of the QZD product formula has been thoroughly investigated and has been well characterized under quite general assumptions [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
QZE has been observed experimentally in a variety of systems, in experiments involving photons, nuclear spins, ions, optical pumping, photons in a cavity, ultracold atoms, and Bose-Einstein condensates, see [18] and references therein. In all the abovementioned implementations, the quantum system is forced to remain in its initial state through a measurement associated with a one-dimensional projection. The present study is inspired by
a proposal by Raimond _et al._[19, 20] for generating a multidimensional QZD in a cavity quantum electrodynamics experiment. We briefly describe the proposal, skipping most of the non-mathematical details.
The mode of the quantized electromagnetic field in a cavity can be conveniently described in the Fock space representation. The Hamiltonian of the quantized field is that of a harmonic oscillator (with angular frequency \(\omega=1\))
\[H_{\mathrm{h.o.}}=\frac{1}{2}\left(-\hbar^{2}\frac{d^{2}}{dx^{2}}+\hat{x}^{2} \right), \tag{1.1}\]
where \(\hat{x}\) is the position operator and \(\hat{p}=-i\hbar\frac{d}{dx}\) is the momentum operator on \(L^{2}(\mathbb{R})\). The operators \(\hat{x}\), \(\hat{p}\), and \(H_{\mathrm{h.o.}}\) are essentially self-adjoint on the common core \(\mathcal{S}(\mathbb{R})\), the Schwartz space of rapidly decreasing functions. The eigenfunction \(\psi_{n}\) of \(H_{\mathrm{h.o.}}\) represents a cavity state with \(n\) photons (\(n=0,1,2,\dots\)) and energy \(\lambda_{n}=\hbar(n+1/2)\).
The cavity field undergoes a stroboscopic evolution alternating a short continuous time evolution \(e^{-\frac{it}{\hbar}\hat{p}}\) given by a _displacement operator_, that without loss of generality is taken to be generated by \(\hat{p}\), and an instantaneous interaction
\[P_{<N}=\sum_{k=0}^{N-1}|\psi_{k}\rangle\langle\psi_{k}|=\chi_{(-\infty,\hbar N )}(H_{\mathrm{h.o.}}), \tag{1.2}\]
with atoms injected into the cavity to ascertain whether there are fewer than \(N\) photons in the cavity (\(N\geq 1\) is a chosen maximal photon number).
The quantum Zeno dynamics consists in performing a series of \(P_{<N}\)-measurements in a fixed time interval \([0,t]\) at times \(t_{j}=j\tau\), \(j=0,\dots,n\), with period \(\tau=t/n\). The intertwining of the continuous time evolutions and the projective measurements corresponds to the evolution operator
\[V_{n}(t)=\left(P_{<N}e^{-\frac{i\tau}{\hbar}\hat{p}}P_{<N}\right)^{n}.\]
Observe that since \(\mathrm{Ran}\,P_{<N}\subset D(\hat{p})=H^{1}(\mathbb{R})\), we have [17]
\[\lim_{n\to\infty}V_{n}(t)=P_{<N}e^{-itH_{N}/\hbar},\]
in the strong operator topology, uniformly for \(t\) in compact subsets of \(\mathbb{R}\), where the _Zeno Hamiltonian_\(H_{N}\) is a rank-\(N\) truncation of \(\hat{p}\):
\[H_{N} =P_{<N}\,\hat{p}P_{<N}\] \[=\chi_{(-\infty,\hbar N)}(H_{\mathrm{h.o.}})\,\hat{p}\,\chi_{(- \infty,\hbar N)}(H_{\mathrm{h.o.}}). \tag{1.3}\]
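The Zeno limit above is straightforward to probe numerically. The sketch below (numpy and scipy assumed) replaces the unbounded \(\hat{p}\) by its \(M\times M\) Hermite-basis truncation with \(M\gg N\), a finite stand-in of our own choosing, and compares \(V_{n}(t)\) with \(P_{<N}e^{-itH_{N}/\hbar}\) for increasing \(n\):

```python
import numpy as np
from scipy.linalg import expm

mu, N, M = 2.0, 8, 120          # M >> N: finite ambient truncation of p-hat
hbar = mu / N
off = np.sqrt(hbar / 2.0) * np.sqrt(np.arange(1, M))
p_hat = 1j * (np.diag(off, 1) - np.diag(off, -1))   # p-hat in the Hermite basis
P = np.diag((np.arange(M) < N).astype(float))       # projection P_{<N}
H_N = P @ p_hat @ P                                 # Zeno Hamiltonian (1.3)

t = 1.0
for n in (4, 16, 64, 256):
    tau = t / n
    V_n = np.linalg.matrix_power(P @ expm(-1j * tau / hbar * p_hat) @ P, n)
    err = np.linalg.norm(V_n - P @ expm(-1j * t / hbar * H_N))
    print(n, err)               # the error decreases as n grows (Zeno limit)
```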
Hence the QZD establishes a sort of 'hard wall' in Hilbert space, corresponding to the \(N\)-photon state \(\psi_{N}\): the state of the system evolves unitarily within the \(N\)-dimensional subspace spanned by states with at most \(N-1\) photons, \(\psi_{0},\dots,\psi_{N-1}\). This hard wall induces remarkable features in the quantum evolution [19, 20].
The question addressed in this paper is: _What is the semiclassical limit of the Zeno Hamiltonian \(H_{N}\) and of its corresponding quantum dynamics?_
### Semiclassical limit of the Zeno Hamiltonian
Semiclassical theory concerns the asymptotic analysis for vanishing Planck constant (\(\hbar\to 0\)) of operators and vectors, with the ultimate goal of understanding the quantum-to-classical transition. It is therefore convenient to use a phase space description of quantum mechanics where operators are represented by functions on the classical phase space (called Weyl symbols), states are described by quasi-probability distributions (called Wigner functions), and the noncommutative product
of operators is mapped in a twisted convolution product of symbols (called Moyal product), see e.g. [21, 22].
If we describe the QZD in the phase space, in the semiclassical limit \(N\to\infty,\hbar\to 0\) with the product \(\hbar N=\mu\) kept fixed, we expect the motion to be confined in the classically allowed region. The level sets of the classical harmonic oscillator
\[\mathfrak{h}_{\mathrm{h.o.}}(x,p)=\frac{1}{2}\left(p^{2}+x^{2}\right) \tag{1.4}\]
are circles centered at the origin of the phase space \(\mathbb{R}_{x}\times\mathbb{R}_{p}\). In qualitative terms, the hard wall can be viewed in the phase space as a circle with a radius \(\propto\sqrt{\hbar N}\). In the limit, the corresponding classically allowed region is the disk
\[D:=\{(x,p)\in\mathbb{R}^{2}\colon\mathfrak{h}_{\mathrm{h.o.}}(x,p)<\mu\}=\{p^{ 2}+x^{2}<2\mu\},\]
whose boundary
\[\partial D:=\{(x,p)\in\mathbb{R}^{2}\colon\mathfrak{h}_{\mathrm{h.o.}}(x,p)= \mu\}=\{p^{2}+x^{2}=2\mu\}\]
is the circle of radius \(\sqrt{2\mu}\). This is what Raimond _et al._[19, 20] called the 'exclusion circle': it separates \(D\) from the classically forbidden region where \(\mathfrak{h}_{\mathrm{h.o.}}>\mu\).
Let \(\chi_{D}(x,p)=\chi_{(-\infty,\mu)}\big{(}\mathfrak{h}_{\mathrm{h.o.}}(x,p)\big{)}\) be the characteristic function of the disk \(D\). The first main result of the paper is the identification of the limit of the Weyl symbols \(\sigma_{P_{<N}}^{\hbar}(x,p)\) and \(\sigma_{H_{N}}^{\hbar}(x,p)\) of the projection operator \(P_{<N}\) and the Zeno Hamiltonian \(H_{N}\), respectively (the definition of the Weyl symbol of an operator is given in Definition 1).
Figure 1. Plot of the symbol \(\sigma_{H_{N}}^{\hbar}\) in the phase plane \((x,p)\). Here \(N=17\) and \(\mu=2\). Already for such a small value of \(N\), the graph of the symbol resembles a (rippled) tilted coin in the disk \(D\) and zero outside.
**Theorem 1** (Convergence of the symbols).: _Set \(\mu>0\). Then,_
\[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{p}} \left[\sigma_{P_{<N}}^{\hbar}(x,p)-\chi_{D}(x,p)\right]\varphi(x,p)dxdp=0, \tag{1.5}\] \[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{p}} \left[\sigma_{H_{N}}^{\hbar}(x,p)-p\chi_{D}(x,p)\right]\varphi(x,p)dxdp=0, \tag{1.6}\]
_for all \(\varphi\in\mathcal{A}\)._
_Remark 1_.: Here \(\mathcal{A}\) is the space of test functions introduced by Lions and Paul [23] as the completion of the smooth functions of compact support in the phase space \(C_{c}^{\infty}(\mathbb{R}_{x}\times\mathbb{R}_{p})\) under the norm
\[\|\varphi\|_{\mathcal{A}}:=\int_{\mathbb{R}}dy\sup_{x}|\mathcal{F}_{2} \varphi(x,y)|. \tag{1.7}\]
In this paper \(\mathcal{F}_{2}\varphi\) denotes the partial Fourier transform of \(\varphi\) in the second variable,
\[\mathcal{F}_{2}\varphi(x,y):=\int_{\mathbb{R}}\varphi(x,p)e^{-ipy}dp. \tag{1.8}\]
_Remark 2_.: Theorem 1 makes precise the heuristic expectation that the symbol \(\sigma_{P_{<N}}^{\hbar}(x,p)\) of the projection operator converges to the characteristic function \(\chi_{D}(x,p)\) of the classically allowed region, and the symbol of the Zeno Hamiltonian \(\sigma_{H_{N}}^{\hbar}(x,p)\) converges to \(p\chi_{D}(x,p)\). The content of Theorem 1 is schematically summarised in Table 1.
_Remark 3_.: A plot of the Weyl symbol exhibits pronounced oscillations, also known as _quantum ripples_[24], in the vicinity of the classical points of inversion of motion \(\partial D=\{\mathfrak{h}_{\mathrm{h.o.}}(x,p)=\mu\}\). If the oscillations are smoothed out, then the graph of \(\sigma_{H_{N}}^{\hbar}\) is asymptotically close to a 'tilted coin'. See Fig. 1 and Fig. 2.
We see that at the boundary \(\partial D\) the symbols \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) develop a jump, for large \(N\). The second main result of the paper concerns a finer asymptotics of \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) near \(\partial D\). By zooming in at the edge \(\partial D\), one sees that the symbols have nontrivial scaling limits related to the integrated Airy function
\[\mathrm{Ai}_{1}(\xi):=\int_{\xi}^{+\infty}\mathrm{Ai}(u)\ du,\qquad\xi\in \mathbb{R},\]
\begin{table}
\begin{tabular}{l|l} Quantum & Classical \\ \(N\in\mathbb{N}\) & \(\hbar\to 0\), \(N\to\infty\) \\ \(\hbar>0\) & with \(\hbar N=\mu\) \\ \hline \hline \(P_{<N}=\chi_{(-\infty,\hbar N)}(H_{\mathrm{h.o.}})\) & \(\chi_{(-\infty,\mu)}(\mathfrak{h}_{\mathrm{h.o.}}(x,p))=\chi_{(-\infty,\sqrt{2 \mu})}(\sqrt{x^{2}+p^{2}})\) \\ \(H_{N}=\chi_{(-\infty,\hbar N)}(H_{\mathrm{h.o.}})\,\hat{p}\,\chi_{(-\infty, \hbar N)}(H_{\mathrm{h.o.}})\) & \(p\chi_{(-\infty,\mu)}(\mathfrak{h}_{\mathrm{h.o.}}(x,p))=p\chi_{(-\infty,\sqrt{ 2\mu})}(\sqrt{x^{2}+p^{2}})\) \\ \end{tabular}
\end{table}
Table 1. Summary of the operators and their semiclassical limits.
see (B.11). More precisely, set
\[\chi_{D}^{(N)}(x,p):=\operatorname{Ai}_{1}\left(\frac{(2N)^{\frac{2}{3}}}{\mu} \left(\mathfrak{h}_{\mathrm{h.o.}}(x,p)-\mu\right)\right),\qquad x,p\in\mathbb{R}. \tag{1.9}\]
It follows from (B.12) that \(\chi_{D}^{(N)}\) is a sequence of rotationally symmetric smooth functions on the phase space that approximate the characteristic function,
\[\lim_{N\to\infty}\chi_{D}^{(N)}(x,p)=\chi_{D}(x,p)\]
for all \((x,p)\notin\partial D\). (On the boundary \(\partial D\), \(\chi_{D}^{(N)}=1/3\), for all \(N\).)
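For concreteness, \(\chi_{D}^{(N)}\) is cheap to evaluate numerically. A minimal sketch (scipy assumed), using \(\int_{0}^{+\infty}\mathrm{Ai}(u)\,du=1/3\) to avoid an infinite-range quadrature, which checks the boundary value \(1/3\):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

def Ai1(xi):
    """Integrated Airy function Ai_1(xi) = int_xi^oo Ai(u) du
    = 1/3 - int_0^xi Ai(u) du."""
    return 1.0 / 3.0 - quad(lambda u: airy(u)[0], 0.0, xi, limit=200)[0]

def chi_D_N(x, p, N, mu):
    """The smooth indicator chi_D^(N)(x, p) of Eq. (1.9)."""
    h = 0.5 * (x**2 + p**2)      # classical harmonic oscillator (1.4)
    return Ai1((2 * N) ** (2 / 3) / mu * (h - mu))

mu = 2.0
print(chi_D_N(np.sqrt(2 * mu), 0.0, N=17, mu=mu))   # 1/3 on the boundary
print(chi_D_N(0.0, 0.0, N=17, mu=mu))               # ~ 1 (ripples) inside D
```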
We can now state our second main result.
**Theorem 2** (Asymptotics at the boundary).: _Fix \(\mu>0\). For all \(g\in C_{c}^{\infty}(\mathbb{R})\),_
\[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{p}} \left[\sigma_{P_{<N}}^{\hbar}(x,p)-\chi_{D}^{(N)}(x,p)\right]\frac{1}{\hbar^ {\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp=0. \tag{1.10}\]
_and_
\[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{p}} \left[\sigma_{H_{N}}^{\hbar}(x,p)-p\chi_{D}^{(N)}(x,p)\right]\frac{1}{\hbar^ {\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp=0. \tag{1.11}\]
_Remark 4_.: In order to zoom at \(\partial D\), we need to integrate the symbols \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) against (sequences of) compactly supported test functions that concentrate around \(\partial D\). Since \(\partial D\) is invariant under rotations, without loss of generality we consider test functions that are also rotationally symmetric. The idea is to consider, for \(g\in C_{c}^{\infty}(\mathbb{R})\), the rescaling \(\epsilon^{-2}g(\epsilon^{-2}(x^{2}+p^{2}-2\mu))\) that is nonzero in a region of order \(\mathrm{O}(\epsilon)\) within the boundary \(\partial D\). The blow-up scale that gives rise to a nontrivial limit is \(\epsilon=\hbar^{\frac{1}{3}}\). The reason for this choice will emerge in the following (see Section 3). Note that the space of test functions \(\mathcal{A}\) in Theorem 1 does not depend on the details of the model. On the contrary, in Theorem 2 we integrate the symbols \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) against test functions that concentrate around \(\partial D\) in a suitable way.
_Remark 5_.: The limits in Theorems 1 and 2 do _not_ hold pointwise, in general. For instance, it is easy to show (by using the parity of the harmonic oscillator eigenfunctions) that
\[\sigma_{P_{<N}}^{\hbar}\left(0,0\right)=1+\left(-1\right)^{N+1}. \tag{1.12}\]
The reader is invited to have a glance at Fig. 2. Inside the disk \(D\), the symbols oscillate with frequency of order \(\mathrm{O}(N)\), while in the classically forbidden region \(D^{c}=(\mathbb{R}_{x}\times\mathbb{R}_{p})\setminus D\) the symbols are exponentially suppressed. The monotonic behaviour outside the disk suggests that for \((x,p)\in D^{c}\) the convergence to the limits may hold in a stronger sense. In fact, a slight adaptation of the proof of Theorem 2 shows that outside the disk, \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) converge pointwise to the limit symbols.
**Theorem 3** (Pointwise asymptotics in the classically forbidden region).: _Fix \(\mu>0\). For all \((x,p)\in D^{c}\),_
\[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\left[\sigma_{P_{<N}}^{\hbar}\left(x,p\right)-\chi_{ D}^{\left(N\right)}\left(x,p\right)\right]=0, \tag{1.13}\]
_and_
\[\lim_{\begin{subarray}{c}N\to\infty,\hbar\to 0\\ \hbar N=\mu\end{subarray}}\left[\sigma_{H_{N}}^{\hbar}\left(x,p\right)-p\chi_{ D}^{\left(N\right)}\left(x,p\right)\right]=0. \tag{1.14}\]
### Semiclassical limit of the quantum Zeno dynamics
The quantum dynamics in phase space is ruled by two elements: the Weyl symbol of the Zeno Hamiltonian \(\sigma_{H_{N}}^{\hbar}\) and the Moyal bracket (that does depend on \(\hbar\)) [21, 22]. Hence, the semiclassical limit of the dynamics should encompass a simultaneous \(\hbar\to 0\) limit of the symbol (the generator of the dynamics) _and_ the Moyal structure.
By Theorem 1, the symbol \(\sigma_{H_{N}}^{\hbar}\) of the Zeno Hamiltonian converges as \(\hbar\to 0\), \(N\to\infty\), with \(\hbar N=\mu>0\), to \(p\chi_{D}\left(x,p\right)\). Moreover, the Moyal bracket has an asymptotic expansion in powers of \(\hbar\) whose leading term (zero-th order) is the classical Poisson bracket. Hence, it is reasonable to expect that the limiting dynamics is well described by the Hamiltonian evolution (i.e. Poisson) in phase space where the Hamiltonian is the limit symbol \(p\chi_{D}\left(x,p\right)\).
However, in this naive approach we immediately face an obstruction: the symbol \(p\chi_{D}\left(x,p\right)\) is _not_ smooth, and hence it is not possible to write Hamilton's equations of
motion! If we insist on writing, formally, Hamilton's equations, we get
\[\begin{cases}\dot{x}=\frac{\partial}{\partial p}\left(p\chi_{D}(x,p)\right)=\chi _{[0,\sqrt{2\mu})}(r)+p\delta_{\sqrt{2\mu}}(r)\frac{p}{r},\\ \dot{p}=-\frac{\partial}{\partial x}\left(p\chi_{D}(x,p)\right)=-p\delta_{ \sqrt{2\mu}}(r)\frac{x}{r},\end{cases}\qquad(\diamond) \tag{1.15}\]
where \(r=\sqrt{x^{2}+p^{\,2}}\). The Dirac delta \(\delta_{\sqrt{2\mu}}(r)\) arises as distributional derivative of the step function. We stress again that the above expressions are formal: the Hamiltonian is discontinuous at \(\partial D\), and its vector field in \((\diamond)\) is singular.
We can now look at the corresponding phase portrait. First, the Hamiltonian vector field is zero outside the closure of the disk \(D\). Thus, all points there are equilibrium points. If the particle is in \(D\), then the equations of motion are \(\dot{x}=1\), \(\dot{p}=0\), and the particle moves with constant momentum
\[x(t)=x_{0}+t,\qquad p(t)=p_{0}.\]
It is thus proceeding at a constant velocity along the \(x\)-axis. When it hits the boundary \(\partial D\), the evolution is given by the singular contributions, proportional to the delta functions: \(\dot{x}=p\delta_{\sqrt{2\mu}}(r)p/r\), \(\dot{p}=-p\delta_{\sqrt{2\mu}}(r)x/r\). Heuristically, these equations would correspond to a field tangential to the boundary of \(D\) that yields a motion along the circle \(\partial D\) at 'infinite' speed. The particle reappears on the other side of the boundary (with the same momentum \(p=p_{0}\)) and resumes its motion along the \(x\)-axis at a constant velocity. The collision at the edge \(\partial D\) thus realizes, in this semiclassical picture, a reflection around the \(p\)-axis of the phase space, transforming \(\left(\sqrt{2\mu-p_{0}^{\,2}},p_{0}\right)\) into \(\left(-\sqrt{2\mu-p_{0}^{\,2}},p_{0}\right)\).
An interesting interpretation of the semiclassical limit of the Zeno dynamics is as follows. In the limit dynamics, the points \((x,p)\), \((-x,p)\) on the circle \(\partial D\subset\mathbb{R}_{x}\times\mathbb{R}_{p}\) are identified. Hence, one can think of the \(N\to\infty,\hbar\to 0\) limit, with \(\hbar N=\mu\), as yielding a _change of topology_: the dynamics on the disk becomes a motion on the sphere! We emphasise again that all this is formal, although very close to what was observed in [19, 20], and called 'phase inversion mechanism'. The function \(p\chi_{D}(x,p)\) is _not_ smooth and therefore it is not the generator of a classical Hamiltonian dynamics.
We know, however, by Theorem 2, that the symbol \(\sigma_{H_{N}}^{\hbar}(x,p)\) is asymptotically close to a smoothed version
\[p\chi_{D}^{(N)}(x,p)=p\operatorname{Ai}_{1}\left(\frac{(2N)^{\frac{2}{3}}}{2 \mu}\left(r^{2}-2\mu\right)\right). \tag{1.16}\]
For each \(N\), it makes sense to consider the Hamiltonian system generated by \(p\chi_{D}^{(N)}(x,p)\),
\[\begin{cases}\dot{x}=\frac{\partial}{\partial p}\left(p\chi_{D}^{(N)}(x,p) \right)\\ \dot{p}=-\frac{\partial}{\partial x}\left(p\chi_{D}^{(N)}(x,p)\right)\end{cases} \qquad(\spadesuit). \tag{1.17}\]
This is a family of well-defined Hamilton equations and we can expect, for large \(N\), the solutions of \((\spadesuit)\) to be 'close' to the sought semiclassical limiting dynamics.
We give here a sketch of an argument showing that for large \(N\), the solutions of \((\spadesuit)\) behave as the formal solutions of the singular problem \((\diamond)\). The equations of motion from \((\spadesuit)\) are
\[\dot{x}=\chi_{[0,\sqrt{2\mu})}^{(N)}(r)+p\delta_{\sqrt{2\mu}}^{(N)}(r)\frac{p} {r},\qquad\dot{p}=-p\delta_{\sqrt{2\mu}}^{(N)}(r)\frac{x}{r}, \tag{1.18}\]
where
\[\chi^{(N)}_{[0,\sqrt{2\mu})}(r):=\operatorname{Ai}_{1}\left(\frac{(2N)^{\frac{2}{3 }}}{2\mu}\left(r^{2}-2\mu\right)\right),\quad\delta^{(N)}_{\sqrt{2\mu}}(r):=-r \,\frac{(2N)^{\frac{2}{3}}}{\mu}\,\operatorname{Ai}\left(\frac{(2N)^{\frac{2}{3 }}}{2\mu}\left(r^{2}-2\mu\right)\right). \tag{1.19}\]
Observe that \(\chi^{(N)}_{[0,\sqrt{2\mu})}(r)\) are uniformly bounded functions that converge, as \(N\to\infty\) to the characteristic function \(\chi_{[0,\sqrt{2\mu})}\), see (B.12). The corresponding component of the field is of order \(\operatorname{O}(1)\). The sequence of functions \(\delta^{(N)}_{\sqrt{2\mu}}(r)\) converges to \(\delta_{\sqrt{2\mu}}(r)\) in a distributional sense, as \(N\to\infty\). This can be seen, in Fourier space, from the identity \(\int_{\mathbb{R}}\operatorname{Ai}(x)e^{-ikx}dx=e^{ik^{3}/3}\).
We conclude that the Hamiltonian vector field generated by \(p\chi^{(N)}_{D}(x,p)\) converges pointwise to the singular vector field generated by \(p\chi_{D}(x,p)\). The component of the field containing \(\delta^{(N)}_{\sqrt{2\mu}}(r)\) is of order \(\operatorname{O}(N^{\frac{2}{3}})\) and generates a motion at speed \(\propto N^{\frac{2}{3}}\), that becomes 'infinite' in the singular limit. Fig. 3 shows a comparison of the phase portraits of the Hamiltonian dynamics generated by the Weyl symbol \(\sigma^{h}_{H_{N}}\left(x,p\right)\), the smooth Hamiltonian \(p\chi^{(N)}_{D}(x,p)\) and the discontinuous function \(p\chi_{D}(x,p)\). Note the effective change of topology in the limit singular case that results from the instantaneous motion along the circle \(\partial D\).
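The smooth system \((\spadesuit)\) can also be integrated directly to reproduce the qualitative picture of Fig. 3. A minimal sketch (scipy assumed; the initial point and tolerances are illustrative):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad, solve_ivp

mu, N = 2.0, 50
c = (2 * N) ** (2 / 3) / (2 * mu)    # argument scale inside chi_D^(N)

def Ai1(xi):
    # Ai_1(xi) = 1/3 - int_0^xi Ai(u) du
    return 1.0 / 3.0 - quad(lambda u: airy(u)[0], 0.0, xi, limit=200)[0]

def field(t, z):
    """Hamiltonian vector field of p*chi_D^(N)(x, p), i.e. the system (1.18)."""
    x, p = z
    xi = c * (x**2 + p**2 - 2 * mu)
    ai = airy(xi)[0]
    return [Ai1(xi) - 2 * c * p**2 * ai, 2 * c * x * p * ai]

# Orbit started inside D: near-uniform drift along x, a fast sweep near the
# exclusion circle, and reappearance around (-sqrt(2*mu - p0**2), p0).
sol = solve_ivp(field, (0.0, 4.0), [0.0, 1.0], max_step=0.01, rtol=1e-8)
```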
### Spectral analysis of the Zeno Hamiltonian \(H_{N}\)
The matrix representation of \(H_{N}\) (in the Hermite basis \(\left\{\psi_{k}^{\hbar}\right\}_{k\in\mathbb{N}}\), see Appendix A) is the \(N\times N\) complex Hermitian
Figure 3. Phase portraits for the Hamiltonian dynamics. The red solid line is the boundary \(\partial D\) of the disk. Here \(\mu=2\).
matrix
\[H_{N}=i\sqrt{\frac{\hbar}{2}}\begin{bmatrix}0&1&0&\cdots&0\\ -1&0&\sqrt{2}&&\vdots\\ 0&-\sqrt{2}&\ddots&&\\ \vdots&&&0&\sqrt{N-1}\\ 0&\cdots&&-\sqrt{N-1}&0\end{bmatrix}.\]
This is a _Jacobi matrix_ about which we have very precise spectral information (characteristic polynomial, eigenvalues and their counting measure) for all \(N\).
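As a quick numerical check of Proposition 1 below (numpy assumed; `hermroots` uses the physicists' convention, matching (2.9)):

```python
import numpy as np

mu, N = 2.0, 40
hbar = mu / N

off = np.sqrt(hbar / 2.0) * np.sqrt(np.arange(1, N))
H_N = 1j * (np.diag(off, 1) - np.diag(off, -1))     # the Jacobi matrix above

evals = np.sort(np.linalg.eigvalsh(H_N))
# zeros of psi_N^hbar: sqrt(hbar) times the zeros of the Hermite polynomial h_N
zeros = np.sqrt(hbar) * np.sort(
    np.polynomial.hermite.hermroots([0.0] * N + [1.0]))
print(np.max(np.abs(evals - zeros)))                # tiny residual
```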
**Proposition 1**.: _For all \(N\geq 1\),_
\[\det\left(yI_{N}-H_{N}\right)=\left(\frac{\sqrt{\hbar}}{2}\right)^{N}h_{N}\left(\frac{y}{\sqrt{\hbar}}\right), \tag{1.20}\]
_where \(h_{N}\) is the Hermite polynomial of degree \(N\), see Appendix A. In particular, the eigenvalues of \(H_{N}\) are the \(N\) (simple and real) zeros of the Hermite function \(\psi_{N}^{\hbar}(z)\)._
Proof.: \(H_{N}\) is unitarily equivalent to \(P_{<N}\hat{x}P_{<N}\) (see equations (A.7)-(A.8)), and so the two operators have the same characteristic polynomial. The claim now follows from a result for general orthogonal polynomials on the real line due to Simon [47, Prop. 2.2].
If \(y_{N}^{(j)}\) are the zeros of \(\psi_{N}^{\hbar}\left(y\right)\), we define the eigenvalues counting measure \(\nu_{N}\) of the Zeno Hamiltonian \(H_{N}\) to be the nonnegative measure that puts weight \(1/N\) on each eigenvalue of \(H_{N}\) (the \(y_{N}^{(j)}\)'s). From well-known results on Hermite polynomials [34] it follows that the measure \(\nu_{N}\) weakly converges to the semicircular density in the simultaneous limit \(\hbar\to 0\), \(N\to\infty\) with \(\hbar N\) asymptotically fixed. See Figure 4.
**Proposition 2**.: _For all continuous bounded functions \(f\),_
\[\int_{\mathbb{R}}f(y)d\nu_{N}\left(y\right)\to\int_{\mathbb{R}}f(y)\rho_{\mu}( y)dy, \tag{1.21}\]
_as \(N\to\infty\), \(\hbar\to 0\), with the product \(\hbar N\) converging to \(\mu>0\)._
Figure 4. Illustration of Proposition 2. The histogram of the eigenvalues of the Zeno Hamiltonian \(H_{N}\) for \(N=2000\), and \(\hbar N=\mu=2\) is compared with the semicircular density \(\rho_{\mu}(y)=\frac{1}{\pi\mu}\sqrt{(2\mu-y^{2})_{+}}\) of Eq. (B.1).
_Remark 6_.: The semicircular spectral distribution can be obtained formally from the semiclassical limit of Theorem 1. Indeed, the limit symbol \(p\chi_{D}(x,p)\) of the Zeno Hamiltonian is concentrated on the disk \(D\) of radius \(\sqrt{2\mu}\). Semiclassically, the density of the eigenvalues is the fraction of the phase space volume with energy between \(y\) and \(y+dy\):
\[\frac{\operatorname{Area}\left(\{y\leq p\leq y+dy\}\cap D\right)}{\operatorname {Area}\left(D\right)}=\frac{2\sqrt{2\mu-y^{2}}\,dy}{\pi(2\mu)}=\rho_{\mu}(y)\,dy.\]
### Proof strategy and relations to other works
When \(N\) is large, the symbols \(\sigma_{P_{<N}}^{\hbar}\) and \(\sigma_{H_{N}}^{\hbar}\) are highly oscillating smooth functions. As discussed in Remark 5, looking for a global semiclassical limit in a pointwise sense is hopeless. It turns out that the sought convergence of the symbols holds in a weak sense if the set of test functions is chosen to be \(\mathcal{A}\).
The proofs presented in this paper are based on the following observations:
1. The asymptotics of integrals of the Weyl symbols \(\sigma_{P_{<N}}^{\hbar}(x,p)\) and \(\sigma_{H_{N}}^{\hbar}(x,p)\) against functions on the phase space is related to the pointwise asymptotics of the Fourier transforms \(\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}(x,y)\) and \(\mathcal{F}_{2}\sigma_{H_{N}}^{\hbar}(x,y)\);
2. The function \(\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}(x,y)\) is a sum of \(N\) terms (cross products of Hermite functions), see Eq. (2.11). However, thanks to the Christoffel-Darboux formula (Lemma 3) this sum can be expressed in terms of the \(N\)-th and \((N-1)\)-th Hermite functions only. Hence, studying the large \(N\) asymptotics with \(\hbar N=\mu\) amounts to studying the large degree asymptotics of Hermite functions. This is a well-studied topic in the theory of orthogonal polynomials from which we can freely borrow explicit asymptotic formulae. So, to prove the convergence of the symbols we will show the convergence of the Christoffel-Darboux kernel along with its derivatives to the _sine_ and _Airy kernels_ (in the formulation presented in the book of Anderson, Guionnet and Zeitouni [26]).
3. The symbol \(\sigma_{H_{N}}^{\hbar}(x,p)\) is 'asymptotically close' to \(p\sigma_{P_{<N}}^{\hbar}(x,p)\) in the dual space \(\mathcal{A}^{\prime}\) (Proposition 7). This is suggested by the heuristic observation that, in the limit \(\hbar\to 0\), the algebra of observables should become commutative. What we gain is that, once we know the asymptotics of \(\sigma_{P_{<N}}^{\hbar}(x,p)\) we can directly deduce the asymptotics of \(\sigma_{H_{N}}^{\hbar}(x,p)\).
The seminal paper by Lions and Paul [23] on the semiclassical limit of Wigner measures, and the more recent developments [25, 37, 31, 27] were instrumental in our study.
We mention that the symbol \(\sigma_{P_{<N}}^{\hbar}(x,p)\) of the orthogonal projection \(P_{<N}\) studied in the present paper has a close connection to the fuzzy approximation of the two-dimensional disk proposed by Lizzi, Vitale and Zampini [40, 41]. A _fuzzy space_ is an approximation of an abelian algebra of functions on an ordinary space with a sequence of finite-rank matrix algebras, which preserve the symmetries of the original space, at the price of non-commutativity. Eq. (1.5) of Theorem 1 is the precise mathematical statement behind the numerical results of [40, 41]. To our knowledge, the finer asymptotics of Theorem 2 is a new result, which has not been observed numerically either.
The convergence of symbols of projection operators to the characteristic function of the classically allowed region is folklore in theoretical physics. In recent years, there has been an explosion of results on the asymptotics of the Christoffel-Darboux kernel for orthogonal polynomials on the real line, especially in connection with eigenvalue statistics of random matrices and integrable probability models [34, 42, 44, 45]. The interest in these asymptotics in the theoretical and mathematical physics community has been mostly motivated by applications to the number statistics of non-interacting fermions. The asymptotics at the 'edge' has also been investigated at various levels of rigour. See, e.g. [24, 28, 29, 30, 32, 33, 35, 36, 49].
Given the universality results on the asymptotics of orthogonal polynomials and random matrices [34], we expect that Theorem 1 is valid for a rather large class of symbols associated to finite-rank orthogonal projections. The recent paper by Deleporte and Lambert [35] suggests that Theorem 2 would be valid as long as the gradient of the confining potential does not vanish at the points of classical inversion of motion. In any case, the statement of analogues of Theorem 2 should depend on the geometry of the level sets of the corresponding classical Hamiltonian function. Further study is in progress.
### Outline of the paper
The structure of the paper is as follows. In the next section we recall some preliminary background material, introduce a precise presentation of the model and provide the calculation of the symbols. In Section 3 we discuss the different scaling limits in Theorems 1 and 2. Section 4 is entirely devoted to the proofs of the main technical results, and of Theorems 1, 2 and 3. The paper includes two appendices. In Appendix A we collect known formulae on the Hermite functions. Appendix B contains the definition and a few properties of the sine and the Airy kernel.
## 2. Notation, preliminaries, Weyl symbols and kernels
We first introduce some notation and preliminary notions that we use throughout this work. For any \(f,g\in L^{2}(\mathbb{R})\), we denote
\[\langle f,g\rangle:=\int_{\mathbb{R}}\overline{f(x)}g(x)\,dx,\]
the scalar product on \(L^{2}(\mathbb{R})\). For a linear operator \(L\) on \(L^{2}(\mathbb{R})\) we write \(L\doteq L(u,v)\) to indicate that \(L\) has kernel \(L(u,v)\in L^{2}(\mathbb{R}\times\mathbb{R})\). In this paper all kernels are continuous. For \(A,B\) linear operators, we use the notation \([A,B]:=AB-BA\) for the commutator. Let \(D\) be the linear operator defined, for appropriate \(f\), by the formula \((Df)(u)=\frac{d}{du}f(u)\). We have
\[[D,L]\doteq\left(\frac{\partial}{\partial u}+\frac{\partial}{\partial v}\right) L(u,v). \tag{2.1}\]
For \(x\in\mathbb{R}\) and \(\gamma>0\), let
\[V_{x,\gamma}:L^{2}(\mathbb{R}) \longrightarrow L^{2}(\mathbb{R})\] \[f(u) \longmapsto\left(V_{x,\gamma}f\right)(u):=\sqrt{\gamma}f\left(x+ \gamma u\right). \tag{2.2}\]
Of course \(\left(V_{x,\gamma}^{-1}f\right)(u)=\sqrt{1/\gamma}f\left(\gamma^{-1}(u-x)\right)\), and \(V_{x,\gamma}\) is unitary. If we conjugate the operator \(L\) by the scaling unitary \(V_{x,\gamma}\), its kernel gets changed into
\[V_{x,\gamma}LV_{x,\gamma}^{-1}\doteq\gamma L\left(x+\gamma u,x+\gamma v \right). \tag{2.3}\]
We shall consider the following space of test functions introduced by Lions and Paul [23],
\[\mathcal{A}=\left\{f\in C_{0}(\mathbb{R}_{x}\times\mathbb{R}_{p})\colon\|f\|_{\mathcal{A}}:=\int_{\mathbb{R}}dy\sup_{x}|\mathcal{F}_{2}f(x,y)|<\infty\right\}, \tag{2.4}\]
where \(C_{0}(\mathbb{R}_{x}\times\mathbb{R}_{p})\) is the usual space of continuous functions tending to zero at infinity.
\(\mathcal{A}\) is a Banach algebra with the following properties (see [23]):
* \(\mathcal{S}(\mathbb{R}_{x}\times\mathbb{R}_{p})\), \(C_{c}^{\infty}(\mathbb{R}_{x}\times\mathbb{R}_{p})\), and \(\mathcal{B}=\{f\colon\mathcal{F}_{2}f\in C_{c}^{\infty}(\mathbb{R}_{x}\times \mathbb{R}_{y})\}\) are dense subspaces in \(\mathcal{A}\).
* \(\sup_{x,p}|f(x,p)|\leq(1/2\pi)\|f\|_{\mathcal{A}}\), hence \(\mathcal{A}\) is contained in the space of bounded continuous functions in the phase space \(C_{b}(\mathbb{R}_{x}\times\mathbb{R}_{p})\).
Let \(\mathcal{A}^{\prime}\) be the dual of \(\mathcal{A}\). From the Parseval identity it follows that
\[\|h\|_{\mathcal{A}^{\prime}}=2\pi\sup_{y}\int|\mathcal{F}_{2}h(x,y)|\,dx.\]
A basic property is \(\|h\|_{\mathcal{A}^{\prime}}\leq 2\pi\|h\|_{L^{1}}\) (hence \(L^{1}(\mathbb{R}_{x}\times\mathbb{R}_{p})\subset\mathcal{A}^{\prime}\)).
**Definition 1**.: _Given a number \(\hbar>0\), the Weyl symbol of the operator \(L\doteq L(u,v)\) is defined as_
\[\sigma_{L}^{\hbar}(x,p):=\int_{\mathbb{R}_{y}}\hbar L\left(x-\frac{\hbar y}{ 2},x+\frac{\hbar y}{2}\right)e^{ipy}dy. \tag{2.5}\]
_Equivalently, \(\sigma_{L}^{\hbar}\) is defined by the identity_
\[\left(\mathcal{F}_{2}\sigma_{L}^{\hbar}\right)(x,y)=\left(2\pi\hbar\right)L \left(x-\frac{\hbar y}{2},x+\frac{\hbar y}{2}\right) \tag{2.6}\]
_and, by Plancherel's theorem,_
\[\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{p}}\overline{\sigma_{L}^{\hbar} (x,p)}\varphi(x,p)dxdp=\int\limits_{\mathbb{R}_{x}\times\mathbb{R}_{y}} \overline{\hbar L\left(x-\frac{\hbar y}{2},x+\frac{\hbar y}{2}\right)} \mathcal{F}_{2}\varphi(x,y)dxdy. \tag{2.7}\]
We recall that the Weyl symbol of the product of two operators \(A\), \(B\) is _not_ the ordinary product of the symbols \(\sigma_{AB}^{\hbar}\neq\sigma_{A}^{\hbar}\sigma_{B}^{\hbar}\), unless \(A\) and \(B\) commute. The noncommutative Moyal product \(\sharp\) is defined as the composition law that does the job: \(\sigma_{AB}^{\hbar}=\sigma_{A}^{\hbar}\sharp\sigma_{B}^{\hbar}\)[21].
**Definition 2**.: _Given two linear operators \(A\) and \(B\) on \(L^{2}(\mathbb{R})\) with Weyl symbols \(\sigma_{A}^{\hbar}\) and \(\sigma_{B}^{\hbar}\) respectively, the Moyal product is defined as follows:_
\[\sigma_{A}^{\hbar}\sharp\sigma_{B}^{\hbar}(x,p)=\int_{\mathbb{R}^{4}}\sigma_ {A}^{\hbar}(x_{1},p_{1})\sigma_{B}^{\hbar}(x_{2},p_{2})e^{\frac{2i}{\hbar} \left[(x-x_{1})(p-p_{2})-(x-x_{2})(p-p_{1})\right]}\frac{dx_{1}dp_{1}dx_{2}dp _{2}}{(\pi\hbar)^{2}}.\]
Recall that the normalised eigenfunctions of the harmonic oscillator operator \(H_{\rm h.o.}\) in (1.1) are the Hermite functions
\[\psi_{k}^{\hbar}(x)=\sqrt{\frac{\alpha}{\sqrt{\pi}2^{k}\,k!}}\exp\left(-\frac {1}{2}\alpha^{2}x^{2}\right)h_{k}(\alpha x),\qquad k=0,1,2,\ldots \tag{2.8}\]
where \(\alpha^{2}=1/\hbar\) and
\[h_{k}(y)=(-1)^{k}e^{y^{2}}\,\frac{d^{k}}{dy^{k}}e^{-y^{2}} \tag{2.9}\]
is the \(k\)-th _Hermite polynomial_, see Appendix A. Consider the _orthogonal projection_
\[P_{<N}=\chi_{(-\infty,\hbar N)}\left(H_{\rm h.o.}\right)\]
onto the span of the first \(N\) Hermite eigenfunctions in (1.2). The _Zeno Hamiltonian_ in (1.3) is the _truncated momentum operator_
\[H_{N}=P_{<N}\,\hat{p}P_{<N}=\hat{p}P_{<N}-\left[\hat{p},P_{<N}\right]P_{<N}\,. \tag{2.10}\]
**Proposition 3** (Integral kernels).:
\[P_{<N}\,\doteq\,K_{N}\,(u,v)=\sum_{k=0}^{N-1}\psi_{k}^{\,\hbar}(u)\psi_{k}^{\,\hbar}(v), \tag{2.11}\]
\[P_{<N}\,\hat{p}P_{<N}\,\doteq\,Q_{N}\,(u,v) =\int_{\mathbb{R}}K_{N}\,(u,w)\left(-i\hbar\frac{\partial}{ \partial w}\right)K_{N}\,(w,v)dw \tag{2.12}\] \[=i\sqrt{\frac{\hbar}{2}}\sum_{j=0}^{N-2}\sqrt{j+1}\,\left[\psi_{j+ 1}^{\,\hbar}\,(u)\,\psi_{j}^{\,\hbar}\,(v)-\psi_{j}^{\,\hbar}\,(u)\,\psi_{j+1}^ {\,\hbar}\,(v)\right]\,,\]
\[\left[\hat{p},P_{<N}\right]P_{<N}\,\doteq\,R_{N}\,(u,v)=i\sqrt{\frac{\hbar N} {2}}\psi_{N}^{\,\hbar}\,\left(u\right)\,\psi_{N-1}^{\,\hbar}\,(v)\;. \tag{2.13}\]
Proof.: Formula (2.11) follows directly by the definition of the Hermite functions. Formula (2.12) is obtained by a direct calculation using the three-term recurrence (A.8), while (2.13) follows by applying the identity (2.1) to \(P_{<N}\), and using the orthonormality of the eigenfunctions \(\{\psi_{k}^{\,\hbar}\}_{k\in\mathbb{N}}\).
_Remark 7_.: The kernels \(K_{N},Q_{N}\) and \(R_{N}\) are rapidly decreasing functions in \(\mathcal{S}(\mathbb{R}_{u}\times\mathbb{R}_{v})\).
The Weyl symbols of \(P_{<N}\) and \(H_{N}\)
\[\sigma_{P_{<N}}^{\,\hbar}(x,p) =\int_{\mathbb{R}}\hbar K_{N}\,\left(x-\frac{\hbar y}{2},x+\frac{ \hbar y}{2}\right)e^{ipy}dy, \tag{2.15}\] \[\sigma_{H_{N}}^{\,\hbar}(x,p) =\int_{\mathbb{R}}\hbar Q_{N}\,\left(x-\frac{\hbar y}{2},x+\frac{ \hbar y}{2}\right)e^{ipy}dy, \tag{2.14}\]
have explicit representations in terms of associated Laguerre polynomials (this is a manifestation of the so-called 'Laguerre connection' [21, §1.9]).
**Proposition 4** (Weyl symbols).: _For all \(x,p\in\mathbb{R}\):_
\[\sigma_{P_{<N}}^{\,\hbar}\,(x,p) =2e^{-(p^{2}+x^{2})/\hbar}\sum_{j=0}^{N-1}(-1)^{j}L_{j}\left(2(p^ {2}+x^{2})/\hbar\right), \tag{2.17}\] \[\sigma_{H_{N}}^{\,\hbar}\,(x,p) =4pe^{-(p^{2}+x^{2})/\hbar}\sum_{j=0}^{N-2}(-1)^{j}L_{j}^{\,(1)} \left(2(p^{2}+x^{2})/\hbar\right)\,. \tag{2.16}\]
Proof.: A consequence of the following formula by Groenewold [38] valid for all \(j\leq k\) (we write the formula as in [31, Eq. (30)]),
\[\int\psi_{j}^{\,\hbar}\left(x-\frac{y}{2}\right)\psi_{k}^{\,\hbar }\left(x+\frac{y}{2}\right)e^{ipy/\hbar}dy\\ =2\sqrt{\left(\frac{2}{\hbar}\right)^{k-j}\frac{j!}{k!}}\,(x+ip)^{ k-j}\,e^{-(p^{2}+x^{2})/\hbar}(-1)^{j}L_{j}^{\,(k-j)}\left(2(p^{2}+x^{2})/ \hbar\right)\,, \tag{2.18}\]
where
\[L_{k}^{\,(j)}(y)=\sum_{m=0}^{k}\frac{(k+j)!}{(k-m)!(j+m)!m!}(-y)^{m}\]
are the _associated Laguerre polynomials_.
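The finite Laguerre sums make the symbols easy to evaluate numerically; a sketch (scipy assumed) that also reproduces the parity value (1.12) at the origin:

```python
import numpy as np
from scipy.special import eval_laguerre, eval_genlaguerre

def sigma_P(x, p, N, hbar):
    """Weyl symbol of P_{<N}, Eq. (2.16)."""
    z = 2 * (x**2 + p**2) / hbar
    j = np.arange(N)
    return 2 * np.exp(-z / 2) * np.sum((-1.0) ** j * eval_laguerre(j, z))

def sigma_H(x, p, N, hbar):
    """Weyl symbol of H_N, Eq. (2.17)."""
    z = 2 * (x**2 + p**2) / hbar
    j = np.arange(N - 1)
    return 4 * p * np.exp(-z / 2) * np.sum((-1.0) ** j * eval_genlaguerre(j, 1, z))

N = 17
print(sigma_P(0.0, 0.0, N, hbar=2.0 / N))   # 1 + (-1)^(N+1) = 2, cf. (1.12)
```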
**Remark 8**.: The symbols \(\sigma^{\hbar}_{P_{<N}}\) and \(\sigma^{\hbar}_{H_{N}}\) are rapidly decreasing functions in \(\mathcal{S}(\mathbb{R}_{x}\times\mathbb{R}_{p})\). Notice that \(\sigma^{\hbar}_{P_{<N}}\) is rotationally symmetric. It may be convenient in the following to consider \(\sigma^{\hbar}_{P_{<N}}\) and \(\sigma^{\hbar}_{H_{N}}\) as complex-valued functions defined on the complexification \(\mathbb{C}_{x}\times\mathbb{C}_{p}\) of the real phase space. They are entire functions in both variables \(x\) and \(p\).
## 3. Scaling limits
In this section we provide a heuristic explanation of the different scaling limits in Theorems 1 and 2. The following discussion is somewhat breezy. For a more careful exposition of similar ideas, see [29, 30, 35].
Recall that \(P_{<N}\doteq K_{N}(u,v)\) and
\[\mathcal{F}_{2}\sigma^{\hbar}_{P_{<N}}(x,y)=2\pi\hbar K_{N}\left(x+\frac{ \hbar y}{2},x-\frac{\hbar y}{2}\right). \tag{3.1}\]
At scale \(\hbar\), the kernel has an asymptotic limit that can be identified as follows. We start by writing the rescaled kernel in terms of the conjugation of a unitary transformation on the operator. If we conjugate the projection \(P_{<N}\) by the scaling unitary \(V_{x,\hbar}\) in (2.2), we get that the kernel of the rescaled projection is the rescaled kernel:
\[V_{x,\hbar}P_{<N}V_{x,\hbar}^{-1}=\chi_{(-\infty,\hbar N)}\left(V_{x,\hbar}H_{ \mathrm{h.o.}}V_{x,\hbar}^{-1}\right)\doteq\hbar K_{N}\ (x+\hbar u,x+\hbar v). \tag{3.2}\]
The action of the rescaled harmonic oscillator operator on a function \(f\) in its domain is
\[\left(V_{x,\hbar}H_{\mathrm{h.o.}}V_{x,\hbar}^{-1}f\right)(u)=\frac{1}{2}\left[ -\frac{d^{2}}{du^{2}}+\hbar^{2}u^{2}+2\hbar ux+x^{2}\right]f(u).\]
So we expect that
\[\chi_{(-\infty,\hbar N)}\left(V_{x,\hbar}H_{\mathrm{h.o.}}V_{x,\hbar}^{-1} \right)\simeq\chi_{(-\infty,2\mu-x^{2})}\left(-\frac{d^{2}}{du^{2}}\right), \quad\text{ for }\hbar\to 0,\,N\to\infty,\,\text{with }\hbar N=\mu. \tag{3.3}\]
We recall the following result adapted from [29, Lemma A.5].
**Lemma 1**.: _The operator \(-\frac{d^{2}}{du^{2}}\) is essentially self-adjoint on \(C^{\infty}_{c}(\mathbb{R})\), and its unique self-adjoint extension has only absolutely continuous spectrum \(\sigma\left(-\frac{d^{2}}{du^{2}}\right)=\sigma_{\mathrm{ac}}\left(-\frac{d^{ 2}}{du^{2}}\right)=[0,\infty)\). Moreover,_
\[\chi_{(-\infty,2\mu-x^{2})}\left(-\frac{d^{2}}{du^{2}}\right)\doteq\mu\rho_{ \mu}(x)K_{\mathrm{sine}}\left(\mu\rho_{\mu}(x)u,\mu\rho_{\mu}(x)v\right), \tag{3.4}\]
_where \(K_{\mathrm{sine}}\) is the sine kernel (B.3)._
From (3.1), we see that a rescaling \(\hbar\) in the Hilbert space \(L^{2}(\mathbb{R})\) corresponds to zooming at scale \(\hbar^{0}\) in the phase space. The precise statement of (3.3) is Proposition 5.
To explain how a different asymptotics arises at the boundary \(\partial D\), we need to study the rescaled harmonic oscillator operator in a neighbourhood of the classical turning points \(x=\pm\sqrt{2\mu}\). Let us zoom at scale \(\hbar^{\alpha}\), with \(\alpha>0\) an exponent to be determined:
\[\left(V_{\sqrt{2\mu},\hbar^{\alpha}}H_{\mathrm{h.o.}}V_{\sqrt{2\mu},\hbar^{ \alpha}}^{-1}f\right)(u)=\frac{1}{2}\left[-\hbar^{2(1-\alpha)}\frac{d^{2}}{ du^{2}}+\hbar^{2\alpha}u^{2}+2^{\frac{3}{2}}\mu^{\frac{1}{2}}\hbar^{ \alpha}u+2\mu\right]f(u).\]
If we choose \(\alpha=\frac{2}{3}\) we then expect that
\[\chi_{(-\infty,\hbar N)}\left(V_{\sqrt{2\mu},\hbar^{\frac{2}{3}}}H_{\mathrm{h. o.}}V_{\sqrt{2\mu},\hbar^{\frac{2}{3}}}^{-1}\right)\simeq\chi_{(-\infty,0)} \left(-\frac{d^{2}}{du^{2}}+c_{\mu}^{3}\hat{u}\right), \tag{3.5}\]
for \(\hbar\to 0\), \(N\rightarrow\infty\), with \(\hbar N=\mu\), where \(\hat{u}\) is the position operator and \(c_{\mu}=2^{\frac{1}{2}}\mu^{\frac{1}{6}}\) is a constant given in (B.2). Thus, the limit at the edge is related to the Airy differential operator for which we have the following spectral result, adapted from [29, Lemma A.7].
**Lemma 2**.: _The operator \(-\frac{d^{2}}{du^{2}}+c_{\mu}^{3}\hat{u}\) is essentially self-adjoint on \(C_{c}^{\infty}(\mathbb{R})\), and its self-adjoint extension has only absolutely continuous spectrum \(\sigma\left(-\frac{d^{2}}{du^{2}}+c_{\mu}^{3}\hat{u}\right)=\sigma_{\rm ac} \left(-\frac{d^{2}}{du^{2}}+c_{\mu}^{3}\hat{u}\right)=(-\infty,\infty)\). Moreover,_
\[\chi_{(-\infty,0)}\left(-\frac{d^{2}}{du^{2}}+c_{\mu}^{3}\hat{u}\right)\, \doteq\,c_{\mu}K_{\rm Ai}(c_{\mu}u,c_{\mu}v), \tag{3.6}\]
_where \(K_{\rm Ai}\) is the Airy kernel (B.4)._
The precise statement of (3.5) is Proposition 6.
From (3.1), we see that a rescaling \(\hbar^{\frac{2}{3}}\) at the edge in the Hilbert space \(L^{2}(\mathbb{R})\) corresponds to zooming at scale \(\hbar^{\frac{2}{3}-1}=\hbar^{-\frac{1}{3}}\) around the boundary \(\partial D\) in the phase space. This explains the rescaling in Theorem 2.
## 4. Proofs
The proofs presented in this section are based on the following three observations:
1. The asymptotics of the Weyl symbols \(\sigma_{P_{<N}}^{\hbar}\), \(\sigma_{H_{N}}^{\hbar}\) is related (by Fourier transform in the second variable \(\mathcal{F}_{2}\)) to the asymptotics of the integral kernels \(K_{N}(x,y)\) and \(Q_{N}(x,y)\).
2. The kernel \(K_{N}(x,y)\) is a sum of \(N\) terms (cross products of Hermite functions), see Eq. (2.11). However, thanks to the Christoffel-Darboux formula this sum can be expressed in terms of the \(N\)-th and \((N-1)\)-th Hermite functions only. Hence, studying the large \(N\) asymptotics with \(\hbar N\sim\mu\) amounts to studying the large degree asymptotics of the Hermite functions.
3. \(\sigma_{H_{N}}^{\hbar}(x,p)\) is 'asymptotically close' to \(p\sigma_{P_{<N}}^{\hbar}(x,p)\) for \(N\rightarrow\infty\), \(\hbar\to 0\) with \(\hbar N=\mu>0\), see Proposition 7, therefore once we know the asymptotics of \(K_{N}\) (and hence of \(\sigma_{P_{<N}}^{\hbar}\)) we can directly deduce the asymptotics of \(\sigma_{H_{N}}^{\hbar}\).
### Asymptotics of the kernels
By telescoping the sum in (2.11) and using the three-term relation (A.7), we get the celebrated _Christoffel-Darboux formula_.
**Lemma 3**.: _For all \(u,v\in\mathbb{R}\),_
\[K_{N}(u,v)=\begin{cases}\sqrt{\frac{\hbar N}{2}}\frac{\psi_{N}^{ \hbar}(u)\psi_{N-1}^{\hbar}(v)-\psi_{N-1}^{\hbar}(u)\psi_{N}^{\hbar}(v)}{u-v} &\quad\text{if }u\neq v\\ \\ \sqrt{\frac{\hbar N}{2}}(\psi_{N}^{\hbar}\,{}^{\prime}(u)\psi_{N-1}^{ \hbar}(u)-\psi_{N}^{\hbar}(u)\psi_{N-1}^{\hbar}\,{}^{\prime}(u))&\text{if }u=v\end{cases}. \tag{4.1}\]
Thus, the large-\(N\) asymptotics of the _Christoffel-Darboux kernel_\(K_{N}(x,y)\) boils down to the classical subject of large degree asymptotics of orthogonal polynomials. Consequences of the Plancherel-Rotach asymptotics for \(\psi_{N}^{\hbar}(x)\) (Equations (A.10)-(A.12)) are the following asymptotic behaviours of the kernel \(K_{N}(x,y)\).
**Proposition 5** (Bulk asymptotics of the Christoffel-Darboux kernel).: _Suppose that \(\hbar=\hbar_{N}\) is the sequence defined by the condition \(\hbar N=\mu\). Then, for any compact sets \(U\Subset\mathbb{R}\) and \(V\Subset\mathbb{R}^{2}\), and for any \(\alpha,\beta\in\{0,1\}\), there exists a constant \(C>0\) such that_
\[\sup_{x\in U}\sup_{(t,s)\in V}\left|\partial_{t}^{\alpha}\partial_{s}^{\beta} \left\{\hbar K_{N}\,\left(x+\hbar t,x+\hbar s\right)-\mu\rho_{\mu}(x)K_{\rm sine }\left(\mu\rho_{\mu}(x)t,\mu\rho_{\mu}(x)s\right)\right\}\right|\leq C\hbar, \tag{4.2}\]
_where \(\rho_{\mu}(x)\) is the semicircular density (B.1), and \(K_{\rm sine}\) is the sine kernel (B.3)._
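These scaling limits are easy to probe numerically. The following NumPy sketch (an illustration, not part of the proofs) builds \(\hbar K_{N}\) directly from the sum of Hermite-function cross products at the bulk point \(x=0\) and compares it with the rescaled sine kernel; it assumes only the standard three-term recurrence for the orthonormal Hermite functions:

```python
import numpy as np

def hermite_functions(n_max, y):
    """Orthonormal Hermite functions phi_0, ..., phi_{n_max-1} on the grid y,
    computed via the stable three-term recurrence."""
    phi = np.zeros((n_max, y.size))
    phi[0] = np.pi ** -0.25 * np.exp(-y**2 / 2)
    if n_max > 1:
        phi[1] = np.sqrt(2.0) * y * phi[0]
    for k in range(1, n_max - 1):
        phi[k + 1] = np.sqrt(2.0 / (k + 1)) * y * phi[k] - np.sqrt(k / (k + 1)) * phi[k - 1]
    return phi

mu, N = 1.0, 200
hbar = mu / N
alpha = hbar ** -0.5                     # psi_k^hbar(x) = sqrt(alpha) * phi_k(alpha x)

x0, t = 0.0, np.linspace(-2.0, 2.0, 9)   # bulk point, zoom at scale hbar
psi = np.sqrt(alpha) * hermite_functions(N, alpha * (x0 + hbar * t))
K = hbar * (psi.T @ psi)                 # hbar K_N(x0 + hbar t, x0 + hbar s)

rho = np.sqrt(2 * mu - x0**2) / (np.pi * mu)                   # semicircular density (B.1)
S = mu * rho * np.sinc(mu * rho * (t[:, None] - t[None, :]))   # mu rho K_sine(mu rho t, mu rho s)
print(np.abs(K - S).max())               # O(hbar), in agreement with (4.2)
```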
**Proposition 6** (Edge asymptotics of the Christoffel-Darboux kernel).: _Suppose that \(\hbar=\hbar_{N}\) is the sequence defined by the condition \(\hbar N=\mu\). For any compact set \(W\Subset\mathbb{C}^{2}\), and for any \(\alpha,\beta\in\{0,1\}\), there exists a constant \(C>0\) such that_
\[\sup_{(t,s)\in W}\left|\partial_{t}^{\alpha}\partial_{s}^{\beta}\left\{\hbar^{\frac{2}{3}}K_{N}\left(\sqrt{2\mu}+\hbar^{\frac{2}{3}}t,\sqrt{2\mu}+\hbar^{\frac{2}{3}}s\right)-c_{\mu}K_{\rm Ai}\left(c_{\mu}t,c_{\mu}s\right)\right\}\right|\leq C\hbar^{\frac{1}{3}}, \tag{4.3}\]
_where \(c_{\mu}\) is given in (B.2), and \(K_{\rm Ai}\) is the Airy kernel (B.4)._
The scaling limits of the Christoffel-Darboux kernel to the sine and Airy kernel are well-known results. It is perhaps less known that the local uniform convergence can be promoted to their derivatives as well. We outline here a proof, adapting the presentation of the book by Anderson, Guionnet and Zeitouni [26, Chap. 3].
**Notation**.: From now on, \((\hbar_{N})_{N\geq 1}\) is the positive sequence such that the product \(\hbar_{N}\,N=\mu\), where \(\mu\) is a fixed positive number. We will write \(\hbar\) instead of \(\hbar_{N}\) for short, when no confusion arises. We will also use the following shorthand
\[K_{N,x_{0},\gamma}(t,s):=\gamma K_{N}(x_{0}+\gamma t,x_{0}+\gamma s). \tag{4.4}\]
Proof of Propositions 5 and 6.: Consider first the case \(\alpha=\beta=0\):
\[\sup_{x\in U}\sup_{(t,s)\in V}\left|K_{N,x,\hbar}\left(t,s\right)-\mu\rho_{\mu }(x)K_{\rm sine}\left(\mu\rho_{\mu}(x)t,\mu\rho_{\mu}(x)s\right)\right|\leq C\hbar, \tag{4.5}\]
and
\[\sup_{(t,s)\in W}\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(t,s\right)-c _{\mu}K_{\rm Ai}\left(c_{\mu}t,c_{\mu}s\right)\right|\leq C\hbar^{\frac{1}{3}}. \tag{4.6}\]
It is useful to get rid of the removable singularity \(t=s\) in \(K_{N,x,\hbar}\). Toward this end, noting that for any differentiable functions \(f,g\) on \(\mathbb{R}\),
\[\frac{f(t)g(s)-f(s)g(t)}{t-s}=g(s)\int_{0}^{1}f^{\prime}(\lambda t+(1-\lambda) s)d\lambda-f(s)\int_{0}^{1}g^{\prime}(\lambda t+(1-\lambda)s)d\lambda,\]
we deduce that
\[K_{N,x,\hbar}(t,s) =\sqrt{\frac{\hbar N}{2}}\psi_{N-1}^{\hbar}(x+\hbar s)\int_{0}^{ 1}{\psi_{N}^{\hbar}}^{\prime}(\lambda(x+\hbar t)+(1-\lambda)(x+\hbar s))d\lambda\] \[-\sqrt{\frac{\hbar N}{2}}\psi_{N}^{\hbar}(x+\hbar s)\int_{0}^{1}{ \psi_{N-1}^{\hbar}}^{\prime}(\lambda(x+\hbar t)+(1-\lambda)(x+\hbar s))d\lambda\] \[=\sqrt{\frac{\hbar N}{2}}\psi_{N-1}^{\hbar}(x+\hbar s)\int_{0}^{ 1}\left(\sqrt{\frac{2N}{\hbar}}\psi_{N-1}^{\hbar}(z)-\frac{z}{\hbar}\psi_{N}^ {\hbar}(z)\right)_{z=x+\hbar[\lambda t+(1-\lambda)s]}d\lambda\] \[-\sqrt{\frac{\hbar N}{2}}\psi_{N}^{\hbar}(x+\hbar s)\int_{0}^{1} \left(\sqrt{\frac{2N-2}{\hbar}}\psi_{N-2}^{\hbar}(z)-\frac{z}{\hbar}\psi_{N-1 }^{\hbar}(z)\right)_{z=x+\hbar[\lambda t+(1-\lambda)s]}d\lambda\]
where we used relation (A.9) in the last equality.
We can now insert the uniform Plancherel-Rotach asymptotics (A.10)-(A.11), perform the integrals and use elementary trigonometric identities to conclude the proof of (4.5).
To prove the \(C^{1}\)-local uniform convergence, we start by taking the derivative(s) of the Christoffel-Darboux kernel \(\partial_{t}^{\alpha}\partial_{s}^{\beta}K_{N,x,\hbar}\left(t,s\right)\). This entails computing the derivatives of Hermite functions. Now the trick is to write the derivative \(\psi_{n}^{\hbar\,^{\prime}}\) as a combination of Hermite functions (not differentiated) using again formula (A.9). Hence, the local uniform asymptotics of \(\psi_{n}^{\hbar\,^{\prime}}\) can be read off from the Plancherel-Rotach asymptotics (A.10)-(A.11) of \(\psi_{n}^{\hbar}\). The proof of the \(C^{1}\)-convergence is therefore a simple modification of the proof of (4.5).
To prove (4.6), we use again (A.9) to write the kernel as
\[K_{N}\left(x,y\right)=\frac{\hbar}{2}\frac{\psi_{N}^{\hbar}\left(x\right){ \psi_{N}^{\hbar\,^{\prime}}}(y)-\psi_{N}^{\hbar}\left(y\right){\psi_{N}^{ \hbar\,^{\prime}}}(x)}{x-y}-\frac{1}{2}\psi_{N}^{\hbar}\left(x\right)\psi_{N} ^{\hbar}\left(y\right).\]
If we set
\[\Psi_{N}^{\hbar}\left(t\right):=\hbar^{-\frac{1}{6}}\left(V_{\sqrt{2\mu}, \hbar^{\frac{2}{3}}}\psi_{N}^{\hbar}\right)\left(t\right)=\hbar^{\frac{1}{6}} \psi_{N}^{\hbar}\left(\sqrt{2\mu}+\hbar^{\frac{2}{3}}t\right), \tag{4.7}\]
then,
\[K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(t,s\right)=\frac{1}{2}\frac{\Psi_{N }^{\hbar}\left(t\right){\Psi_{N}^{\hbar\,^{\prime}}}(s)-\Psi_{N}^{\hbar}\left( s\right){\Psi_{N}^{\hbar\,^{\prime}}}(t)}{t-s}-\frac{\hbar^{\frac{1}{3}}}{2}\Psi_{N }^{\hbar}\left(t\right)\Psi_{N}^{\hbar}\left(s\right). \tag{4.8}\]
By the Plancherel-Rotach asymptotics (A.12), for any compact set \(J\Subset\mathbb{C}\),
\[\lim_{N\to\infty}\sup_{t\in J}|\Psi_{N}^{\hbar}\left(t\right)-\mathrm{Ai}(c_{ \mu}t)|=0. \tag{4.9}\]
Since the functions \(\Psi_{N}^{\hbar}\) are entire, the above locally uniform convergence entails the uniform convergence of \({\Psi_{N}^{\hbar\,^{\prime}}}\) to \(\mathrm{Ai}^{\prime}\) on compact subsets of \(\mathbb{C}\) (a standard application of Cauchy's integral formula).
By the very same argument, the finite-\(N\) kernels \(K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\) are analytic, and hence their derivatives converge to the derivatives of the Airy kernel. The proof is complete.
**Remark 9**.: Since \(\sigma_{P_{<N}}^{\hbar}\in\mathcal{S}(\mathbb{R}_{x}\times\mathbb{R}_{p})\), we have that the Fourier transforms are rapidly decreasing functions too, \(\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}\in\mathcal{S}(\mathbb{R}_{x}\times\mathbb{R}_{y})\). On the contrary,
\[\mathcal{F}_{2}\chi_{D}(x,y)=\mu\rho_{\mu}(x)K_{\mathrm{sine}}\left(-\mu\rho_ {\mu}(x)y/2,\mu\rho_{\mu}(x)y/2\right)=\frac{\sin\left[\sqrt{(2\mu-x^{2})_{+}} y\right]}{\pi y}\]
is not integrable in \(\mathbb{R}_{x}\times\mathbb{R}_{y}\), and this tells us that we cannot get in (4.2) a convergence stronger than uniform on compact subsets.
In order to prove Theorem 2 we will also need to show that the Airy kernel on the antidiagonal is dominated by an integrable function.
**Lemma 4**.: _There exist positive constants \(C\), \(c\) such that, for all \(N\in\mathbb{N}\), with \(\hbar N\) fixed,_
\[\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(-y,y\right)\right|\leq Ce^{-c\left|y\right|^{\frac{3}{2}}},\quad\text{for all }y\in\mathbb{R}.\]
Proof.: We start from the formula (recall the notation (2.2)):
\[K_{N,\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}(s,t)=\] \[\sqrt{2}\mu^{\frac{1}{6}}N^{\frac{1}{3}}\int_{0}^{\infty}\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+s\right)\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+t\right)\,du\] \[+\frac{\mu^{\frac{1}{3}}}{2N^{\frac{1}{3}}}\int_{0}^{\infty}\left(s+t+2u\right)\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+s\right)\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+t\right)\,du\] \[+\frac{1}{2}\int_{0}^{\infty}\left[\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)^{\prime}\left(u+s\right)\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+t\right)+\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)\left(u+s\right)\left(V_{\sqrt{2\hbar N},\hbar^{\frac{1}{2}}\mu^{\frac{1}{6}}N^{-\frac{1}{6}}}\psi_{N}^{\hbar}\right)^{\prime}\left(u+t\right)\right]du. \tag{4.10}\]
This is an identity true for all \(\hbar\), \(N\), and \(\mu\). It can be proved from the representation (4.8) using the differential equation for the Hermite functions.
If \(\hbar N=\mu\), on the antidiagonal \(-s=t=y\) we thus get
\[K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}(-y,y) =\underbrace{\sqrt{2\mu}\int_{0}^{\infty}\Psi_{N}^{\hbar}(u-y)\Psi_{N}^{\hbar}(u+y)\ du}_{I_{1}}\] \[+\underbrace{\hbar^{\frac{2}{3}}\int_{0}^{\infty}u\,\Psi_{N}^{\hbar}\left(u-y\right)\Psi_{N}^{\hbar}\left(u+y\right)du}_{I_{2}}\] \[+\underbrace{\frac{\hbar^{\frac{1}{3}}}{2}\int_{0}^{\infty}\left[{\Psi_{N}^{\hbar}}^{\prime}(u-y)\Psi_{N}^{\hbar}\left(u+y\right)+\Psi_{N}^{\hbar}\left(u-y\right){\Psi_{N}^{\hbar}}^{\prime}(u+y)\right]\ du}_{I_{3}}, \tag{4.11}\]
where we used the notation (4.7). To estimate \(I_{1}\), \(I_{2}\), and \(I_{3}\), we need some explicit bounds on the rescaled wavefunctions \(\Psi_{N}^{\hbar}\). Note that \(K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}(-y,y)\) is even, and so it suffices to study the case \(y>0\). A useful bound is
\[\left|\psi_{N}^{\hbar}\left(y\right)\right|\leq\frac{C^{\prime}}{N^{\frac{1}{12}}\hbar^{\frac{1}{4}}},\]
for all \(y\), see [39]. Hence the rescaled wavefunctions are uniformly bounded by a constant
\[\left|\Psi_{N}^{\hbar}\left(y\right)\right|\leq\frac{C^{\prime}}{N^{\frac{1}{12}}\hbar^{\frac{1}{12}}}\leq C^{\prime\prime}. \tag{4.12}\]
To get an integrable estimate on \(\Psi_{N}^{\hbar}\left(y\right)\) for \(y>0\) we employ a theorem by Sonin and Pólya [48, Theorem 7.31.1] giving quantitative growth information on the solutions of a Sturm-Liouville equation. We observe that \(\Psi_{N}^{\hbar}\left(y\right)\) satisfies the differential equation
\[{\Psi_{N}^{\hbar}}^{\prime\prime}=V\Psi_{N}^{\hbar},\quad\text{where}\quad V(y)=2\sqrt{2\mu}\,y+\hbar^{\frac{2}{3}}y^{2}-\hbar^{\frac{1}{3}},\]
for all \(y\in\mathbb{R}\). If \(b\) denotes the positive zero of \(V\), then we have
1. \(\Psi_{N}^{\hbar}>0\) on \([b,+\infty)\);
2. \(\lim_{y\to\infty}\left(\log\Psi_{N}^{\hbar}\left(y\right)\right)^{\prime}=-\infty\);
3. \(V>0\) and \(V^{\prime}>0\) on \([b,+\infty)\).
(For the first we use known bounds [39] on the largest zero of the Hermite polynomial of degree \(N\); the second holds because \(\Psi_{N}^{\hbar}\) is a polynomial times a Gaussian.) The above mentioned theorem of Sonin and Pólya (see the formulation in [26, Lemma 3.9.31]) allows us to conclude that
\[\left(\log\Psi_{N}^{\hbar}\left(y\right)\right)^{\prime}\leq-\sqrt{V}\quad\text{on }[b,+\infty).\]
Hence,
\[\Psi_{N}^{\hbar}\left(y\right) \leq\Psi_{N}^{\hbar}\left(b\right)\exp\left(-\int_{b}^{y}\sqrt{V(y^{\prime})}\,dy^{\prime}\right)\] \[\leq\Psi_{N}^{\hbar}\left(b\right)\exp\left(-\int_{0}^{y}\sqrt{\left(2\sqrt{2\mu}\,y^{\prime}+\hbar^{\frac{2}{3}}{y^{\prime}}^{2}-\hbar^{\frac{1}{3}}\right)_{+}}\,dy^{\prime}\right)\] \[\leq\Psi_{N}^{\hbar}\left(b\right)\exp\left(-\int_{0}^{y}\sqrt{\left(2\sqrt{2\mu}\,y^{\prime}-\hbar^{\frac{1}{3}}\right)_{+}}\,dy^{\prime}\right)\] \[\leq c^{\prime}\exp\left(-\frac{2}{3}c^{\prime}\left(y-c^{\prime}\hbar^{\frac{1}{3}}\right)^{\frac{3}{2}}\right),\]
for all \(y\geq b\). A short calculation shows that \(0<b<\frac{1}{2}\frac{\hbar^{\frac{1}{3}}}{(2\mu)^{\frac{1}{2}}}+\frac{1}{8}\frac{\hbar^{\frac{4}{3}}}{(2\mu)^{\frac{3}{2}}}\). Since \(\Psi_{N}^{\hbar}\left(y\right)\to\operatorname{Ai}(c_{\mu}y)\) pointwise, with different constants we have
\[\Psi_{N}^{\hbar}\left(y\right)\leq c^{\prime}\exp\left(-c^{\prime}y^{\frac{3}{2}}\right),\quad\text{for }y\geq 0. \tag{4.13}\]
We can now estimate, for \(y>0\),
\[I_{1} \leq C\int_{0}^{\infty}\left|\Psi_{N}^{\hbar}\left(u-y\right)\right|\left|\Psi_{N}^{\hbar}\left(u+y\right)\right|\,du\] \[\leq C\exp\left(-c\,y^{\frac{3}{2}}\right)\int_{0}^{+\infty}\left|\Psi_{N}^{\hbar}\left(u-y\right)\right|du\] \[\leq C\exp\left(-c\,y^{\frac{3}{2}}\right)\left(\int_{0}^{y}\left|\Psi_{N}^{\hbar}\left(u-y\right)\right|du+\int_{y}^{\infty}\left|\Psi_{N}^{\hbar}\left(u-y\right)\right|du\right)\] \[\leq C\exp\left(-c\,y^{\frac{3}{2}}\right)\left(\int_{-y}^{0}\left|\Psi_{N}^{\hbar}\left(u\right)\right|du+\int_{0}^{\infty}\left|\Psi_{N}^{\hbar}\left(u\right)\right|du\right)\] \[\leq C\exp\left(-c\,y^{\frac{3}{2}}\right)\left(Cy+C\right)\] \[\leq C\exp\left(-c\,y^{\frac{3}{2}}\right),\]
where \(C,c\) denote different constants in each line. In the second to last step we used the uniform bound (4.12) on \(\mathbb{R}\) and the integrable bound (4.13) on \([0,\infty)\).
The analysis of \(I_{2}\) and \(I_{3}\) as functions of \(y\) proceeds almost verbatim. Moreover, they are \(o(1)\) as \(N\to\infty\), so that their contribution is negligible.
**Remark 10**.: Since \(K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}(-y,y)\to K_{\operatorname{Ai}}(-y,y)\) pointwise, it follows that \(K_{\operatorname{Ai}}(-y,y)\) is also dominated by \(Ce^{-c\,|y|^{\frac{3}{2}}}\). For an illustration of the kernels see Fig. 5.
### Asymptotics of the symbols
Note that \(\sigma_{H_{N}}^{\hbar}\left(x,p\right)\neq p\sigma_{P_{<N}}^{\hbar}\left(x,p\right)\). This is _not_ surprising since the operators \(\hat{p}\) and \(P_{<N}\) do not commute.
**Lemma 5**.: _For all \(x,p\in\mathbb{R}\):_
\[\sigma_{H_{N}}^{\hbar}\left(x,p\right)=\ p\sigma_{P_{<N}}^{\hbar} \left(x,p\right)\\ +\frac{i}{2}\sqrt{\frac{\hbar N}{2}}\int_{\mathbb{R}_{y}}\hbar \left[\psi_{N-1}^{\hbar}\left(x-\frac{\hbar y}{2}\right)\psi_{N}^{\hbar}\left( x+\frac{\hbar y}{2}\right)-\psi_{N}^{\hbar}\left(x-\frac{\hbar y}{2}\right)\psi_{N-1}^{ \hbar}\left(x+\frac{\hbar y}{2}\right)\right]e^{ipy}dy. \tag{4.14}\]
Proof.: An application of the three-term relations (A.7)-(A.8) for the Hermite functions.
**Proposition 7**.: _The families \(\{\sigma_{P_{<N}}^{\hbar}\}_{N\geq 1}\) and \(\{\sigma_{H_{N}}^{\hbar}\}_{N\geq 1}\) are bounded in \(\mathcal{A}^{\prime}\). Moreover,_
\[\|\sigma_{H_{N}}^{\hbar}-p\sigma_{P_{<N}}^{\hbar}\|_{\mathcal{A}^{\prime}}\leq \hbar\sqrt{\frac{\mu}{2}}, \tag{4.15}\]
_thus the distance between \(\sigma_{H_{N}}^{\hbar}\left(x,p\right)\) and \(p\sigma_{P_{<N}}^{\hbar}\left(x,p\right)\) is asymptotically small in \(\mathcal{A}^{\prime}\), as \(N\to\infty\), \(\hbar\to 0\), with \(\hbar N\to\mu>0\)._
Proof of Proposition 7.: Let \(f\in\mathcal{A}\). From Plancherel's theorem
\[\langle\sigma_{P_{<N}}^{\hbar},f\rangle =\int_{\mathbb{R}_{x}\times\mathbb{R}_{y}}\hbar K_{N}\left(x-\frac{\hbar y}{2},x+\frac{\hbar y}{2}\right)\overline{\mathcal{F}_{2}f(x,y)}\,dydx,\] \[\langle\sigma_{H_{N}}^{\hbar},f\rangle =\int_{\mathbb{R}_{x}\times\mathbb{R}_{y}}\hbar Q_{N}\left(x-\frac{\hbar y}{2},x+\frac{\hbar y}{2}\right)\overline{\mathcal{F}_{2}f(x,y)}\,dydx.\]
We can estimate
\[\left|\langle\sigma_{P_{<N}}^{\hbar},f\rangle\right| \leq\hbar\left(\int\sup_{x}|\mathcal{F}_{2}f(x,y)|\,dy\right)\left(\sup_{y}\int\,\left|K_{N}\left(x-\frac{\hbar y}{2},x+\frac{\hbar y}{2}\right)\right|dx\right)\] \[\leq\hbar\|f\|_{\mathcal{A}}\sup_{y}\sum_{k=0}^{N-1}\left(\int\,\left|\psi_{k}^{\hbar}\left(x-\hbar y/2\right)\psi_{k}^{\hbar}\left(x+\hbar y/2\right)\right|dx\right)\] \[\leq\hbar\|f\|_{\mathcal{A}}\sup_{y}\sum_{k=0}^{N-1}\left(\int\,\left|\psi_{k}^{\hbar}\left(x-\hbar y/2\right)\right|^{2}dx\,\int\,\left|\psi_{k}^{\hbar}\left(x+\hbar y/2\right)\right|^{2}dx\right)^{1/2}\] \[\leq\hbar N\|f\|_{\mathcal{A}}.\]
Similarly,
\[\left|\langle\sigma_{H_{N}}^{\hbar},f\rangle\right|\leq\hbar\|f\|_{\mathcal{A}}\sqrt{\frac{\hbar}{2}}\sup_{y}\sum_{k=0}^{N-2}2\sqrt{k+1}\leq\sqrt{2}(\hbar N)^{3/2}\|f\|_{\mathcal{A}}.\]
The convergent sequence \(\hbar N\) is bounded from above. The proof of the uniform boundedness of the symbols is complete.
With the help of Lemma 5, similar calculations prove (4.15):
\[\|\sigma_{H_{N}}^{\hbar}-p\sigma_{P_{<N}}^{\hbar}\|_{\mathcal{A}^{\prime}}\] \[\leq\sup_{y}\frac{\hbar}{2}\sqrt{\frac{\mu}{2}}\int_{\mathbb{R}}\left|\psi_{N}^{\hbar}\left(x-\frac{\hbar y}{2}\right)\psi_{N-1}^{\hbar}\left(x+\frac{\hbar y}{2}\right)-\psi_{N-1}^{\hbar}\left(x-\frac{\hbar y}{2}\right)\psi_{N}^{\hbar}\left(x+\frac{\hbar y}{2}\right)\right|dx\] \[\leq\hbar\sqrt{\frac{\mu}{2}}\|\psi_{N}^{\hbar}\|_{2}\|\psi_{N-1}^{\hbar}\|_{2}.\]
### Proofs of Theorems 1, 2 and 3
Proof of Theorem 1.: Notice that, by Proposition 5, for any compact sets \(U,V\Subset\mathbb{R}\), there is a constant \(C=C(U,V,\mu)>0\), such that
\[\sup_{x\in U}\sup_{y\in V}\left|\left(\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}-\mathcal{F}_{2}\chi_{D}\right)\left(x,y\right)\right|\leq\frac{C}{N}, \tag{4.16}\] \[\sup_{x\in U}\sup_{y\in V}\left|\left(\mathcal{F}_{2}p\sigma_{P_{<N}}^{\hbar}-\mathcal{F}_{2}p\chi_{D}\right)\left(x,y\right)\right|\leq\frac{C}{N}, \tag{4.17}\]
for all \(N\geq 1\), where
\[\mathcal{F}_{2}\chi_{D}(x,y)=\mu\rho_{\mu}(x)K_{\mathrm{sine}}\left(-\mu\rho_ {\mu}(x)y/2,\mu\rho_{\mu}(x)y/2\right).\]
It is enough to show the two claims (1.5)-(1.6) for all \(f\in\mathcal{B}\). By the density of \(\mathcal{B}\) in \(\mathcal{A}\) the claim will follow for all \(f\in\mathcal{A}\). Let \(f\in\mathcal{B}\) (so that \(\mathcal{F}_{2}f\) has compact support \(J\Subset\mathbb{R}_{x}\times\mathbb{R}_{y}\), see Section 2). Then,
\[\left|\int\left[\sigma_{P_{<N}}^{\hbar}\left(x,p\right)-\chi_{D}(x,p)\right]\overline{f(x,p)}\,dxdp\right|\] \[=\left|\int\left[\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}\left(x,y\right)-\mathcal{F}_{2}\chi_{D}(x,y)\right]\overline{\mathcal{F}_{2}f(x,y)}\,dxdy\right|\] \[\leq\|\mathcal{F}_{2}f\|_{\infty}\,\int_{J}\left|\mathcal{F}_{2}\sigma_{P_{<N}}^{\hbar}\left(x,y\right)-\mathcal{F}_{2}\chi_{D}(x,y)\right|dxdy\,\leq C\|\mathcal{F}_{2}f\|_{\infty}N^{-1},\]
for some constant \(C\) (dependent on \(J\)). Hence, for all \(f\in\mathcal{B}\),
\[\lim_{N\to\infty}\langle\sigma_{P_{<N}}^{\hbar}-\chi_{D},f\rangle=0.\]
Similarly, for \(f\in\mathcal{B}\),
\[\left|\langle\sigma_{H_{N}}^{\hbar}-p\chi_{D},f\rangle\right|\leq\left|\langle \sigma_{H_{N}}^{\hbar}-p\sigma_{P_{<N}}^{\hbar},f\rangle\right|+\left|\langle p \sigma_{P_{<N}}^{\hbar}-p\chi_{D},f\rangle\right|.\]
The first term is of order \(O\left(N^{-1}\right)\) by Proposition 7; the second term is also \(O(N^{-1})\) by Eq. (4.17). Hence, for all \(f\in\mathcal{B}\),
\[\lim_{N\to\infty}\langle\sigma_{H_{N}}^{\hbar}-p\chi_{D},f\rangle=0.\]
Before proving Theorem 2, we need some notation. Consider the change of coordinates
\[T\colon\mathbb{R}_{x}\times\mathbb{R}_{p} \to\mathbb{C}^{2} \tag{4.18}\] \[(x,p) \mapsto(\theta,\zeta)\]
where \(\zeta\) is solution of
\[x^{2}+p^{2}=2\mu+\zeta^{2}, \tag{4.19}\]
and \(\theta\in[0,2\pi)\) is given by
\[\theta=\begin{cases}\arctan\frac{p}{x}&\text{if }x\neq 0\\ \frac{\pi}{2}&\text{if }x=0,\,p>0\\ \frac{3\pi}{2}&\text{if }x=0,\,p<0\end{cases}. \tag{4.20}\]
If \((x,p)\notin D\) then \(\zeta\in(0,\infty)\); if \((x,p)\in D\), then \(\zeta\in[0,i\sqrt{2\mu}]\). \(T\) is a bijection from \(\mathbb{R}^{2}\setminus\{(0,0)\}\) onto \(T(\mathbb{R}^{2}\setminus\{(0,0)\})=[0,2\pi)\times(\mathbb{R}_{+}\cup[0,i\sqrt{2\mu}])\) with Jacobian determinant
\[|J|:=\left|\det\frac{\partial(x,p)}{\partial(\zeta,\theta)}\right|=|\zeta|. \tag{4.21}\]
We can now prove Theorem 2.
Proof of Theorem 2.: Let \(g\in C_{c}^{\infty}(\mathbb{R})\). We have
\[\int_{\mathbb{R}_{x}\times\mathbb{R}_{p}}\sigma_{P_{<N}}^{\hbar}(x,p)\frac{1}{\hbar^{\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp\] \[=2\pi\int_{0}^{+\infty}\sigma_{P_{<N}}^{\hbar}(\sqrt{2\mu},z\hbar^{\frac{1}{3}})zg(z^{2})dz+2\pi\int_{0}^{+\infty}\sigma_{P_{<N}}^{\hbar}(\sqrt{2\mu},iz\hbar^{\frac{1}{3}})zg(-z^{2})\chi_{(0,\frac{2\mu}{\hbar^{2/3}})}(z)dz\] \[=2\pi\int_{\mathbb{R}_{y}}\int_{0}^{+\infty}K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}(-y/2,y/2)\,e^{-izy}zg(z^{2})dzdy\] \[+2\pi\int_{\mathbb{R}_{y}}\int_{0}^{+\infty}K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}(-y/2,y/2)\,e^{zy}zg(-z^{2})\chi_{(0,\frac{2\mu}{\hbar^{2/3}})}(z)dzdy,\]
where in the third line we used the rotational symmetry of the symbol \(\sigma_{P_{<N}}^{\hbar}\) and we considered the symbol \(\sigma_{P_{<N}}^{\hbar}\) as a function on \(\mathbb{C}_{x}\times\mathbb{C}_{p}\) (see Remark 8). Similarly,
\[\int_{\mathbb{R}_{x}\times\mathbb{R}_{p}}\chi_{D}^{(N)}\,\left(x,p\right)\frac{1}{\hbar^{\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp\] \[=\int_{\mathbb{R}_{x}\times\mathbb{R}_{p}}\mathrm{Ai}_{1}\left(\frac{1}{2^{\frac{2}{3}}\mu^{\frac{1}{3}}}\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)\right)\frac{1}{\hbar^{\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp\] \[=2\pi\int_{\mathbb{R}_{y}}\int_{0}^{+\infty}c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)e^{-izy}zg\left(z^{2}\right)dzdy\] \[+2\pi\int_{\mathbb{R}_{y}}\int_{0}^{+\infty}c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)e^{zy}zg(-z^{2})\chi_{\left(0,\frac{2\mu}{\hbar^{2/3}}\right)}(z)dzdy.\]
Hence,
\[\left|\int_{\mathbb{R}_{x}\times\mathbb{R}_{p}}\left[\sigma_{P_{<N}}^{\hbar}\left(x,p\right)-\chi_{D}^{(N)}\,\left(x,p\right)\right]\,\frac{1}{\hbar^{\frac{2}{3}}}g\left(\frac{x^{2}+p^{2}-2\mu}{\hbar^{\frac{2}{3}}}\right)dxdp\right|\leq I_{N}+J_{N},\]
where
\[I_{N}=2\pi\int_{0}^{+\infty}\left(\int_{\mathbb{R}_{y}}f_{N}\left(y\right)dy\right)\left|zg\left(z^{2}\right)\right|dz,\quad J_{N}=2\pi\int_{0}^{+\infty}\left(\int_{\mathbb{R}_{y}}f_{N}\left(y\right)e^{zy}dy\right)\left|zg\left(-z^{2}\right)\right|dz,\]
and \(f_{N}\left(y\right):=\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(-y/2,y/2\right)-c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)\right|\). By Proposition 6, \(f_{N}\left(y\right)\) tends to zero uniformly on compact sets (and hence pointwise). The tail estimate of Lemma 4 implies that the sequences \(f_{N}\left(y\right)\) and \(f_{N}\left(y\right)e^{zy}\) are dominated by an integrable function, and so by the dominated convergence theorem both \(\int f_{N}\left(y\right)dy\) and \(\int f_{N}\left(y\right)e^{zy}dy\) tend to zero (the latter for any \(z\)). Since \(\mathrm{supp}\,g\Subset\mathbb{R}_{z}\), we conclude that both \(I_{N}\) and \(J_{N}\) go to zero as \(N\to\infty\). This proves (1.10), from which (1.11) follows by recalling Lemma 5.
Proof of Theorem 3.: It is again enough to prove (1.13). The second claim (1.14) will follow by Lemma 5. Fix \(\epsilon>0\). By rotational symmetry we can assume \(x=\sqrt{2\mu}\) and \(p=z\in\mathbb{R}\).
\[\left|\sigma_{P_{<N}}^{\hbar}\left(\sqrt{2\mu},z\right)-\chi_{D}^{(N)}(\sqrt{2\mu},z)\right| \leq\int_{\mathbb{R}_{y}}\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(-y/2,y/2\right)-c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)\right|dy\] \[\leq\int_{-L}^{L}\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(-y/2,y/2\right)-c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)\right|dy\] \[+2\int_{L}^{+\infty}\left|K_{N,\sqrt{2\mu},\hbar^{\frac{2}{3}}}\left(-y/2,y/2\right)\right|dy\] \[+2\int_{L}^{+\infty}\left|c_{\mu}K_{\mathrm{Ai}}(-c_{\mu}y/2,c_{\mu}y/2)\right|dy:=I_{1}+I_{2}+I_{3}\]
for every \(L>0\). Choose \(L_{0}\) such that \(I_{2}\) and \(I_{3}\) are each bounded by \(\epsilon/3\). The first integral \(I_{1}\) is bounded by \(C\hbar^{\frac{1}{3}}\), with \(C=C(L_{0})\). Take \(\hbar\leq\left(\epsilon/(3C)\right)^{3}\) and conclude the proof.
## Acknowledgements
FDC thanks Nick Simm for helpful correspondence. We acknowledge the support by the Italian National Group of Mathematical Physics (GNFM-INdAM), by PNRR MUR
projects CN00000013-'Italian National Centre on HPC, Big Data and Quantum Computing' and PE0000023-NQSTI, by Regione Puglia through the project 'Research for Innovation' - UNIBA024, and by Istituto Nazionale di Fisica Nucleare (INFN) through the project 'QUANTUM'.
## Appendix A Quantum harmonic oscillator and Hermite polynomials
The classical harmonic oscillator Hamiltonian function is
(A.1) \[\mathfrak{h}_{\mathrm{h.o.}}(x,p)=\frac{1}{2}\left(p^{2}+x^{2}\right).\]
The _harmonic oscillator Schrodinger operator_ is
(A.2) \[H_{\mathrm{h.o.}}=\frac{1}{2}\left(-\hbar^{2}\partial_{x}^{2}+x^{2}\right).\]
The _Hermite functions_ (\(\alpha^{2}=1/\hbar\)) are
(A.3) \[\psi_{k}^{\hbar}(x)=\sqrt{\frac{\alpha}{\sqrt{\pi}2^{k}\,k!}}\exp\left(-\frac {1}{2}\alpha^{2}x^{2}\right)h_{k}\left(\alpha x\right),\qquad k=0,1,2,\ldots\]
where
(A.4) \[h_{k}\left(y\right)=\left(-1\right)^{k}e^{y^{2}}\,\frac{d^{k}}{dy^{k}}e^{-y^{ 2}}\]
is the \(k\)-th _Hermite polynomial_. The Hermite functions are eigenfunctions of the harmonic oscillator
(A.5) \[H_{\mathrm{h.o.}}\psi_{k}^{\hbar}(x)=\lambda_{k}\psi_{k}^{\hbar}(x),\quad k=0,1,2,\ldots,\]
with eigenvalues \(\lambda_{k}=\hbar\left(k+\frac{1}{2}\right)\), and form an orthonormal basis in \(L^{2}(\mathbb{R})\)
(A.6) \[\int_{\mathbb{R}}\psi_{k}^{\hbar}(x)\psi_{\ell}^{\hbar}(x)dx=\delta_{k,\ell}.\]
Useful formulae are the following three-term relations written in terms of position operator \(\hat{x}\) and momentum operator \(\hat{p}=-i\hbar\frac{d}{dx}\):
(A.7) \[(\hat{x}\psi_{k}^{\hbar})(x) =\sqrt{\frac{\hbar}{2}}\left[\sqrt{k+1}\psi_{k+1}^{\hbar}(x)+ \sqrt{k}\psi_{k-1}^{\hbar}(x)\right]\] (A.8) \[(\hat{p}\psi_{k}^{\hbar})(x) =i\sqrt{\frac{\hbar}{2}}\left[\sqrt{k+1}\psi_{k+1}^{\hbar}(x)- \sqrt{k}\psi_{k-1}^{\hbar}(x)\right].\]
When combined they give the useful relation
(A.9) \[\frac{d}{dz}\psi_{k}^{\hbar}(z)=\sqrt{\frac{2k}{\hbar}}\psi_{k-1}^{\hbar}(z)- \frac{z}{\hbar}\psi_{k}^{\hbar}(z).\]
We have the following Plancherel-Rotach asymptotics formulae (see [48, Theorem 8.22.9]). Let \(\epsilon<\epsilon^{\prime}\) be fixed positive numbers, and \(n\in\mathbb{Z}\) fixed. Let \(\hbar=\hbar_{N}\) so that \(\hbar_{N}N=\mu\), where \(\mu>0\) is a fixed number. The following asymptotics hold true:
1. If \(x=\sqrt{(2+1/N)\mu}\cos\phi\), \(\epsilon\leq\phi\leq\pi-\epsilon\), then (A.10) \[\psi_{N+n}^{\hbar}\left(x\right)=\left(\frac{2}{\mu}\right)^{\frac{1}{4}} \left(\frac{1}{\pi\sin\phi}\right)^{\frac{1}{2}}\left\{\sin\left[\left(\frac {N}{2}+\frac{1}{4}\right)\left(\sin(2\phi)-2\phi\right)+\frac{3\pi}{4}-n\phi \right]+O\left(N^{-1}\right)\right\};\]
2. If \(x=\sqrt{(2+1/N)\mu}\cosh\phi\), \(\epsilon\leq\phi\leq\epsilon^{\prime}\), then (A.11) \[\psi_{N+n}^{\hbar}\left(x\right)=\left(\frac{1}{8\mu}\right)^{\frac{1}{4}}\left(\frac{1}{\pi\sinh\phi}\right)^{\frac{1}{2}}\exp\left(-\left(\frac{N}{2}+\frac{1}{4}\right)\left(\sinh(2\phi)-2\phi\right)+n\phi\right)\left(1+O(N^{-1})\right);\]
3. If \(x\) lies in the transition region around the turning point \(\sqrt{2\mu}\), with \(t\) the rescaled coordinate there, then (A.12) \[\psi_{N}^{\hbar}\left(x\right)=\left(\sqrt{2/\mu}\,N^{1/3}\right)^{\frac{1}{2}}\operatorname{Ai}(t)+O(N^{-1/2}).\]
In all these formulae, the \(O\)-terms hold uniformly. Note that the choice \(\hbar N=\mu\) is the right scaling of a vanishing Planck constant that gives rise to a nontrivial asymptotics.
## Appendix B Sine kernel, Airy kernel and their Fourier transforms
We denote the normalised _semicircular density_ of radius \(\sqrt{2\mu}>0\),
(B.1) \[\rho_{\mu}(x)=\frac{1}{\pi\mu}\sqrt{(2\mu-x^{2})_{+}},\]
and
(B.2) \[c_{\mu}=2^{\frac{1}{2}}\mu^{\frac{1}{6}}.\]
Here is why these factors pop out in all formulae. If the energy of a (classical) harmonic oscillator is \(\mathfrak{h}_{\mathrm{h.o.}}(x,p)=\mu\), then the momentum as a function of position is \(p(x)=\sqrt{(2\mu-x^{2})_{+}}\). At the points of inversion of motion \(x=\pm\sqrt{2\mu}\), the momentum is zero, and the one-sided derivative of its square is \(|(p^{2})^{\prime}(\pm\sqrt{2\mu})|=2\sqrt{2\mu}=c_{\mu}^{3}\).
The sine and the Airy kernels are
(B.3) \[K_{\mathrm{sine}}(u,v) =\frac{\sin\pi\left(u-v\right)}{\pi(u-v)} \text{(\emph{sine kernel})},\] (B.4) \[K_{\mathrm{Ai}}(u,v) =\frac{\operatorname{Ai}(u)\operatorname{Ai}^{\prime}(v)- \operatorname{Ai}^{\prime}(u)\operatorname{Ai}(v)}{u-v} \text{(\emph{Airy kernel})},\]
where the _Airy function_ is defined by the formula
(B.5) \[\operatorname{Ai}(x):=\frac{1}{2\pi i}\int_{C}e^{\zeta^{3}/3-x\zeta}\,d\zeta,\]
with \(C\) a contour in the complex \(\zeta\)-plane consisting of the ray joining \(e^{-i\pi/3}\infty\) to the origin plus the ray joining the origin to \(e^{i\pi/3}\infty\).
The kernels (B.3)-(B.4) are defined for \(u=v\) in the unique way making them continuous (and in fact \(C^{\infty}\)). The sine kernel can be viewed as the 'square' of another symmetric kernel,
(B.6) \[K_{\mathrm{sine}}(u,v)=\frac{1}{2}\int_{-1}^{1}e^{\pi iu\lambda}\,e^{-\pi iv\lambda}\,d\lambda.\]
A similar identity holds for the Airy kernel,
(B.7) \[K_{\mathrm{Ai}}(u,v)=\int_{0}^{+\infty}\operatorname{Ai}(u+\lambda) \operatorname{Ai}(v+\lambda)d\lambda.\]
(Use (2.1) and the Airy differential equation, see [50].) Using a trick one gets the following useful representation:
(B.8) \[K_{\mathrm{Ai}}(u,v)=\int_{\mathbb{R}}e^{iq(u-v)}\left(\int_{0}^{+\infty}\operatorname{Ai}\left(\lambda+2^{2/3}q^{2}+(u+v)/2^{1/3}\right)d\lambda\right)\frac{dq}{2\pi}.\]
Note that \(K_{\rm sine}(u,-u)\) is locally integrable (but not in \(L^{1}(\mathbb{R})\)), while \(\int_{\mathbb{R}}|K_{\rm Ai}(u,-u)|\ du<\infty\).
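Both kernels are easy to evaluate numerically; as a quick sanity check of (B.4) and of the 'square' representation (B.7), one can run the following SciPy sketch (an illustration, not part of the text):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

def K_Ai(u, v):
    """Airy kernel (B.4), with the continuous extension Ai'(u)^2 - u Ai(u)^2
    on the diagonal (obtained from the Airy equation Ai'' = u Ai)."""
    Ai_u, Aip_u, _, _ = airy(u)
    Ai_v, Aip_v, _, _ = airy(v)
    if np.isclose(u, v):
        return Aip_u**2 - u * Ai_u**2
    return (Ai_u * Aip_v - Aip_u * Ai_v) / (u - v)

# check (B.7): K_Ai(u, v) = int_0^infty Ai(u + lam) Ai(v + lam) d lam
u, v = 0.3, -0.7
lhs = K_Ai(u, v)
rhs, _ = quad(lambda lam: airy(u + lam)[0] * airy(v + lam)[0], 0.0, 50.0)
print(lhs, rhs)   # the two values agree to quadrature accuracy
```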
The characteristic function of the disk is recovered from \(K_{\rm sine}\) by a Fourier transform:
(B.9) \[\int_{\mathbb{R}}\mu\rho_{\mu}(x)K_{\rm sine}(-\mu\rho_{\mu}(x)y/2,\mu\rho_{\mu} (x)y/2)e^{ipy}dy=\chi_{D}(x,p).\]
From (B.8) we also get an explicit formula for the Fourier transform of \(K_{\rm Ai}\):
(B.10) \[\int_{\mathbb{R}}c_{\mu}K_{\rm Ai}(-c_{\mu}y/2,c_{\mu}y/2)e^{izy}dy={\rm Ai}_{ 1}\left(\frac{z^{2}}{(2\mu)^{\frac{1}{3}}}\right),\]
where
(B.11) \[{\rm Ai}_{1}(\xi):=\int_{\xi}^{+\infty}{\rm Ai}\left(u\right)du\]
is the _integrated Airy function_. We have \({\rm Ai}_{1}(-\infty)=\int_{\mathbb{R}}{\rm Ai}\left(u\right)du=1\). The function \({\rm Ai}_{1}(\xi)\) has the following large \(|\xi|\) asymptotics [43, Eq. (9.10.4)-(9.10.6)]:
(B.12) \[{\rm Ai}_{1}(\xi)\sim\begin{cases}\frac{1}{2\pi^{1/2}|\xi|^{3/4}}e^{-\frac{2}{3}|\xi|^{3/2}},&\text{for $\xi\to+\infty$},\\ \\ 1-\frac{1}{\pi^{1/2}|\xi|^{3/4}}\cos\left(\frac{2}{3}|\xi|^{3/2}+\frac{\pi}{4}\right),&\text{for $\xi\to-\infty$}.\end{cases}\]
|
2305.09781 | SpecInfer: Accelerating Generative Large Language Model Serving with
Tree-based Speculative Inference and Verification | This paper introduces SpecInfer, a system that accelerates generative large
language model (LLM) serving with tree-based speculative inference and
verification. The key idea behind SpecInfer is leveraging small speculative
models to predict the LLM's outputs; the predictions are organized as a token
tree, whose nodes each represent a candidate token sequence. The correctness of
all candidate token sequences represented by a token tree is verified against
the LLM in parallel using a novel tree-based parallel decoding mechanism.
SpecInfer uses an LLM as a token tree verifier instead of an incremental
decoder, which significantly reduces the end-to-end latency and computational
requirement for serving generative LLMs while provably preserving model
quality. Our evaluation shows that SpecInfer outperforms existing LLM serving
systems by 1.5-2.8x for distributed LLM inference and by 2.6-3.5x for
offloading-based LLM inference, while preserving the same generative
performance. SpecInfer is publicly available at
https://github.com/flexflow/FlexFlow/ | Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, Chunan Shi, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia | 2023-05-16T20:12:59Z | http://arxiv.org/abs/2305.09781v4 | SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification
###### Abstract
The high computational and memory requirements of generative large language models (LLMs) make it challenging to serve them quickly and cheaply. This paper introduces SpecInfer, an LLM serving system that accelerates generative LLM inference with speculative inference and token tree verification. A key insight behind SpecInfer is to combine various collectively boost-tuned small language models to jointly predict the LLM's outputs; the predictions are organized as a token tree, whose nodes each represent a candidate token sequence. The correctness of all candidate token sequences represented by a token tree is verified by the LLM in parallel using a novel tree-based parallel decoding mechanism. SpecInfer uses an LLM as a token tree verifier instead of an incremental decoder, which significantly reduces the end-to-end latency and computational requirement for serving generative LLMs while provably preserving model quality.
## 1 Introduction
Generative large language models (LLMs), such as ChatGPT [3] and GPT-4 [24], have demonstrated remarkable capabilities of creating natural language texts across various application domains, including summarization, instruction following, and question answering [43; 22]. However, it is challenging to quickly and cheaply serve these LLMs due to their large volume of parameters, complex architectures, and high computational requirements. For example, the GPT-3 architecture has 175 billion parameters, which require more than 16 NVIDIA 40GB A100 GPUs to store in single-precision floating point, and take several seconds to serve a single inference request [3].
A generative LLM generally takes as input a sequence of tokens, called a _prompt_, and generates subsequent tokens one at a time, as shown in Figure 1a. The generation of each token in the sequence is conditioned on the input prompt and previously generated tokens and does not consider future tokens. This approach is also called _autoregressive_ decoding because each generated token is also used as input for generating future tokens. This dependency between tokens is crucial for many NLP tasks that require preserving the order and context of the generated tokens, such as text completion.
Existing LLM systems generally use an _incremental decoding_ approach to serving a request where the system computes the activations for all prompt tokens in a single step and then iteratively decodes _one_ new token using the input prompt and all previously generated tokens. This approach respects data dependencies between tokens, but achieves suboptimal runtime performance and limited GPU utilization, since the degree of parallelism within each request is greatly limited in the incremental phase. In addition, the attention mechanism of Transformer [36] requires accessing the keys and values of all previous tokens to compute the attention output of a new token. To avoid recomputing the keys and values for all preceding tokens, today's LLM serving systems use a caching mechanism
to store their keys and values for reuse in future iterations. For long-sequence generative tasks (e.g., GPT-4 supports up to 32K tokens in a request), caching keys and values introduces significant memory overhead, which prevents existing systems from serving a large number of requests in parallel.
This paper introduces SpecInfer, an LLM serving system that improves the end-to-end latency and computational efficiency of generative LLM inference with _speculative inference_ and _token tree verification_. A key insight behind the design of SpecInfer is to use an LLM as a token tree verifier instead of an incremental decoder. For a given sequence of tokens, SpecInfer uses a _learning-based speculator_ that combines user-provided functions (e.g., a document retriever) and multiple collectively _boost-tuned_ small speculative models (SSMs) to jointly generate a token tree, whose nodes each represent a candidate token sequence. The correctness of _all_ token sequences represented by a token tree is then verified against the LLM's original output in parallel using a novel _tree-based parallel_ decoding algorithm. This approach allows SpecInfer to opportunistically verify multiple tokens in a single decoding step as long as the speculated token tree overlaps with the LLM's output.
Compared to incremental decoding, SpecInfer's speculative inference and token tree verification introduce small computation and memory overheads for generating and verifying speculated token trees. However, by maximizing the number of tokens that can be successfully verified in a single LLM decoding step, SpecInfer greatly reduces the end-to-end inference latency and improves the computational efficiency for serving generative LLMs. We evaluate SpecInfer on two LLM families (i.e., LLaMA [34] and OPT [44]) and five prompt datasets. Our evaluation shows that SpecInfer can reduce the number of LLM decoding steps by up to \(4.4\times\) (\(3.7\times\) on average) and reduce the end-to-end inference latency by up to \(2.8\times\).

Figure 1: Comparing the incremental decoding approach used by existing LLM serving systems and the speculative inference and token tree verification approach used by SpecInfer.
## 2 Overview
```
Input: A sequence of input tokens I
Output: A sequence of generated tokens
S = I
while true do
    t = Decode(LLM, S)
    S.append(t)
    if t = <EOS> then
        return S
```
**Algorithm 1** The incremental decoding algorithm used in existing LLM serving systems.
```
Input: A sequence of input tokens I
Output: A sequence of generated tokens
S = I
while true do
    N = Speculate(S)
    O = TreeParallelDecode(LLM, N)
    V = Verify(O, N)
    for t in V do
        S.append(t)
        if t = <EOS> then
            return S

function Verify(O, N)
    V = empty sequence
    u = the root of token tree N
    while there exists v in N with p_v = u and t_v = O(u) do
        u = v
        V.append(t_v)
    V.append(O(u))
    return V
```
**Algorithm 2** The speculative inference and token tree verification algorithm used by SpecInfer. Speculate takes the current token sequence \(\mathcal{S}\) as an input and generates a speculated token tree \(\mathcal{N}\). SpecInfer's use of an LLM is different from existing systems: the LLM takes a token tree \(\mathcal{N}\) as an input and generates a token \(\mathcal{O}(u)\) for each node \(u\in\mathcal{N}\). Note that the TreeParallelDecode function can generate all tokens in \(\mathcal{O}\) in a single LLM decoding step (see Section 4). Finally, Verify examines the speculated token tree \(\mathcal{N}\) against the LLM's output \(\mathcal{O}\) and produces a sequence of verified tokens \(\mathcal{V}\), which can be directly appended to the current token sequence \(\mathcal{S}\).
Figure 1c shows an overview of our approach. SpecInfer includes a _learning-based speculator_ that takes as input a sequence of tokens, and produces a _speculated token tree_. The goal of the speculator is to predict the LLM's output by maximizing the overlap between the speculated token tree and the token sequence generated by the LLM using incremental decoding. As shown at the top of Figure 1c, the speculator combines (1) user-provided functions that predict future tokens based on heuristics and/or retrieval-augmented documents, and (2) multiple distilled and/or pruned versions of the LLM, which we call small speculative models (SSMs).
There are a number of ways to prepare SSMs for speculative inference. First, modern LLM families generally include much smaller architectures pre-trained together with the LLM using the same datasets.
For example, in addition to the OPT-175B model with 175 billion parameters, the OPT model family also includes OPT-125M and OPT-350M, two variants with 125 million and 350 million parameters, which were pre-trained using the same datasets as OPT-175B [44]. These pre-trained small models can be directly used as SSMs in SpecInfer. Second, to maximize the coverage of speculated token trees, in addition to using these pre-trained SSMs, SpecInfer also introduces a novel fine-tuning technique called _collective boost-tuning_ to cooperatively fine-tune a set of SSMs by aligning their aggregated prediction with the LLM's output using adaptive boosting [13].
The speculator automatically combines the candidate token sequences predicted by individual SSMs to construct a token tree, as shown in Figure 1c. Since SpecInfer executes multiple SSMs in parallel, using more SSMs does not directly increase the speculative inference latency. However, using a large number of SSMs will result in a large token tree, which requires more memory and computation resources for verification. To address this challenge, SpecInfer uses a _learning-based_ speculative scheduler to learn to decide which SSMs to use for a given input token sequence and the speculative configurations for these SSMs (e.g., the beam search width and depth when running an SSM using beam search).
SpecInfer's usage of the LLM is also different from that of existing LLM serving systems. Instead of using the LLM as an incremental decoding engine that predicts the next single token, SpecInfer uses the LLM as a token tree verifier that verifies whether the speculated token tree overlaps with the true token sequence. For each token, SpecInfer computes its activations by considering all of its ancestors in the token tree as its preceding tokens. For example, the attention output of the token \(t_{3,0}\) is calculated based on sequence \((t_{0},t_{1,0},t_{2,1},t_{3,0})\), where \(t_{0}\), \(t_{1,0}\), and \(t_{2,1}\) are \(t_{3,0}\)'s ancestors in the token tree. SpecInfer includes a novel tree-based parallel decoding algorithm to simultaneously verify _all_ tokens in a speculated token tree in a single LLM decoding step.
SpecInfer's speculative inference and token tree verification provide two key advantages over the incremental decoding approach of existing LLM inference systems.
Reduced memory accesses to LLM parameters. The performance of generative LLM inference is largely limited by GPU memory accesses. In the existing incremental decoding approach, generating a single token requires accessing all parameters of an LLM. The problem is exacerbated for offloading-based LLM inference systems, which use limited computational resources such as a single commodity GPU to serve LLMs by keeping model parameters in CPU DRAM and persistent storage and loading these parameters into the GPU's high bandwidth memory (HBM) for computation. Compared to the incremental decoding approach, SpecInfer significantly reduces accesses to LLM parameters whenever the overlap between a speculated token tree and the LLM's actual output is not empty. Reduced accesses to GPU device memory and reduced data transfers between GPU and CPU memory can also directly translate to decreased energy consumption, since accessing GPU HBM consumes two or three orders of magnitude more energy than floating point arithmetic operations.
Reduced end-to-end inference latency. Serving LLMs suffers from long end-to-end inference latency. For example, the GPT-3 architecture includes 175 billion parameters and requires many seconds to serve a request. In the existing incremental decoding approach, the computation for generating each token depends on the keys and values of all previously generated tokens, which introduces sequential dependencies between tokens and requires modern LLM serving systems to serialize the generation of different tokens for each request. In SpecInfer, LLMs are used as a verifier that takes a speculated token tree as an input and can simultaneously examine _all_ tokens in the token tree by making a single verification pass over the LLM. This approach enables parallelization across different tokens in a single request and reduces the LLM's end-to-end inference latency.
## 3 Speculative Inference
One major component of SpecInfer is the design and implementation of the speculator. On the one hand, more accurate speculation can lead to speculated token trees with longer matching lengths, which in turn results in fewer LLM verification steps. On the other hand, due to intrinsic linguistic dynamism, where some phrases in a sentence are easier to speculate than others, a fixed configuration to perform speculation (e.g., the beam width and depth when speculating using beam search) leads to suboptimal performance, since a very small speculation window may result in
missed opportunities to match longer token sequences, while a very large speculation window may produce unnecessary tokens.
SpecInfer includes two key techniques to address this challenge. First, to improve the speculative performance of a token tree, Section 3.1 introduces collective boost-tuning, a novel fine-tuning technique that aligns the aggregated prediction of a set of SSMs with the LLM's output using adaptive boosting. Second, to tackle the dynamism across different speculations, Section 3.2 presents a learning-based speculative scheduler that learns to discover the best speculative configuration for a given input token sequence and a set of SSMs.
### Collective Boost-Tuning
As identified in previous works [21; 32], a key limitation of using a single SSM for speculative inference is that the alignment between the SSM and LLM is inherently bounded by the model capacity gap between the two models. Our preliminary exploration shows that using a larger model achieves better speculative performance but introduces additional memory overhead and inference latency to run the larger speculative model.
Consequently, SpecInfer uses an unsupervised approach to collectively fine-tuning a pool of SSMs to align their outputs with that of the LLM by leveraging the adaptive boosting technique, as shown in Figure 2. SpecInfer's SSMs are used to predict the next few tokens that will be generated by an LLM; therefore SpecInfer uses general text datasets (e.g., the OpenWebText corpus [14] in our evaluation) to adaptively align the aggregated output of multiple SSMs with the LLM in a fully unsupervised fashion. In particular, we convert a text corpus into a collection of prompt samples and use the LLM to generate a token sequence for each prompt. SpecInfer first fine-tunes one SSM at a time to the fullest and marks all prompt samples where the SSM and LLM generate identical subsequent tokens. Next, SpecInfer filters out all marked prompt samples and uses all remaining samples in the corpus to fine-tune the next SSM to the fullest. By repeating this process for every SSM in the pool, SpecInfer obtains a diverse set of SSMs whose aggregated output largely overlaps with the LLM's output on the training corpus. All SSMs have roughly identical inference latency, and therefore running all SSMs on different GPUs in parallel does not increase the latency of speculative inference compared to using a single SSM. Note that using multiple SSMs increases the memory overhead for storing their parameters on GPUs. However, our evaluation shows that SpecInfer can achieve significant performance improvement by using SSMs 40-100\(\times\) smaller than the LLM, making the overhead of hosting these SSMs negligible. In our evaluation, we perform collective boost-tuning offline on publicly available datasets.
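A minimal sketch of this filtering loop follows; the helpers `fine_tune` and `agree` and the models' `generate` methods are placeholders, since the paper does not specify the training interface at this level of detail, so this illustrates the logic rather than SpecInfer's actual training code:

```python
def collective_boost_tune(ssms, llm, prompts, fine_tune, agree):
    """Adaptive-boosting-style alignment: each SSM is fine-tuned to the fullest on
    the prompts that no earlier SSM already speculates correctly, so the pool's
    aggregated output covers more of the LLM's outputs."""
    remaining = list(prompts)
    for ssm in ssms:
        targets = {p: llm.generate(p) for p in remaining}  # LLM outputs as labels
        fine_tune(ssm, targets)
        # keep only the prompts where this SSM still disagrees with the LLM
        remaining = [p for p in remaining if not agree(ssm.generate(p), targets[p])]
        if not remaining:
            break
    return ssms
```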
Figure 2: Illustrating SpecInfer’s collective boost-tuning technique. When using a single SSM to generate token trees, SpecInfer can verify 2.6 tokens on average in each LLM decoding step. This is due to the misalignment between SSM 1 and the LLM on the first four token sequences. By collectively boost-tuning three SSMs, the average number of verified tokens per LLM decoding step is improved to 7.2.
### Learning-based Speculative Scheduler
To discover an optimal configuration to launch multiple SSMs at each decoding step, we design a _learning-based_ speculative scheduler that learns to decide which SSMs to use for a given input token sequence and the speculative configurations for these SSMs.
The scheduler includes a matching length predictor and a cost model. The matching length predictor takes as input the latest feature representation of the final hidden layer from the LLM and outputs a vector of continuous numbers, each corresponding to the expected matching length under a specific speculative configuration. SpecInfer uses a three-layer MLP as the neural architecture of the matching length predictor and considers a configuration space of beam search for each SSM, where the beam width \(b\in\{1,2,4\}\) and the beam depth \(d\in\{1,2,4,8,16\}\); therefore the MLP outputs a vector of 15 numbers, each representing the predicted matching length for a speculative configuration. The predictor is also trained on publicly available datasets in an offline fashion. Note that obtaining the input feature vector for the predictor does not involve extra cost as it is self-contained in SpecInfer's verifier (see Section 4).
To achieve higher matching length per unit time, we define the following cost function:
\[cost(b,d\mid h)=\frac{f(b,d\mid h)}{L_{\text{verify}}(b,d)+L_{\text{speculate }}(b,d)}, \tag{1}\]
where \(b\) and \(d\) are the beam search width and depth, \(h\) is the input feature vector to the predictor, and \(f(b,d\mid h)\) is the predicted matching length for the given speculative configuration \((b,d)\) and current context \(h\). \(L_{\text{verify}}(b,d)\) and \(L_{\text{speculate}}(b,d)\) are the estimated inference latencies for the verifier and speculator, respectively, which are measured by profiling the SpecInfer runtime system. Using the cost function defined in Equation (1), SpecInfer chooses the configuration that maximizes the expected cost for each SSM:
\[(b,d)=\arg\max_{(b,d)}cost(b,d\mid h) \tag{2}\]
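A sketch of this selection step follows; the predictor and the profiled latency functions are stand-ins for the components described above, not SpecInfer's actual API:

```python
import itertools

BEAM_WIDTHS = (1, 2, 4)
BEAM_DEPTHS = (1, 2, 4, 8, 16)

def choose_config(predict_match_len, L_verify, L_speculate, h):
    """Pick the beam (width, depth) maximizing expected matching length per unit
    time, following Eq. (1)-(2); h is the LLM's last hidden-layer feature vector."""
    f = predict_match_len(h)  # maps (b, d) -> predicted matching length (15 values)
    return max(itertools.product(BEAM_WIDTHS, BEAM_DEPTHS),
               key=lambda bd: f[bd] / (L_verify(*bd) + L_speculate(*bd)))
```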
## 4 Token Tree Verifier
This section introduces SpecInfer's token tree verifier, which takes as input a token tree generated by the speculator and verifies the correctness of its token sequences against a given LLM.
Token tree. SpecInfer uses a _token tree_ to store the results generated by the learning-based speculator. Each token tree \(\mathcal{N}\) is a tree structure, where each node \(u\in\mathcal{N}\) is labelled by token \(t_{u}\), and \(p_{u}\) represents \(u\)'s parent node in the token tree. For each node \(u\), \(S_{u}\) represents a sequence of tokens identified by concatenating \(S_{p_{u}}\) and \(\{t_{u}\}\)1.
Footnote 1: For the root node \(r\), \(S_{r}\) represents the token sequence \(\{t_{r}\}\).
SpecInfer receives multiple token sequences generated by different SSMs, each of which can be considered as a token tree (with linear tree structure). SpecInfer first merges these token trees into a single tree structure.
**Definition 4.1** (Tree Merge).: \(\mathcal{M}\) is the tree merge of \(m\) token trees \(\{\mathcal{N}_{i}\}\) (\(1\leq i\leq m\)) if and only if \(\forall 1\leq i\leq m,\forall u\in\mathcal{N}_{i},\exists v\in\mathcal{M}\) such that \(S_{v}=S_{u}\) and vice versa.
Intuitively, each token tree represents a set of token sequences. Merging multiple token trees produces a new tree that includes all token sequences of the original trees.
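One concrete realization of the merge is a prefix trie over the candidate token sequences; a minimal sketch (an illustration, not necessarily SpecInfer's internal data structure):

```python
class TokenTreeNode:
    def __init__(self, token=None):
        self.token = token
        self.children = {}            # token -> TokenTreeNode

def merge_token_sequences(sequences):
    """Merge candidate token sequences (linear token trees) into one token tree
    satisfying Definition 4.1: sequences sharing a prefix share that path of nodes."""
    root = TokenTreeNode()            # virtual root above the shared prefix
    for seq in sequences:
        node = root
        for tok in seq:
            node = node.children.setdefault(tok, TokenTreeNode(tok))
    return root
```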
A key idea behind the design of SpecInfer is _simultaneously_ verifying all sequences of a token tree against the original LLM's output by making a single pass over the LLM architecture. Token tree verification allows SpecInfer to opportunistically decode multiple tokens (instead of a single token in the incremental decoding approach), resulting in reduced accesses to the LLM's parameters. A challenge SpecInfer must address in token tree verification is efficiently computing the attention scores for _all_ sequences of a token tree. SpecInfer introduces _tree attention_, a fast and cheap approach to performing Transformer-based attention computation for a token tree, and a number of important system-level optimizations to address this challenge.
Section 4.1 describes tree attention, Section 4.2 introduces the mechanism SpecInfer uses to verify a token tree against an LLM's output, and Section 4.3 presents SpecInfer's optimizations to accelerate token tree verification.
### Tree Attention
Transformer-based language models use the attention mechanism to reason about sequential information [36]. Modern LLMs generally use decoder-only, multi-head self-attention layers, each of which takes a single input tensor \(X\) and computes an output tensor \(O\) via scaled multiplicative formulations as follows.
\[Q_{i}=X\times W_{i}^{Q}, K_{i}=X\times W_{i}^{K}, V_{i}=X\times W_{i}^{V}, \tag{3}\] \[A_{i}=\frac{(Q_{i}\times K_{i}^{T})}{\sqrt{d}}, H_{i}=\text{softmax}(\text{mask}(A_{i}))V_{i}, O=(H_{1},...,H_{h})W^{O} \tag{4}\]
where \(Q_{i}\), \(K_{i}\), and \(V_{i}\) denote the query, key, and value tensors of the \(i\)-th attention head (\(1\leq i\leq h\)), and \(W_{i}^{Q}\), \(W_{i}^{K}\), and \(W_{i}^{V}\) are the corresponding weight matrices. \(A_{i}\) is an \(l\times l\) matrix that represents the attention scores between different tokens in the input sequence, where \(l\) is the sequence length. To preserve causality when generating tokens (i.e., a token in the sequence should not affect the hidden states of any preceding tokens), the following causal mask function is applied:
\[\text{mask}(A)_{jk}=\begin{cases}A_{jk}&j\geq k\\ -\infty&j<k\end{cases} \tag{5}\]
Intuitively, when computing the attention output of the \(j\)-th token in the sequence, all subsequent tokens should have an attention score of \(-\infty\) to indicate that the subsequent tokens will not affect the attention output of the \(j\)-th token2. In Equation 4, \(H_{i}\) represents the output of the \(i\)-th attention head, and \(W^{O}\) is a weight matrix used for computing the final output of the attention layer.
Footnote 2: An attention score of \(-\infty\) becomes a zero attention weight after the softmax, so the masked tokens contribute nothing to the attention output.
Note that the attention mechanism described above applies to a sequence of tokens. Therefore, a straightforward approach to verifying a token tree is computing the attention scores for individual token sequences (i.e., \(S_{u}\) for all \(u\in\mathcal{N}\)). However, this approach is computationally very expensive and involves redundant computations, since two token sequences sharing a common prefix have the same attention outputs for the common prefix due to the causal mask in Equation 4. To address this issue, we generalize the attention mechanism to apply it to tree structures. For each node \(u\) in a token tree, its attention output is defined as the output of computing attention on \(S_{u}\) (i.e., the token sequence represented by \(u\)). Note that the semantic of SpecInfer's tree attention is different from prior tree-structured attention work, which we discuss in Section 7.
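One standard way to realize this generalization is a tree-structured analogue of the causal mask in Equation 5: position \(k\) is visible to position \(j\) exactly when \(k\) lies on \(j\)'s root path. A small sketch of such a mask (an illustration, not SpecInfer's kernel implementation):

```python
import numpy as np

def tree_attention_mask(parent):
    """parent[j] = index of node j's parent in the token tree (-1 for the root),
    with nodes listed in a topological (e.g. depth-first) order. Entry (j, k) is 0
    iff k is j or one of j's ancestors, and -inf otherwise, so that row-wise
    softmax(A + mask) computes, for each node u, attention over exactly S_u."""
    n = len(parent)
    mask = np.full((n, n), -np.inf)
    for j in range(n):
        k = j
        while k != -1:                # walk up the root path of node j
            mask[j, k] = 0.0
            k = parent[k]
    return mask
```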
### Verification
For a given speculated token tree \(\mathcal{N}\), SpecInfer uses the tree attention mechanism described in Section 4.1 to compute an attention output for each node \(u\in\mathcal{N}\). A key advantage of this approach is enabling SpecInfer to examine all tokens in parallel by visiting the LLM's parameters once. This parallel decoding procedure generates an output tensor \(\mathcal{O}\) that includes a token for each node \(u\in\mathcal{N}\). Algorithm 2 shows SpecInfer's verification process, which starts from the root of \(\mathcal{N}\) and iteratively examines a node's speculated results against the LLM's original output. For a node \(u\in\mathcal{N}\), SpecInfer successfully speculates its next token if \(u\) includes a child node \(v\) (i.e., \(p_{v}=u\)) whose token matches the LLM's output (i.e., \(t_{v}=\mathcal{O}(u)\)). In this case, SpecInfer finishes its verification for node \(u\) and moves on to examine its child \(v\). When the node \(u\) does not include a child that contains the LLM's output, SpecInfer appends \(\mathcal{O}(u)\) as a final verified token and terminates the verification process. Finally, all verified tokens are appended to the current generated token sequence \(\mathcal{S}\). Token tree verification allows SpecInfer to opportunistically decode multiple tokens (instead of a single token in the incremental decoding approach), while preserving the same generative performance as incremental decoding.
### Optimizations
This section describes a number of system-level optimizations in SpecInfer to accelerate token tree verification.
Depth-first search to update key-value cache. As shown in Equation 4, the attention mechanism of Transformer [36] requires accessing the keys and values of all preceding tokens to compute the attention output of each new token. To avoid recomputing these keys and values, today's LLM inference systems generally cache the keys and values of all tokens for reuse in future iterations, since the causal relation guarantees that a token's key and value remain unchanged in subsequent iterations.
A key challenge SpecInfer must address in verifying a token tree is that different sequences in the token tree may imply conflicting key-value caches. For the speculated token tree at the top of Figure 3, the two token sequences \((t_{2},t_{3},t_{4},t_{5})\) and \((t_{2},t_{3},t_{8},t_{9})\) have different keys and values for the third and fourth positions. A straightforward approach to supporting key-value caching is to employ the sequence-based decoding of existing LLM inference systems and maintain a separate key-value cache for each sequence of a token tree, as shown in the top-left of Figure 3. However, this approach requires multiple replicas of the key-value cache for verifying different sequences and introduces redundant computations, since sequences in a token tree may share common prefixes.
Instead of caching the keys and values for individual token sequences of a token tree, SpecInfer reuses the same key-value cache across all token sequences by leveraging a _depth-first search_ mechanism to traverse the token tree, as shown in the top-right of Figure 3, where the arrows indicate how the key-value cache is updated when decoding different tokens. By following a depth-first order to traverse the token tree and update the shared key-value cache, SpecInfer is able to maintain the correct keys and values for all preceding tokens when computing the attention output of a new token.
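The traversal can be sketched as follows; the per-depth cache-slot layout is our reading of Figure 3, and the helper name is hypothetical.

```python
def dfs_kv_order(children, root, depth=0, order=None):
    """Return (node, cache_slot) pairs in depth-first order; a node at depth d
    overwrites slot d, so the shared cache always holds the keys and values
    of the current root-to-node path."""
    if order is None:
        order = []
    order.append((root, depth))
    for child in children.get(root, []):
        dfs_kv_order(children, child, depth + 1, order)
    return order

# For the tree t2 -> t3 -> {t4 -> t5, t8 -> t9}, slot 2 is first written by t4
# and later overwritten by t8, mirroring the arrows in Figure 3.
children = {"t2": ["t3"], "t3": ["t4", "t8"], "t4": ["t5"], "t8": ["t9"]}
print(dfs_kv_order(children, "t2"))
```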
Figure 3: Comparing SpecInfer’s tree-based parallel decoding with sequence and token-based decoding.
Tree-based parallel decoding. Existing LLM inference systems use an incremental decoding approach that decodes a single token in each iteration during the generative phase. Therefore, a similar approach for computing tree attention is to iteratively calculate the attention output for individual tokens in the token tree by following the depth-first order described earlier. However, this approach would result in high GPU kernel launch overhead, since each kernel only computes tree attention for a single token. A key challenge that prevents SpecInfer from batching multiple tokens is that the attention computations for different tokens require different key-value caches and therefore cannot be processed in parallel. For example, the token-based decoding in Figure 3 shows the key-value caches needed for each token.
SpecInfer uses a _tree-based parallel decoding_ algorithm to opportunistically batch multiple tokens in a token tree. Specifically, SpecInfer leverages the causal mask of generative LLM inference and groups multiple tokens into a single kernel if each token is the subsequent token's parent. For example, a depth-first search to traverse the token tree in Figure 3 is \((t_{3},t_{4},t_{5},t_{6},t_{7},t_{8},t_{9})\). Instead of launching 7 individual kernels to compute the tree attention for these tokens, SpecInfer groups them into three kernels: \((t_{3},t_{4},t_{5})\), \((t_{6},t_{7})\), and \((t_{8},t_{9})\), within each of which a token is a child of the previous token. To batch the attention computation, SpecInfer uses the key-value cache of the kernel's last token (i.e., \(t_{5}\) for the first kernel), which results in attention scores that violate the causal dependency for some token pairs. SpecInfer then corrects the attention scores for these pairs. This approach computes the exact same attention output as incremental decoding, while requiring far fewer kernel launches than the sequence- and token-based decoding mechanisms.
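The grouping step itself is simple to sketch: split a depth-first order into maximal parent-child chains, each of which can be batched into one kernel. The parent map below is a plausible reading of the tree in Figure 3, not data taken from the paper.

```python
def group_into_chains(dfs_order, parent):
    """Split a depth-first token order into chains in which every token is
    the previous token's child."""
    chains, chain = [], []
    for u in dfs_order:
        if chain and parent.get(u) != chain[-1]:
            chains.append(chain)   # causal chain broken: start a new kernel
            chain = []
        chain.append(u)
    if chain:
        chains.append(chain)
    return chains

parent = {"t4": "t3", "t5": "t4", "t6": "t3", "t7": "t6", "t8": "t3", "t9": "t8"}
order = ["t3", "t4", "t5", "t6", "t7", "t8", "t9"]
print(group_into_chains(order, parent))  # [['t3','t4','t5'], ['t6','t7'], ['t8','t9']]
```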
## 5 Discussion
### Overheads of Speculative Inference and Token Tree Verification
SpecInfer accelerates generative LLM inference at the cost of memory and computation overheads. This section analyzes these overheads and shows that they are generally one or two orders of magnitude smaller than the memory and computation cost of performing LLM inference using incremental decoding.
Memory overhead. The memory overhead of SpecInfer's speculation-verification approach comes from two aspects. First, in addition to serving an LLM, SpecInfer also needs to allocate memory for saving the parameters of one or multiple small models, which collectively speculate the LLM's output. Our evaluation shows that SpecInfer can achieve significant performance improvements by using speculative models 40-100\(\times\) smaller than the LLM. As a result, hosting each small speculative model (SSM) increases the overall memory requirement by 1-2%. A second source of memory overhead comes from the token tree verification engine, which verifies an entire token tree instead of decoding a single token. Therefore, additional memory is needed for storing the keys, values, and attention scores for all tokens in a token tree. Because today's LLM serving must support very long sequence lengths, we observe that the memory overhead associated with the token tree is negligible compared to the key-value cache. For example, GPT-4 supports processing up to 32K tokens in a single request; our evaluation shows that a token tree of size 32 or 64 already allows SpecInfer to match xxx tokens on average.
Computation overhead. Similarly, the computation overhead introduced by speculative inference and verification comes from two aspects. First, SpecInfer needs to run multiple SSMs in the incremental-decoding mode to generate candidate token sequences. SpecInfer processes the SSMs in parallel across GPUs to minimize the latency for generating a speculated token tree. Our evaluation shows that the latency of running an SSM in the incremental-decoding mode is 3.7\(\times\) better than that of an LLM. Second, SpecInfer verifies a token tree by computing the attention outputs for all token sequences of the tree, most of which do not match the LLM's output and would therefore be unnecessary in incremental-decoding inference. However, the key-value cache mechanism of existing LLM inference systems prevents them from serving a large number of requests in parallel, resulting in under-utilized computation resources on GPUs when serving LLMs in incremental decoding. SpecInfer's token tree verification leverages these under-utilized resources and therefore introduces negligible runtime overhead compared to incremental decoding.
### Applications
Our speculative inference and token tree verification techniques can be directly applied to a variety of generative LLM applications. We identify two practical scenarios where generative LLM inference can significantly benefit from our techniques.
Distributed generative LLM inference. The memory requirements of modern LLMs exceed the capacity of a single compute node with one or multiple GPUs, and the current approach to addressing the high memory requirement is distributing the LLM's parameters across multiple GPUs. For example, serving a single inference pipeline for GPT-3 with 175 billion parameters requires more than 16 NVIDIA A100-40GB GPUs to store the model parameters in single-precision floating-point format. Distributed generative LLM inference is largely limited by the latency of transferring intermediate activations between GPUs at each LLM decoding step. While SpecInfer's approach does not directly reduce the amount of inter-GPU communication for LLM inference, SpecInfer's verification mechanism can increase the communication granularity and reduce the number of LLM decoding steps.
Offloading-based generative LLM inference. Another practical scenario where SpecInfer's techniques apply is reducing the end-to-end inference latency of offloading-based generative LLM serving systems, which leverage CPU DRAM to store an LLM's parameters and load a subset of these parameters to the GPUs for computation in a pipelined fashion [30]. By opportunistically verifying multiple tokens, SpecInfer can effectively reduce the number of LLM decoding steps and the overall communication between CPU DRAM and GPU HBM.
## 6 Evaluation
### Implementation
SpecInfer was implemented on top of FlexFlow [19; 35], a distributed multi-GPU runtime for DNN computation. FlexFlow exposes an API that allows the user to define a DNN model in terms of its layers. The user can also provide a parallelization plan, specifying the degree of data, model, and pipeline parallelism of each layer. During the training phase, FlexFlow can automatically discover the best parallelization plan, which can then be saved and reused in the inference stage.
Internally, FlexFlow represents a DNN as a computational graph where each node is a region of memory, and each edge is an operation on one or more regions. Operations can be represented using three levels of abstraction: layers, operators, and tasks. The FlexFlow compiler transforms the
Figure 4: Comparing the end-to-end inference latency of incremental decoding and SpecInfer on five prompt datasets. We use LLaMA-7B as the LLM and all SSMs are derived from LLaMA-160M. The performance is normalized by incremental decoding, and the numbers on the SpecInfer bars indicate the speedups over incremental decoding.
computational graph from the highest abstractions (layers) to the lowest (tasks). Tasks are also the unit of parallelization; they are non-preemptible, and are executed asynchronously.
### Experimental Setup
Datasets. We evaluate SpecInfer on five conversational datasets, namely Chatbot Instruction Prompts (CIP) [25], ChatGPT Prompts (CP) [23], WebQA [1], Alpaca [33; 27], and PIQA [2]. We only use the prompts/questions from these datasets to simulate real-world conversation traces. We randomly selected at most 1000 prompts from each dataset in our evaluation.
Models. To test our system against mainstream generative LLMs, we evaluate our results using two publicly available language models: OPT [44] and LLaMA [34]. More specifically, we select OPT-13B and LLaMA-7B as the LLMs and collectively boost-tune SSMs from OPT-125M and LLaMA-160M. The pre-trained model parameters for OPT-13B, LLaMA-7B, and OPT-125M were directly acquired from their HuggingFace repositories [17]. We did not find a publicly available pre-trained version of small LLaMA models and therefore trained a LLaMA-160M from scratch for one epoch on the Wikipedia dataset [10], which took approximately 35 hours on a single NVIDIA A100 GPU. We also used the OpenWebText Corpus [14] to (1) collectively boost-tune multiple SSMs for speculative inference and (2) collect training data for the learning-based speculative scheduler. Section 6.4 and Section 6.5 report our evaluation of these two components.
Platform. The experiments were conducted on an AWS g4dn.12xlarge instance, which is equipped with four NVIDIA T4 16GB GPUs, 48 CPU cores, and 192 GB DRAM. The LLMs used in our evaluation do not fit on a single T4 GPU. Therefore, SpecInfer performs LLM inference in single-precision floating-point format and serves the LLMs across the four GPUs using pipeline model parallelism. SpecInfer serves each SSM on a dedicated GPU and runs these SSMs in parallel for a given sequence of tokens.
### End-to-end Performance
We compare the end-to-end inference latency between incremental decoding and SpecInfer on the five prompt datasets. For each prompt dataset, we measured the inference latency of the two approaches on up to 1000 prompts and reported the average inference latency. Figure 4 shows the results. Compared to incremental decoding, SpecInfer reduces the inference latency by 1.9 - 2.7\(\times\) while generating the exact same sequence of tokens as incremental decoding for all prompts. The performance improvement is mostly realized by SpecInfer's ability to verify multiple tokens in a single LLM decoding step. Next, we evaluate how collective boost-tuning and the learning-based speculative scheduler help improve SpecInfer's inference performance.
### Collective Boost-Tuning
In this section, we demonstrate the effectiveness of collective boost-tuning in terms of improving the average number of verified tokens in each LLM decoding step. For both the OPT and LLaMA experiments, we fine-tuned four SSMs over the OpenWebText Corpus using collective boost-tuning on top of the pre-trained OPT-125M and LLaMA-160M models, which provides a collection of five SSMs (including the base SSM) in each experiment. As shown in Figure 5 and Figure 6, the average number of tokens verified by SpecInfer in each LLM decoding step increases consistently across all five datasets due to better alignment between the LLM and our tuned collection of SSMs. Table 1 and Table 2 further list the corresponding values and show an overall improvement of \(26.4\%\) and \(24.8\%\) respectively compared to using only a single pre-trained SSM.
| # SSMs | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| CIP | 3.00 | 3.39 | 3.52 | 3.58 | **3.74** |
| CP | 2.95 | 3.35 | 3.49 | 3.52 | **3.68** |
| WebQA | 2.51 | 2.92 | 3.04 | 3.09 | **3.20** |
| Alpaca | 3.33 | 3.89 | 4.06 | 4.17 | **4.35** |
| PIQA | 2.75 | 3.14 | 3.26 | 3.31 | **3.43** |
| Avg | 2.91 | 3.34 | 3.47 | 3.53 | **3.68** |

Table 1: Average number of tokens verified by SpecInfer in a decoding step. We used OPT-13B as the LLM and different numbers of collectively boost-tuned SSMs, all of which were derived from OPT-125M. The beam depth is 16 for all SSMs.
Figure 5: Average number of tokens verified by SpecInfer in each LLM decoding step over five datasets. We use a fixed speculation length of 16 for all the SSMs in this experiment. We used OPT-13B as the LLM and used four SSMs boost-tuned from OPT-125M.
Figure 6: Average number of tokens verified by SpecInfer in each LLM decoding step over five datasets. We use a fixed speculation length of 16 for all the SSMs in this experiment. We used LLaMA-7B as the LLM and used four SSMs boost-tuned from LLaMA-160M.
### Learning-based Speculative Scheduler
For the learning-based speculative scheduler, we present some preliminary results for the matching length predictor in this section. We use a three-layer MLP with a hidden feature size of 64 as our predictor. We train the predictor on 200K samples from the OpenWebText corpus. The labels are generated using OPT-13B as the LLM and OPT-125M as the SSM. As shown in Table 3, using the predictor achieves a similar number of LLM runs while significantly reducing the number of SSM runs, thanks to the dynamic speculation length. Nevertheless, there is still plenty of room to improve the predictor, as the optimal number of SSM runs would be the average matching length multiplied by the number of LLM runs.
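For reference, a predictor of this shape can be written in a few lines of PyTorch; the input feature dimension below is an assumption, since the exact predictor features are not described in this excerpt.

```python
import torch.nn as nn

predictor = nn.Sequential(
    nn.Linear(768, 64), nn.ReLU(),  # hypothetical input dim; hidden size 64
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),               # predicted matching length
)
```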
## 7 Related Work
Transformer-based [36] generative LLMs have demonstrated significant potential in numerous human-level language modeling tasks by continuously increasing their sizes [28, 31, 9, 7]. As GPT-3 [3] becomes the first model to surpass 100B parameters, multiple LLMs (\(>\)100B) have been released, including OPT-175B [44], Bloom-176B [29], and PaLM [7]. Recent work has proposed a variety of approaches to accelerating generative LLM inference, which can be categorized into two classes.
Lossless acceleration. Prior work has explored the idea of using an LLM as a verifier instead of a decoder to boost inference. For example, Yang et al. [41] introduced _inference with reference_, which leverages the overlap between an LLM's output and the references obtained by retrieving documents, and checks each reference's appropriateness by examining the decoding results of the LLM. Motivated by the idea of speculative execution in processor optimizations [4, 15], recent work proposed _speculative decoding_, which uses a small language model to produce a sequence of tokens and examines the correctness of these tokens using an LLM [21, 39, 32, 5, 20]. There are three key differences between SpecInfer and these prior works. First, instead of only considering a single sequence of tokens, SpecInfer generates and verifies a token tree, whose nodes each represent a unique token sequence. SpecInfer performs tree attention to compute the attention output of these token sequences in parallel and uses a novel tree-based decoding algorithm to reuse intermediate results shared across these sequences. Second, prior attempts generally consider a single small language
| # SSMs | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| CIP | 2.47 | 2.81 | 2.93 | 3.10 | **3.13** |
| CP | 1.97 | 2.37 | 2.47 | 2.64 | **2.65** |
| WebQA | 2.09 | 2.28 | 2.34 | 2.45 | **2.47** |
| Alpaca | 2.29 | 2.56 | 2.62 | 2.74 | **2.76** |
| PIQA | 1.89 | 2.13 | 2.21 | 2.34 | **2.36** |
| Avg | 2.14 | 2.43 | 2.51 | 2.65 | 2.67 |

Table 2: Average number of tokens verified by SpecInfer in a decoding step. We used LLaMA-7B as the LLM and different numbers of collectively boost-tuned SSMs, all of which were derived from LLaMA-160M. The beam depth is 16 for all SSMs.
| Dataset | LLM runs (w/ predictor) | LLM runs (w/o predictor) | SSM runs (w/ predictor) | SSM runs (w/o predictor) |
| --- | --- | --- | --- | --- |
| CIP | 8812 | 8449 | 56401 | 135184 |
| CP | 3625 | 3462 | 23172 | 55392 |
| WebQA | 12624 | 12080 | 74953 | 193280 |
| Alpaca | 11123 | 10684 | 72863 | 170944 |
| PIQA | 12625 | 11560 | 74548 | 184960 |

Table 3: The number of LLM runs and SSM runs with or without the presence of the matching length predictor. When there is no predictor, we use a fixed speculation length of 16 for all the SSMs in this experiment. LLM: OPT-13B, SSMs: OPT-125M.
model for speculation, which cannot align well with an LLM due to the model capacity gap between them. SpecInfer introduces collective boost-tuning to adapt different SSMs to align with an LLM under different scenarios, which largely increases the coverage of the speculated token trees produced by SpecInfer. Third, an additional challenge SpecInfer has to address is deciding the speculative configuration for a given speculation task. SpecInfer leverages an important observation that the tokens generated by an LLM vary in how difficult they are to speculate, and uses a learning-based speculator to decide which SSMs to use and the speculative configurations for them.
Prior work has also introduced a variety of techniques to optimize ML computations on modern hardware platforms. For example, TVM [6] and Ansor [45] automatically generate efficient kernels for a given tensor program. TASO [18] and PET [38] automatically discover graph-level transformations to optimize the computation graph of a neural architecture. SpecInfer's techniques are orthogonal and can be combined with these systems to accelerate generative LLM computation, which we believe is a promising avenue for future work.
Lossy acceleration. Another line of research leverages model compression to reduce LLM inference latency while compromising the predictive performance of the LLM. For example, prior work proposed leveraging weight/activation quantization of LLMs to reduce the memory and computation requirements of serving these LLMs [40, 12, 26, 42, 8]. Recent work further explores a variety of structured pruning techniques for accelerating Transformer-based architectures [11, 37, 16]. A key difference between SpecInfer and these prior works is that SpecInfer does not directly reduce the computation requirements of LLM inference, but instead reorganizes the computation in a more parallelizable way, which reduces memory accesses and inference latency at the cost of manageable memory and computation overheads.
## 8 Conclusion
This paper introduces SpecInfer, an LLM serving system that accelerates generative LLM inference with speculative inference and token tree verification. A key insight behind SpecInfer is to combine various collectively boost-tuned versions of small language models to efficiently predict the LLM's outputs. SpecInfer significantly reduces the memory accesses to the LLM's parameters and the end-to-end LLM inference latency.
|
2310.12671 | Neural networks for insurance pricing with frequency and severity data:
a benchmark study from data preprocessing to technical tariff | Insurers usually turn to generalized linear models for modeling claim
frequency and severity data. Due to their success in other fields, machine
learning techniques are gaining popularity within the actuarial toolbox. Our
paper contributes to the literature on frequency-severity insurance pricing
with machine learning via deep learning structures. We present a benchmark
study on four insurance data sets with frequency and severity targets in the
presence of multiple types of input features. We compare in detail the
performance of: a generalized linear model on binned input data, a
gradient-boosted tree model, a feed-forward neural network (FFNN), and the
combined actuarial neural network (CANN). The CANNs combine a baseline
prediction established with a GLM and GBM, respectively, with a neural network
correction. We explain the data preprocessing steps with specific focus on the
multiple types of input features typically present in tabular insurance data
sets, such as postal codes, numeric and categorical covariates. Autoencoders
are used to embed the categorical variables into the neural network, and we
explore their potential advantages in a frequency-severity setting. Model
performance is evaluated not only on out-of-sample deviance but also using
statistical and calibration performance criteria and managerial tools to get
more nuanced insights. Finally, we construct global surrogate models for the
neural nets' frequency and severity models. These surrogates enable the
translation of the essential insights captured by the FFNNs or CANNs to GLMs.
As such, a technical tariff table results that can easily be deployed in
practice. | Freek Holvoet, Katrien Antonio, Roel Henckaerts | 2023-10-19T12:00:33Z | http://arxiv.org/abs/2310.12671v3 | # Neural networks for insurance pricing with frequency and severity data:
###### Abstract
Insurers usually turn to generalized linear models for modelling claim frequency and severity data. Due to their success in other fields, machine learning techniques are gaining popularity within the actuarial toolbox. Our paper contributes to the literature on frequency-severity insurance pricing with machine learning via deep learning structures. We present a benchmark study on four insurance data sets with frequency and severity targets in the presence of multiple types of input features. We compare in detail the performance of: a generalized linear model on binned input data, a gradient-boosted tree model, a feed-forward neural network (FFNN), and the combined actuarial neural network (CANN). Our CANNs combine a baseline prediction established with a GLM and GBM, respectively, with a neural network correction. We explain the data preprocessing steps with specific focus on the multiple types of input features typically present in tabular insurance data sets, such as postal codes, numeric and categorical covariates. Autoencoders are used to embed the categorical variables into the neural network and we explore their potential advantages in a frequency-severity setting. Finally, we construct global surrogate models for the neural nets' frequency and severity models. These surrogates enable the translation of the essential insights captured by the FFNNs or CANNs to GLMs. As such, a technical tariff table results that can easily be deployed in practice.
**Practical applications summary:** This paper explores how insights captured with deep learning models can enhance the insurance pricing practice. Hereto we discuss the required data preprocessing and calibration steps, and we present a work flow to construct GLMs for frequency and severity data by leveraging the insights obtained with a carefully designed neural network.
**JEL classification:** G22
**Key words:** property and casualty insurance, pricing, neural networks, embeddings, interpretable machine learning
## 1 Introduction
One of the central problems in actuarial science is the technical pricing of insurance contracts. Premiums are determined at the time of underwriting, while the actual cost of the contract
is only known when claims are processed. The technical premium is defined as the expected loss on a contract. In property and casualty insurance (P&C), expected losses are often estimated by independently modelling the frequency and severity of claims as a function of policy and policyholder information. Hence, the modelling of historical data sets, with policyholder characteristics and the observed claim frequency and severity, is key in the design of predictive models. These historical data sets are of tabular structure, containing numerical, categorical, and spatial variables.
The industry standard is the generalized linear model (GLM), introduced by Nelder and Wedderburn (1972), as a predictive modelling tool for claim frequency and severity. Haberman and Renshaw (1996), De Jong and Heller (2008), Ohlsson and Johansson (2010) and Denuit et al. (2019) apply GLMs for non-life insurance pricing. Frees and Valdez (2008) and Antonio et al. (2010) convert the numerical inputs to categorical format for use in a frequency GLM. Henckaerts et al. (2018) present a data-driven method for constructing both a frequency and severity GLM on categorized input data, by combining evolutionary trees and generalized additive models to convert the numerical inputs to categorical variables.
In recent years, machine learning techniques for actuarial purposes have been rising in popularity because of their strong predictive powers. Both Wuthrich and Buser (2021) and Denuit et al. (2020) detail the use of tree-based models in an actuarial context. Liu et al. (2014) use Adaboost for claim frequency modelling. Henckaerts et al. (2021) compare the performance of decision trees, random forests, and gradient boosted trees for modelling claim frequency and severity. Moreover, their paper studies a range of interpretational tools to look under the hood of these predictive models and compares the resulting technical tariffs with managerial tools. Instead of modelling the claim frequency and severity independently, the total loss random variable can be modelled directly via a gradient boosting model with Tweedie distributional assumption, see Yang et al. (2018) and Hainaut et al. (2022). Henckaerts and Antonio (2022) combine tabular contract and policyholder specific information with telematics data in a gradient boosting model for usage-based pricing. Henckaerts et al. (2022) construct a surrogate model on top of a gradient boosting model (GBM) to translate the insights captured by a GBM into a tariff table. A benchmark study on six data sets then examines the robustness of the proposed strategy.
Deep learning methods have been popular in the field of machine learning for many years. An early study of deep learning in an actuarial context is Dugas et al. (2003), comparing the performance of a GLM, decision tree, neural network and a support vector machine for the construction of a technical insurance tariff. Ferrario et al. (2020) use neural networks for frequency modelling and discuss various preprocessing steps. Wuthrich (2019) compares the performance of neural networks and GLMs on a frequency case study. Both Wuthrich (2019) and Schelldorfer and Wuthrich (2019) propose a combined actuarial neural network (CANN) for claim frequency modelling. The CANN starts with a GLM and builds a neural network adjustment on top of the GLM predictions, via a skip connection between input and output layer.
Categorical or factor data must be transformed into numerical representations in order to be utilized by neural networks (Guo and Berkhahn, 2016). This transformation is known in the literature as embedding, which maps categorical variables into numerical vectors. The choice of embedding technique can significantly impact the neural network's performance; see, for example, the claim severity study by Kuo and Richman (2021) where embedding layers are used in both a feed-forward neural network and a transformer network. Embedding layers allow a neural network to learn meaningful representations from the categorical inputs during the training of the neural network. Delong and Kozak (2023) suggest using autoencoders as an alternative method for categorical embedding. An autoencoder is a type of neural network that
learns to compress and to reconstruct data in an unsupervised manner. Using an autoencoder, a compact, numerical representation of the factor input data results that can then be used in both frequency as well as severity modelling. Delong and Kozak (2023) compare different setups of the autoencoder for claim frequency modelling and highlight the importance of normalization of the resulting numerical representation before using it in a feed-forward neural network. Meng et al. (2022) use the same technique in a claim frequency case study with telematic input data and extend the autoencoder with convolutional layers to process input data in image format.
Table 1 gives an overview of the discussed literature on deep learning for insurance pricing. We list the treatment techniques applied to categorical input data, the model architectures used and the extent of the case studies covered by these papers. Lastly, we summarize the interpretation tools used by the authors to extract insights from the model architectures.
Historical claim data sets are often of tabular structure, meaning they can be represented in matrix notation, with each column representing an input variable and each row representing a
Table 1: Overview of the literature on deep learning for insurance pricing: treatment of categorical input data, model architectures, extent of the case studies, and interpretation tools.
vector of policyholder information. Several papers recently questioned the performance of neural networks on tabular data. Borisov et al. (2022) compare 23 deep learning models on five tabular data sets and show how different tree-based ensemble methods outperform them. They highlight the predictive powers of techniques that combine gradient boosting models with neural network models, such as DeepGBM (Ke et al., 2019), which combines a GBM and a neural network for, respectively, numerical and categorical input features and TabNN (Ke et al., 2018), which bins the input features based on a GBM and uses the resulting bins in a neural network. Shwartz-Ziv and Armon (2022) analyze eight tabular data sets and compare five ensemble methods with four deep learning methods, concluding that the best performer combines gradient boosted trees and a neural network. Grinsztajn et al. (2022) compare the performance of gradient boosted trees, a random forest and different neural network structures on 45 different tabular data sets, highlighting the importance of data normalization and categorical treatment for deep learning models.
In light of these recent papers questioning the performance of deep learning architectures on tabular data, this paper aims to explore the added value of deep learning for non-life insurance pricing using tabular frequency and severity data. For this, we extend the analyses performed in Henckaerts et al. (2021) to deep learning models. Our study is an extension of the existing literature in five directions. First, we extend the CANN model architecture from Schellorfer and Wuthrich (2019) by combining a GBM baseline with neural network adjustments. Moreover, we study both trainable and non-trainable adjustments. Second, we compare a neural network, the proposed CANN structures and two benchmark models, a GLM and a GBM, by considering predictive accuracy and using interpretation tools. The GLM is constructed on categorized input data, following the approach outlined in Henckaerts et al. (2018); the GBM follows the setup from Henckaerts et al. (2021). Third, we study the autoencoder embedding technique from Delong and Kozak (2023) and highlight its importance in frequency-severity modelling. Because the autoencoder is trained in an unsupervised setting, the embedding can be learned on the frequency data and transferred to the severity setting, where we typically have fewer data points. Fourth, our case study is not limited to frequency modelling only but studies both frequency and severity modelling. We use four different insurance data sets to study the impact of sample size and the composition of the input data. Lastly, we use a set of interpretation techniques to capture insights from the constructed frequency and severity models and extend the surrogate GLM technique from Henckaerts et al. (2022) to deep learning architectures. We compare the resulting technical tariffs based on their Lorenz curves and look at the balance achieved by each model at portfolio level. This allows us to get a robust look at the possibilities of neural networks for frequency-severity pricing, from preprocessing steps to technical tariff.
## 2 Technical insurance pricing: notation and set-up
This paper assumes access to an insurance data set with tabular structure, meaning the data can be written in matrix notation, with each column representing a variable and each row representing a data point. We denote a data set as \(\mathcal{D}=\left(\mathbf{x}_{i},y_{i}\right)_{i=1}^{n}\), where each \(\mathbf{x}_{i}\) is a \(p\)-dimensional data point with response \(y_{i}\). Each data point \(\mathbf{x}_{i}\) can be written as a vector \(\left(x_{i,1},\ldots,x_{i,p}\right)\), where each entry \(x_{i,j}\) represents the value of input variable \(j\) for data point \(i\). When not referencing a specific observation \(i\), we often omit the subscript \(i\) and write \(\left(\mathbf{x},y\right)\), with \(\mathbf{x}=\left(x_{1},\ldots,x_{p}\right)\), each \(x_{j}\) representing a variable in our data set \(\mathcal{D}\).
The variables in our data sets can be either numerical or categorical. Assuming \(c\) categorical
variables, we order the variables in \(\mathcal{D}\) as follows:
\[\mathcal{D}=\big{(}\underbrace{x_{1},\ldots,x_{p-c}}_{\text{numerical variables}},\underbrace{x_{p-c+1},\ldots,x_{p}}_{\text{categorical variables}},\underbrace{y}_{\text{response variable}}\big{)}.\]
Insurance data sets can also contain spatial information. A spatial variable is either numerical, i.e., latitude and longitude coordinates, or categorical, i.e., postal code of residence. We do not denote spatial variables separately, but count them as a numerical or categorical variable. When introducing a data set, we specify how the spatial information is encoded.
For frequency-severity modelling, we work with a frequency data set \(\mathcal{D}^{\text{freq}}\) and a severity data set \(\mathcal{D}^{\text{sev}}\), where a data point \(\mathbf{x}_{i}\) represents information about policyholder \(i\). In \(\mathcal{D}^{\text{freq}}\), the response \(y_{i}\) is the number of claims reported by policyholder \(i\). The severity data set \(\mathcal{D}^{\text{sev}}\) consists of the policyholders from \(\mathcal{D}^{\text{freq}}\) who had at least one claim. In \(\mathcal{D}^{\text{sev}}\) we use the average claim size over all reported claims as the response \(y\). Because \(\mathcal{D}^{\text{sev}}\subseteq\mathcal{D}^{\text{freq}}\), but with a different response, we often omit the superscript freq and use \(\mathcal{D}\) and \(\mathcal{D}^{\text{sev}}\). We denote the number of observations as \(n_{f}\) for \(\mathcal{D}\) and \(n_{s}\) for \(\mathcal{D}^{\text{sev}}\), with \(n_{s}\leq n_{f}\). Note that for both \(\mathcal{D}\) and \(\mathcal{D}^{\text{sev}}\), we have the same variables \(x_{1},\ldots,x_{p}\), except that we add the extra variable _exposure-to-risk_\(e\) to the frequency data set. Exposure is the fraction of the year the insurance covered the policyholder. This is only relevant for frequency modelling, hence we do not add this variable to the severity data set. In \(\mathcal{D}^{\text{sev}}\) we do take into account the observed number of claims for each data point, to be used as a weight in the loss function.
For a regression model \(f(\cdot)\) with input covariates \((x_{1},\ldots,x_{p})\) and target response \(y\), we write the model prediction for data point \(i\) as \(f\left(x_{i,1},\ldots,x_{i,p}\right)=\hat{y}_{i}\). We train \(f\) on a training set, denoted as \(\mathcal{D}^{\text{train}}\subset\mathcal{D}\), by choosing the model-specific parameters that minimize a chosen loss function \(\sum_{\mathbf{x}_{i}\in\mathcal{D}^{\text{train}}}\mathscr{L}\left(\hat{y}_{i},y_ {i}\right)\). The out-of-sample performance of a trained model is calculated on the test set \(\mathcal{D}^{\text{test}}=\mathcal{D}\backslash\mathcal{D}^{\text{train}}\) as \(\sum_{\mathbf{x}_{i}\in\mathcal{D}^{\text{test}}}\mathscr{L}\left(\hat{y}_{i},y_ {i}\right)\). We follow the loss functions proposed by Wuthrich and Buser (2021) and Henckaerts et al. (2021) for modelling claim frequency and severity. For claim frequency modelling, where the claim count is typically assumed to be Poisson distributed, we use the Poisson deviance:
\[D_{\text{Poisson}}(f(\mathbf{x}),\mathbf{y})=\frac{2}{n_{f}}\sum_{i=1}^{n_{f}}\left(y_ {i}\ln\frac{y_{i}}{f(\mathbf{x}_{i})}-(y_{i}-f(\mathbf{x}_{i}))\right). \tag{1}\]
Note that when using the exposure-to-risk \(e\) in the frequency model, we replace each prediction \(f(\mathbf{x}_{i})\) with \(e_{i}\cdot f(\mathbf{x}_{i})\) in the Poisson loss function above. Claim severity data are often assumed to be long-tailed and right-skewed, so we use the gamma deviance given by
\[D_{\text{gamma}}(f(\mathbf{x}),\mathbf{y})=\frac{2}{n_{s}}\sum_{i=1}^{n_{s}}\alpha_{i }\left(\frac{y_{i}-f(\mathbf{x}_{i})}{f(\mathbf{x}_{i})}-\ln\frac{y_{i}}{f(\mathbf{x}_{i} )}\right), \tag{2}\]
where the weight \(\alpha_{i}\) is the observed number of claims for data point \(i\).
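As an illustration, both deviance losses can be written directly in NumPy; this is a minimal sketch, and for frequency models with exposure each prediction \(f(\mathbf{x}_{i})\) would be replaced by \(e_{i}\cdot f(\mathbf{x}_{i})\) as noted above.

```python
import numpy as np

def poisson_deviance(f, y):
    """Equation (1), with the convention y * ln(y / f) = 0 when y = 0."""
    y_safe = np.where(y > 0, y, 1.0)  # avoids log(0); the term is 0 anyway
    term = np.where(y > 0, y * np.log(y_safe / f), 0.0)
    return 2 * np.mean(term - (y - f))

def gamma_deviance(f, y, alpha):
    """Equation (2); alpha holds the observed claim counts as weights."""
    return 2 * np.mean(alpha * ((y - f) / f - np.log(y / f)))
```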
## 3 Deep learning architectures and preprocessing steps
### Neural network architectures
Feed-forward neural network. A feed-forward neural network (FFNN) is a type of machine learning model that utilizes interconnected layers, represented by \(\mathbf{z}^{(m)}\) with \(m=0,\ldots,M+1\).
The input layer, represented by \(\mathbf{z}^{(0)}\), provides the network with input data, while the output layer, represented by \(\mathbf{z}^{(M+1)}\), gives the network's prediction. Between the input and output layers, there can be one or more hidden layers, represented by \(\mathbf{z}^{(1)},\dots,\mathbf{z}^{(M)}\). When there are two or more hidden layers, we call the neural network a deep learning model. Each layer \(\mathbf{z}^{(m)}\) consists of \(q_{m}\) nodes, so it can be expressed as a vector \(\mathbf{z}^{(m)}=\left(z_{1}^{(m)},\dots,z_{q_{m}}^{(m)}\right)\).
Each node in a layer, excluding the input layer, is connected to all nodes in the previous layer through weights, represented by \(W_{m}\in\mathbb{R}^{q_{m}\times q_{m-1}}\), and a bias term, represented by \(\mathbf{b}_{m}\in\mathbb{R}^{q_{m}}\). An activation function \(\sigma^{(m)}\), \(m=1,\dots,M+1\), adds non-linearity to the network and allows it to learn complex relationships between inputs and outputs. The activation function is applied to the weighted sum of inputs to a node, along with its bias. Each layer \(\mathbf{z}^{(m)}\) can be written in function of the previous layer as follows:
\[\mathbf{z}^{(m)}=\sigma^{(m)}\left(W_{m}\cdot\mathbf{z}^{(m-1)}+\mathbf{b}_{m}\right). \tag{3}\]
Calculating the output of the FFNN in function of the input consists of performing a matrix multiplication for each layer and applying the activation functions. The value of a layer \(\mathbf{z}^{(m)}\) for input \(\mathbf{x}_{i}\) is denoted as \(\mathbf{z}^{(m)}_{i}\) and the value of a specific node \(j\) as \(z^{(m)}_{ij}\). When referencing a node without a specific input, we omit the subscript \(i\) and write \(z^{(m)}_{j}\).
The inputs of the neural network are the data points in a data set \(\mathcal{D}\), the dimension \(q_{0}\) of the input layer is equal to the number of variables \(p\) in the data set1. We write the input layer as \((x_{1},\dots,x_{p})\) to indicate that each node in the input layer represents an input variable from the data set. The target variable \(y\) in our insurance data sets is one-dimensional, so the output layer \(z^{(M+1)}\) has only one node and \(q_{M+1}=1\). We write the output node as \(\hat{y}\). Figure 1 gives a schematic overview of a feed-forward neural network.
Footnote 1: The dimension of the input layer can be larger than \(p\) when using an encoding technique, such as one-hot encoding.
Figure 1: Structure of a feed-forward neural network with \(p\)-dimensional input layer, hidden layers \(\mathbf{z}^{(1)},\dots,\mathbf{z}^{(M)}\), with \(q_{1},\dots,q_{M}\) nodes, respectively. The network has a single output node \(\hat{y}\).
When modelling claim frequency and severity data with GLMs, an actuary typically relies on a Poisson GLM with a log-link function for frequency and a gamma GLM with a log-link function for severity modelling. To mimic this log-link relationship between covariates and output in our FFNN, we use an exponential activation function for the output layer in both the frequency and the severity model. As such, we obtain strictly positive predictions for claim counts and claim amounts.
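A minimal PyTorch sketch of such a network is given below; the ReLU activation and the layer sizes are illustrative defaults, while the actual choices are tuned as described in Section 3.3.

```python
import torch
import torch.nn as nn

class FFNN(nn.Module):
    def __init__(self, p, q=32, M=2):
        super().__init__()
        layers, d_in = [], p
        for _ in range(M):            # M hidden layers with q nodes each
            layers += [nn.Linear(d_in, q), nn.ReLU()]
            d_in = q
        self.hidden = nn.Sequential(*layers)
        self.out = nn.Linear(q, 1)

    def forward(self, x):
        # exponential output activation mimics the log-link: y_hat > 0
        return torch.exp(self.out(self.hidden(x)))
```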
Combined actuarial neural networks. Wuthrich (2019) and Schelldorfer and Wuthrich (2019) propose a combination of a GLM with an FFNN, called the Combined Actuarial Neural Network (CANN). A CANN model calibrates a neural network adjustment on top of the GLM prediction. We refer to the GLM prediction as the _initial model prediction_, denoted as \(\hat{y}^{\text{IN}}\). We use \(\hat{y}^{\text{IN}}\) as an input node in an FFNN but do not connect this node to the hidden layers. Instead, \(\hat{y}^{\text{IN}}\) directly connects to the output node of the FFNN via a so-called skip connection. The adjustment made by the neural network on the initial model prediction is called the _adjustment model prediction_ and denoted as \(\hat{y}^{\text{NN}}\). The combination of the initial model prediction and the adjustment calibrated by the neural net is the resulting CANN model prediction, denoted as \(\hat{y}\). Figure 2 shows the structure of the CANN model.
The output node of the CANN model, \(\hat{y}\), is only connected to the initial model input \(\hat{y}^{\text{IN}}\) and the neural network adjustment \(\hat{y}^{\text{NN}}\). We use the exponential activation function in the output layer to ensure the log-link relationship between inputs and the predicted output. Because \(\hat{y}^{\text{IN}}\) is a prediction at the level of the response, we apply a log transform on the initial model predictions. The output of the CANN model is then calculated as:
\[\hat{y}=\exp\left(w_{\text{NN}}\cdot\hat{y}^{\text{NN}}+w_{\text{IN}}\cdot \ln\left(\hat{y}^{\text{IN}}\right)+b\right). \tag{4}\]
Figure 2: Structure of a Combined Actuarial Neural Network (CANN). The initial model prediction \(\hat{y}^{\text{IN}}\) is connected via a skip-connection to output node of the FFNN.
The case study in Schelldorfer and Wuthrich (2019) fixes the weights and bias in the output of the CANN as follows
\[w_{\text{NN}}=1,\,w_{\text{IN}}=1\,\text{and}\,b=0.\]
Following Gielis (2020), we call this the _fixed_ CANN, as the output weights are fixed and not trainable. In our case study, we also run experiments with trainable weights in the output layer and refer to this model as the _flexible_ CANN. This flexibility allows the training of the neural network to put more, or less, weight on the initial model prediction, which can potentially improve the predictive accuracy of the flexible CANN compared to the fixed CANN. Moreover, the initial model input is not restricted to GLM predictions; we will also run experiments in Section 4.4 with an input prediction established with a carefully trained GBM. According to Henckaerts et al. (2021), GBMs are capable of achieving a higher predictive accuracy than a GLM. Using the GBM predictions as initial model input can therefore potentially increase the performance of the CANN model, compared to a CANN using the GLM predictions.
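The output layer of Equation (4) can be sketched as a small PyTorch module; here `adjustment_net` stands for the hidden layers producing \(\hat{y}^{\text{NN}}\), and `fixed=True` freezes \(w_{\text{NN}}=w_{\text{IN}}=1\) and \(b=0\) as in the fixed CANN.

```python
import torch
import torch.nn as nn

class CANN(nn.Module):
    def __init__(self, adjustment_net, fixed=True):
        super().__init__()
        self.adjustment_net = adjustment_net  # produces y_hat_NN
        grad = not fixed                      # flexible CANN trains the skip weights
        self.w_nn = nn.Parameter(torch.ones(1), requires_grad=grad)
        self.w_in = nn.Parameter(torch.ones(1), requires_grad=grad)
        self.b = nn.Parameter(torch.zeros(1), requires_grad=grad)

    def forward(self, x, y_init):             # y_init: GLM or GBM prediction
        y_nn = self.adjustment_net(x)
        return torch.exp(self.w_nn * y_nn + self.w_in * torch.log(y_init) + self.b)
```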
### Preprocessing steps
Continuous variables. We normalize the continuous input variables to ensure that each variable in the input data has a similar scale. This is important because most neural network training algorithms use gradient-based optimization, which can be sensitive to the scale of the input data (Sola and Sevilla, 1997). For a continuous variable \(x_{j}\) in the input data \(\mathcal{D}\), we use normalization around zero as a scaling technique. Hereto, we replace each value \(x_{i,j}\) as follows:
\[x_{i,j}\mapsto\tilde{x}_{i,j}=\frac{x_{i,j}-\mu_{x_{j}}}{\sigma_{x_{j}}}, \tag{5}\]
where \(\mu_{x_{j}}\) and \(\sigma_{x_{j}}\) are the mean and standard deviation of the variable \(x_{j}\) in the data set \(\mathcal{D}\). When using a subset \(\mathcal{D}^{\text{train}}\subset\mathcal{D}\) to train the model, we calculate the \(\mu_{x_{j}}\) and \(\sigma_{x_{j}}\) only on the data set \(\mathcal{D}^{\text{train}}\) to avoid data leakage.
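In code, this scaling amounts to the following NumPy sketch, where the training-set statistics are reused for the test set; the data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 3))    # continuous training features
X_test = rng.normal(size=(200, 3))

mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma   # reuse training mu and sigma: no leakage
```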
Categorical variables. The FFNN and CANN models generate output by performing matrix multiplications and applying activation functions. Therefore, all inputs must be in numerical format. So-called embedding techniques convert categorical input variables to a numerical format. In this study, we utilize the _autoencoder embedding_ proposed by Delong and Kozak (2023). Autoencoders are neural networks commonly used for dimensionality reduction (Goodfellow et al., 2016). They consist of two components: an encoder and a decoder. The encoder maps a numerical input vector to a lower-dimensional representation, while the decoder reconstructs the original input from this representation. During training, the autoencoder minimizes the difference between the original and reconstructed inputs, resulting in an encoder that captures the most important characteristics of the data.
Figure 3(a) shows the general structure of such an autoencoder. It consists of an input layer of dimension \(c\), one hidden layer \(\mathbf{z}^{\text{enc}}\) of dimension \(d\) and an output layer of the same dimension as the input layer. The encoding layer is defined by the activation function \(\sigma^{\text{(enc)}}\), weight matrix \(W_{\text{enc}}\in\mathbb{R}^{d\times c}\) and bias vector \(\mathbf{b}_{\text{enc}}\in\mathbb{R}^{d}\). Similarly, the output layer is defined by activation function \(\sigma^{\text{(dec)}}\), weight matrix \(W_{\text{dec}}\in\mathbb{R}^{c\times d}\) and bias vector \(\mathbf{b}_{\text{dec}}\in\mathbb{R}^{c}\). For input \(\mathbf{x}_{i}\in\mathbb{R}^{c}\), the encoded and decoded representations are calculated as
\[\mathbf{z}_{i}^{\text{enc}} =\sigma^{\text{(enc)}}\left(W_{\text{enc}}\cdot\mathbf{x}_{i}+\mathbf{b}_{ \text{enc}}\right), \tag{6}\] \[\mathbf{x}_{i}^{\text{dec}} =\sigma^{\text{(dec)}}\left(W_{\text{dec}}\cdot\mathbf{z}_{i}^{\text {enc}}+\mathbf{b}_{\text{dec}}\right).\]
The autoencoder is trained on all data points \(\mathbf{x}_{i}\) in a data set \(\mathcal{D}\) by adjusting the weight matrices and bias vectors in order to minimize a chosen loss function \(\sum_{\mathbf{x}_{i}\in\mathcal{D}}\mathscr{L}\left(\mathbf{x}_{i}^{\text{dec}},\mathbf{x}_ {i}\right)\).
Our study employs an autoencoder to construct an embedding for multiple categorical input variables. First, we construct the one-hot encoded representation of each categorical variable (Ferrario et al., 2020). One-hot encoding maps a categorical variable \(x_{j}\) with \(L_{j}\) levels to a binary vector \(x_{j}^{\text{OH}}=\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) in the space \(\{0,1\}^{L_{j}}\). If we have \(c\) categorical variables, the dimension of all one-hot representations together equals \(\sum_{j=1}^{c}L_{j}\).
Second, we train an autoencoder using the combined one-hot representations of the categorical variables as input nodes. As such, the input layer has a dimension of \(\sum_{j=1}^{c}L_{j}\). The input layer is connected to an encoded layer of dimension \(d\), which is then connected back to the output layer of dimension \(\sum_{j=1}^{c}L_{j}\). We use the identity function as activation function for both \(\sigma^{\text{(enc)}}\) and \(\sigma^{\text{(dec)}}\) in Equation (6).
Following the construction in Delong and Kozak (2023), we apply a softmax transformation on the output layer of the autoencoder after the activation function \(\sigma^{\text{(dec)}}\). For each categorical variable \(x_{j}\), exactly one value in the input nodes \(\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) is one and the rest of the input nodes takes the value zero. Therefore, we apply the softmax activation function to the output layer of the autoencoder for each group of nodes corresponding to the one-hot encoding of a categorical variable. For each categorical variable \(x_{j}^{\text{OH}}=\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) and for each
Figure 3: Our proposed network structure combines the autoencoder embedding technique from Delong and Kozak (2023) and the CANN structure from Schelldorfer and Wüthrich (2019).
\(h\in\{1,\ldots,L_{j}\}\), the softmax transformation of the output node \(x_{h}^{(j,\text{dec})}\) is defined as:
\[x_{h}^{(j,\text{dec})}\mapsto\tilde{x}_{h}^{(j,\text{dec})}=\frac{\exp\left(x_{h}^{(j,\text{dec})}\right)}{\sum_{l=1}^{L_{j}}\exp\left(x_{l}^{(j,\text{dec})}\right)},\qquad h=1,\ldots,L_{j}. \tag{7}\]
The use of the softmax activation function ensures that the values of the decoded vectors \(\left(x_{1}^{(j,\text{dec})},\ldots,x_{L_{j}}^{(j,\text{dec})}\right)\) sum up to one for each variable \(x_{j}\).
To train the autoencoder, we use the cross-entropy loss function, which is suitable because of the 0/1 values in the input data. With \(\mathbf{x}_{i}^{\text{OH}}\) the one-hot encoding of all categorical variables for policyholder \(i\) and \(\tilde{\mathbf{x}}_{i}^{\text{dec}}\) the values of the autoencoder's output layer for policyholder \(i\), the cross-entropy loss function is defined as:
\[\mathscr{L}^{\text{CE}}\left(\tilde{\mathbf{x}}_{i}^{\text{dec}},\mathbf{x}_{i}^{\text {OH}}\right)=-\sum_{j=1}^{c}\sum_{h=1}^{L_{j}}x_{ih}^{(j)}\cdot\log\left(\tilde {x}_{ih}^{(j,\text{dec})}\right). \tag{8}\]
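A compact PyTorch sketch of this construction, with identity activations, the group-wise softmax of Equation (7) and the cross-entropy loss of Equation (8), could look as follows; it illustrates the mechanics rather than the tuned implementation used in our case study.

```python
import torch
import torch.nn as nn

class CategoricalAutoencoder(nn.Module):
    def __init__(self, levels, d):      # levels = [L_1, ..., L_c]
        super().__init__()
        self.levels = levels
        total = sum(levels)
        self.enc = nn.Linear(total, d)  # sigma_enc = identity
        self.dec = nn.Linear(d, total)  # sigma_dec = identity

    def forward(self, x_onehot):
        z = self.enc(x_onehot)
        out = self.dec(z)
        pieces, start = [], 0
        for L in self.levels:           # softmax per categorical variable
            pieces.append(torch.softmax(out[:, start:start + L], dim=1))
            start += L
        return z, torch.cat(pieces, dim=1)

def cross_entropy(x_dec, x_onehot):     # Equation (8), averaged over the batch
    return -(x_onehot * torch.log(x_dec)).sum(dim=1).mean()
```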
After training the autoencoder and applying the trained autoencoder on each policyholder \(\mathbf{x}_{i}\in\mathcal{D}\), the vector of categorical inputs \((x_{i,p-c+1},\ldots,x_{i,p})\) is accurate, compact and numerically represented in the vector \((z_{i1}^{\text{enc}},\ldots,z_{id}^{\text{enc}})\) as calculated by Equation (6). We call the vector \(\mathbf{z}_{i}^{\text{enc}}\) the embedding of the categorical inputs of \(\mathbf{x}_{i}\). To use the embedding together with the numerical features of \(\mathbf{x}_{i}\), we normalize the values in the nodes \(z_{1}^{\text{enc}},\ldots,z_{d}^{\text{enc}}\) by scaling the weight matrix \(W_{\text{enc}}\) and bias vector \(\mathbf{b}_{\text{enc}}\) of the trained encoder. With \(\mu_{1},\ldots,\mu_{d}\) the means, and \(\sigma_{1},\ldots,\sigma_{d}\) the standard deviations, of the values \(z_{i1}^{\text{enc}},\ldots,z_{id}^{\text{enc}}\) for all \(\mathbf{x}_{i}\in\mathcal{D}^{\text{train}}\), we scale the weight matrix \(W_{\text{enc}}\) and bias vector \(\mathbf{b}_{\text{enc}}\) of the pre-trained encoder as follows:
\[W_{\text{enc}}\mapsto\tilde{W}_{\text{enc}}=\left(\begin{array}{cccc}\frac{w_{11}}{\sigma_{1}}&\frac{w_{12}}{\sigma_{1}}&\ldots&\frac{w_{1c}}{\sigma_{1}}\\ \frac{w_{21}}{\sigma_{2}}&\frac{w_{22}}{\sigma_{2}}&\ldots&\frac{w_{2c}}{\sigma_{2}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{w_{d1}}{\sigma_{d}}&\frac{w_{d2}}{\sigma_{d}}&\ldots&\frac{w_{dc}}{\sigma_{d}}\end{array}\right),\qquad\mathbf{b}_{\text{enc}}\mapsto\tilde{\mathbf{b}}_{\text{enc}}=\left(\begin{array}{c}\frac{b_{1}-\mu_{1}}{\sigma_{1}}\\ \frac{b_{2}-\mu_{2}}{\sigma_{2}}\\ \vdots\\ \frac{b_{d}-\mu_{d}}{\sigma_{d}}\end{array}\right). \tag{9}\]
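The rescaling in Equation (9) folds the means and standard deviations of the training encodings into the encoder's weights and bias, as in the following sketch:

```python
import torch

@torch.no_grad()
def scale_encoder(enc, z_train):
    """enc: trained nn.Linear encoder; z_train: encodings of the training set."""
    mu = z_train.mean(dim=0)
    sigma = z_train.std(dim=0)
    enc.weight.div_(sigma.unsqueeze(1))  # divide row j of W_enc by sigma_j
    enc.bias.sub_(mu).div_(sigma)        # bias_j becomes (b_j - mu_j) / sigma_j
    return enc
```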
Having access to the trained and scaled autoencoder, we now add the encoder part to the FFNN and the CANN structures by replacing the input nodes of the categorical variables in Figures 1 and 2 with the encoding part of the trained autoencoder, as shown in Figure 3(b) for the CANN. For clarity, we omit the one-hot encoding notation of each variable in Figure 3. We say the autoencoder is _pre-trained_ because we perform a first training and scaling of the autoencoder before training the neural network architectures with the added encoder. Adding the encoder to the network allows the network to finetune the weights and biases of the pre-trained encoder with respect to the considered regression task and its applicable loss function as in Equation (1) or Equation (2).
Autoencoders used to embed categorical variables provide several advantages over one-hot encoding (Delong and Kozak, 2023). Firstly, they allow for a significantly smaller dimension of the encoding compared to the dimension resulting from one-hot encoding. Secondly, autoencoders enable the encoding of all categorical variables together, capturing interactions between variables more effectively than variable specific encoding does. Lastly, autoencoders prove advantageous in multi-task scenarios such as frequency-severity modeling. Learning to encode
categorical variables solely on the severity dataset can be problematic due to its smaller size. Since autoencoders are unsupervised learning methods, we can train the autoencoder using all data available, and add the resulting pre-trained encoder to both frequency and severity models.
### Training and tuning neural networks
We train the FFNN and CANN models using the Adam optimization algorithm. Adam, introduced by Kingma and Ba (2014), is a stochastic gradient descent algorithm with an adaptive learning rate. Iteratively, the Adam algorithm changes the weights and biases in the network to minimize the loss between predictions \(\hat{y}\) and the observed responses \(y\). We use batches of training data for each training iteration to speed up optimization; see Keskar et al. (2016). The size of the batches is a parameter that needs to be tuned. The network size is also tuned; the number of hidden layers \(M\), and the number of nodes in each layer \(q_{1},\ldots,q_{M}\) are tuning parameters. We use a drop-out rate (Srivastava et al., 2014) to avoid overfitting, and consider this rate to be a tuning parameter as well. The drop-out rate is the percentage of nodes in each layer that are disconnected from the next and previous layer during each iteration of the Adam algorithm. The last tuning parameter is the choice of activation functions \(\sigma^{(1)},\ldots,\sigma^{(M)}\). To simplify the tuning process, we use layers of equal sizes, \(q_{1}=\ldots=q_{M}=q\), and apply the same activation function for all hidden layers, \(\sigma^{(1)}=\ldots=\sigma^{(M)}=\sigma\). Hence, only the value for \(q\) and the activation function \(\sigma\) are tuned and applied to each hidden layer.
We deploy a random grid search, introduced by Bergstra and Bengio (2012), to determine the optimal value for each tuning parameter. For each tuning parameter \(t_{k}\), with \(k=1,\ldots,K\), we define a range of possible values \([t_{k,\min},t_{k,\max}]\). The search space \(\mathcal{S}\) is the space consisting of all possible values for all tuning parameters:
\[\mathcal{S}=[t_{1,\min},t_{1,\max}]\times\ldots\times[t_{K,\min},t_{K,\max}]\,.\]
The _random grid_\(\mathcal{R}\subset\mathcal{S}\) consists of randomly drawn points in the search space \(\mathcal{S}\). Each point \(s\in\mathcal{R}\) represents a set of candidate tuning parameter values. Out of the random grid \(\mathcal{R}\), we select the optimal point \(s^{*}\) with a cross-validation scheme. In Figure 4, we give an example of a search space defined by two tuning parameters and a random grid of size nine sampled in the search space.
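A minimal sketch of drawing such a random grid is given below; the parameter names and ranges are illustrative, and integer-valued tuning parameters would be rounded after sampling.

```python
import random

search_space = {                 # (t_min, t_max) per tuning parameter
    "batch_size": (256, 8192),
    "nodes_q": (8, 128),
    "dropout": (0.0, 0.5),
}

def draw_random_grid(space, size, seed=1):
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
            for _ in range(size)]

R = draw_random_grid(search_space, size=9)  # nine candidates, as in Figure 4
```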
We use the extensive cross-validation scheme proposed by Henckaerts et al. (2021), as sketched in Figure 5. We divide the data set \(\mathcal{D}\) in six disjoint and stratified subsets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\). We define six data folds; in data fold \(\ell\), for \(\ell=1,\ldots,6\), we select a hold-out test set \(\mathcal{D}_{\ell}\) and use five-fold cross-validation (Hastie et al., 2009) on the data set \(\mathcal{D}\backslash\mathcal{D}_{\ell}\). Each cross-validation loop uses four out of the five data subsets in \(\mathcal{D}\backslash\mathcal{D}_{\ell}\) to train the neural network. The fifth subset is used both for early stopping and to calculate the validation error. The cross-validation error is the average validation error over the five validation sets. We then determine the optimal point \(s^{*}_{\ell}\in\mathcal{R}\) which minimizes the cross-validation error for data fold \(\ell\). We use the six optimal tuning parameter sets \(s^{*}_{1},\ldots,s^{*}_{6}\) to determine out-of-sample performance on the test set \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\) of each data fold \(\ell=1,\ldots,6\). As such, we obtain an out-of-sample prediction for every point in the data set.
## 4 Performance comparison between benchmark models and deep learning architectures
Section 4.1 introduces four data sets that are used in our benchmark study. In this study we compare the performance of the deep learning architectures against two benchmark models introduced in Section 4.2. Section 4.3 covers the tuning parameter grid used for both the autoencoder and the deep learning architectures. We compare the statistical out-of-sample performance of the models under study in Section 4.4. Lastly, Section 4.5 compares the autoencoder embedding against the one-hot encoding when used in the deep learning models under consideration.
Figure 4: Example of random grid search with two tuning parameters \(t_{1}\) and \(t_{2}\). The search space \(\mathcal{S}=[t_{1,\min},t_{1,\max}]\times[t_{2,\min},t_{2,\max}]\) is shown in the figure by the dotted square. The random grid \(\mathcal{R}\) consists of nine randomly drawn points \(s_{1},\ldots,s_{9}\) from \(\mathcal{S}\). The optimal point \(s^{*}\in\mathcal{R}\) is then selected via a cross-validation scheme.
Figure 5: Representation of the 6 times 5-fold cross-validation scheme, figure from Henckaerts et al. (2021).
### Data sets
The data sets used are an Australian, Belgian, French2 and Norwegian MTPL data set, available through the R packages CASdatasets (Dutang and Charpentier, 2019) and maidrr (Henckaerts and Antonio, 2022; Henckaerts, 2021). Table 2 gives an overview of the number of records for each data set and the number of continuous, categorical and spatial variables.
Footnote 2: The French data set in the CASdatasets package contains 35 560 claims, but only 24 000 claims have a claim amount. We exclude the policies with claims but without claim amount from our study.
The spatial variables are listed separately in Table 2. The Belgian spatial variable is the postal code, which is converted to two continuous variables, the latitude and longitude coordinates of the center of that postal code. The French data includes two spatial variables: the French district, which is categorical, and the logarithm of the population density of the place of residence, which is a continuous variable. The Norwegian data has one spatial variable denoting the population density of the region of residence as a categorical variable.
### Benchmark models
To enable an assessment of the predictive performance of the neural network and CANN structures we construct two benchmark models: a generalized linear model (GLM) and a gradient boosting model (GBM), for both frequency and severity. Predictions from these benchmark models are then also used as the initial model inputs in the CANN models. For the Belgian data set we use the GLM constructed in Henckaerts et al. (2018) and the GBM from Henckaerts et al. (2021). For the other data sets, we follow the construction methods outlined in the mentioned papers.
For the construction of the GLM, we follow the strategy proposed in Henckaerts et al. (2018) and start from a generalized additive model (GAM), including interaction effects between continuous variables. Based on the insights from the GAM, we bin the continuous variables using a regression tree. On the binned input data, we construct a GLM. We repeat the construction of the GLM six times, each time withholding a subset \(\mathcal{D}_{\ell}\), \(\ell=1,\ldots,6\). This way we obtain GLM based out-of-sample predictions for all observations in the data set \(\mathcal{D}\).
GBM is an ensemble method combining multiple decision trees (Friedman, 2001). A GBM has two tuning parameters: the number of trees and the depth of each tree. We use a tuning grid with the following values:

\[\text{Number of trees:}\,\{100,300,500,\ldots,5\,000\},\] \[\text{Depth of each tree:}\,\{1,2,3,4,5,6,7,8,9,10\}.\]

\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Australian MTPL & Belgian MTPL & French MTPL & Norwegian MTPL \\ \hline \hline \multicolumn{5}{c}{**Number of observations**} \\ \hline Frequency & 67 856 & 163 212 & 668 897 & 183 999 \\ Severity & 4 624 & 18 276 & 24 944 & 8 444 \\ \hline \multicolumn{5}{c}{**Covariates: number and type**} \\ \hline Continuous & 1 & 4 & 2 & 0 \\ Categorical & 4 & 5 & 5 & 3 \\ Spatial & 0 & 1 & 2 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the structure of the data sets used in the benchmark study. The number of records for both frequency and severity modelling is given, as well as the number of different input variables per type.
We use three hyperparameters whose values are not tuned: shrinkage = 0.01, bagging fraction = 0.75 and a minimum number of observations per node equal to 0.75% of the number of records in the training data. The loss functions in Equation (1) and Equation (2) are used for, respectively, frequency and severity modelling. We follow the repeated 5-fold cross-validation scheme as described in Section 3.3. With optimal tuning parameters, we fit a GBM for each data fold and look at the prediction for the observations in the corresponding test set. As such, we obtain an out-of-sample prediction for every data point in the portfolio.
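For reference, a hedged sketch of how such a Poisson frequency GBM could be fitted with the gbm package; the covariate names are placeholders and the exposure handling is omitted, so this does not reproduce the exact setup of Henckaerts et al. (2021).

```r
library(gbm)

set.seed(2023)
freq_gbm <- gbm(
  nclaims ~ ageph + power + bm + fuel + coverage,  # hypothetical covariates
  data              = train,
  distribution      = "poisson",
  n.trees           = 3000,     # tuned over {100, 300, ..., 5000}
  interaction.depth = 5,        # tuned over 1..10
  shrinkage         = 0.01,
  bag.fraction      = 0.75,
  n.minobsinnode    = ceiling(0.0075 * nrow(train))
)
pred <- predict(freq_gbm, newdata = test, n.trees = 3000, type = "response")
```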
### Neural network models
The pre-training of the autoencoder uses the Nadam optimizer algorithm, a batch size of 1 000 and a randomly selected validation set of 20% of the frequency data set \(\mathcal{D}\) for early stopping. The number of nodes \(d\) in the encoding layer is tuned, testing across the values \(\{5,10,15\}\) and selecting the lowest value of \(d\) for which the loss \(\mathscr{L}^{\mathrm{BCE}}(\cdot,\cdot)<0.001\), as calculated with Equation (8). After the autoencoder is trained and scaled, the encoder is used in each FFNN and CANN structure, for both frequency and severity modelling.
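A minimal sketch of this pre-training step with the keras R functional API; `X_onehot` denotes the matrix of concatenated one-hot encodings, and the hidden activations are our own choice since the paper does not pin them down here.

```r
library(keras)

d <- 10  # number of nodes in the encoding layer, tuned over {5, 10, 15}

input  <- layer_input(shape = ncol(X_onehot))
code   <- input %>% layer_dense(units = d, activation = "tanh", name = "encoding")
output <- code  %>% layer_dense(units = ncol(X_onehot), activation = "sigmoid")

autoenc <- keras_model(input, output)
autoenc %>% compile(optimizer = optimizer_nadam(), loss = "binary_crossentropy")

autoenc %>% fit(
  X_onehot, X_onehot,           # reconstruct the input from itself
  batch_size = 1000, epochs = 200,
  validation_split = 0.2,       # random 20% validation set for early stopping
  callbacks = list(callback_early_stopping(patience = 10))
)

encoder <- keras_model(input, code)  # the part reused in the FFNN and CANN models
```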
For both the FFNN and the CANN models, a random grid \(\mathcal{R}\) of size 40 is sampled from the search space \(\mathcal{S}\) defined by the tuning parameters and their respective ranges as shown in Table 3.
The cross-validation scheme is shown in Algorithm 1, starting with the pre-processing steps, and resulting in out-of-sample performances for each holdout test set \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\). For \(\ell=1,\ldots,6\), we train a network on the data \(\mathcal{D}\backslash\mathcal{D}_{\ell}\), choosing a random validation set consisting of 20% of the training data for early stopping. With this model, we construct out-of-sample predictions on the test set \(\mathcal{D}_{\ell}\) and calculate the out-of-sample loss using the loss functions in Equation (1) and (2). Because optimization in a neural network depends on the random initialization of the weights, we train the model three times and use the average out-of-sample loss over the three runs. This ensures an objective out-of-sample loss evaluation, without the risk of an accidentally favourable or unfavourable weight initialization.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Tuning parameter** & **Range** & \\ \hline Activation function for hidden layers & ReLU, sigmoid, softmax\({}^{3}\) & \\ Batch size & \([10\,000,50\,000]\) & Frequency \\ & \([200,10\,000]\) & Severity \\ Number of hidden layers & \([1,4]\) & \\ Nodes per hidden layer & \([10,50]\) & \\ Dropout rate & \([0,0.1]\) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Collection of tuning parameters and their respective ranges for the random grid search tuning strategy. This range is used for both the FFNN and CANN structures.
```
Input: model class (mclass) and corresponding tuning grid \(\mathcal{R}\);
       data \(\mathcal{D}\) with 6 disjoint stratified subsets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\);
for \(\ell=1,\ldots,6\) do
  leave out \(\mathcal{D}_{\ell}\) as test set;
  foreach continuous variable \(x_{j}\in\mathcal{D}\) do
    calculate mean \(\mu_{x_{j}}\) and standard deviation \(\sigma_{x_{j}}\) on the data \(\mathcal{D}\setminus\mathcal{D}_{\ell}\);
    normalize the variable \(x_{j}\) in the data set \(\mathcal{D}\) with Equation (5);
  foreach categorical variable \(x_{j}\in\mathcal{D}\) do
    construct one-hot encoding \(\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) of variable \(x_{j}\);
  for \(d\in\{5,10,15\}\) do
    train an autoencoder \(f_{d}^{\mathrm{AE}}\) using the one-hot encoding of all categorical variables in \(\mathcal{D}\setminus\mathcal{D}_{\ell}\) as input and \(d\) nodes in the encoding layer;
    evaluate the model performance on \(\mathcal{D}\setminus\mathcal{D}_{\ell}\) using loss function \(\mathscr{L}^{\mathrm{BCE}}(\cdot,\cdot)\);
  select \(f_{d}^{\mathrm{AE}}\) with the lowest \(d\) while \(\mathscr{L}^{\mathrm{BCE}}(\cdot,\cdot)<0.001\);
  calculate the scaled \(\tilde{W}_{\mathrm{enc}}\) and bias vector \(\tilde{\mathbf{b}}_{\mathrm{enc}}\) for the encoder part of \(f_{d}^{\mathrm{AE}}\);
  add the encoder part of \(f_{d}^{\mathrm{AE}}\), with \(\tilde{W}_{\mathrm{enc}}\) and \(\tilde{\mathbf{b}}_{\mathrm{enc}}\), to the model class mclass;
  foreach tuning parameter point \(s\in\mathcal{R}\) do
    for \(k\in\{1,\ldots,6\}\setminus\ell\) do
      train a model \(f_{\ell k}\) of mclass on \(\mathcal{D}\setminus\{\mathcal{D}_{\ell},\mathcal{D}_{k}\}\);
      evaluate the model performance on \(\mathcal{D}_{k}\) using loss function \(\mathscr{L}(\cdot,\cdot)\);
      valid_error\({}_{\ell k}\leftarrow\frac{1}{|\mathcal{D}_{k}|}\sum_{\mathbf{x}_{i}\in\mathcal{D}_{k}}\mathscr{L}\{y_{i},f_{\ell k}(\mathbf{x}_{i})\}\);
    valid_error\({}_{\ell}\leftarrow\frac{1}{5}\sum_{k\in\{1,\ldots,6\}\setminus\ell}\) valid_error\({}_{\ell k}\);
  optimal parameter point \(s_{\ell}^{*}\in\mathcal{R}\) minimizes valid_error\({}_{\ell}\);
  for \(\mathrm{rep}\in\{1,2,3\}\) do
    select a validation set \(\mathcal{D}_{\mathrm{val}}\) containing 20% of the records in \(\mathcal{D}\setminus\mathcal{D}_{\ell}\);
    train a model \(f_{\ell,\mathrm{rep}}\) of mclass on \(\mathcal{D}\setminus\{\mathcal{D}_{\ell},\mathcal{D}_{\mathrm{val}}\}\) using the optimal parameter point \(s_{\ell}^{*}\) and using \(\mathcal{D}_{\mathrm{val}}\) for early stopping;
    evaluate the model performance on \(\mathcal{D}_{\ell}\) using loss function \(\mathscr{L}(\cdot,\cdot)\);
    test_error\({}_{\ell,\mathrm{rep}}\leftarrow\frac{1}{|\mathcal{D}_{\ell}|}\sum_{\mathbf{x}_{i}\in\mathcal{D}_{\ell}}\mathscr{L}\{y_{i},f_{\ell,\mathrm{rep}}(\mathbf{x}_{i})\}\);
  test_error\({}_{\ell}\leftarrow\frac{1}{3}\sum_{\mathrm{rep}}\) test_error\({}_{\ell,\mathrm{rep}}\);
Output: optimal tuning parameters + performance measure for each of the six folds.
```
**Algorithm 1** Pseudocode to sketch the pipeline for calculating out-of-sample performances with the neural network structures from Section 3.1, including the data pre-processing steps, the cross-validation scheme as outlined in Henckaerts et al. (2021) with the random grid search methodology and the repeated out-of-sample loss calculation to avoid local minima solutions.
### Out-of-sample performances
The two benchmark models enable a comparison between their performance and the performance obtained with the proposed neural network architectures. Moreover, they serve as initial model input for the CANN models. We investigate the predictive performance of seven models for each data set: the GLM, the GBM, the FFNN, the CANN with GLM input (with fixed and with flexible output layer), and the CANN with GBM input (with fixed and with flexible output layer). We compare the out-of-sample performances of the benchmark models and the neural network structures in Figure 6. We report out-of-sample deviances for each withheld test set, measured in Poisson deviance (1) or gamma deviance (2). We show frequency on the left-hand side and severity on the right-hand side.
Among the four data sets analyzed, the combination of a neural network and a gradient boosting model (CANN GBM flex) consistently yields the lowest deviance when modelling claim frequency. This aligns with recent research highlighting the predictive performance of combining a gradient boosting model with a neural network (Borisov et al., 2022; Ke et al., 2019; Shwartz-Ziv and Armon, 2022). However, for the Norwegian data set, which has few input variables, the impact was less pronounced, with similar performance observed across all models except for the feed-forward neural network, which exhibits slightly higher deviance. Regarding claim severity modelling, no single model consistently achieves the lowest deviance across all data sets and test sets. For the Australian and Norwegian data sets, all models perform comparably in terms of deviance. The CANN models with GBM input demonstrate the lowest deviance for the Belgian data set, while for the French data set, the CANN model with GLM input achieves the best results. Notably, the CANN models with a flexible output layer structure outperform those with a fixed output layer in most cases, for both frequency and severity modelling. This suggests that the more adaptable combination of the initial model input and the neural network adjustment leads to reduced deviance.
### Comparison of categorical embedding methods
We investigate the impact of the autoencoder embedding compared to directly utilizing one-hot encoded categorical variables. For each data set, we train a FFNN and a CANN model using the one-hot encoding of each categorical variable and a FFNN and a CANN model using the autoencoder embedding. We do not tune the models but choose a set of tuning parameters that we apply to all shown models. This means the deviance of each model is not relevant here, only the difference in deviance between the model with one-hot encoding and the model with autoencoder embedding. This approach allows us to isolate the embedding technique's effect on a model's predictive performance. Each model is trained on \(\mathcal{D}\setminus\mathcal{D}_{1}\), and the out-of-sample performance is calculated on the out-of-sample test set \(\mathcal{D}_{1}\).
Figure 7 displays the predictive accuracy of each model under consideration, with the frequency models in the top row and the severity models in the bottom row. For frequency modeling, the autoencoder embedding has the most pronounced effect on the performance of the FFNNs, leading to a lower deviance compared to models utilizing one-hot encoding. However, the impact on CANN models appears to be negligible. In the case of severity modeling, both FFNNs and CANNs demonstrate an improved predictive performance when using the autoencoder embedding. Only the FFNN on the Australian data set and the CANN model on the Belgian data set perform similarly when comparing the one-hot encoding and the autoencoder embedding for claim severity modelling. The reduced deviance in most severity models highlights the benefits of unsupervised learning through the autoencoder approach.
Figure 6: Out-of-sample performance comparison between the different models for each data set. The left-hand side shows the performance of the frequency models and the right-hand side for the severity models. From top to bottom, we show the results on the Australian, Belgian, French and Norwegian data sets. The deviances for the GLM and GBM on the Belgian data correspond to the results reported in, respectively, Henckaerts et al. (2018) and Henckaerts et al. (2021).
## 5 Looking under the hood: interpretation tools and surrogate models
First, we use two model interpretation tools to look under the hood of the constructed models. Second, we translate the model insights into a tariff structure by constructing GLM surrogates along the work flow presented in Henckaerts et al. (2022). All results shown in this section are calculated using data fold one, meaning the models are trained using data subsets \(\mathcal{D}_{2},\ldots,\mathcal{D}_{6}\) and the shown results are calculated on the test set \(\mathcal{D}_{1}\).
### Variable importance
We measure variable importance using the permutation method from Olden et al. (2004). Hereby, we consider the total absolute change in predictions when a variable is randomly permuted. For a trained model \(f\), we measure the importance of a variable \(x_{j}\) by calculating
\[\mathrm{VIP}_{x_{j}}=\sum_{\mathbf{x}_{i}\in\mathcal{D}}\mathrm{abs}\big{(}f\left( x_{i,1},\ldots,x_{i,j},\ldots,x_{i,p}\right)-f\left(x_{i,1},\ldots,\tilde{x}_{i,j}, \ldots,x_{i,p}\right)\big{)}, \tag{10}\]
where \(\tilde{x}_{i,j}\) is a random permutation of the values observed for \(x_{j}\) in the data set \(\mathcal{D}\). A large value for \(\mathrm{VIP}_{x_{j}}\) indicates that the variable significantly influences the model output and is therefore considered important. Figure 8 shows the variable importance of each variable in the four data sets for both frequency and severity modelling. For clarity, we show the relative VIP of each variable, calculated as
\[\overline{\mathrm{VIP}}_{x_{j}}=\frac{\mathrm{VIP}_{x_{j}}}{\sum_{x_{j}\in \mathcal{D}}\mathrm{VIP}_{x_{j}}},\qquad\text{where the sum runs over all variables $x_j$ in the data set.} \tag{11}\]
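Equations (10) and (11) translate into a few lines of base R; here `predict_fn` is a hypothetical wrapper returning the predictions of a trained model on a data frame.

```r
relative_vip <- function(predict_fn, D, varnames) {
  base_pred <- predict_fn(D)
  raw <- sapply(varnames, function(v) {
    D_perm <- D
    D_perm[[v]] <- sample(D_perm[[v]])        # random permutation of variable v
    sum(abs(base_pred - predict_fn(D_perm)))  # Equation (10)
  })
  raw / sum(raw)                              # relative VIP, Equation (11)
}
```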
Figure 7: Comparison of one-hot encoding and autoencoder embedding on the out-of-sample performance of both the FFNN and the CANN model. Top row shows the effect on frequency modelling and bottom row on severity modelling.
By comparing the variable importance of the GBM with the CANN model, we can evaluate the impact of the neural network adjustment component within the CANN. In general, most variables show similar importance in both the GBM and the CANN GBM flexible models, indicating that the adjustment calibrated by the neural network does not substantially alter the importance of the relationships between input variables and the response variable. However, notable changes are observed for certain variables, such as the postal code in the frequency model for the Belgian data set and the vehicle age and brand in the frequency model for the French data set. When we compare the variable importance of the GBM and CANN GBM with the FFNN, we observe more substantial changes, particularly in claim severity modelling. This shows that the FFNN models a significantly different relationship between the input variables and the output variable, compared to the GBM and CANN GBM flexible model.
### Partial dependence effects
We consider partial dependence effects (Hastie et al., 2009; Henckaerts et al., 2021) to explore the relationship between an input variable and the model output. Let the variable space \(X_{j}\) be the vector containing all possible values for variable \(x_{j}\). For a trained model \(f\), the partial dependency effect of a variable \(x_{j}\) is the vector calculated as
\[\mathrm{PD}_{x_{j}}=\left\{\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}_{i}\in\mathcal{ D}}f\left(x_{i,1},\ldots,X_{o,j},\ldots,x_{i,p}\right)\text{ ; }\forall X_{o,j}\in X_{j}\right\}, \tag{12}\]
where \((x_{i,1},\ldots,X_{o,j},\ldots,x_{i,p})\) is the data point \(\mathbf{x}_{i}\in\mathcal{D}\) with element \(x_{i,j}\) replaced by the value \(X_{o,j}\in X_{j}\). The vector \(\mathrm{PD}_{x_{j}}\) can be seen as the average prediction on \(\mathcal{D}\), while letting the variable \(x_{j}\) range over all possible values in the variable space \(X_{j}\). A partial dependence plot is the plotted effect between \(X_{j}\) and \(\text{PD}_{x_{j}}\). Equation (12) can be extended to a two-way interaction partial dependence effect by letting two variables range over their respective variable spaces.

Figure 8: Relative variable importance in the GBM, the FFNN and the CANN GBM flexible. Top row shows the effects for the frequency models, the bottom row for the severity models.
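Equation (12) likewise reduces to a short loop in base R; again `predict_fn` is a hypothetical prediction wrapper.

```r
partial_dependence <- function(predict_fn, D, v) {
  grid <- sort(unique(D[[v]]))   # the variable space X_j
  pd <- sapply(grid, function(val) {
    D_mod <- D
    D_mod[[v]] <- val            # replace x_{i,j} by val for every record
    mean(predict_fn(D_mod))      # average prediction, Equation (12)
  })
  data.frame(value = grid, pd = pd)
}
```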
Figure 9 shows the partial dependence effect between the policyholder's age and the predicted claim frequency across the four data sets in the benchmark study. We compare the effects of the benchmark GBM, the FFNN and the CANN GBM flexible. The effect in all three models is similar for the Australian, French and Norwegian data. However, for the Belgian data set, the GBM and CANN GBM flexible show a similar partial dependence effect, while the FFNN shows a very different pattern. The partial dependence effect of this FFNN shows a less complex, less nuanced relationship between age of the policyholder and claim frequency. Across the four data sets, the average predicted claim frequency decreases with age, which is an expected relationship between age and claim frequency. For the Belgian and French data sets, we observe an increasing effect for the older ages.
Figure 10 displays the partial dependence effect of the policyholder's age when calibrated on the claim severity data. Similar to the effects portrayed in Figure 9, the three models applied to the Australian and Norwegian data sets exhibit a comparable effect. For the Belgian and French data sets, the FFNN showcases a notably distinct partial dependence effect. Specifically, for the French data, the FFNN model reveals an almost flat effect across all age groups.

Figure 9: Partial dependence effect of the policyholder’s age across the four data sets, claim frequency models. We compare the benchmark GBM, the FFNN and the CANN GBM flexible.

Figure 10: Partial dependence effect of the policyholder’s age across the four data sets, claim severity models. We compare the benchmark GBM, the FFNN and the CANN GBM flexible.
Figure 11 shows the partial dependence effect of the bonus-malus score for the Belgian and French frequency data sets. For both data sets, the three models show an increasing relation between the level occupied in the bonus-malus scale and the expected claim frequency. The partial dependence effect calibrated by the FFNN is distinctly smoother than the effect calibrated by the GBM and the CANN GBM flexible, showing again the less complex, less nuanced relationships captured by the FFNN.
We consider the partial dependence effect of the postal code in the Belgian frequency data set in Figure 12. We compare the partial dependence effect with the empirical claim frequency in the Belgian data, calculated as the number of claims per postal code divided by the sum of the exposure for that postal code. The effect in the GBM and CANN GBM flexible is very similar, with a higher expected claim frequency around the capital of Belgium. The effect in the FFNN also shows a higher expected number of claims in the capital but the calibrated spatial effect is much smoother. This aligns with the smoother partial dependence effects for the policyholder age and bonus-malus in the Belgian frequency FFNN model. Empirically, we see a higher concentration of claims per unit of exposure in and around the capital and for some postal codes in the west and east of Belgium. This effect is visible for the GBM and CANN model but not for the FFNN.
### Surrogate models for practical applications
**Surrogate model construction.** Henckaerts et al. (2022) present a workflow for constructing a surrogate GLM by leveraging insights obtained with a black box model. In our study, we apply this technique to the CANN model with GBM input, as discussed in Section 3 and calibrated in Section 4. To create the surrogates, we first calculate the partial dependence effect for each individual variable and for interactions between any two variables, as discussed in Section 5.2. Next, we use the dynamic programming algorithm introduced by Wang and Song (2011) to segment the input data into homogeneous groups based on these partial dependence effects. On the resulting binned data set, we fit a generalized linear model. Constructing the surrogate GLM on the segmented frequency and severity data leads to a tabular premium structure incorporating the insights captured by the CANN architectures.

Figure 11: Partial dependence effect of the bonus-malus score for the Belgian data set, claim frequency model (left) and claim severity model (right). We compare the benchmark GBM, the FFNN and the CANN GBM flexible.
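A sketch of the segmentation step in this workflow, assuming the Ckmeans.1d.dp package (an implementation of the Wang and Song (2011) dynamic programming algorithm) and reusing the `partial_dependence()` helper sketched in Section 5.2; the variable name and the allowed number of bins are placeholders.

```r
library(Ckmeans.1d.dp)

pd_age <- partial_dependence(predict_cann, D, "ageph")  # effect from Section 5.2
seg    <- Ckmeans.1d.dp(pd_age$pd, k = c(2, 10))        # optimal 1d segmentation
pd_age$bin <- factor(seg$cluster)                       # bin label per level of ageph

# After binning every variable this way, the surrogate GLM is fitted on the
# binned data, e.g. glm(nclaims ~ ageph_bin + ... + offset(log(exposure)),
#                       family = poisson(), data = binned_data)
```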
Figure 13 shows the partial dependence effects of the CANN GBM flexible for the bonus-malus score, policyholder age and the region variable from the French data set, with respect to frequency modelling. In color, we show the obtained data segmentation. The so-called surrogate GLM is fitted on the segmented input data. The benchmark GLM, constructed via the approach in Henckaerts et al. (2018) (hereafter referred to as binned GLM), is also fitted on binned data; therefore, it is insightful to compare both the predictive accuracy and the selected variables as obtained with both techniques. To avoid data leakage in the comparison between the two models, we compare the predictive accuracy on a withheld test set.

Figure 12: Partial dependence relationship of the spatial variable and the expected number of claims in the GBM, FFNN and CANN GBM flexible for the Belgian data. We compare the modelled effects with the empirical claim frequency in the Belgian data set.

Figure 13: Partial dependence plots for three variables in the French data set; left to right: bonus-malus scale, policyholder age and region. In color, we show the binning of the input data with the frequency surrogate GLM. Each color represents one bin of the input variable.
Table 4 shows the variables included in the binned GLMs and the surrogate GLMs for the Australian, Belgian and French data set. The out-of-sample performance of these models is evaluated on the withheld data set \(\mathcal{D}_{1}\). We exclude the Norwegian data set from the surrogate fitting, as this data set only consists of categorical variables. The surrogate technique selects more variables and performs better on the out-of-sample test set than the binned GLM. This finding is consistent across all three data sets. Hence, the surrogate GLM benefits from the insights learned from the neural network adjustments in the CANN compared to the direct construction of the binned GLM.
**Identification of risk profiles.** We estimate the number of claims and the claim severities using the surrogate models constructed for the frequency and severity CANN GBM flexible, respectively.
We construct a low, medium, and high-risk profile based on the frequency surrogate GLM for the French data set. Table 5 compares these profiles via their expected claim frequency according to the surrogate GLM and the CANN GBM flexible model. We compare the influence of each variable on the assessed risk using two local interpretation tools in Figure 14. For the GLM, we show the fitted coefficients on the response scale. A value lower (higher) than one means the feature's value leads to a lower (higher) prediction than the baseline prediction obtained with the intercept of the GLM. The uncertainty of each contribution is shown with the 95% confidence interval. Shapley values (Shapley et al., 1953) are used to compare the feature contributions of the GLM to the influences in the CANN model. A positive (negative) Shapley value indicates that this feature's value leads to a higher (lower) than average prediction. The effects in the GLM and the CANN model mostly align. We see a strong impact of the variables region, driver age and bonus-malus score on the predicted number of claims. The variable area was not selected in the surrogate GLM construction, and its Shapley value is negligible in all three risk profiles.

Table 4: Comparison between the benchmark GLM and the surrogate GLM for frequency modelling on the Australian, Belgian and French data sets. The surrogate GLM is constructed from the CANN with GBM input and flexible output layer. The last row shows the Poisson deviance of both GLMs on the out-of-sample data set \(\mathcal{D}_{1}\).
## 6 Managerial insights: a comparison between technical tariff structures
We now combine the predictions for claim frequency and severity to a technical tariff. For each data set and each fold, we make predictions for all observations in the test set \(\mathcal{D}_{\ell}\), using the model trained on the data subsets \(\mathcal{D}\setminus\mathcal{D}_{\ell}\). As such, we obtain out-of-sample predictions for both the expected number of claims and the expected claim severities for each policyholder in the data set. The predicted loss, or technical tariff, for each policyholder is the expected number of claims times the expected claim severity.
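In code, this amounts to one line per policyholder, plus the portfolio-level balance check reported in Table 6; `pred_freq`, `pred_sev` and `observed_loss` are hypothetical vectors of out-of-sample predictions and observed losses.

```r
tariff <- pred_freq * pred_sev                     # technical tariff per policyholder
balance_ratio <- sum(tariff) / sum(observed_loss)  # equals 1 under perfect balance
```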
Table 6 shows the total predicted loss next to the total loss observed in each data set. We compare the results from the benchmark GLM and GBM, the CANN GBM flexible and the surrogate GLM. We also show the ratio of predicted losses over the observed losses. A ratio of one means the model has perfect balance at portfolio level. For the Norwegian data set, the predicted losses are very close to the observed losses for all models. For the Australian and Belgian data, both GLM models are close to balance, meaning the predicted losses are close to the observed losses. Although a canonical link GLM satisfies the balance property (Nelder and Wedderburn, 1972), our severity models use a gamma distribution with non-canonical log-link, and the tariff structures shown here are based on out-of-sample predictions. The GBM and the CANN model deviate slightly from perfect balance.

\begin{table}
\begin{tabular}{l c c c} \hline \hline **Variables** & **Low risk** & **Medium risk** & **High risk** \\ \hline Vehicle power & 4 & 6 & 9 \\ Vehicle age & 3 & 2 & 1 \\ Policyholder age & \([21,26[\) & \([30,40[\) & \(\geq 70\) \\ Bonus-malus scale & 50 & 70 & 190 \\ Vehicle brand & B12 & B5 & B11 \\ Fuel type & Regular & Regular & Diesel \\ Population density of area & 2.71 & 665.14 & \(22\,026.47\) \\ District of residence & Midi-Pyrenees & Basse-Normandie & Corse \\ \hline **Predicted number of claims** & & & \\ \hline Surrogate GLM & 0.020 & 0.106 & 0.361 \\ CANN GBM flexible & 0.021 & 0.101 & 0.519 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Example of a low, medium and high risk profile for the French data set, using the surrogate based on the CANN model with GBM input and flexible output layer, withholding test set one. We compare the predicted number of claims for each profile.
To compare tariff structures, we follow the methodology from Henckaerts and Antonio (2022) using risk scores. For a model \(f\), let \(F_{n}\) be the empirical cumulative distribution function of the predictions made by the model \(f\). For each policyholder \(i\), the risk score \(r_{i}^{f}\) is the evaluation of \(F_{n}\) in \(f(\mathbf{x}_{i})\). For frequency-severity modelling, with a frequency model \(f^{\text{freq}}\) and a severity model \(f^{\text{sev}}\), the risk score is calculated as

\[r_{i}^{f}=F_{n}\left\{f^{\text{freq}}(\mathbf{x}_{i})\times f^{\text{sev}}(\mathbf{x}_{i})\right\}. \tag{13}\]

\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **Observed** & **GLM** & **GBM** & **CANN** & **Surrogate GLM** \\ \hline \multicolumn{6}{c}{**Observed and predicted losses**} \\ \hline Australia (AUD) & 9 314 604 & 9 345 113 & 9 136 324 & 9 154 467 & 9 355 718 \\ Belgium (€) & 26 464 970 & 26 399 027 & 26 079 709 & 25 720 143 & 26 345 969 \\ France (€) & 58 872 147 & 56 053 341 & 56 207 993 & 58 629 584 & 57 048 375 \\ Norway (NOK) & 206 649 080 & 206 634 401 & 206 475 980 & 206 494 683 & - \\ \hline \multicolumn{6}{c}{**Ratio of predicted losses over observed losses**} \\ \hline Australia & - & 1.00 & 0.98 & 0.98 & 1.00 \\ Belgium & - & 1.00 & 0.99 & 0.97 & 1.00 \\ France & - & 0.95 & 0.95 & 1.00 & 0.97 \\ Norway & - & 1.00 & 1.00 & 1.00 & - \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison between the total observed losses for each data set and the total predicted losses for the GLM, GBM, CANN GBM flex and the surrogate GLM. We also show the ratio of total predicted losses over observed losses.

Figure 14: Comparison between the low, medium and high risk profiles in the frequency models on the French data set according to the Shapley values of the CANN GBM flexible model (top row) and the fitted coefficients in the surrogate GLM (bottom row).
We compare the risk scores of multiple models using Lorenz curves (Lorenz, 1905). For a model \(f\), the Lorenz curve evaluated in \(s\in[0,1]\) is
\[LC^{f}(s)=\frac{\sum_{i=1}^{n}L_{i}\,\mathbbm{1}\{r_{i}^{f}\leq s\}}{\sum_{i=1} ^{n}L_{i}}, \tag{14}\]
with \(L_{i}\) the observed loss for policyholder \(i\). We visualize the Lorenz curve by plotting the pairs \(\left(s,LC^{f}(s)\right)\), for \(s\in[0,1]\). The Lorenz curve shows the accumulation of losses, ordered by the risk score \(r_{i}^{f}\) obtained with model \(f\). A model with a better risk classification accumulates losses more slowly for low risk scores and faster for high risk scores. A Lorenz curve further away from the 45° line of equality represents a better risk classification.
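Equations (13) and (14) can be evaluated directly with the empirical distribution function in base R; `tariff` and `observed_loss` are as in the sketch at the start of this section.

```r
risk_score <- ecdf(tariff)(tariff)             # Equation (13)

lorenz_curve <- function(s, risk_score, loss) {
  sum(loss[risk_score <= s]) / sum(loss)       # Equation (14)
}

s_grid <- seq(0, 1, by = 0.01)
lc <- sapply(s_grid, lorenz_curve, risk_score = risk_score, loss = observed_loss)
plot(s_grid, lc, type = "l", xlab = "s", ylab = "LC(s)")
abline(0, 1, lty = 2)                          # line of equality
```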
Figure 15 shows the Lorenz curves for the four data sets in the benchmark study where the top row compares the benchmark GBM with the CANN GBM flexible model and the bottom row the benchmark GLM with the surrogate GLM. For the Australian and French data sets, the tariff structure from the CANN model is (slightly) preferred to that of the benchmark GBM according to the Lorenz curve. For the Belgian and Norwegian data sets, the CANN model is also preferred, but the two curves are very similar. For the Australian, Belgian and French data sets, the Lorenz curves of the benchmark GLM and the surrogate GLM show a very similar pattern, with the surrogate model having a preferable curve over the benchmark GLM, showing that the higher predictive accuracy of the surrogate model also results in a slightly better risk classification in the tariff structure.
Figure 15: Lorenz curve comparison between the GBM benchmark model and the CANN GBM flexible model in the top row. The bottom row compares the GLM benchmark with the surrogate GLM. A dashed line is added to show the line of equality.
## 7 Conclusion
This paper explores the potential of deep learning models for the analysis of tabular frequency and severity data in non-life insurance pricing. We detail a benchmark study using extensive cross-validation on multiple model architectures. Categorical input features are embedded by use of an autoencoder of which the encoder part is integrated into the FFNN and CANN structures. Our results demonstrate the performance gains achieved using the autoencoder embedding technique for categorical input variables, especially in modelling claim severities. Interpretation techniques are applied to both frequency and severity modelling.
The literature often questions the value created when analyzing tabular data with deep learning models. Indeed, our feed-forward neural network does not improve upon a carefully designed GBM. Combining gradient-boosted trees and neural networks leads to higher accuracy for frequency modelling. This aligns with what we see in other fields, where GBM and neural network combinations outperform the corresponding stand-alone models on tabular data. In modelling the severity data, out-of-sample deviances are relatively similar across benchmark models and deep learning architectures. This suggests that the added value created by using the deep learning approach is limited when applied to these datasets. Data sets with a high dimensional set of input variables and/or complex input features might benefit more from using deep learning models to model claim frequency and severity data. Gao et al. (2022), for instance, use deep learning models to analyze high-frequency time series of telematics data.
The end result of our study is a technical tariff structure on categorized input data, using a GLM that is carefully designed as a global surrogate for the deep learning model. Hence, this surrogate GLM leverages the insights from the deep learning models. The surrogate GLMs lead to a lower out-of-sample deviance than the benchmark GLMs and have a better risk differentiation. The workflow to construct a GLM as a global surrogate for a deep learning model can potentially be of interest to insurance companies aiming to harvest refined insights from their available data while aiming for an interpretable, explainable technical tariff. The latter consideration is of much interest in light of the GDPR algorithmic accountability clause. Further research could look into data sets with high dimensional feature sets, including features with images, text or times series, and explore the value of deep learning architectures via a carefully designed benchmark study, as outlined in this paper.
## Acknowledgements
Katrien Antonio gratefully acknowledges funding from the FWO and Fonds De La Recherche Scientifique - FNRS (F.R.S.-FNRS) under the Excellence of Science (EOS) program, project ASTeRISK Research Foundation Flanders [grant number 40007517]. The authors gratefully acknowledge support from the Ageas research chair on insurance analytics at KU Leuven, from the Chaire DIALog sponsored by CNP Assurances and the FWO network W001021N. The authors thank Simon Gielis for his contributions (as MSc student) in the early stage of the research.
## Declaration of interest statement
The authors declare no potential conflict of interests.
## Supplemental material
The results in this paper were obtained using R. All code is available through github: [https://github.com/freekholvoet/NNforFreqSevPricing](https://github.com/freekholvoet/NNforFreqSevPricing). An R Markdown demonstration is available on that github page, called NNforFreqSevPricing.nb.html.
|
2305.07974 | Simplicial techniques for operator solutions of linear constraint systems | A linear constraint system is specified by linear equations over the group $\mathbb{Z}_d$ of integers modulo $d$. Their operator solutions play an important role in the study of quantum contextuality and non-local games. In this paper, we use the theory of simplicial sets to develop a framework for studying operator solutions of linear systems. Our approach refines the well-known group-theoretical approach based on solution groups by identifying these groups as algebraic invariants closely related to the fundamental group of a space. In this respect, our approach also makes a connection to the earlier homotopical approach based on cell complexes. Within our framework, we introduce a new class of linear systems that come from simplicial sets and show that any linear system can be reduced to one of that form. Then we specialize in linear systems that are associated with groups. We provide significant evidence for a conjecture stating that for odd $d$ every linear system admitting a solution in a group admits a solution in $\mathbb{Z}_d$. | Ho Yiu Chung, Cihan Okay, Igor Sikora | 2023-05-13T17:34:29Z | [http://arxiv.org/abs/2305.07974v1](http://arxiv.org/abs/2305.07974v1) | # Simplicial techniques for operator solutions of linear constraint systems
###### Abstract
A linear constraint system is specified by linear equations over the group \(\mathbb{Z}_{d}\) of integers modulo \(d\). Their operator solutions play an important role in the study of quantum contextuality and non-local games. In this paper, we use the theory of simplicial sets to develop a framework for studying operator solutions of linear systems. Our approach refines the well-known group-theoretical approach based on solution groups by identifying these groups as algebraic invariants closely related to the fundamental group of a space. In this respect, our approach also makes a connection to the earlier homotopical approach based on cell complexes. Within our framework, we introduce a new class of linear systems that come from simplicial sets and show that any linear system can be reduced to one of that form. Then we specialize in linear systems that are associated with groups. We provide significant evidence for a conjecture stating that for odd \(d\) every linear system admitting a solution in a group admits a solution in \(\mathbb{Z}_{d}\).
###### Contents
* 1 Introduction
* 2 Linear systems
* 2.1 Simplicial realizations
* 2.2 Simplicial distributions
* 2.3 Linear systems from simplicial sets
* 3 Twisted products
* 3.1 Commutative fundamental group
* 3.2 Fundamental group of twisted products
* 3.3 Characterizations of solutions
* 3.4 Power maps
* 3.5 The \(K_{3,3}\) linear system
* 4 Linear systems from groups
* 4.1 Homotopical methods
* 4.2 Finite \(p\)-groups
* 4.3 Extraspecial \(p\)-groups
* 4.4 Higher odd prime torsion groups
* A Proof of Proposition 2.18
* B Classification of fibrations
* C Proof of Lemma 4.8
## 1 Introduction
Linear (constraint) systems are a source of contextual distributions that arise in quantum theory [11, 12] and play a prominent role in studying non-local games [13, 14]. A linear system over the group \(\mathbb{Z}_{d}\) of integers modulo \(d\) is specified by an equation \(Ax=b\) where \(A\) is an \(r\times c\)-matrix and \(b\) is a column of size \(r\) both with entries in \(\mathbb{Z}_{d}\). A solution of such a linear system in a group \(G\) consists of group elements \(T_{1},\cdots,T_{c}\) satisfying
\[T_{1}^{A_{i1}}T_{2}^{A_{i2}}\cdots T_{c}^{A_{ic}}=J_{G}^{b_{i}}\]
where \(J_{G}\) is a fixed central element of order \(d\) in the group. In addition to these product equations, a solution has to satisfy (1) the \(d\)-torsion property, that is \(T_{i}^{d}=1\) for all \(i=1,\cdots,c\), and (2) the commutativity property: \(T_{i}T_{j}=T_{j}T_{i}\) whenever \(A_{ki}\) and \(A_{kj}\) are both non-zero for some row index \(k\). In the literature, it is common practice to take \(G\) as a unitary group acting on a finite-dimensional Hilbert space. In this case, the solutions are usually called operator solutions. There is a group-theoretic approach for studying operator solutions of linear systems centered around the properties of the solution group \(\Gamma(A,b)\); see, for example, [13]. In this paper, we associate spaces to linear systems such that their algebraic invariants are closely related to the solution group. Our methods connect both to the solution group, via a construction akin to the fundamental group of a space, and to the homotopical methods of [12] based on cell complexes.
We use simplicial sets [11] as combinatorial models of spaces. Simplicial sets are fundamental objects of modern homotopy theory. They are more expressive than their close relatives, simplicial complexes. One can associate a simplicial complex \(\Sigma\) to a matrix \(A\) specifying a linear system: The vertices of \(\Sigma\) are given by \(v_{1},\cdots,v_{c}\) and maximal simplices by \(\sigma_{1},\cdots,\sigma_{r}\) where each \(\sigma_{i}\) consists of vertices \(v_{j}\) such that \(A_{ij}\neq 0\). Each row of \(A\) can be regarded as a function \(A_{i}:\Sigma_{0}\to\mathbb{Z}_{d}\) on the vertex set. A simplicial set consists of a set of \(n\)-simplices for each \(n\geq 0\) together with the simplicial relations describing how simplices of various dimensions are glued. Our simplicial realizations are motivated by a well-known construction in algebraic topology. The nerve space \(NG\) of a group consists of \(n\)-tuples of group elements as its \(n\)-simplices. Given a matrix \(A\) with the associated simplicial complex \(\Sigma\) we construct a simplicial set, denoted by \(N(\mathbb{Z}_{d},\Sigma)\), whose set of \(n\)-simplices is given by tuples \((s_{1},s_{2},\cdots,s_{n})\) of functions \(s_{i}:\Sigma_{0}\to\mathbb{Z}_{d}\) such that the union of the supports \(\operatorname{supp}(s_{i})\) is a simplex of \(\Sigma\). The rows \(A_{i}\) can be regarded as \(1\)-simplices of this simplicial set. Another construction we need is the simplicial set \(N(\mathbb{Z}_{d},G)\), which we refer to as the \(d\)-torsion commutative nerve of \(G\), consisting of \(n\)-simplices given by tuples of pairwise commuting \(d\)-torsion group elements. Closely related versions of the nerve construction were first introduced in [1], and their homotopy theory has been studied recently; see, for example, [1, 1].
**Proposition 2.11**.: _There is a bijective correspondence between solutions of \((A,b)\) in a group \(G\) and the simplicial set maps \(f\) making the following diagram commute_
The maps that appear in this diagram are described in Section 2.1. Briefly, \(\alpha\), \(\beta\) and \(\iota\) correspond to \(A_{i}\)'s, \(b_{i}\)'s, and the central element \(J_{G}\) in a way that the commutativity of the diagram coincides with the notion of a solution in \(G\) introduced above. The cofiber of the \(\alpha\) map, which we denote by \(\bar{N}(\mathbb{Z}_{d},\Sigma)\), together with a cohomology class \(\gamma_{b}\) serves as the simplicial realization of the linear system \((A,b)\). The cohomology class depends on the column vector \(b\) and is defined using the cohomology long exact sequence associated to the cofiber sequence. There is a converse to this procedure which associates a linear system \((A_{X},b_{\gamma})\) to a pair \((X,\gamma)\) consisting of a simplicial set \(X\) and a cohomology class \([\gamma]\in H^{2}(X)\), where the coefficients of the cohomology group are in \(\mathbb{Z}_{d}\). Applying this procedure to \(X=\bar{N}(\mathbb{Z}_{d},\Sigma)\) and \([\gamma]=\gamma_{b}\) produces a linear system that is closely related to the original linear system \((A,b)\). The solution groups of these two linear
systems turn out to be isomorphic
\[\Gamma(A,b)\xrightarrow{\cong}\Gamma(A_{X},b_{\gamma}),\]
thus their solution sets in a group \(G\) are in bijective correspondence (Proposition 2.18).
Motivated by this result, we focus on linear systems \((A_{X},b_{\gamma})\) that come from simplicial sets. We use the theory of twisted products [10]. Given \((X,\gamma)\) the twisted product is a simplicial set \(X_{\gamma}\) which fits into a fibration sequence
\[N\mathbb{Z}_{d}\to X_{\gamma}\to X.\]
The set of \(n\)-simplices of \(X_{\gamma}\) is given by the product \((N\mathbb{Z}_{d})_{n}\times X_{n}\), but its simplicial structure maps are twisted by \(\gamma\).
**Theorem 3.11**.: _For a (connected) simplicial set \(X\), there is a canonical isomorphism of groups_
\[\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\xrightarrow{\cong}\Gamma(A_{X},b_{\gamma})\]
The group \(\pi_{1}(\mathbb{Z}_{d},\cdot)\) is a version of the fundamental group. Briefly, its generators are the \(1\)-simplices (edges) of the simplicial set, and simplicial relations come from the \(2\)-simplices together with additional relations imposing commutativity for the edges on the boundary and \(d\)-torsion condition for each generator. This algebraic invariant is first introduced in [1].
We can associate a linear system to a pair \((G,J_{G})\) in a canonical way. Such linear systems are of particular interest in this paper. Our starting point is the central extension
\[1\to\langle J_{G}\rangle\to G\to\bar{G}\to 1\]
where \(\bar{G}\) denotes the quotient group. This extension is classified by a cohomology class \(\gamma_{G}\) in the second cohomology \(H^{2}(\bar{G})\) of the quotient group with coefficients in \(\mathbb{Z}_{d}\). This cohomology class can be represented by a \(2\)-cocycle constructed using a set-theoretic section \(\phi:\bar{G}\to G\) of the quotient homomorphism. On the level of simplicial sets, the extension above gives rise to a fibration sequence
\[N\mathbb{Z}_{d}\to N(\mathbb{Z}_{d},G)\to\bar{N}(\mathbb{Z}_{d},G)\]
classified by a cohomology class similar to \(\gamma_{G}\). The section \(\phi\) can be used to construct a representing cocycle \(\gamma_{\phi,d}\) for this class. The linear system associated to \((G,J_{G})\) is the one associated to the pair \((\bar{N}(\mathbb{Z}_{d},G),\gamma_{\phi,d})\). Writing \((A_{G},b_{\phi})\) for this linear system we have a group homomorphism
\[\Gamma(A_{G},b_{\phi})\to G.\]
We describe the kernel of this homomorphism using homotopical methods (Proposition 4.4).
When \(d>1\) is an odd integer, in all known cases a linear system admitting a solution in a group \(G\) admits a solution in \(\mathbb{Z}_{d}\). We state this as a conjecture (Conjecture 3.15) and provide evidence when \(d\) is an odd prime. The classes of groups satisfying this conjecture are certain types of extraspecial \(p\)-groups (Theorem 4.10) and a class of groups introduced in [12] (Theorem 4.12). Other interesting results we obtain using simplicial methods are as follows:
* We show that if a linear system does not admit a solution in \(\mathbb{Z}_{d}\) then any simplicial distribution induced by a density operator is contextual (Proposition 2.15). The theory of simplicial distributions introduced in [1] is a framework based on simplicial sets for studying contextuality.
* We provide four equivalent characterizations of solutions of linear systems based on group-theoretic and cohomological criteria (Corollary 3.14).
* We consider power maps \(\omega_{m}\) acting on \(N(\mathbb{Z}_{d},G)\) by raising a tuple of group elements to the \(m\)-th power. Corollary 3.23 applies this map to prove that for odd \(d\) certain relations in the solution group simplify. This result appears as the main theorem of [13].
* We compute the solution group of the \(K_{3,3}\) linear system using our simplicial approach and reproduce some of the key results of [1].
Acknowledgments. This work is supported by the US Air Force Office of Scientific Research under award number FA9550-21-1-0002. The second author would like to thank the Institute for Quantum Computing for their hospitality during a visit in January 2023, and William Slofstra for fruitful discussions and providing the proof of Proposition 2.6.
## 2 Linear systems
In this section we will introduce linear systems and their simplicial realizations. Let \(d>1\) be an integer and \(\mathbb{Z}_{d}\) denote the additive group of integers modulo \(d\). A _linear system_ \((A,b)\) over \(\mathbb{Z}_{d}\) is a set of linear equations \(Ax=b\) specified by a matrix \(A\in\mathbb{Z}_{d}^{r\times c}\) and a column vector \(b\in\mathbb{Z}_{d}^{r}\). To a linear system we can associate the following data:
* A simplicial complex \(\Sigma_{A}\) with vertex set \((\Sigma_{A})_{0}=\{v_{1},v_{2},\cdots,v_{c}\}\) and maximal simplices \[\sigma_{i}=\{v_{j}:\,A_{ij}\neq 0,\;1\leq j\leq c\}\] where \(1\leq i\leq r\).
* A collection of functions \[A_{i}:(\Sigma_{A})_{0}\to\mathbb{Z}_{d}\] defined by \(A_{i}(v_{j})=A_{ij}\), where \(1\leq i\leq r\).
* A function \[b:\{\sigma_{i}:1\leq i\leq r\}\to\mathbb{Z}_{d}\] defined by \(b(\sigma_{i})=b_{i}\).
**Definition 2.1**.: Let \(G\) be a group with a central element \(J_{G}\) of order \(d\), i.e., \(J_{G}^{d}=1\). A _solution_ of a linear system \((A,b)\) in the group \(G\) is a function \(T:(\Sigma_{A})_{0}\to G\) satisfying the following conditions:
* \(T(v)\) is \(d\)-torsion, i.e., \(T(v)^{d}=1_{G}\), for all \(v\in\Sigma_{0}\),
* \(\{T(v):\,v\in\sigma_{i}\}\) pairwise commute for all \(1\leq i\leq r\),
* for all \(1\leq i\leq r\) we have \[\prod_{v_{j}\in\sigma_{i}}T(v_{j})^{A_{ij}}=J_{G}^{b_{i}}.\]
We will write \(\operatorname{Sol}(A,b;G)\) for the set of solutions of a linear system \((A,b)\) in the group \(G\).
Solutions in \(\mathbb{Z}_{d}\) in this sense coincide with the ordinary solutions of the linear system in \(\mathbb{Z}_{d}\).
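Indeed, taking \(G=\mathbb{Z}_{d}\) written additively, the element \(J_{\mathbb{Z}_{d}}=1\) is central of order \(d\), conditions (1) and (2) of Definition 2.1 hold automatically since \(\mathbb{Z}_{d}\) is abelian and every element is \(d\)-torsion, and the product relation becomes

\[\sum_{v_{j}\in\sigma_{i}}A_{ij}\,T(v_{j})=b_{i},\]

so a solution \(T\) is exactly a vector \(x\in\mathbb{Z}_{d}^{c}\) with \(Ax=b\).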
For a set \(U\) we will write \(P(U)\) for the power set, that is, the collection of subsets of \(U\). We will write \(\mathbb{Z}_{d}^{U}\) for the set of functions \(U\to\mathbb{Z}_{d}\). This set of functions has a group structure induced by \(\mathbb{Z}_{d}\). Note that the rows of \(A\) when regarded as functions \(A_{i}:\Sigma_{0}\to\mathbb{Z}_{d}\) give elements in the group \(\mathbb{Z}_{d}^{\Sigma_{0}}\). We will write \(\langle A_{i}\rangle\) for the subgroup generated by this function.
**Example 2.2**.: Let \(A\) be a matrix such that the simplicial complex \(\Sigma_{A}\) consists of a unique maximal simplex. That is, \(\Sigma_{A}\) is given by the power set \(P(\Sigma_{0})\). In other words, \(A_{ij}\neq 0\) for all \(1\leq i\leq r\) and \(1\leq j\leq c\). In this case a solution specified by a function \(T:\Sigma_{0}\to G\) satisfies that \([T(v),T(w)]=T(v)^{-1}T(w)^{-1}T(v)T(w)=1\) for all vertices \(v,w\). We can convert the linear system \((A,b)\) into a row echelon form \((A^{\prime},b^{\prime})\) by row operations:
\[A^{\prime}=\begin{pmatrix}A^{\prime\prime}\\ 0\end{pmatrix},\quad b^{\prime}=\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}\]
where \(A^{\prime\prime}\) contains no rows with all entries zero. Then there is a bijection between \(\operatorname{Sol}(A,b;G)\) and \(\operatorname{Sol}(A^{\prime},b^{\prime};G)\). The latter set is empty if \(b_{2}\neq 0\). Thus when studying solutions of linear systems, it is a simplifying assumption to require that \(\langle A_{i}\rangle\neq\langle A_{j}\rangle\) for every \(i\neq j\).
In this paper we will consider linear systems satisfying the following two conditions:
1. Each row \(A_{i}\) satisfies \(\langle A_{i}\rangle\cong\mathbb{Z}_{d}\).
2. For two distinct rows \(A_{i}\) and \(A_{j}\) we have \(\langle A_{i}\rangle\neq\langle A_{j}\rangle\).
These conditions simplify the description of simplicial realizations of linear systems given in Section 2.1. The last property is motivated by Example 2.2. The first property is satisfied by the linear systems of interest, mainly those that come from simplicial sets introduced in Section 2.3. Later in Proposition 2.18, we will show that the solution group of any linear system can be described as the solution group of a linear system satisfying this property.
**Definition 2.3**.: The _solution group_\(\Gamma(A,b)\) of a linear system \((A,b)\) is the finitely presented group generated by \(e_{v}\), where \(v\in\Sigma_{0}\), and \(J\) subject to the following relations:
* \(d\)-torsion relations: \(J^{d}=e_{v}^{d}=1\) for all \(v\in\Sigma_{0}\),
* commutativity relations: \(\{J,e_{v}:\,v\in\sigma\}\) pairwise commute for all \(\sigma\in\Sigma\),
* product relations: for all \(\sigma_{i}\) we have \[\prod_{v_{j}\in\sigma_{i}}e_{v_{j}}^{A_{ij}}=J^{b_{i}}.\] (1)
Let \(\mathbf{Grp}\) denote the category of groups. We will write \(\mathbf{Grp}(G,H)\) for the set of group homomorphisms. We introduce a category by restricting the morphisms in the category of groups which will be useful in describing solutions of linear systems. Let \(\mathbf{Grp}_{J}\) denote the following category:
* Objects are pairs \((G,J_{G})\) where \(J_{G}\in G\) is a central element of order \(d\).
* A morphism \((G,J_{G})\to(H,J_{H})\) is given by a group homomorphism \(f:G\to H\) such that \(f(J_{G})=J_{H}\).
We will write \(\mathbf{Grp}_{J}(G,H)\) for the set of morphisms in this category.
**Proposition 2.4**.: _For a linear system \((A,b)\) the following properties hold._
1. _There is a bijection_ \[\text{Sol}(A,b;G)\cong\mathbf{Grp}_{J}(\Gamma(A,b),G).\]
2. _The set_ \(\text{Sol}(A,b;G)\) _of solutions is non-empty for some_ \(G\) _if and only if_ \(J\in\Gamma(A,b)\) _has order_ \(d\)_._
Proof.: A solution \(T:\Sigma_{0}\to G\) can be used to define a group homomorphism \(\theta_{T}:\Gamma(A,b)\to G\) by sending \(e_{v}\mapsto T(v)\) and \(J\mapsto J_{G}\). Conversely, given a group homomorphism in \(\mathbf{Grp}_{J}(\Gamma(A,b),G)\), restricting to the generators specifies a solution. This proves part (1).
For the second part, assume that the solution set is non-empty for some \(G\). Then the group homomorphism associated to the solution implies that \(J\) has order \(d\) since the element \(\theta_{T}(J)=J_{G}\) has order \(d\). Conversely, if the order of \(J\) is \(d\) then \(T:\Sigma_{0}\to\Gamma(A,b)\) defined by \(T(v)=e_{v}\) is a solution in \(\Gamma(A,b)\).
**Corollary 2.5**.: _Assume that \(\Gamma(A,b)\) is abelian and \(J\) has order \(d\). Then \((A,b)\) admits a solution in \(\mathbb{Z}_{d}\)._
Solutions in the unitary group \(U(\mathbb{C}^{m})\), where \(m\geq 1\), are of particular importance in the study of linear systems. When \(m\geq 2\) such solutions are usually referred to as _operator solutions_, and when \(m=1\) they are called _classical solutions_. Next, we will show that instead of studying solutions in unitary groups, we can focus on finite groups.
**Proposition 2.6** ([Slo]).: _For a linear system \((A,b)\) we have \(\text{Sol}(A,b;U(\mathbb{C}^{m}))\neq\emptyset\) for some \(m\geq 1\) if and only if \(\text{Sol}(A,b;G)\neq\emptyset\) for some finite group \(G\)._
Proof.: For a finite group \(G\), we can construct a unitary representation of \(G\) by inducing the \(1\)-dimensional representation \(\langle J\rangle\to U(\mathbb{C})\) obtained by sending \(J\) to \(\omega\mathbb{1}\), where \(\omega=e^{2\pi i/d}\) is a primitive \(d\)-th root of unity. This gives an injective group homomorphism \(\phi:G\to U(\mathbb{C}^{m})\) where \(m=|G/\langle J\rangle|\). Then given a solution \(T:\Sigma_{0}\to G\) the composite \(\phi\circ T:\Sigma_{0}\to U(\mathbb{C}^{m})\) is a solution in a unitary group.
The converse implication follows from the following fact: Any finitely generated subgroup of the general linear group \(\operatorname{GL}(\mathbb{C}^{m})\) is residually finite [16, Theorem 7.116]. Therefore the subgroup \(G\subset U(\mathbb{C}^{m})\) generated by \(\{T(v_{j}):\,j=1,\cdots,c\}\) is residually finite. Then \(G\) is the inverse limit of a sequence of surjective group homomorphisms
\[\cdots\to G_{i+1}\xrightarrow{f_{i}}G_{i}\to\cdots\to G_{1}\]
where each \(G_{i}\) is finite. The element \(J\in G\) is represented by a tuple \((J_{i})_{i\geq 1}\) of central elements where \(J_{i}\in G_{i}\). There exists \(N\geq 1\) such that \(J_{N}\) has order \(d\). Then using the projection \(\pi_{N}:G\to G_{N}\) we obtain a solution in a finite group.
For a simplicial complex \(\Sigma\) we will write \(\hat{\Sigma}\) for the _dual complex_ consisting of the vertex set
\[\hat{\Sigma}_{0}=\{\sigma_{i}:\,1\leq i\leq r\}\]
and maximal simplices
\[\hat{\sigma}_{j}=\{\sigma_{i}:\,v_{j}\in\sigma_{i}\},\]
where \(1\leq j\leq c\). An important source of linear systems comes from the incidence matrices of graphs. Let \(K\) be a graph with vertex set \(K_{0}\) and edge set \(K_{1}\), which we can think of as a simplicial complex. Let \(b:K_{0}\to\mathbb{Z}_{d}\) be a function. Then the incidence matrix \(A(K)\), with entries
\[A(K)_{v,x}=\left\{\begin{array}{ll}1&v\in x\\ 0&\text{otherwise},\end{array}\right.\]
together with the function \(b\) specifies a linear system. Note that \(\Sigma_{A(K)}\) coincides with the dual complex \(\hat{K}\). For more on these kinds of linear systems see [14].
**Example 2.7**.: Let \(K_{3,3}\) denote the complete bipartite graph illustrated in Figure (1a). The dual complex \(\Sigma_{3,3}=\hat{K}_{3,3}\) is given by a torus triangulated as in Figure (1b). It is well known that \(K_{3,3}\) admits a solution in \(\mathbb{Z}_{2}\) if and only if \(\sum_{i=1}^{6}b_{i}=0\); see [1, 16, 17]. In Section 3.5 we will study this linear system using our simplicial techniques.
Figure 1: (a) The \(K_{3,3}\) graph. Each vertex is assigned a value \(b_{i}\). (b) Dual of the graph representing a torus. Each triangle is assigned a value \(b_{i}\) (pink color corresponds to \(1\) value). The top edge is identified with the bottom edge, and the leftmost edge is identified with the rightmost edge.
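The solvability criterion quoted in Example 2.7 can be confirmed exhaustively; the sketch below (with our own labeling of vertices and edges) brute-forces all assignments \(T:\Sigma_{0}\to\mathbb{Z}_{2}\) on the nine edge variables for every choice of \(b\).

```python
import itertools

# K_{3,3}: vertices 0..5 (parts {0,1,2} and {3,4,5}); variables sit on edges.
edges = [(u, v) for u in range(3) for v in range(3, 6)]

def solvable_in_Z2(b):
    """Search for T: edges -> Z_2 with sum_{e containing v} T(e) = b[v] mod 2."""
    for T in itertools.product(range(2), repeat=len(edges)):
        if all(sum(T[i] for i, e in enumerate(edges) if v in e) % 2 == b[v]
               for v in range(6)):
            return True
    return False

for b in itertools.product(range(2), repeat=6):
    assert solvable_in_Z2(b) == (sum(b) % 2 == 0)
print("K_{3,3} admits a Z_2 solution exactly when b_1 + ... + b_6 = 0 mod 2")
```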
### Simplicial realizations
In Section 2 we have seen that a linear system can be described by a simplicial complex, a set of functions each supported on a maximal simplex and a function on the set of maximal simplices. In this section we will express this data in a different way using the language of simplicial sets.
A _simplicial set_ consists of a sequence of sets \(X_{0},X_{1},X_{2},\cdots\) together with
* face maps \(d_{i}:X_{n}\to X_{n-1}\) where \(0\leq i\leq n\), and
* degeneracy maps \(s_{j}:X_{n}\to X_{n+1}\) where \(0\leq j\leq n\)
satisfying the simplicial identities [1, 12]. Elements of \(X_{n}\) are called \(n\)-simplices. A simplex \(\sigma\in X_{n}\) in the image of \(s_{j}:X_{n}\to X_{n+1}\) for some \(j\) is called _degenerate_; otherwise, it is called _non-degenerate_. A map \(f:X\to Y\) between two simplicial sets consists of a sequence of functions \(f_{n}:X_{n}\to Y_{n}\) for every \(n\geq 0\) that respects the face and the degeneracy maps. A simplicial subset of \(X\) is a simplicial set \(Z\) together with a map \(i:Z\to X\) of simplicial sets such that each \(i_{n}\) is given by inclusion of sets \(Z_{n}\subset X_{n}\). A simplicial set \(X\) is _connected_ if for any two \(0\)-simplices there is a sequence of \(1\)-simplices connecting them. That is, for \(v,w\in X_{0}\) there exist \(\tau_{1},\cdots,\tau_{k}\in X_{1}\) such that
\[v=d_{i_{1}}\tau_{1},\ \ d_{i^{\prime}_{1}}\tau_{1}=d_{i_{2}}\tau_{2},\ \ d_{i^{\prime}_{2}}\tau_{2}=d_{i_{3}}\tau_{3},\ \cdots,\ d_{i^{\prime}_{k-1}}\tau_{k-1}=d_{i_{k}}\tau_{k},\ \ d_{i^{\prime}_{k}}\tau_{k}=w\]
where \(i_{l}\in\{0,1\}\) and \(i_{l}\neq i^{\prime}_{l}\) for \(l=1,\cdots,k\). In this paper we will be mostly concerned with connected simplicial sets.
Our examples of simplicial sets will follow a basic construction from algebraic topology known as the nerve construction. Let \(G\) be a group. The nerve space \(NG\) is the simplicial set whose set of \(n\)-simplices is \(G^{n}\) and simplicial structure maps are given by
\[d_{i}(g_{1},g_{2},\cdots,g_{n})=\left\{\begin{array}{ll}(g_{2},\cdots,g_{n})&i=0\\ (g_{1},\cdots,g_{i}g_{i+1},\cdots,g_{n})&0<i<n\\ (g_{1},\cdots,g_{n-1})&i=n\end{array}\right.\]
and
\[s_{j}(g_{1},g_{2},\cdots,g_{n})=(g_{1},\cdots,g_{j},1,g_{j+1},\cdots,g_{n}).\]
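These structure maps are easy to check mechanically. The following sketch (our own, taking \(G=\mathbb{Z}_{6}\) written additively) implements the face and degeneracy maps of \(NG\) on tuples and verifies the simplicial identity \(d_{i}d_{j}=d_{j-1}d_{i}\) for \(i<j\) on all \(3\)-simplices.

```python
from itertools import product

D = 6                                     # the group G = Z_6, written additively
mul = lambda g, h: (g + h) % D

def face(i, g):
    n = len(g)
    if i == 0:
        return g[1:]
    if i == n:
        return g[:-1]
    return g[:i-1] + (mul(g[i-1], g[i]),) + g[i+1:]   # multiply g_i and g_{i+1}

def degeneracy(j, g):
    return g[:j] + (0,) + g[j:]                        # insert the identity

for g in product(range(D), repeat=3):                  # all 3-simplices of NG
    for j in range(4):
        for i in range(j):
            assert face(i, face(j, g)) == face(j - 1, face(i, g))
print("simplicial identities d_i d_j = d_{j-1} d_i verified on N(Z_6)")
```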
For simplicial realizations of linear systems we will be interested in certain simplicial subsets of nerve spaces.
Recall that for a set \(U\) we write \(\mathbb{Z}_{d}^{U}\) for the set of functions \(U\to\mathbb{Z}_{d}\), and this set comes with a group structure inherited from \(\mathbb{Z}_{d}\). We consider the nerve space \(N(\mathbb{Z}_{d}^{U})\). Our simplicial realization will be a certain subspace of this nerve space. Let \(\Sigma\) be a simplicial complex.
**Definition 2.8**.: We define a simplicial subset \(N(\mathbb{Z}_{d},\Sigma)\subset N(\mathbb{Z}_{d}^{\Sigma_{0}})\) whose \(n\)-simplices are given by
\[N(\mathbb{Z}_{d},\Sigma)_{n}=\{(s_{1},s_{2},\cdots,s_{n})\in(\mathbb{Z}_{d}^{ \Sigma_{0}})^{n}:\,\cup_{i=1}^{n}\mbox{supp}(s_{i})\in\Sigma\}\]
where \(\mbox{supp}(s)=\{v\in\Sigma_{0}:\,s(v)\neq 0\}\) denotes the support of a function \(s:\Sigma_{0}\to\mathbb{Z}_{d}\).
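For intuition, the \(n\)-simplices of \(N(\mathbb{Z}_{d},\Sigma)\) can be enumerated directly from this definition; here is a small sketch with a hypothetical two-vertex complex (the one of Example 2.19 below, where \(\Sigma\) imposes no restriction).

```python
from itertools import product

# Sigma_0 = {0, 1}; here Sigma contains every subset of Sigma_0.
Sigma = [frozenset(s) for s in ([], [0], [1], [0, 1])]
support = lambda s: frozenset(v for v in (0, 1) if s[v] != 0)

def n_simplices(n):
    """Tuples (s_1, ..., s_n) of functions Sigma_0 -> Z_2 whose supports
    jointly lie in a simplex of Sigma."""
    functions = list(product(range(2), repeat=2))
    return [t for t in product(functions, repeat=n)
            if frozenset().union(*(support(s) for s in t)) in Sigma]

print(len(n_simplices(1)), len(n_simplices(2)))  # 4 and 16: no restriction here
```

Dropping \(\{0,1\}\) from `Sigma` would exclude exactly the tuples whose supports jointly cover both vertices.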
For a linear system \((A,b)\) with associated simplicial complex \(\Sigma\), the functions \(A_{i}\), where \(1\leq i\leq r\), can be regarded as \(1\)-simplices of \(N(\mathbb{Z}_{d},\Sigma)\). Thus \(N(\mathbb{Z}_{d},\Sigma)\) can be thought of as a space that encodes the matrix \(A\). Next we introduce another space that can be used for describing solutions of the linear system. Our construction is a modified version of the nerve construction introduced in [1]; see also [1] for other versions.
**Definition 2.9**.: The _mod-\(d\) commutative nerve space_ is the simplicial subset \(N(\mathbb{Z}_{d},G)\subset N(G)\) whose \(n\)-simplices are given by
\[N(\mathbb{Z}_{d},G)_{n}=\{(g_{1},g_{2},\cdots,g_{n})\in G^{n}:\,g_{i}^{d}=1_{G},\ g_{i}g_{j}=g_{j}g_{i},\ \forall 1\leq i,j\leq n\}.\]
When the \(d\)-torsion condition is dropped the resulting simplicial set is called the _commutative nerve space_ and denoted by \(N(\mathbb{Z},G)\), a space first introduced in [1]. Its \(n\)-simplices are given by \(n\)-tuples of pairwise commuting group elements.
**Lemma 2.10**.: _A simplicial set map \(f:X\to NG\) is determined by its restriction to \(1\)-simplices, that is, by the function \(f_{1}:X_{1}\to G\). Moreover, a function \(h:X_{1}\to G\) extends to a simplicial set map \(f:X\to NG\) where \(f_{1}=h\) if \(h(d_{1}\sigma)=h(d_{2}\sigma)h(d_{0}\sigma)\) for all \(\sigma\in X_{2}\)._
Proof.: See [1, Proposition 3.13].
This result can be used to describe maps into a simplicial subspace of \(NG\), for example, the mod-\(d\) commutative nerve space \(N(\mathbb{Z}_{d},G)\). The nerve spaces introduced so far are partial monoids in the sense of [1]. In fact, the theory introduced there provides a nice framework for studying the simplicial sets of interest in this paper. This approach will be pursued elsewhere.
We will need the following simplicial set maps:
* \(\iota:N\mathbb{Z}_{d}\to N(\mathbb{Z}_{d},G)\) induced by the homomorphism \(\mathbb{Z}_{d}\xrightarrow{1\mapsto J_{G}}G\),
* \(\alpha:\vee_{i=1}^{r}N\mathbb{Z}_{d}\to N(\mathbb{Z}_{d},\Sigma)\) whose \(i\)-th factor is induced by the homomorphism \(\mathbb{Z}_{d}\xrightarrow{1\mapsto A_{i}}\mathbb{Z}_{d}^{\Sigma_{0}}\),
* \(\beta:\vee_{i=1}^{r}N\mathbb{Z}_{d}\to N\mathbb{Z}_{d}\) whose \(i\)-th factor is induced by the homomorphism \(\mathbb{Z}_{d}\xrightarrow{1\mapsto b_{i}}\mathbb{Z}_{d}\).
**Proposition 2.11**.: _There is a bijective correspondence between \(\text{Sol}(A,b;G)\) and the set of simplicial set maps \(f:N(\mathbb{Z}_{d},\Sigma)\to N(\mathbb{Z}_{d},G)\) that make the following diagram commute_
\[\begin{array}{ccc}\vee_{i=1}^{r}N\mathbb{Z}_{d}&\xrightarrow{\ \beta\ }&N\mathbb{Z}_{d}\\ {\scriptstyle\alpha}\big\downarrow&&\big\downarrow{\scriptstyle\iota}\\ N(\mathbb{Z}_{d},\Sigma)&\xrightarrow{\ f\ }&N(\mathbb{Z}_{d},G)\end{array}\tag{2}\]
Proof.: We will use Lemma 2.10. A solution \(T:\Sigma_{0}\to G\) can be used to construct a simplicial set map \(f:N(\mathbb{Z}_{d},\Sigma)\to N(\mathbb{Z}_{d},G)\) by setting \(f_{1}(s)=\prod_{v\in\text{supp}(s)}T(v)^{s(v)}\). Conversely, from a simplicial set map we obtain a solution by setting \(T(v)=f_{1}(\delta^{v})\).
### Simplicial distributions
Solutions of linear systems play an important role in the study of contextuality [1]. In this section, we will define this notion using the theory of simplicial distributions introduced in [1] and consider the simplicial distributions associated with solutions of linear systems.
A distribution on a set \(U\) is defined to be a function \(p:U\to\mathbb{R}_{\geq 0}\) with finite support such that \(\sum_{u\in U}p(u)=1\). We write \(D(U)\) for the set of distributions on \(U\). There is a function \(\delta:U\to D(U)\) sending \(u\) to the delta distribution
\[\delta^{u}(u^{\prime})=\left\{\begin{array}{ll}1&u=u^{\prime}\\ 0&\text{otherwise.}\end{array}\right.\]
Given a function \(f:U\to V\) we define a function \(Df:D(U)\to D(V)\) by
\[Df(p)(v)=\sum_{u\in f^{-1}(v)}p(u).\]
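In code, the pushforward \(Df\) simply sums the weights of a distribution over the fibers of \(f\); a minimal sketch (our own dictionary encoding of distributions):

```python
from collections import defaultdict

def pushforward(f, p):
    """Df: D(U) -> D(V) defined by Df(p)(v) = sum of p(u) over u in f^{-1}(v)."""
    q = defaultdict(float)
    for u, weight in p.items():
        q[f(u)] += weight
    return dict(q)

p = {0: 0.5, 1: 0.25, 2: 0.25}            # a distribution on U = {0, 1, 2}
print(pushforward(lambda u: u % 2, p))    # {0: 0.75, 1: 0.25}
```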
For a simplicial set \(Y\) the space of distributions on this simplicial set is defined to be the simplicial set \(D(Y)\) with \(n\)-simplices given by \(D(Y_{n})\) together with the face and the degeneracy maps
\[D(d_{i}):D(Y_{n})\to D(Y_{n-1})\ \ \text{ and }\ \ D(s_{j}):D(Y_{n})\to D(Y_{n+1}).\]
**Definition 2.12**.: A _simplicial distribution_ is a simplicial set map \(p:X\to D(Y)\). We will write \(\text{sDist}(X,Y)\) for the set of simplicial distributions. Given a simplicial set map \(s:X\to Y\) we can define a simplicial distribution \(\delta^{s}:X\xrightarrow{s}Y\xrightarrow{\delta}DY\). Simplicial distributions of the form \(\delta^{s}\) are called _deterministic distributions_. We will write \(\text{dDist}(X,Y)\) for the set of deterministic distributions.
Given a simplicial distribution \(p:X\to DY\), we will write \(p_{\sigma}\) for the distribution \(p_{n}(\sigma)\in D(Y_{n})\) where \(\sigma\in X_{n}\). We will denote by \(\delta:Y\to DY\) the simplicial set map defined by \(\delta_{n}(\sigma)=\delta^{\sigma}\). There is a canonical map
\[\Theta:D(\operatorname{dDist}(X,Y))\to\operatorname{sDist}(X,Y)\]
that sends \(d=\sum_{s}\lambda(s)\delta^{s}\) to the simplicial distribution \(\Theta(d)\) defined as follows:
\[\Theta(d)_{\sigma}(\theta)=\sum_{r:\,r_{\sigma}=\theta}\lambda(r)\]
where \(\sigma\in X_{n}\), \(\theta\in Y_{n}\), and the summation runs over simplicial set maps \(r:X\to Y\) such that \(r_{\sigma}=\theta\).
**Definition 2.13**.: A simplicial distribution \(p:X\to DY\) is called _contextual_ if it does not lie in the image of \(\Theta\); otherwise, it is called _non-contextual_.
A conventional way of formulating contextuality is to use presheaves of distributions [1, 1]. Any presheaf of distributions can be realized as a simplicial distribution and the notion of contextuality given in Definition 2.13 specializes to the usual notion formulated in this language [1].
Contextual simplicial distributions arise from quantum measurements. Let \(\mathcal{H}\) denote a finite dimensional complex Hilbert space. Let \(\operatorname{Proj}(\mathcal{H})\) denote the set of projectors acting on \(\mathcal{H}\), i.e., Hermitian operators that square to themselves. A projective measurement is a function \(\Pi:U\to\operatorname{Proj}(\mathcal{H})\) with finite support such that \(\sum_{u\in U}\Pi(u)=\mathbb{1}_{\mathcal{H}}\). We write \(P_{\mathcal{H}}U\) for the set of projective measurements on \(U\). Given a function \(f:U\to V\) we define \(P_{\mathcal{H}}f:P_{\mathcal{H}}U\to P_{\mathcal{H}}V\) by
\[P_{\mathcal{H}}f(\Pi)(v)=\sum_{u\in f^{-1}(v)}\Pi(u).\]
For a simplicial set \(Y\) the space of projective measurements is given by the simplicial set \(P_{\mathcal{H}}Y\) whose \(n\)-simplices are given by \(P_{\mathcal{H}}Y_{n}\) together with the simplicial structure maps
\[P_{\mathcal{H}}(d_{i}):P_{\mathcal{H}}(Y_{n})\to P_{\mathcal{H}}(Y_{n-1}) \quad\text{and}\quad P_{\mathcal{H}}(s_{j}):P_{\mathcal{H}}(Y_{n})\to P_{ \mathcal{H}}(Y_{n+1}).\]
Given a density operator \(\rho\), a positive operator of trace \(1\), we can define a simplicial set map \(\rho_{*}:P_{\mathcal{H}}Y\to DY\) by sending \(\Pi\in P_{\mathcal{H}}Y_{n}\) to the distribution
\[\rho_{*}\Pi(\theta)=\operatorname{Tr}(\rho\Pi(\theta)).\]
Checking that this is indeed a simplicial set map is straightforward, and essentially follows from the linearity of the trace.
**Proposition 2.14**.: _The spectral decomposition theorem gives an isomorphism of simplicial sets_
\[sd:N(\mathbb{Z}_{d},U(\mathcal{H}))\to P_{\mathcal{H}}N\mathbb{Z}_{d}.\]
Proof.: In degree \(n\), \(\operatorname{sd}\) is described as follows: a tuple \((A_{1},\cdots,A_{n})\) of commuting unitary matrices is sent to the projective measurement \(\Pi:\mathbb{Z}_{d}^{n}\to\operatorname{Proj}(\mathcal{H})\) where \(\Pi(a_{1},\cdots,a_{n})\) projects onto the simultaneous eigenspace with eigenvalues \((\omega^{a_{1}},\cdots,\omega^{a_{n}})\). For details see [1, Proposition 6.3].
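For a concrete instance of \(\operatorname{sd}\), the NumPy sketch below (our own choice: \(d=2\), \(\omega=-1\), and the commuting order-\(2\) unitaries \(Z\otimes Z\) and \(X\otimes X\)) computes the simultaneous eigenprojectors, checks that they assemble into a projective measurement, and evaluates the distribution \(\rho_{*}\Pi\) for the maximally mixed state.

```python
import numpy as np
from itertools import product

Z = np.diag([1, -1]); X = np.array([[0, 1], [1, 0]])
A1, A2 = np.kron(Z, Z), np.kron(X, X)     # commuting, squaring to the identity
omega = -1                                 # primitive 2nd root of unity

def joint_projector(ops, outcome):
    """Product of the spectral projectors (1 + omega^{-a} A)/2, one per operator;
    this projects onto the simultaneous eigenspace with eigenvalues omega^{a_i}."""
    P = np.eye(ops[0].shape[0])
    for A, a in zip(ops, outcome):
        P = P @ (np.eye(A.shape[0]) + (omega ** -a) * A) / 2
    return P

Pi = {o: joint_projector([A1, A2], o) for o in product(range(2), repeat=2)}
assert np.allclose(sum(Pi.values()), np.eye(4))   # projectors sum to the identity

rho = np.eye(4) / 4                               # maximally mixed state
print({o: round(np.trace(rho @ P).real, 3) for o, P in Pi.items()})
```

The printed distribution is uniform, as expected for the maximally mixed state.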
Now, let \(G\) be a group and \(\chi:G\to U(\mathcal{H})\) be a group homomorphism, in other words, a unitary representation. We have a commutative diagram of simplicial sets in which \(p_{\rho}\) denotes the composite
\[p_{\rho}:N(\mathbb{Z}_{d},G)\xrightarrow{\chi_{*}}P_{\mathcal{H}}N\mathbb{Z}_{d}\xrightarrow{\rho_{*}}DN\mathbb{Z}_{d}\]
where \(\chi_{*}\) in degree \(n\) sends \((g_{1},\cdots,g_{n})\) to \(\operatorname{sd}(\chi(g_{1}),\cdots,\chi(g_{n}))\).
**Proposition 2.15**.: _Let \((A,b)\) be a linear system over \(\mathbb{Z}_{d}\) and \(\Sigma\) the associated simplicial complex. Let \(\chi:G\to U(\mathcal{H})\) be a group homomorphism. Assume that \((A,b)\) admits a solution in \(G\), but not in \(\mathbb{Z}_{d}\). Let \(f:N(\mathbb{Z}_{d},\Sigma)\to N(\mathbb{Z}_{d},G)\) denote the simplicial set map corresponding to the solution (Proposition 2.11). Then the simplicial distribution given by the composite_
\[N(\mathbb{Z}_{d},\Sigma)\xrightarrow{f}N(\mathbb{Z}_{d},G)\xrightarrow{p_{ \rho}}DN\mathbb{Z}_{d}\]
_is contextual for any density operator \(\rho\)._
Proof.: On the contrary, assume that \(p=p_{\rho}\circ f\) is non-contextual, that is, there exists \(d=\sum_{r}\lambda(r)\delta^{r}\) in \(D(\mathrm{dDist}(X,N\mathbb{Z}_{d}))\), where \(X=N(\mathbb{Z}_{d},\Sigma)\), such that \(\Theta(d)=p\). Let \(s:X\to N\mathbb{Z}_{d}\) be such that \(\lambda(s)\neq 0\). Then for all \(\sigma\in X_{n}\) we have
\[p_{\sigma}(s_{\sigma})=\sum_{r:\,r_{\sigma}=s_{\sigma}}\lambda(r)\geq\lambda(s)>0. \tag{3}\]
Consider the commutative diagram
The left square is Diagram 2. Our goal is to show that \(s\) makes the left-top triangle in the left square commute, i.e., \(s\circ\alpha=\beta\). By Lemma 2.10 it suffices to verify commutativity in degree 1. Let \(1\in\mathbb{Z}_{d}\) denote the 1-simplex in the \(i\)th factor of the wedge product \(\vee^{r}N\mathbb{Z}_{d}\). By the vertical map this element maps to the 1-simplex \(A_{i}\) and under the horizontal map it maps to \(b_{i}\). By commutativity of the outer square we obtain that \(p_{A_{i}}=\delta^{b_{i}}\). Then by Equation (3) we have
\[\delta^{b_{i}}(s_{A_{i}})=p_{A_{i}}(s_{A_{i}})>0.\]
That is, \(s_{A_{i}}=b_{i}\). Therefore \(s\circ\alpha=\beta\).
This result explains the importance of operator solutions of linear systems. Linear systems that admit a solution in some group but not in \(\mathbb{Z}_{d}\), i.e., not in the conventional sense, give rise to contextual simplicial distributions for any density operator (quantum state). Our approach is based on simplicial sets to make the connection to the theory of simplicial distributions more direct. One can pass through presheaves of distributions to specialize the corresponding result in that language. That is, such linear systems also give rise to contextual presheaves of distributions [1].
### Linear systems from simplicial sets
Our goal in this section is to associate a linear system to a given simplicial set. We will show that as far as the solution groups are concerned any linear system can be converted to one that comes from a simplicial set.
Next we introduce cohomology of simplicial sets. Throughout the paper we restrict to cohomology with coefficients in \(\mathbb{Z}_{d}\). Given a simplicial set \(X\), an \(n\)-cochain taking values in \(\mathbb{Z}_{d}\) is a function \(X_{n}\to\mathbb{Z}_{d}\). We will write \(C^{n}(X)\) for the set of \(n\)-cochains. There is a coboundary map
\[d_{n}:C^{n-1}(X)\to C^{n}(X)\]
defined by sending \(f:X_{n-1}\to\mathbb{Z}_{d}\) to the function \(d_{n}f:X_{n}\to\mathbb{Z}_{d}\):
\[d_{n}f(\sigma)=\sum_{i=0}^{n}(-1)^{i}f(d_{i}\sigma)\]
for \(\sigma\in X_{n}\). The \(n\)-th cohomology group is defined by the quotient group
\[H^{n}(X)=\frac{\ker(d_{n+1})}{\operatorname{im}(d_{n})}.\]
Sometimes we will write \(H^{n}(X,\mathbb{Z}_{d})\) to emphasize the coefficients. We will construct a linear system associated to a simplicial set together with a 2-cochain.
**Definition 2.16**.: Given a simplicial set \(X\) and a 2-cochain \(\gamma:X_{2}\to\mathbb{Z}_{d}\) we define a linear system \((A_{X},b_{\gamma})\) where \(A\) is a \(|X_{2}|\times|X_{1}|\) matrix and \(b\) is a column vector of size \(|X_{2}|\):
\[A_{\sigma,x}=\left\{\begin{array}{ll}\sum_{d_{i}\sigma=x}(-1)^{i}&x\in \partial\sigma\\ 0&\text{otherwise},\end{array}\right.\quad\text{and}\quad b_{\sigma}=-\gamma(\sigma)\]
where the summation runs over \(0\leq i\leq 2\) such that \(d_{i}\sigma=x\).
Our main examples in this paper will come from simplicial sets satisfying the property that \(|\partial\sigma|=3\) for every 2-simplex of \(X\). In this case the definition of \(A\) becomes:
\[A_{\sigma,x}=\left\{\begin{array}{ll}1&x\in\{d_{0}\sigma,d_{2}\sigma\}\\ -1&x=d_{1}\sigma\\ 0&\text{otherwise}.\end{array}\right.\]
A solution in \(G\) for the linear system \((A_{X},b_{\gamma})\) consists of a function \(T:X_{1}\to G\) that satisfies
* \(T(x)^{d}=1\) for all \(x\in X_{1}\),
* \(\{T(d_{i}\sigma):\,i=0,1,2\}\) pairwise commute for all \(\sigma\in X_{2}\),
* for every \(\sigma\in X_{2}\) we have \[T(d_{2}\sigma)T(d_{0}\sigma)T(d_{1}\sigma)^{-1}=J_{G}^{-\gamma(\sigma)}.\]
The simplicial set \(N(\mathbb{Z}_{d},\Sigma)\) (Definition 2.8) is our main example. As we have seen, each row \(A_{i}\) can be regarded as an element in \(\mathbb{Z}_{d}^{\Sigma_{0}}\). The nerve space \(N\langle A_{i}\rangle\) is contained in \(N(\mathbb{Z}_{d},\Sigma)\). The union of these simplicial subsets over \(1\leq i\leq r\) is the simplicial subset given by the wedge \(\vee_{i=1}^{r}N\langle A_{i}\rangle\). Here we use the wedge notation to emphasize that for \(i\neq j\) the intersection \(N\langle A_{i}\rangle\cap N\langle A_{j}\rangle\) is given by \(\Delta^{0}\), the simplicial set with \((\Delta^{0})_{n}=\{*\}\) for \(n\geq 0\) representing a point.
**Definition 2.17**.: Let \(\bar{N}(\mathbb{Z}_{d},\Sigma)\) denote the simplicial set obtained by taking the quotient of \(N(\mathbb{Z}_{d},\Sigma)\) by the subspace \(\vee_{i=1}^{r}N\langle A_{i}\rangle\). More explicitly, the \(n\)-simplices are given by
\[\bar{N}(\mathbb{Z}_{d},\Sigma)_{n}=\{\bar{0}\}\,\sqcup\,(N(\mathbb{Z}_{d}, \Sigma)_{n}-\vee_{i=1}^{r}(N\langle A_{i}\rangle)_{n})\,.\]
Next we construct a cohomology class \(\gamma_{b}\). We will use the cohomology long exact sequence of the cofiber sequence
\[\vee_{i=1}^{r}N\mathbb{Z}_{d}\to N(\mathbb{Z}_{d},\Sigma)\to\bar{N}(\mathbb{Z }_{d},\Sigma)\]
where we identify \(\langle A_{i}\rangle\) with \(\mathbb{Z}_{d}\). There is an associated long exact sequence in mod \(d\) cohomology
\[\cdots\to H^{1}(\bar{N}(\mathbb{Z}_{d},\Sigma))\to H^{1}(N(\mathbb{Z}_{d}, \Sigma))\to H^{1}(\vee_{i}N\mathbb{Z}_{d})\xrightarrow{\delta}H^{2}(\bar{N}( \mathbb{Z}_{d},\Sigma))\to\cdots \tag{4}\]
The vector \(b\in\mathbb{Z}_{d}^{r}\) can be identified with an element in \(H^{1}(\vee_{i}N\mathbb{Z}_{d})\) via the following isomorphism
\[H^{1}(\vee_{i}N\mathbb{Z}_{d})\cong\mathbb{Z}_{d}^{r}. \tag{5}\]
Let \(\gamma_{b}\) be the 2-cocycle on \(\bar{N}(\mathbb{Z}_{d},\Sigma)\) defined by
\[\gamma_{b}(\bar{s}_{1},\bar{s}_{2})=d\tilde{b}(\bar{s}_{1},\bar{s}_{2}) \tag{6}\]
where \(\tilde{b}\) is the 1-cochain on \(N(\mathbb{Z}_{d},\Sigma)\) given by
\[\tilde{b}(s)=\left\{\begin{array}{ll}ab_{i}&s=aA_{i}\\ 0&\text{otherwise}.\end{array}\right. \tag{7}\]
In Lemma A.1 we show that \(\delta(b)=[\gamma_{b}]\).
**Proposition 2.18**.: _For a linear system \((A,b)\), let \(X=\bar{N}(\mathbb{Z}_{d},\Sigma)\) and \(\gamma\) be such that \([\gamma]=[\gamma_{b}]\). Then there is an isomorphism between the solution groups_
\[\Gamma(A,b)\to\Gamma(A_{X},b_{\gamma}).\]
Proof.: The proof is given in Section A.
**Example 2.19**.: Consider the following linear system over \(\mathbb{Z}_{2}\):
\[\begin{pmatrix}1&1\\ 1&0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}.\]
The associated simplicial complex is given by \(\Sigma=P(\Sigma_{0})\) where \(\Sigma_{0}=\{v_{1},v_{2}\}\). The two rows can be regarded as the functions: \(A_{1}\colon v_{1}\mapsto 1,v_{2}\mapsto 1\) and \(A_{2}\colon v_{1}\mapsto 1,v_{2}\mapsto 0\). It is notationally convenient to encode functions \(\Sigma_{0}\to\mathbb{Z}_{2}\) by two 'bits', representing values on \(v_{1}\) and \(v_{2}\) respectively. Thus we encode \(A_{1}\) as \(11\) and \(A_{2}\) as \(10\). The simplicial set \(N(\mathbb{Z}_{2},\Sigma)\) has the functions \(\{00,01,10,11\}\) as \(1\)-simplices. The \(n\)-simplices of \(N(\mathbb{Z}_{2},\Sigma)\) are given by \(n\)-tuples \((s_{1},s_{2},\ldots,s_{n})\in N(\mathbb{Z}_{2},\Sigma)_{1}^{n}\) with no further restrictions.
The simplicial set \(\bar{N}(\mathbb{Z}_{2},\Sigma)\) is obtained by identifying the two copies of \(N\mathbb{Z}_{2}\) to the base point. The set of \(1\)-simplices of \(\bar{N}(\mathbb{Z}_{2},\Sigma)\) is given by \(\{\bar{0}=00,01\}\) and the set of \(2\)-simplices is
\[\{\bar{0}=(00,00),(00,01),(01,00),(01,01),(01,10),(10,01),(11,01),(01,11),(11, 10),(10,11)\}.\]
Note that in \(\bar{N}(\mathbb{Z}_{2},\Sigma)_{2}\) only the elements of the form \((aA_{i},bA_{i})\) for \(i=1,2\) and \(a,b\in\mathbb{Z}_{2}\) are identified to the base point. In this case the linear system \((A_{X},b_{\gamma})\) where \(X=\bar{N}(\mathbb{Z}_{2},\Sigma)\) takes the following form:
\[\begin{array}{c}\bar{0}\\ (00,01)\\ (01,00)\\ (01,01)\\ (01,10)\\ (10,01)\\ (11,01)\\ (01,11)\\ (11,10)\\ (10,11)\end{array}\begin{pmatrix}1&0\\ 1&0\\ 1&0\\ 1&0\\ 0&1\\ 0&1\\ 0&1\\ 0&1\\ 0&1\\ 0&1\end{pmatrix}\begin{pmatrix}x_{\bar{0}}\\ x_{01}\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ b_{1}+b_{2}\\ b_{1}+b_{2}\\ b_{1}+b_{2}\\ b_{1}+b_{2}\\ b_{1}+b_{2}\\ b_{1}+b_{2}\end{pmatrix}\]
Here the rows are labeled by the ten \(2\)-simplices and the columns correspond to the \(1\)-simplices \(\bar{0}\) and \(01\).
This linear system can be reduced to the initial one.
## 3 Twisted products
Let \(X\) be a simplicial set and \(\gamma:X_{2}\to\mathbb{Z}_{d}\) be a \(2\)-cocycle. We will assume that \(\gamma\) is normalized in the sense that
\[\gamma(s_{i}X_{1})=0\ \ \ \text{for}\ \ i=0,1. \tag{8}\]
To this cocycle we can associate a fibration
\[N\mathbb{Z}_{d}\overset{i}{\to}X_{\gamma}\to X.\]
The total space \(X_{\gamma}\) can be described explicitly using twisted products.
**Definition 3.1**.: The twisted product \(X_{\gamma}=N\mathbb{Z}_{d}\times_{\gamma}X\) is a simplicial set defined as follows:
* the set of \(n\)-simplices is given by \(\mathbb{Z}_{d}^{n}\times X_{n}\),
* for \(\alpha\in\mathbb{Z}_{d}^{n}\) and \(\tau\in X_{n}\) the simplicial structure maps are given by \[d_{i}(\alpha,\tau)=(d_{i}\alpha,d_{i}\tau),\quad s_{j}(\alpha,\tau)=(s_{j}\alpha,s_{j}\tau)\] where \(1\leq i\leq n\) and \(0\leq j\leq n\); and when \(i=0\) we have \[d_{0}(\alpha,\tau)=(\eta(\tau)+d_{0}\alpha,d_{0}\tau)\] where \(\eta:X_{n}\to\mathbb{Z}_{d}^{n-1}\) is the twisting function that depends on the cocycle \(\gamma\); see Equation (34) in Section B.
### Commutative fundamental group
Let \(\mathbf{sSet}\) denote the category of simplicial sets. There is a well-known adjunction \(\pi_{1}\dashv N\) between the nerve functor \(N:\mathbf{Grp}\to\mathbf{sSet}\) that sends a group to the nerve space and the functor \(\pi_{1}:\mathbf{sSet}\to\mathbf{Grp}\) that sends a simplicial set \(X\) to the group defined as follows:
\[\pi_{1}(X)=\langle e_{x},\,x\in X_{1}:\,e_{d_{2}\sigma}e_{d_{0}\sigma}=e_{d_{1}\sigma}\ \forall\sigma\in X_{2}\rangle.\]
We will refer to \(\pi_{1}X\) as the _algebraic fundamental group_. Note that this group coincides with the fundamental group of the geometric realization for reduced simplicial sets, i.e., \(X_{0}=\{*\}\). For example, this is the case for \(N(\mathbb{Z}_{d},G)\) and \(N(\mathbb{Z}_{d},\Sigma)\).
**Lemma 3.2**.: _For every \(x\in s_{0}X_{0}\) we have \(e_{x}=1\)._
Proof.: Consider the \(2\)-simplex \(\sigma=s_{0}x\). We have \(d_{i}\sigma=x\) for all \(i=0,1,2\) by the simplicial relations. Therefore \(e_{x}e_{x}=e_{x}\), which implies that \(e_{x}=1\).
By the adjunction there is a natural isomorphism
\[\mathbf{sSet}(X,NG)\cong\mathbf{Grp}(\pi_{1}X,G).\]
Recall that \(N(\mathbb{Z}_{d},G)\) is a simplicial subset of \(NG\), and there is a similar adjunction that can be used to provide an algebraic description of simplicial set maps. Next, we describe this adjunction. To this end, we introduce a version of the algebraic fundamental group.
**Definition 3.3**.: The _commutative (\(d\)-torsion algebraic) fundamental group_ of a simplicial set \(X\) is the group \(\pi_{1}(\mathbb{Z}_{d},X)\) generated by \(e_{x}\) for \(x\in X_{1}\) subject to the relations
* \(e_{x}^{d}=1\) for all \(x\in X_{1}\),
* \([e_{d_{i}\sigma},e_{d_{j}\sigma}]=1\) for all \(\sigma\in X_{2}\) and \(i,j=0,1,2\),
* \(e_{d_{2}\sigma}e_{d_{0}\sigma}=e_{d_{1}\sigma}\) for all \(\sigma\in X_{2}\).
As a consequence of this definition there is a surjective group homomorphism \(\pi_{1}X\to\pi_{1}(\mathbb{Z}_{d},X)\) defined by the identity map on the set of generators.
**Lemma 3.4**.: _The fundamental group of \(X=N(\mathbb{Z}_{d},G)\) has the following presentation_
\[\langle e_{g},\,g\in G_{(d)}:\,e_{g}e_{h}=e_{gh}\text{ whenever }[g,h]=1\rangle.\]
_In particular, the quotient homomorphism \(\pi_{1}(X)\to\pi_{1}(\mathbb{Z}_{d},X)\) is an isomorphism._
Proof.: Using \(\sigma=(g,h)\), for \(g,h\in G_{(d)}\) such that \([g,h]=1\), we obtain the relation \(e_{g}e_{h}=e_{gh}\). From which it follows that \(e_{g}^{d}=e_{g^{d}}=e_{1}=1\) by Lemma 3.2 and \(e_{g}e_{h}=e_{gh}=e_{hg}=e_{h}e_{g}\).
**Lemma 3.5**.: _Simplicial set maps \(f:X\to N(\mathbb{Z}_{d},G)\) are in bijective correspondence with functions \(h:X_{1}\to G_{(d)}\) satisfying \(h(d_{1}\sigma)=h(d_{2}\sigma)h(d_{0}\sigma)\) for all \(\sigma\in X_{2}\)._
Proof.: Follows from Lemma 2.10.
**Proposition 3.6**.: _The functors \(\pi_{1}(\mathbb{Z}_{d},\cdot)\) and \(N(\mathbb{Z}_{d},\cdot)\) constitute an adjoint pair, i.e., there is a natural bijection_
\[\mathbf{sSet}(X,N(\mathbb{Z}_{d},G))\cong\mathbf{Grp}(\pi_{1}(\mathbb{Z}_{d},X),G).\]
Proof.: A simplicial set map \(f:X\to NG\) factors through the inclusion \(N(\mathbb{Z}_{d},G)\subset NG\) if and only if the adjoint map \(\hat{f}:\pi_{1}X\to G\) factors through the quotient map \(\pi_{1}X\to\pi_{1}(\mathbb{Z}_{d},X)\). This follows from Lemma 3.4 and Lemma 3.5.
Also, the fundamental group of \(N(\mathbb{Z}_{d},\Sigma)\) has a nice description. For a simplex \(\sigma\in\Sigma\) let us define the function \(\delta^{\sigma}:\Sigma_{0}\to\mathbb{Z}_{d}\) by
\[\delta^{\sigma}(v)=\left\{\begin{array}{ll}1&v\in\sigma\\ 0&\text{otherwise.}\end{array}\right.\]
When \(\sigma=\{v\}\) we will abbreviate the notation as \(\delta^{v}\) and write \(e_{v}\) for \(e_{\delta^{v}}\).
**Lemma 3.7**.: _The fundamental group of \(X=N(\mathbb{Z}_{d},\Sigma)\) has the following presentation_
\[\langle e_{v},\,v\in\Sigma_{0}:\,e_{v}e_{w}=e_{\delta^{\{v,w\}}}\,\,\text{ whenever }\{v,w\}\in\Sigma\rangle.\]
_Moreover, the quotient homomorphism \(\pi_{1}(X)\to\pi_{1}(\mathbb{Z}_{d},X)\) is an isomorphism._
Proof.: Note that a function \(s:\Sigma_{0}\to\mathbb{Z}_{d}\) can be written as \(s=\sum_{v\in\Sigma_{0}}k_{v}\delta^{v}\). Using simplices of the form \(\sigma=(k\delta^{v},l\delta^{w})\), where \(v,w\) are such that \(\{v,w\}\in\Sigma\), we can write
\[e_{s}=\prod_{v\in\Sigma_{0}}e_{\delta^{v}}^{k_{v}}. \tag{9}\]
Then each generator \(e_{\delta^{v}}\) is \(d\)-torsion since \(e_{\delta^{v}}^{d}=e_{d\delta^{v}}=e_{0}=1\). Moreover, \([e_{\delta^{v}},e_{\delta^{w}}]=1\) whenever \(\{v,w\}\in\Sigma\).
**Corollary 3.8**.: _The set \(\text{Sol}(A,b;G)\) of solutions is in bijection with the set of group homomorphisms \(\theta:\pi_{1}N(\mathbb{Z}_{d},\Sigma)\to G\) satisfying \(\theta(e_{A_{i}})=J_{G}^{b_{i}}\) for all \(1\leq i\leq r\)._
Proof.: By the adjunction of Proposition 3.6 and the computation in Lemma 3.7, the diagrams in Proposition 2.11 characterizing solutions are in bijective correspondence with diagrams of group homomorphisms of the form
Note that for a group homomorphism \(\theta:\pi_{1}N(\mathbb{Z}_{d},\Sigma)\to G\) such that \(\theta(e_{A_{i}})=J_{G}^{b_{i}}\) the images \(\theta(e_{v})\) will be \(d\)-torsion and elements in \(\{\theta(e_{v}):v\in\sigma_{i}\}\) will pairwise commute. Using Equation (9) the condition \(\theta(e_{A_{i}})=J_{G}^{b_{i}}\) is equivalent to
\[\theta(e_{A_{i}})=\prod_{v_{j}\in\sigma_{i}}\theta(e_{v_{j}})^{A_{ij}}=J_{G}^{b_{i}}.\]
### Fundamental group of twisted products
In this section we will describe the algebraic fundamental group of \(X_{\gamma}\). Using the explicit formula for \(\eta\) given in Equation (34) in Section B we have
\[d_{0}(a,b;\sigma)=(\gamma(\sigma)+b;d_{0}\sigma)\]
where \(\sigma\in X_{2}\) and \((a,b)\in\mathbb{Z}_{d}^{2}\). The fundamental group \(\pi_{1}(X_{\gamma})\) is generated by \(e_{a,x}\), where \((a,x)\in\mathbb{Z}_{d}\times X_{1}\), subject to the relations
\[e_{a+b,d_{1}\sigma}=e_{a,d_{2}\sigma}\,e_{\gamma(\sigma)+b,d_{0}\sigma}\]
where \(((a,b),\sigma)\in\mathbb{Z}_{d}^{2}\times X_{2}\). Here are some useful relations:
1. For \(\sigma=(a,b;s_{0}x)\), where \(x\in X_{1}\), \[e_{a+b,x}=e_{a,s_{0}(d_{1}x)}e_{b,x}\] where \(\gamma(s_{0}x)=0\) by the normalization condition. In particular, when \(b=0\) this gives \[e_{a,x}=e_{a,s_{0}(d_{1}x)}e_{0,x}\] (10) On the other hand, setting \(x=s_{0}v\) gives \[e_{a+b,s_{0}v}=e_{a,s_{0}v}e_{b,s_{0}v},\] using which we obtain \(e_{a,s_{0}v}^{d}=1\) and \([e_{a,s_{0}v},e_{b,s_{0}v}]=1\).
2. For \(\sigma=(a,b;s_{1}x)\), where \(x\in X_{1}\), \[e_{a+b,x}=e_{a,x}e_{b,s_{0}(d_{0}x)}\] where \(\gamma(s_{1}x)=0\) by the normalization condition. In particular, when \(a=0\) this gives \[e_{b,x}=e_{0,x}e_{b,s_{0}(d_{0}x)}.\] (11)
We will write \(e_{x}=e_{0,x}\). Combining (1) and (2) we obtain the following.
**Lemma 3.9**.: _We have_
\[e_{a,s_{0}(d_{1}x)}=e_{x}\,e_{a,s_{0}(d_{0}x)}\,e_{x}^{-1}. \tag{12}\]
Proof.: Follows from Equation (10) and (11).
See Figure (2) for a pictorial representation of this generator.
**Lemma 3.10**.: _If \(X\) is reduced then \(e_{a}=e_{a,s_{0}(*)}\) is central._
Proof.: Using Equation (11) we can write \(e_{a,x}=e_{x}e_{a}\). Then the result follows from Equation (10) and (12).
Now, we turn to the commutative fundamental group. The additional relations are
1. \(e_{a,x}^{d}=1\) for every \((a,x)\in\mathbb{Z}_{d}\times X_{1}\),
2. \(\{e_{\gamma(\sigma)+b,d_{0}\sigma},e_{a+b,d_{1}\sigma},e_{a,d_{2}\sigma}\}\) pairwise commute for every \((a,b;\sigma)\in\mathbb{Z}_{d}^{2}\times X_{2}\).
When \(X\) is connected we will write
\[e_{a}=e_{a,s_{0}v} \tag{13}\]
since in the commutative fundamental group the generators \(e_{a,s_{0}v}\) with different \(v\)'s are all identified.
**Theorem 3.11**.: _Let \(X\) be a connected simplicial set. The assignment \(e_{a,x}\mapsto J^{a}e_{x}\) defines an isomorphism of groups_
\[\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\to\Gamma(A_{X},b_{\gamma}).\]
_Moreover, for two cocycles \(\gamma\) and \(\gamma^{\prime}\) such that \([\gamma]=[\gamma^{\prime}]\) we have an isomorphism \(\Gamma(A_{X},b_{\gamma})\cong\Gamma(A_{X},b_{\gamma^{\prime}})\)._
Figure 2: The generator \(e_{a,s_{0}(d_{1}x)}\) corresponds to the loop \(a\) at the vertex \(d_{1}x\), the source of the edge \(x\). The conjugate element \(e_{x}\,e_{a,s_{0}(d_{0}x)}\,e_{x}^{-1}\) is the loop obtained by traversing \(x\), \(a\), and \(x\) in the reverse direction. These two elements coincide in \(\pi_{1}X_{\gamma}\).
Proof.: We will denote the homomorphism in the statement by \(\phi\). We first prove that it is a well-defined homomorphism of groups, i.e., the relators in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) are taken to \(1\) by \(\phi\).
1. Let \((a,b;\sigma)\) be a \(2\)-simplex in \(X_{\gamma}\). Consider the relation \(e_{a,d_{2}\sigma}e_{\gamma(\sigma)+b,d_{0}\sigma}e_{a+b,d_{1}\sigma}^{-1}=1\). We have \[\begin{aligned}\phi\left(e_{a,d_{2}\sigma}e_{\gamma(\sigma)+b,d_{0}\sigma}e_{a+b,d_{1}\sigma}^{-1}\right)&=J^{a}e_{d_{2}\sigma}J^{\gamma(\sigma)+b}e_{d_{0}\sigma}J^{-a-b}e_{d_{1}\sigma}^{-1}\\ &=J^{\gamma(\sigma)}e_{d_{2}\sigma}e_{d_{0}\sigma}e_{d_{1}\sigma}^{-1}\\ &=J^{\gamma(\sigma)}J^{-\gamma(\sigma)}=1,\end{aligned}\] where we use the centrality of the element \(J\) and the product relation \(e_{d_{2}\sigma}e_{d_{0}\sigma}e_{d_{1}\sigma}^{-1}=J^{-\gamma(\sigma)}\) in the group \(\Gamma(A_{X},b_{\gamma})\) (Definitions 2.3 and 2.16).
2. Consider the relation \(e_{a,x}^{d}=1\). We have \[\phi(e_{a,x}^{d})=(J^{a}e_{x})^{d}=J^{ad}e_{x}^{d}=1,\] where we used the centrality of the element \(J\) and the fact that all generators in \(\Gamma(A_{X},b_{\gamma})\) are of order \(d\).
3. The last relation is that for a \(2\)-simplex \((a,b;\sigma)\in X_{\gamma}\) the elements \(\{e_{a+b,d_{1}\sigma}^{-1},e_{a,d_{2}\sigma},e_{\gamma(\sigma)+b,d_{0}\sigma}\}\) pairwise commute. Note that the image of every element from this set is the product of a power of \(J\) and an element \(e_{d_{i}\sigma}\) for \(i=0,1,2\). However, the elements \(e_{d_{0}\sigma},e_{d_{1}\sigma}\) and \(e_{d_{2}\sigma}\) pairwise commute in \(\Gamma(A_{X},b_{\gamma})\) and \(J\) is central. Thus \(\phi(e_{a+b,d_{1}\sigma}^{-1}),\phi(e_{a,d_{2}\sigma}),\phi(e_{\gamma(\sigma)+b,d_{0}\sigma})\) also pairwise commute.
Now, we define \(\psi\colon\Gamma(A_{X},b_{\gamma})\to\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) by \(\psi(J)=e_{1}\) and \(\psi(e_{x})=e_{x}\). Note that if \(\psi\) is a well-defined group homomorphism then \(\psi\) and \(\phi\) are mutually inverse, proving that \(\phi\) is an isomorphism. So we need to check again that \(\psi\) is taking the relators of \(\Gamma(A_{X},b_{\gamma})\) to \(1\) in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\).
1. By definition, the elements \(e_{x}\in\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) are \(d\)-torsion, proving that \(\psi(e_{x}^{d})=1\). We also have that the element \(e_{1}=e_{1,s_{0}(v)}\) for some vertex \(v\in\left(X_{\gamma}\right)_{0}\) is \(d\)-torsion, so \(\psi(J^{d})=1\).
2. By Lemma 3.10 the element \(\psi(J)=e_{1}\) is central in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\).
3. Let \(\sigma\) be a \(2\)-simplex in \(X\). We need to show that \(\psi(e_{d_{i}\sigma})\) pairwise commute in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) for \(i=0,1,2\). There is a \(2\)-simplex \((0,0;\sigma)\) in \(X_{\gamma}\), showing that these elements indeed commute.
4. Finally, we need to consider the product relation. Let \(\sigma\) be a \(2\)-simplex in \(X\). Then we need to show that \[\psi\left(J^{\gamma(\sigma)}e_{d_{0}\sigma}e_{d_{2}\sigma}e_{d_{1}\sigma}^{-1}\right)=1.\] Consider the \(2\)-simplex \((0,0;\sigma)\) in \(X_{\gamma}\). It gives the following relation in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\): \[e_{\gamma(\sigma),d_{0}\sigma}e_{d_{2}\sigma}e_{d_{1}\sigma}^{-1}=1.\] However, we can write the first element in this relation as \(e_{\gamma(\sigma),d_{0}\sigma}=e_{d_{0}\sigma}e_{1}^{\gamma(\sigma)}\) and by the centrality of the element \(e_{1}\) we obtain that \(e_{d_{0}\sigma}e_{d_{2}\sigma}e_{d_{1}\sigma}^{-1}=e_{1}^{-\gamma(\sigma)}.\) Combining these gives \[\psi\left(J^{\gamma(\sigma)}e_{d_{0}\sigma}e_{d_{2}\sigma}e_{d_{1}\sigma}^{-1}\right)=e_{1}^{\gamma(\sigma)}e_{d_{0}\sigma}e_{d_{2}\sigma}e_{d_{1}\sigma}^{-1}=e_{1}^{\gamma(\sigma)}e_{1}^{-\gamma(\sigma)}=1.\]
We conclude that \(\psi\) is a well-defined group homomorphism. Thus the result follows.
For the second statement, observe that Theorem B.1 implies that when the cohomology classes \([\gamma]\) and \([\gamma^{\prime}]\) coincide there is an isomorphism of bundles. Therefore \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\cong\pi_{1}(\mathbb{Z}_{d},X_{\gamma^{\prime}})\). The result follows from the first part of the theorem.
### Characterizations of solutions
Consider the central extension
\[1\to\langle J_{G}\rangle\to G\xrightarrow{\pi}\bar{G}\to 1 \tag{14}\]
where \(\bar{G}\) is the quotient group. For the image \(\pi(g)\) of an element \(g\in G\) we will simply write \(\bar{g}\).
**Definition 3.12**.: Let \(\bar{N}(\mathbb{Z}_{d},G)\) denote the simplicial set whose \(n\)-simplices are given by
\[\bar{N}(\mathbb{Z}_{d},G)_{n}=\{(\bar{g}_{1},\bar{g}_{2},\cdots,\bar{g}_{n}): \,(g_{1},\cdots,g_{n})\in N(\mathbb{Z}_{d},G)_{n}\}.\]
The simplicial structure maps are similar to those of \(N(\mathbb{Z}_{d},G)\).
Equivalently, \(\bar{N}(\mathbb{Z}_{d},G)\) is the orbit space under the action of \(N\mathbb{Z}_{d}\) on \(N(\mathbb{Z}_{d},G)\) which in degree \(n\) is given by
\[(a_{1},\cdots,a_{n})\cdot(g_{1},\cdots,g_{n})=(J^{a_{1}}g_{1},\cdots,J^{a_{n}} g_{n}).\]
This is a free action, hence the quotient map under this action is a fibration with fiber \(N\mathbb{Z}_{d}\):
\[N\mathbb{Z}_{d}\to N(\mathbb{Z}_{d},G)\to\bar{N}(\mathbb{Z}_{d},G) \tag{15}\]
By the general theory of fibrations it is classified by a cohomology class \(\gamma_{G,d}\in H^{2}(\bar{N}(\mathbb{Z}_{d},G))\). More explicitly, this class is the image of the generator under the transgression homomorphism on the \(E_{2}\)-page of the Serre spectral sequence of the fibration in Equation (15) (see [11, Section 6.2]):
\[d_{2}:H^{1}(N\mathbb{Z}_{d})\to H^{2}(\bar{N}(\mathbb{Z}_{d},G))\]
Identifying \(H^{1}(N\mathbb{Z}_{d})\cong\mathbb{Z}_{d}\) we have
\[\gamma_{G,d}=d_{2}(1). \tag{16}\]
**Lemma 3.13**.: _There is a bijection between_
1. _the set of maps_ \(f:X\to\bar{N}(\mathbb{Z}_{d},G)\) _such that_ \(f^{*}(\gamma_{G,d})=[\gamma]\)_, and_
2. _the set of commutative diagrams of the form_ \[\begin{array}{ccc}&N\mathbb{Z}_{d}&\\ {\scriptstyle i}\swarrow\ &&\ \searrow{\scriptstyle\iota}\\ X_{\gamma}&\xrightarrow{\quad}&N(\mathbb{Z}_{d},G)\end{array}\tag{17}\]
Proof.: This follows from the classification of fibrations given in Theorem B.1. Given \(f:X\to\bar{N}(\mathbb{Z}_{d},G)\) such that \(f^{*}(\gamma_{G,d})=[\gamma]\), pulling back \(N(\mathbb{Z}_{d},G)\to\bar{N}(\mathbb{Z}_{d},G)\) along \(f\) gives a principal \(N\mathbb{Z}_{d}\)-bundle over \(X\) isomorphic to \(X_{\gamma}\to X\). This gives a commutative triangle as in Diagram 17. Conversely, a commutative triangle as in Diagram 17 descends to a map \(f:X\to\bar{N}(\mathbb{Z}_{d},G)\) between the quotient spaces (under the action of \(N\mathbb{Z}_{d}\)). Comparing the spectral sequences implies that \(f^{*}(\gamma_{G,d})=[\gamma]\).
**Corollary 3.14**.: _Let \((A,b)\) denote the linear system associated to \((X,\gamma)\), where \(X\) is a connected simplicial set. There is a bijective correspondence between the following sets:_
1. _the set_ \(\text{Sol}(A,b;G)\) _of solutions,_
2. _the set of group homomorphisms_ \(\theta:\Gamma(A,b)\to G\) _such that_ \(\theta(J)=J_{G}\)_,_
3. _the set of group homomorphisms_ \(\theta:\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\to G\) _such that_ \(\theta(e_{1})=J_{G}\)_,_
4. _the set of maps_ \(f:X\to\bar{N}(\mathbb{Z}_{d},G)\) _such that_ \(f^{*}(\gamma_{G,d})=[\gamma]\)_._
Proof.: The bijection between (1) and (3) is proved in Corollary 3.8. The bijection between (2) and (3) follows from Theorem 3.11. Part (1) of Proposition 2.4 gives the bijection between (1) and (2). Next we prove the bijection between (3) and (4). Proposition 3.6 implies that there is a bijection between (3) and commutative diagrams given in Diagram 17. Using Lemma 3.13 finishes the proof.
Known examples of linear systems over \(\mathbb{Z}_{d}\) for \(d\) an odd integer behave particularly nicely. We state this as a conjecture and provide some evidence for it in Section 4.
**Conjecture 3.15**.: _Let \(d>1\) be an odd integer. Any linear system over \(\mathbb{Z}_{d}\) admitting a solution in \(G\) also admits a solution in \(\mathbb{Z}_{d}\)._
The implications of this conjecture for contextuality are important. Let \(\chi:G\to U(\mathcal{H})\) be a unitary representation. Proposition 2.15 implies that a linear system admitting a solution in some group \(G\) but not in \(\mathbb{Z}_{d}\) produces contextual simplicial distributions for any density operator (see Definition 2.13). The conjecture states that such contextual simplicial distributions never arise when \(d\) is odd. However, when \(d=2\) this is not the case. For example, the \(K_{3,3}\) linear system studied in Example 4.7 admits a solution in \(D_{8}*D_{8}\) but not in \(\mathbb{Z}_{2}\). This distinction between \(d=2\) and \(d\) odd has crucial implications in quantum computation [1, 2].
### Power maps
For a group \(G\) let us write \(G_{(d)}\) to denote the set of \(d\)-torsion elements, i.e., those \(g\in G\) satisfying \(g^{d}=1_{G}\).
**Definition 3.16**.: Let \(\mathbf{Grp}_{(d)}\) denote the category whose objects are groups and a morphism \(G\to H\) is given by a set map \(\theta:G_{(d)}\to H\) satisfying
\[\theta(g_{1}g_{2})=\theta(g_{1})\theta(g_{2})\]
for all \(g_{1},g_{2}\in G_{(d)}\) such that \(g_{1}g_{2}=g_{2}g_{1}\).
Let \(\iota:\mathbf{Grp}\to\mathbf{Grp}_{(d)}\) denote the inclusion functor. The nerve space \(N(\mathbb{Z}_{d},\cdot)\) extends to a functor from \(\mathbf{Grp}_{(d)}\) to the category of simplicial sets. That is, a morphism \(\theta:G\to H\) of the category \(\mathbf{Grp}_{(d)}\) induces a map of simplicial sets \(\theta:N(\mathbb{Z}_{d},G)\to N(\mathbb{Z}_{d},H)\). This observation follows from Definition 2.9. There are special morphisms in this category which turn out to be very useful.
**Definition 3.17**.: For an integer \(m\), let \(\omega_{m}:G\to G\) denote the set map defined by \(g\mapsto g^{m}\).
In general, the map \(\omega_{m}\) is not a group homomorphism. For instance, \(\omega_{-1}\) is a group homomorphism if and only if \(G\) is abelian. However, \(\omega_{m}\) is a morphism of the category \(\mathbf{Grp}_{(d)}\). In this case the domain is restricted to the \(d\)-torsion part \(G_{(d)}\). This map induces a simplicial set map, denoted by \(\omega_{m}:N(\mathbb{Z}_{d},G)\to N(\mathbb{Z}_{d},G)\), which in degree \(n\) is given as follows
\[(g_{1},g_{2},\cdots,g_{n})\mapsto(g_{1}^{m},g_{2}^{m},\cdots,g_{n}^{m}).\]
It is straightforward to verify that this assignment respects the simplicial structure, giving us a simplicial set map as claimed.
**Lemma 3.18**.: _We have \(\omega_{m}^{*}(\gamma_{G,d})=m\gamma_{G,d}\)._
Proof.: Raising to the \(m\)-th power gives a commutative diagram of simplicial sets
\[\begin{array}{ccccc}N\mathbb{Z}_{d}&\longrightarrow&N(\mathbb{Z}_{d},G)&\longrightarrow&\bar{N}(\mathbb{Z}_{d},G)\\ \big\downarrow{\scriptstyle\omega_{m}}&&\big\downarrow{\scriptstyle\omega_{m}}&&\big\downarrow{\scriptstyle\omega_{m}}\\ N\mathbb{Z}_{d}&\longrightarrow&N(\mathbb{Z}_{d},G)&\longrightarrow&\bar{N}(\mathbb{Z}_{d},G)\end{array}\tag{18}\]
In cohomology the induced map \(H^{1}(N\mathbb{Z}_{d})\to H^{1}(N\mathbb{Z}_{d})\) is again multiplication by \(m\). Recall that the class \(\gamma_{G,d}\) is obtained from the transgression \(d_{2}:H^{1}(N\mathbb{Z}_{d})\to H^{2}(\bar{N}(\mathbb{Z}_{d},G))\); see Equation (16). Comparing the spectral sequences of the fibrations in Diagram (18) we obtain the desired result.
Let us write \(d=d_{1}\cdots d_{k}\), where \(d_{i}\)'s are positive integers such that \(\text{gcd}(d_{i},d_{j})=1\) for all \(i\neq j\). Let \(\gamma_{i}\) denote the image of \([\gamma]\) under the map \(H^{2}(X,\mathbb{Z}_{d})\to H^{2}(X,\mathbb{Z}_{d_{i}})\) induced by the mod \(d_{i}\) reduction homomorphism \(\mathbb{Z}_{d}\to\mathbb{Z}_{d_{i}}\).
**Lemma 3.19**.: _Let \(q_{i}=\prod_{j\neq i}d_{j}\) and \(q_{i}^{-1}\) denote the inverse of \(q_{i}\) in \(\mathbb{Z}_{d_{i}}\). The following diagram of simplicial sets commutes_
_Moreover, the composite \(\omega_{q_{i}^{-1}}\circ\omega_{q_{i}}\) coincides with \(N\mathbb{Z}_{d}\to N\mathbb{Z}_{d_{i}}\) induced by the mod \(d_{i}\) reduction._
**Lemma 3.20**.: _Mod \(d_{i}\) reduction induces an injective map_
\[\{\,f:X\to\bar{N}(\mathbb{Z}_{d},G):\,f^{*}(\gamma_{G,d})=[\gamma]\,\} \longrightarrow\prod_{i=1}^{k}\{\,f_{i}:X\to\bar{N}(\mathbb{Z}_{d_{i}},G):\,f _{i}^{*}(\gamma_{G,d_{i}})=\gamma_{i}\,\}.\]
Proof.: Consider the diagram in Lemma 3.19 and the associated spectral sequence for \(H^{*}(\cdot,\mathbb{Z}_{d_{i}})\). Let \(r_{i}:N(\mathbb{Z}_{d},G)\to N(\mathbb{Z}_{d_{i}},G)\) denote the composite \(\omega_{q_{i}^{-1}}\circ\omega_{q_{i}}\) and \(\bar{r}_{i}\) the induced map \(\bar{N}(\mathbb{Z}_{d},G)\to\bar{N}(\mathbb{Z}_{d_{i}},G)\) between the quotient spaces. Combining the \(\bar{r}_{i}\)'s gives an injective map
\[\bar{r}:\bar{N}(\mathbb{Z}_{d},G)\to\prod_{i}\bar{N}(\mathbb{Z}_{d_{i}},G).\]
Given \(f:X\to\bar{N}(\mathbb{Z}_{d},G)\) we can compose with \(\bar{r}\) and project onto the \(i\)th factor to obtain \(f_{i}:X\to\bar{N}(\mathbb{Z}_{d_{i}},G)\). By Lemma 3.19 the induced map in cohomology
\[\bar{r}_{i}^{*}:H^{2}(\bar{N}(\mathbb{Z}_{d_{i}},G),\mathbb{Z}_{d_{i}})\to H ^{2}(\bar{N}(\mathbb{Z}_{d},G),\mathbb{Z}_{d_{i}})\]
sends \(\gamma_{G,d_{i}}\) to the mod \(d_{i}\) reduction of \(\gamma_{G,d}\). Therefore \(f_{i}^{*}(\gamma_{G,d_{i}})=\gamma_{i}\) for each \(1\leq i\leq k\).
In particular, this result can be applied to a prime decomposition of \(d\) to reduce problems to the case where \(d\) is a prime power. Let \(d=\prod_{i=1}^{k}p_{i}^{\alpha_{i}}\) be a prime decomposition. Given a linear system \((A,b)\) over \(\mathbb{Z}_{d}\) let us write \((A^{(i)},b^{(i)})\) for the linear system over \(\mathbb{Z}_{p_{i}^{\alpha_{i}}}\) obtained by mod \(p_{i}^{\alpha_{i}}\) reduction.
**Corollary 3.21**.: _Mod \(p_{i}^{\alpha_{i}}\) reduction induces an injective map between the solution sets:_
\[\text{Sol}(A,b;G)\to\prod_{i=1}^{k}\text{Sol}(A^{(i)},b^{(i)};G).\]
Proof.: Follows from Corollary 3.14 and Lemma 3.20.
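A toy illustration of this reduction for \(\mathbb{Z}_{d}\)-valued (abelian) solutions, with \(d=6=2\cdot 3\) and a small system of our own choosing; the reduction map on solution sets is injective as claimed:

```python
from itertools import product

A = [(1, 1), (1, 0)]; b = (5, 3)          # system over Z_6: x + y = 5, x = 3

def sols(m, A, b):
    """All Z_m-valued solutions of the linear system (A, b) read modulo m."""
    return {x for x in product(range(m), repeat=2)
            if all(sum(a * xi for a, xi in zip(row, x)) % m == beta % m
                   for row, beta in zip(A, b))}

S6, S2, S3 = sols(6, A, b), sols(2, A, b), sols(3, A, b)
reduce_ = lambda x: (tuple(v % 2 for v in x), tuple(v % 3 for v in x))
assert all(reduce_(x)[0] in S2 and reduce_(x)[1] in S3 for x in S6)
assert len({reduce_(x) for x in S6}) == len(S6)    # the reduction is injective
print(S6, S2, S3)                                  # {(3, 2)} {(1, 0)} {(0, 2)}
```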
Let us turn to the induced map on the algebraic fundamental group of \(N(\mathbb{Z}_{d},G)\) (see Section 3.1). In general, the power map induces a map on the commutative fundamental group \(\pi_{1}(\mathbb{Z}_{d},X)\) of an arbitrary simplicial set:
\[(\omega_{m})_{*}:\pi_{1}(\mathbb{Z}_{d},X)\to\pi_{1}(\mathbb{Z}_{d},X)\]
defined by \(e_{x}\mapsto e_{x}^{m}\). Let \(p\) be a positive integer dividing \(d\) and let \(q=d/p\). We can identify \(\pi_{1}(\mathbb{Z}_{p},X)\) as a subgroup of \(\pi_{1}(\mathbb{Z}_{d},X)\) via the homomorphism \(\iota_{q}:\pi_{1}(\mathbb{Z}_{p},X)\to\pi_{1}(\mathbb{Z}_{d},X)\) defined by \(e_{x}\mapsto e_{x}^{q}\). This homomorphism splits via the composition
\[\pi_{1}(\mathbb{Z}_{d},X)\xrightarrow{(\omega_{q})_{*}}\pi_{1}(\mathbb{Z}_{d },X)\xrightarrow{(\omega_{q^{-1}})_{*}}\pi_{1}(\mathbb{Z}_{p},X)\]
Now, we give an application of the \(\omega_{-1}\) map to linear systems over \(\mathbb{Z}_{d}\) when \(d\) is odd. We will write \(w\) for a word in a finitely presented group, that is, a product of the form \(w=e_{x_{1}}e_{x_{2}}\cdots e_{x_{l}}\) where each \(e_{x_{i}}\) is a generator. The word \(w^{\text{op}}\) will denote the opposite word given by the product \(e_{x_{l}}\cdots e_{x_{2}}e_{x_{1}}\).
**Proposition 3.22**.: _Let \(d>1\) be an odd integer. If the equation \(w=e_{1}^{a}w^{\text{op}}\) holds in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) then \(w=w^{\text{op}}\)._
Proof.: The map \(\omega_{-1}\) induces an automorphism of \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\). Applying this automorphism to the equation and taking inverses, we obtain that \(w=e_{1}^{-a}w^{\text{op}}\) also holds. Comparing with the original equation we obtain
\[e_{1}^{a}w^{\text{op}}=e_{1}^{-a}w^{\text{op}}\;\Rightarrow\;e_{1}^{2a}=1.\]
Since \(d\) is odd, this implies \(a=0\mod d\), and hence \(w=w^{\text{op}}\).
Combining this result with Lemma 3.4 we obtain the following result proved in [13, Theorem 5] using different methods.
**Corollary 3.23**.: _Let \(d>1\) be an odd integer. If the equation \(w=J^{a}w^{\text{op}}\) holds in \(\pi_{1}N(\mathbb{Z}_{d},G)\) then \(w=w^{\text{op}}\)._
Proof.: We apply Proposition 3.22 with \(X=\bar{N}(\mathbb{Z}_{d},G)\) and \([\gamma]=\gamma_{G,d}\). Note that \(N(\mathbb{Z}_{d},G)\cong X_{\gamma}\).
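For contrast, when \(d=2\) the conclusion can fail; a minimal NumPy check with the standard Pauli matrices (our choice of example), where \(X\) and \(Z\) are \(2\)-torsion and \(J=-\mathbb{1}\) is central of order \(2\) in the group they generate:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
w, w_op = X @ Z, Z @ X           # the word w = XZ and its opposite w^op = ZX
assert np.allclose(w, -w_op)     # w = J w^op with J = -1
assert not np.allclose(w, w_op)  # yet w != w^op
```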
### The \(K_{3,3}\) linear system
In this section we will study the linear system introduced in Example 2.7. It will be useful to introduce chains on a simplicial set, a dual notion to cochains. Given a simplicial set \(X\) the set \(C_{n}(X)\) of \(n\)-chains is the free \(\mathbb{Z}_{d}\)-module generated by the symbols \([\sigma]\) where \(\sigma\in X_{n}\). The group \(C^{n}(X)\) of \(n\)-cochains can be identified with the group of \(\mathbb{Z}_{d}\)-module homomorphisms \(C_{n}(X)\to\mathbb{Z}_{d}\).
Let \((A,b)\) denote this linear system. Recall that the dual of the associated simplicial complex \(\Sigma_{A}\) is given by the complete bipartite graph \(K_{3,3}\). We can also think of this linear system as one that is obtained from a simplicial set \(X\) as depicted in Figure (1b) and a cocycle \(\gamma\) defined by \(\sigma\mapsto b_{\sigma}\). We will write
\[[X]=[\sigma_{1}]+[\sigma_{2}]+[\sigma_{3}]-([\sigma_{4}]+[\sigma_{5}]+[\sigma_ {6}])\]
for the \(2\)-chain. We evaluate \(\gamma\) on \([X]\) as follows: \(\gamma[X]=\gamma(\sigma_{1})+\gamma(\sigma_{2})+\gamma(\sigma_{3})-(\gamma(\sigma_{4})+\gamma(\sigma_{5})+\gamma(\sigma_{6}))\).
**Lemma 3.24**.: _For any two nondegenerate \(1\)-simplices \(x,y\) that do not belong to a common \(2\)-simplex the following relation holds in \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\):_
\[[e_{x},e_{y}]=e_{1}^{\gamma[X]}.\]
Proof.: Let \(\operatorname{Aut}(K_{3,3})\) denote the automorphism group of \(K_{3,3}\). It is well-known that this group is isomorphic to \((\Sigma_{3}\times\Sigma_{3})\rtimes\mathbb{Z}_{2}\)[26]. \(\operatorname{Aut}(K_{3,3})\) acts transitively on the set of pairs \(\{x,y\}\) that do not belong to a common \(\sigma\). This action carries over to the generators in the algebraic fundamental group. Therefore it suffices to consider \(x,y\) that live on the boundary of Figure 3. We have
\[\begin{aligned}e_{x}e_{y}e_{x}^{-1}e_{y}^{-1}&=(e_{z_{1}}^{-1}e_{t_{1}}e_{1}^{\gamma(\sigma_{1})})(e_{t_{2}}e_{z_{3}}^{-1}e_{1}^{\gamma(\sigma_{3})})(e_{s_{2}}e_{z_{3}}^{-1}e_{1}^{\gamma(\sigma_{6})})^{-1}(e_{z_{1}}^{-1}e_{s_{1}}e_{1}^{\gamma(\sigma_{4})})^{-1}\\ &=e_{z_{1}}^{-1}(e_{t_{1}}e_{t_{2}})(e_{s_{2}}^{-1}e_{s_{1}}^{-1})e_{z_{1}}e_{1}^{\gamma(\sigma_{1})+\gamma(\sigma_{3})-\gamma(\sigma_{4})-\gamma(\sigma_{6})}\\ &=e_{z_{1}}^{-1}e_{z_{2}}e_{z_{2}}^{-1}e_{z_{1}}e_{1}^{\gamma(\sigma_{1})+\gamma(\sigma_{2})+\gamma(\sigma_{3})-\gamma(\sigma_{4})-\gamma(\sigma_{5})-\gamma(\sigma_{6})}\\ &=e_{1}^{\gamma(\sigma_{1})+\gamma(\sigma_{2})+\gamma(\sigma_{3})-\gamma(\sigma_{4})-\gamma(\sigma_{5})-\gamma(\sigma_{6})}\\ &=e_{1}^{\gamma[X]}.\end{aligned}\]
Recall that under the identification \(\Gamma(A,b)\cong\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) we have \(e_{1}\mapsto J\).
**Proposition 3.25**.: _When \(d>1\) is an odd integer \(\pi_{1}(\mathbb{Z}_{d},X_{\gamma})\) is abelian._
Proof.: By Lemma 3.24 we have \(e_{x}e_{y}=J^{\gamma[X]}e_{y}e_{x}\). Then Proposition 3.22 implies that \(e_{x}e_{y}=e_{y}e_{x}\).
Let \(\gamma\) denote the \(2\)-cocycle (see Figure (4)) defined by
\[\gamma(\sigma_{i})=\left\{\begin{array}{ll}1&i=2\\ 0&\text{otherwise}.\end{array}\right.\]
The corresponding cohomology class is \([\gamma]=1\). For the next result we will use the identification \(X_{\gamma}\cong N(\mathbb{Z}_{2},D_{8}*D_{8})\).
**Proposition 3.26**.: _For \(d=2\) we have_
\[\pi_{1}(\mathbb{Z}_{2},X_{\gamma})=\left\{\begin{array}{ll}\mathbb{Z}_{2}^{ 5}&[\gamma]=0\\ D_{8}*D_{8}&[\gamma]=1.\end{array}\right.\]
Proof.: When \([\gamma]=0\) the result follows from Lemma 3.24. Let \([\gamma]=1\). We can assume that the cocycle representing this class is as given in Figure (4). We rely on the fact that the isomorphism type of the commutative fundamental group only depends on the cohomology class; see Theorem 3.11. We have a diagram
where \(K\) denotes the kernel of the homomorphism \(\pi_{1}(\mathbb{Z}_{2},X_{\gamma})\to D_{8}*D_{8}\). Lemma 3.24 implies that \(\pi_{1}(\mathbb{Z}_{2},X)\) is abelian. From the presentation we see that this group is generated by \(e_{x_{1}},e_{x_{2}},e_{z_{1}},e_{z_{2}}\). On the other hand, it surjects onto \(\mathbb{Z}_{2}^{4}\). Therefore \(K=1\).
According to Proposition 2.15, simplicial distributions in the case \([\gamma]=1\) will be contextual. The polytope of simplicial distributions for \(d=2\) is described in [1].
Figure 4: The \(2\)-cocycle \(\gamma\) on the triangulated torus of Figure (1b), taking the value \(1\) on the triangle \(\sigma_{2}\) and \(0\) on the remaining triangles.
## 4 Linear systems from groups
In this section we will study linear systems obtained from groups. We begin with a central extension as in (14). That is, \(G\) is a group with a central element \(J_{G}\) of order \(d\) and \(\bar{G}\) is the quotient group by \(\langle J_{G}\rangle\). Let
\[\phi:\bar{G}\to G \tag{19}\]
be a set-theoretic section of the quotient homomorphism \(\pi:G\to\bar{G}\) that preserves the identity element, i.e., \(\phi(1_{\bar{G}})=1_{G}\). Given such a section we can construct a \(2\)-cocycle \(\gamma_{\phi}:\bar{G}^{2}\to\mathbb{Z}_{d}\) determined by
\[J_{G}^{\gamma_{\phi}(g,h)}=\phi(g)\phi(h)\phi(gh)^{-1}.\]
It is well-known that the class of the central extension is given by the cohomology class \([\gamma_{\phi}]\in H^{2}(\bar{G})\); see [20, Chapter 6]. Consider the map of fibrations
\[\begin{array}{ccccc}N\mathbb{Z}_{d}&\longrightarrow&N(\mathbb{Z}_{d},G)&\longrightarrow&\bar{N}(\mathbb{Z}_{d},G)\\ \big\|&&\big\downarrow&&\big\downarrow\\ N\mathbb{Z}_{d}&\longrightarrow&NG&\longrightarrow&N\bar{G}\end{array}\tag{20}\]
The pull-back of the cochain \(\gamma_{\phi}\), which we denote by \(\gamma_{\phi,d}\), represents the cohomology class \(\gamma_{G,d}\). Explicitly, we have \(\gamma_{\phi,d}=\gamma_{\phi}\circ i_{2}\), where \(i_{2}\) denotes the inclusion \(\bar{N}(\mathbb{Z}_{d},G)_{2}\subset(N\bar{G})_{2}\) in degree \(2\).
**Definition 4.1**.: The linear system over \(\mathbb{Z}_{d}\) associated to \((G,J_{G})\) is the linear system \((A_{X},b_{\gamma})\) where
\[X=\bar{N}(\mathbb{Z}_{d},G)\;\;\text{and}\;\;\gamma=\gamma_{\phi,d}.\]
We denote this linear system simply by \((A_{G},b_{\phi})\).
The explicit cocycle \(\gamma_{\phi,d}\) can be used to give a twisted product description of \(N(\mathbb{Z}_{d},G)\).
**Proposition 4.2**.: _Let \(X=\bar{N}(\mathbb{Z}_{d},G)\) and \(\gamma=\gamma_{\phi,d}\). There is an isomorphism of simplicial sets_
\[X_{\gamma}\to N(\mathbb{Z}_{d},G)\]
Proof.: In degree \(n\) this map is given by
\[(a_{1},\cdots,a_{n})\times(\bar{g}_{1},\cdots,\bar{g}_{n})\mapsto(J^{a_{1}} \phi(\bar{g}_{1}),\cdots,J^{a_{n}}\phi(\bar{g}_{n})).\]
The simplicial structure of the twisted product is such that this assignment gives a well-defined simplicial set map.
### Homotopical methods
In this section we assume that \(G\) is generated by its \(d\)-torsion elements, i.e., \(G=\langle G_{(d)}\rangle\). For details on the homotopical constructions in this section we refer to [1].
**Definition 4.3**.: Let \(E(\mathbb{Z}_{d},G)\) denote the simplicial set whose set of \(n\)-simplices is \(G\times N(\mathbb{Z}_{d},G)_{n}\) and simplicial structure maps are given by
\[d_{i}(g_{0},g_{1},\cdots,g_{n})=\left\{\begin{array}{cc}(g_{0},g_{1},\cdots,g_{i}g_{i+1},\cdots,g_{n})&0\leq i<n\\ (g_{0},g_{1},\cdots,g_{n-1})&i=n\end{array}\right.\]
and
\[s_{j}(g_{0},g_{1},\cdots,g_{n})=(g_{0},g_{1},\cdots,g_{j},1,g_{j+1},\cdots,g_{n}).\]
There is a simplicial set map \(p_{d}:E(\mathbb{Z}_{d},G)\to N(\mathbb{Z}_{d},G)\) defined in degree \(n\) by projecting onto the last \(n\) coordinates. Note that \(E(\mathbb{Z}_{d},G)\) is a simplicial subset of \(EG\), the total space of the universal principal \(G\)-bundle \(EG\to BG\), whose \(n\)-simplices are given by \(G^{n+1}\) together with a similar simplicial structure. There is a pull-back diagram of principal \(G\)-bundles
Note that
\[E(\mathbb{Z}_{d},G)\to N(\mathbb{Z}_{d},G)\to NG \tag{21}\]
is a fibration sequence. Then the two horizontal maps in the lower part of Diagram (20) can be extended to fibrations:
(22)
Let \(|\cdot|\) denote the geometric realization functor. Applying \(\pi_{1}(|\cdot|)\) to Diagram (22) we obtain a commutative diagram of groups
(23)
where
\[K(\mathbb{Z}_{d},G)=\pi_{1}|E(\mathbb{Z}_{d},G)|,\ \ \Gamma(\mathbb{Z}_{d},G)=\pi_{1}N(\mathbb{Z}_{d},G)\ \ \text{and}\ \ \bar{\Gamma}(\mathbb{Z}_{d},G)=\pi_{1}\bar{N}(\mathbb{Z}_{d},G).\]
The group \(K(\mathbb{Z}_{d},G)\) can be computed by choosing a maximal tree in the 1-skeleton; see [1, Lemma 4] for a similar computation for \(\pi_{1}|E(\mathbb{Z},G)|\). The 1-skeleton is given by a graph whose vertices are the elements of \(G\) and edges consist of arrows \(g\xrightarrow{g^{-1}h}h\) where \((g^{-1}h)^{d}=1\). We choose a maximal tree \(T\) as follows: For each vertex \(g\) let us choose a sequence of \(d\)-torsion elements \(g_{1},\cdots,g_{n(g)}\) satisfying \(g=g_{1}\cdots g_{n(g)}.\) We consider the path from the base point \(1\in G\) to the vertex \(g\) given by the sequence of arrows
\[p_{g}:1\xrightarrow{g_{1}}g_{1}\xrightarrow{g_{2}}g_{1}g_{2}\xrightarrow{g_ {3}}\cdots\xrightarrow{g_{n(g)}}g_{1}\cdots g_{n(g)}=g. \tag{24}\]
If \(g\) is \(d\)-torsion then we take \(n(g)=1\) and \(p_{g}:1\xrightarrow{g}g\).
**Proposition 4.4**.: _Let \(G\) be a group generated by \(d\)-torsion elements. The group \(K(\mathbb{Z}_{d},G)\) is generated by \(e_{g,h}\), where \((g,h)\in G^{2}\) such that \(g^{-1}h\in G_{(d)}\), subject to_
1. \(e_{g,1}=e_{1,g}=1\) _for_ \(g\in G_{(d)}\)
2. _for_ \((g,h,k)\in G^{3}\) _such that_ \((g^{-1}h,h^{-1}k)\in N(\mathbb{Z}_{d},G)_{2}\)_, the relation_ \[e_{g,h}e_{h,k}e_{g,k}^{-1}=1.\]
_The element \(e_{g,h}\) maps to the product \(e_{g_{1}}\cdots e_{g_{n(g)}}e_{g^{-1}h}e_{h_{n(h)}}^{-1}\cdots e_{h_{1}}^{-1}\) in \(\pi_{1}N(\mathbb{Z}_{d},G)\)._
Proof.: The maximal tree \(T\) is the union of the paths \(p_{g}\) for each \(g\in G\). The generators of the fundamental group are given by the loops based at the identity element. For each \((g,h)\in G^{2}\) such that \(g^{-1}h\in G_{(d)}\) the generator is given by the loop
\[e_{g,h}:1\xrightarrow{p_{g}}g\xrightarrow{g^{-1}h}h\xrightarrow{p_{h}^{-1}}1\]
The relations come from the degenerate edges and the \(2\)-simplices. The former are of the form
\[1=e_{s_{0}(g)}=e_{1,g}\ \ \ \text{and}\ \ \ 1=e_{s_{1}(g)}=e_{g,1}.\]
For each triple \((g,h,k)\in G^{3}\) such that \((g^{-1}h,h^{-1}k)\in N(\mathbb{Z}_{d},G)_{2}\), which represents the \(2\)-simplex \(\sigma=(g,g^{-1}h,h^{-1}k)\), we have
\[1=e_{d_{2}\sigma}e_{d_{0}\sigma}e_{d_{1}\sigma}^{-1}=e_{g,h}e_{h,k}e_{g,k}^{- 1}.\]
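As a toy illustration of this presentation (a bookkeeping sketch only, not a computation of \(K\) itself), the following hypothetical Python enumeration lists the generators and relations for \(G=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) with \(d=2\), where every element is \(2\)-torsion and every pair of \(2\)-torsion elements is assumed to span a \(2\)-simplex.

```python
from itertools import product

# G = Z_2 x Z_2 written additively: each element is its own inverse,
# so g^{-1}h = g + h and G_(2) = G.
G = list(product(range(2), repeat=2))
e = (0, 0)

# generators e_{g,h} with g^{-1}h in G_(2) (automatic here) ...
gens = [(g, h) for g in G for h in G]
# ... of which the degenerate ones are forced to be trivial
trivial = [(g, h) for (g, h) in gens if g == e or h == e]
# one triangle relation e_{g,h} e_{h,k} = e_{g,k} per 2-simplex (g, h, k)
triangles = [(g, h, k) for g in G for h in G for k in G]

print(f"{len(gens)} generators, {len(trivial)} forced trivial, "
      f"{len(triangles)} triangle relations")
```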
### Finite \(p\)-groups
Next we focus on the case where \(d\) is given by a prime \(p\). For a group \(H\), we will write \(H_{\text{ab}}\) and \(H_{\text{el}}\) for the largest abelian and elementary abelian \(p\)-group quotients, respectively.
**Proposition 4.5**.: _Consider the fibration \(N\mathbb{Z}_{p}\xrightarrow{i}X_{\gamma}\to X\). We have \([\gamma]\neq 0\) if and only if the image of \(\pi_{1}N\mathbb{Z}_{p}\to\pi_{1}X_{\gamma}\) is contained in the kernel of the canonical homomorphism \(\pi_{1}X_{\gamma}\to(\pi_{1}X_{\gamma})_{el}\)._
Proof.: Consider the five-term exact sequence of the Serre spectral sequence of the fibration:
\[0\to H^{1}(X)\to H^{1}(X_{\gamma})\xrightarrow{i^{*}}H^{1}(N\mathbb{Z}_{p}) \xrightarrow{\delta}H^{2}(X)\to H^{2}(X_{\gamma})\]
We have \(\delta(1)=[\gamma]\), hence by exactness \(i^{*}\) is the zero map if and only if \([\gamma]\neq 0\). The vanishing of \(i^{*}\) is equivalent to the dual map \(i_{*}:H_{1}(N\mathbb{Z}_{p})\to H_{1}(X_{\gamma})\) being zero. The result follows from the observation that for a simplicial set \(Y\) we have \(H_{1}(Y;\mathbb{Z}_{p})=(\pi_{1}Y)_{\text{el}}\).
**Proposition 4.6**.: _Assume that \([\gamma]\neq 0\) and \(K(\mathbb{Z}_{p},G)=1\). Then \(\gamma_{G,p}\neq 0\)._
Proof.: This follows from Proposition 4.5 and the following diagram
Note that only the horizontal sequence of maps is exact. The composite of the right-hand vertical maps is zero. By commutativity of the top square the image of \(\mathbb{Z}_{p}\to\Gamma(\mathbb{Z}_{p},G)\) lands in the kernel of \(\Gamma(\mathbb{Z}_{p},G)\to\Gamma(\mathbb{Z}_{p},G)_{\text{el}}\). Hence, by Proposition 4.5, the class \(\gamma_{G,p}\) classifying the fibration \(N\mathbb{Z}_{p}\to N(\mathbb{Z}_{p},G)\to\bar{N}(\mathbb{Z}_{p},G)\) is nonzero.
**Example 4.7**.: Let \((A,b)\) denote the \(K_{3,3}\) linear system studied in Section 3.5 and \(G\) denote the central product \(D_{8}*D_{8}\). We will write \(X\) for the underlying simplicial set representing the torus depicted in Figure (3). We identify \(\Gamma(A,b)\) with \(\pi_{1}(X_{\gamma})\) (Theorem 3.11). When \(d=2\) Proposition 3.26 implies that if \([\gamma]=1\) then \(\pi_{1}(X_{\gamma})\cong G\). In this case using Proposition 4.2 we can identify \(X_{\gamma}\) with \(N(\mathbb{Z}_{2},G)\). Then we conclude that
\[\Gamma(\mathbb{Z}_{2},G)\cong G\ \ \ \text{and}\ \ \ K(\mathbb{Z}_{2},G)=1. \tag{25}\]
By Proposition 4.6 we have \(\gamma_{G,2}\neq 0\), and by Corollary 3.14 the \(K_{3,3}\) linear system does not admit a solution in \(\mathbb{Z}_{2}\). On the other hand, Equation (25) implies that the linear system admits a solution in \(G\). In the remaining cases, (1) \(d=2\) and \([\gamma]=0\), and (2) \(d\) odd, the group \(\Gamma(A,b)\) is abelian, and by Corollary 2.5 the linear system admits a solution in \(\mathbb{Z}_{d}\).
**Lemma 4.8**.: _[Car] Let \(p\) be an odd prime and \(G\) be a nonabelian \(p\)-group generated by \(p\)-torsion elements. Then there exist \(g,h\in G_{(p)}\) such that \([g,h]\neq 1\) and \(g^{-1}h\in G_{(p)}\)._
Proof.: Proof is given in Section C.
**Corollary 4.9**.: _Let \(p\) be an odd prime and \(G\) be a nonabelian \(p\)-group generated by \(p\)-torsion elements. Then \(K(\mathbb{Z}_{p},G)\) is nontrivial._
Proof.: Let \(g,h\) be \(p\)-torsion elements satisfying \([g,h]\neq 1\) and \(g^{-1}h\) is \(p\)-torsion, obtained by Lemma 4.8. Then Proposition 4.4 implies that \(e_{g,h}\) is a non-trivial element in \(K(\mathbb{Z}_{p},G)\).
Assume that the element \(e_{g,h}\) constructed in Corollary 4.9 is central in \(\Gamma(\mathbb{Z}_{p},G)\). Consider the diagram of group extensions
(26)
If the generator \(1\in H^{1}(\langle e_{g,h}\rangle)\cong\mathbb{Z}_{p}\) hits the extension class \(\gamma_{G}\) then \(\gamma_{G,p}=0\). As we will see in the next section this is indeed the case for extraspecial \(p\)-groups when \(p>2\). Then using Corollary 3.14, we see that this observation provides some evidence for Conjecture 3.15 when \(d\) is an odd prime.
### Extraspecial \(p\)-groups
A \(p\)-group \(E\) is called extraspecial if \(Z(E)=[E,E]=\Phi(E)\cong\mathbb{Z}_{p}\); see [1]. The order of an extraspecial \(p\)-group is given by \(p^{2n+1}\) where \(n\geq 1\). When \(n=1\) there are two types \(E_{1}^{+}\) and \(E_{1}^{-}\). For \(p=2\), \(E_{1}^{+}\) is the dihedral group \(D_{8}\) and \(E_{1}^{-}\) is the quaternion group \(Q_{8}\). For \(p>2\), \(E_{1}^{+}\) is the group of triangular \(3\times 3\) matrices over \(\mathbb{Z}_{p}\) with \(1\)'s on the diagonal, and \(E_{1}^{-}\) is the semidirect product of a cyclic group of order \(p^{2}\) by a cyclic group of order \(p\) acting non-trivially. In general, for \(n\geq 1\) we have
\[E_{n}^{+}=\underbrace{E_{1}^{+}*\cdots*E_{1}^{+}}_{n}\ \text{and}\ E_{n}^{-}= \underbrace{E_{1}^{+}*\cdots*E_{1}^{+}}_{n-1}*E_{1}^{-}\]
where \(G*H\) denotes the central product. Let \(J_{E}\) denote a generator of \(Z(E)\). An extraspecial \(p\)-group fits in a central extension
\[1\to\langle J_{E}\rangle\to E_{n}\to\mathbb{Z}_{p}^{2n}\to 0.\]
Finally, for \(p=2\) we will also consider almost extraspecial \(2\)-groups, which are defined by \(E_{n}^{0}=E_{n}^{+}*\mathbb{Z}_{4}\). In this case the central product is with respect to the cyclic subgroup of order \(2\) in \(\mathbb{Z}_{4}\).
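As a quick sanity check on these definitions (a sketch, independent of the paper's arguments), the following Python lines verify that for \(p=3\) the group \(E_{1}^{+}\) of upper unitriangular \(3\times 3\) matrices over \(\mathbb{Z}_{3}\) satisfies \(Z(E)=[E,E]\cong\mathbb{Z}_{3}\) and has exponent \(3\), so it is in particular generated by its \(3\)-torsion elements.

```python
from itertools import product

p = 3  # an odd prime

def heis(a, b, c):
    # encode the unitriangular matrix [[1, a, c], [0, 1, b], [0, 0, 1]] over Z_p
    return (a % p, b % p, c % p)

def mul(x, y):  # matrix multiplication in these coordinates
    return heis(x[0] + y[0], x[1] + y[1], x[2] + y[2] + x[0] * y[1])

def inv(x):
    return heis(-x[0], -x[1], x[0] * x[1] - x[2])

E = [heis(a, b, c) for a, b, c in product(range(p), repeat=3)]
identity = heis(0, 0, 0)

center = {g for g in E if all(mul(g, h) == mul(h, g) for h in E)}
derived = {mul(mul(g, h), mul(inv(g), inv(h))) for g in E for h in E}
assert center == derived and len(center) == p         # Z(E) = [E, E] = Z_p
assert all(mul(mul(g, g), g) == identity for g in E)  # every element is 3-torsion
print("E_1^+ for p = 3 is extraspecial of order", len(E))
```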
When \(p>2\), the group \(E_{n}^{-}\) is not generated by \(p\)-torsion elements; hence it will be omitted from our consideration. In the next result we compute \(\pi_{1}N(\mathbb{Z}_{p},E)\) for the remaining types of extraspecial \(p\)-groups. We will need the following construction: for a group \(G\) let us write \(\hat{G}\) for the subgroup of \(G\times G\) generated by the elements \((g,g^{-1})\), \(g\in G\).
**Theorem 4.10**.: _For an extraspecial \(p\)-group \(E\) the group \(\Gamma(\mathbb{Z}_{p},E)\) is described as follows:_
1. _For_ \(p=2\)_,_ \(\Gamma(\mathbb{Z}_{2},E)\cong E\) _if_ \(E=E_{n}^{+}\) _and_ \(n\geq 2\)_, or_ \(E=E_{n}^{-}\) _and_ \(n\geq 3\)_._
2. _For an almost extraspecial_ \(2\)_-group,_ \(\Gamma(\mathbb{Z}_{2},E_{n}^{0})\cong\hat{E}_{n}^{0}\) _if_ \(n\geq 2\)_._
3. _For_ \(p>2\)_,_ \(\Gamma(\mathbb{Z}_{p},E_{n}^{+})\cong\hat{E}_{n}^{+}\)_._
Proof.: Let us begin with the case \(p=2\). Observe that the linear system associated to \((E_{2}^{+},J_{E})\) can be identified with the odd parity (\([\gamma]=1\)) linear system associated to \(K_{3,3}\). More precisely, \(X_{\gamma}\cong N(\mathbb{Z}_{2},E_{2}^{+})\). Therefore Proposition 3.26 implies that the statement holds for \(E_{2}^{+}\). To generalize this to \(n\geq 2\), the argument is similar. Consider the diagram of groups
The key fact we need is the following: Given two pairs of \(p\)-torsion elements \((g_{1},g_{2})\) and \((h_{1},h_{2})\) of \(E\) satisfying \([g_{1},g_{2}]=[h_{1},h_{2}]\) there exists an automorphism \(\phi\) of \(E\) fixing the central element \(J_{E}\) such that \(\phi(g_{1})=h_{1}\) and \(\phi(g_{2})=h_{2}\). This is a well-known fact and follows from Witt's lemma [1, Chapter 7]. Using the action of the automorphism group of \(E\) on \(\Gamma(\mathbb{Z}_{2},E)\) and Lemma 3.24 we see that any pair of \(2\)-torsion elements satisfying \([g_{1},g_{2}]=J_{E}\) will satisfy \([e_{g_{1}},e_{g_{2}}]=J\). Then \(\bar{\Gamma}(\mathbb{Z}_{2},E)\) turns out to be isomorphic to \(\mathbb{Z}_{2}^{2n}\). This implies that \(K(\mathbb{Z}_{2},E)=1\). A similar argument works for \(E=E_{n}^{-}\) when \(n\geq 3\).
For the almost extraspecial group, the simplicial set \(N(\mathbb{Z}_{2},E_{n}^{0})\) can be identified with \(N(\mathbb{Z}_{2},E_{n}^{+})\). This follows from the basic fact that there is a bijection between the partially ordered set (poset) of the elementary abelian subgroups of \(E_{n}^{0}\) and the poset of abelian subgroups of \(E_{n}^{+}\); see [1, Chapter 8]. The fundamental group of the latter simplicial set is computed in [1].
For \(p>2\) note that \(N(\mathbb{Z}_{p},E)\) coincides with \(N(\mathbb{Z},E)\) since every element of \(E\) is \(p\)-torsion. The fundamental group of \(N(\mathbb{Z},E)\) for \(n\geq 2\) is isomorphic to \(\hat{E}\) as shown in [1].
**Corollary 4.11**.: _If a linear system admits a solution in \(E_{n}^{+}\) for \(p>2\) then it admits a solution in \(\mathbb{Z}_{p}\)._
Proof.: By Theorem 4.10 we have
\[\bar{\Gamma}(\mathbb{Z}_{p},E)\cong\hat{E}/\langle(J_{E},J_{E}^{-1})\rangle.\]
This group turns out to be isomorphic to \(E_{n}^{+}\). Therefore in the Diagram (26) the extension class \(\gamma_{E}\) is hit by the transgression map. Thus \(\gamma_{E,p}=0\).
### Higher odd prime torsion groups
In this section we give more examples in support of Conjecture 3.15. For details see [1]. Let \(\mathcal{H}=\mathbb{C}\mathbb{Z}_{p}\) denote the vector space with basis \(\{b_{a}:\,a\in\mathbb{Z}_{p}\}\). For \(m\geq 1\), we define a subgroup of the special unitary group \(\mathrm{SU}(\mathcal{H})\):
\[E_{1}(p^{m})=\langle T_{(p^{m})},X\rangle\]
where \(T\) is the maximal torus given by the diagonal matrices in \(\mathrm{SU}(\mathcal{H})\) and \(X\) is the permutation matrix defined by \(Xb_{a}=b_{a+1}\). We define
\[E_{n}(p^{m})=\underbrace{E_{1}(p^{m})\otimes\cdots\otimes E_{1}(p^{m})}_{n}.\]
Note that for \(m=1\) we obtain \(E_{n}^{+}\).
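For concreteness, here is a small numpy sketch (illustrative only; the exponent vector is a hypothetical choice) of the two kinds of generators entering \(E_{1}(p^{m})\): the shift \(X\) and a diagonal element of the \(p^{m}\)-torsion torus \(T_{(p^{m})}\). The final assertion reflects that conjugating a torus element by \(X\) stays in the torus.

```python
import numpy as np

p, m = 3, 2
w = np.exp(2j * np.pi / p**m)        # primitive p^m-th root of unity

# shift matrix: X b_a = b_{a+1 mod p}
X = np.roll(np.eye(p), 1, axis=0)

# diagonal element D(xi) with p^m-torsion entries and determinant 1
exps = np.array([1, 2, -3])          # hypothetical exponents summing to 0
D = np.diag(w ** exps)

assert np.allclose(np.linalg.matrix_power(X, p), np.eye(p))
assert np.allclose(np.linalg.matrix_power(D, p**m), np.eye(p))
assert np.isclose(np.linalg.det(D), 1)
# the commutator [D, X] is again diagonal, i.e., it lies in the torus
comm = D @ X @ np.linalg.inv(D) @ np.linalg.inv(X)
assert np.allclose(comm, np.diag(np.diag(comm)))
print("generators of E_1(p^m) built for (p, m) =", (p, m))
```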
An element of \(T\) can be described by a function \(\xi:\mathbb{Z}_{p}\to U(1)\) such that \(\prod_{q\in\mathbb{Z}_{p}}\xi(q)=1\). The corresponding diagonal matrix will be denoted by \(D(\xi)\). The function \(\xi\) can be expressed as
\[\xi(q)=e^{\sum_{j=1}^{m}2\pi if_{j}(q)/p^{j}}\]
where \(f_{j}(q)=\sum_{b\in\mathbb{Z}_{p}}\nu_{j,b}q^{b}\) and \(\nu_{j,b}\in\mathbb{Z}_{p}\). We define a function
\[\phi:E_{n}(p^{m})\to E_{n}(p) \tag{27}\]
as follows
\[\phi(M_{1}\otimes\cdots\otimes M_{n})=\phi_{1}(M_{1})\otimes\cdots\otimes\phi _{1}(M_{n})\]
where, for \(M=D(\xi)X^{b}\) and with \(\omega=e^{2\pi i/p}\) a primitive \(p\)-th root of unity, we have
\[\phi_{1}(M)=\left\{\begin{array}{ll}D(\omega^{\nu_{1,0}+\nu_{2,0}+\nu_{1,1} q})&b=0\\ D(\omega^{\nu_{1,0}+\nu_{1,1}q})X&b=1\\ \phi_{1}(M^{b^{-1}})^{b}&1<b<p.\end{array}\right.\]
**Theorem 4.12**.: _Let \(p\) be an odd prime. The function \(\phi\) defined in Equation (27) is a morphism in \(\mathbf{Grp}_{(p)}\) that splits the inclusion \(E_{n}(p)\to E_{n}(p^{m})\). As a consequence the induced homomorphism_
\[\Gamma(\mathbb{Z}_{p},E_{n}(p))\to\Gamma(\mathbb{Z}_{p},E_{n}(p^{m}))\]
_splits in \(\mathbf{Grp}\)._
Proof.: Proof is given in [10, Theorem 2].
**Corollary 4.13**.: _If a linear system admits a solution in \(E_{n}(p^{m})\) then it admits a solution in \(\mathbb{Z}_{p}\)._
Proof.: Follows from Theorem 4.12, Corollary 4.11 and Corollary 2.5.
|
2302.12460 | Boundary output feedback stabilisation for 2-D and 3-D parabolic
equations | The present paper addresses the topic of boundary output feedback
stabilization of parabolic-type equations, governed by linear differential
operators which can be diagonalized by the introduction of adequate weighting
functions (by means of the Sturm-Liouville method), and which evolve in bounded
spatial domains that are subsets of $\mathbb{R}^d,\ d=1,2,3$.
Combining ideas inspired by \cite{lhachemi2022finite} for the boundary output
feedback control of 1-D parabolic PDEs and \cite{munteanu2019boundary} for the
state feedback control of multi-D parabolic PDEs, we report in this paper an
output feedback boundary stabilizing control with internal Dirichlet
measurements designed by means of a finite-dimensional observer. The reported
control design procedure is shown to be systematic for 2-D and 3-D parabolic
equations. | Hugo Lhachemi, Ionut Munteanu, Christophe Prieur | 2023-02-24T05:20:41Z | http://arxiv.org/abs/2302.12460v1 | # Boundary output feedback stabilisation for 2-D and 3-D parabolic equations
###### Abstract
The present paper addresses the topic of boundary output feedback stabilization of parabolic-type equations, governed by linear differential operators which can be diagonalized by the introduction of adequate weighting functions (by means of the Sturm-Liouville method), and which evolve in bounded spatial domains that are subsets of \(\mathbb{R}^{d},\ d=1,2,3\). Combining ideas inspired by [19] for the boundary output feedback control of 1-D parabolic PDEs and [22] for the state feedback control of multi-D parabolic PDEs, we report in this paper an output feedback boundary stabilizing control with internal Dirichlet measurements designed by means of a finite-dimensional observer. The reported control design procedure is shown to be systematic for 2-D and 3-D parabolic equations.
**Keywords:** second order parabolic equations, exponential asymptotic stabilization, observer design, eigenvalues and eigenfunctions, spectral decomposition, proportional feedback control
**MSC2020:** 35K10, 93D15, 93B53, 93B52.
## 1 Introduction
There is now a large amount of literature dealing with boundary control of dynamical infinite-dimensional systems. In particular, various techniques have been developed for the design of control strategies for 1-D partial differential equations (PDEs). These include: Lyapunov methods [5], backstepping design [16], linear quadratic control methods [8], characteristic analysis [6], among other approaches. Even though some of these approaches, such as LQR methods, have been generalized to multi-D PDEs (see e.g., [29]), constructive methods for multi-D PDEs remain much less developed. Among the contributions, one can find in [4] the design of simple proportional-type boundary stabilizing controllers for multi-D parabolic-type equations under a restrictive assumption concerning the linear independence of the traces
of the normal derivatives of the eigenfunctions. Extensions of this approach while removing the aforementioned restrictive assumption have been reported in the recent textbook [22]. Boundary control and observation of heat equations on multi-dimensional domains have also been addressed in [9] under the restrictive assumption that all the unstable modes are simple.
In this paper, we study the problem of output feedback stabilization of 2-D and 3-D parabolic PDEs using spectral reduction methods. Spectral decomposition techniques consist first of the projection of the PDE into a finite-dimensional unstable system plus a residual stable infinite-dimensional system. In this framework, the control strategy is designed on the unstable finite-dimensional part of the plant. Although simple in their basic concepts, spectral-based control design methods are challenging because one must ensure that the control strategy does not introduce any spillover effect, that is: the control strategy originally designed on a finite-dimensional approximation of the PDE plant may actually fail to stabilize the infinite-dimensional system due to the interconnection of the controller with the infinite-dimensional residual dynamics (see, e.g., [2, 1]). This type of approach goes back to the 1960s in the works of Russell [24] and to the 1980s, in particular in the work [27]. These ideas were further developed later in various directions: see e.g. [7, 21, 23] for the one-dimensional case and e.g. [3, 22, 28, 30] for the multi-dimensional case.
Spectral reduction-based methods are very attractive in practice because they allow the design of finite-dimensional control strategies for parabolic PDEs; see for instance the seminal works [27, 25] in the context of output feedback. They are particularly relevant because they allow the computation of reduced order models, making numerical computations and practical implementations much easier to handle compared to infinite-dimensional control and observation strategies [15]. Adopting a spectral reduction-based representation of parabolic PDEs [7, 24] and leveraging the pioneering works [27, 25], augmented with Linear Matrix Inequalities (LMIs) procedures [14], stabilization problems for 1-D parabolic PDEs have been solved in a systematic manner for various boundary controls and boundary outputs [19], including the possibility to handle systems of 1-D parabolic PDEs [12]. In this paper, we further develop and generalize these methods to the case of multi-D parabolic PDEs.
The aim of this paper is to report a constructive control design procedure for the output feedback boundary stabilization of multi-D parabolic PDEs that is systematic in the 2-D and 3-D cases. Extending procedures reported in [19] for 1-D PDEs and combining this latter work with analyses on the eigenspaces inspired by [22], we succeed in designing a finite-dimensional observer-based output feedback control strategy for multi-D parabolic-type equations. More precisely, we consider in this work the following boundary-controlled parabolic-type equation evolving in \(\mathcal{O}\), an open and connected subset of \(\mathbb{R}^{d}\) with \(d\in\{1,2,3\}\), with smooth boundary \(\partial\mathcal{O}\) split into two disjoint parts \(\partial\mathcal{O}=\Gamma_{1}\cup\Gamma_{2}\), such that \(\Gamma_{1}\) has non-zero Lebesgue measure. Then, the system is described by
\[\partial_{t}z(x,t)+\sum_{i,j=1}^{d}a_{ij}(x)\partial_{ij}z(x,t)+\sum_{i=1}^{d}b_{i}(x)\partial_{i}z(x,t)+c(x)z(x,t)=0,\ t>0,\ x\in\mathcal{O}; \tag{1a}\]
\[z(x,t)=u(x,t),\ x\in\Gamma_{1},\quad z(x,t)=0,\ x\in\Gamma_{2},\ t>0; \tag{1b}\]
\[z(x,0)=z_{o}(x),\ x\in\mathcal{O}. \tag{1c}\]
The system output consists of \(M\in\mathbb{N}^{*}\) in-domain measurements:
\[y(t)=\left(z(\xi_{1},t),z(\xi_{2},t),\ldots,z(\xi_{M},t)\right), \tag{2}\]
with \(\xi_{i}\in\mathcal{O}\) that are pairwise distinct. As we shall see, the number of measurements \(M\) to be selected will depend on the maximum multiplicity of the unstable eigenvalues of the plant
to be stabilized by the control strategy. In this context, our only assumption in the present work is that the second order governing differential operator can be diagonalized in a suitable Riesz basis (this will be described in detail in Section 2 below). The main result of this paper can be informally stated as follows (see Theorem 11 for a precise statement):
_Theorem: Assuming that the governing linear operator of equation (1) is in divergence form (as written in (4) below), there exists an explicit output feedback controller (see (20) below with \(U\) provided by (33)) which exponentially stabilizes the reaction-diffusion equation (1) based on the sole internal measurement (2)._
The outline of the paper is as follows. Various notations, assumptions, and preliminary properties are summarized in Section 2. Then, the proposed control strategy along with the main stability result, which rigorously formalizes the above informal theorem, is reported in Section 3. Concluding remarks are formulated in Section 4.
## 2 Notation and preliminary properties
### Notation and basic definitions
Spaces \(\mathbb{R}^{n}\) are endowed with the Euclidean scalar product \(\left\langle\cdot,\cdot\right\rangle_{n}\) and norm \(\|\cdot\|_{n}\). The associated induced norms of matrices are denoted by \(\|\cdot\|\). \(L^{2}(\mathcal{O})\) stands for the space of square Lebesgue integrable functions on \(\mathcal{O}\) and is endowed with the inner product \(\langle f,g\rangle=\int_{\mathcal{O}}f(x)g(x)dx\) with associated norm denoted by \(\|\cdot\|_{L^{2}}.\) In addition, we denote by \(\langle\cdot,\cdot\rangle_{L^{2}(\Gamma_{1})}\) the scalar product in \(L^{2}(\Gamma_{1})\) with the Lebesgue surface measure. For an integer \(m\geq 1,\) the \(m-\)order Sobolev space is denoted by \(H^{m}(\mathcal{O})\) and is endowed with its usual norm denoted by \(\|\cdot\|_{H^{m}}.\) We set \(H_{0}^{1}(\mathcal{O})\) for the completion of the space of infinitely differentiable functions, which are nonzero only on a compact subset of \(\mathcal{O},\) with respect to the Sobolev norm \(\|\cdot\|_{H^{1}}.\) For a symmetric matrix \(P\in\mathbb{R}^{n\times n},\ P\succeq 0\) (resp. \(P\succ 0\)) means that \(P\) is positive semi-definite (resp. positive definite).
Let \(\left\{\varphi_{n}\right\},\ n\in\mathbb{N}^{*}\) be a sequence in a Hilbert space \((H,\left\langle\cdot,\cdot\right\rangle_{H},\|\cdot\|_{H}).\) It is called a Riesz basis if: (i) \(\overline{\text{span}\left\{\varphi_{n}\right\}}=H;\) and (ii) there exist constants \(0<c\leq C<\infty\) such that
\[c\sum_{n\geq 1}|\alpha_{n}|^{2}\leq\left\|\sum_{n\geq 1}\alpha_{n}\varphi_{n} \right\|_{H}^{2}\leq C\sum_{n\geq 1}|\alpha_{n}|^{2},\]
for all sequences of scalars \(\left\{\alpha_{n}\right\}_{n}\) so that \(\sum_{n\geq 1}|\alpha_{n}|^{2}<\infty.\)
For any given function \(\mu\in C(\mathcal{O}),\) we introduce the weighted Lebesgue space
\[L^{2}_{\mu}(\mathcal{O})=\left\{f:\mathcal{O}\to\mathbb{R}\text{ measurable }\,:\,\int_{\mathcal{O}}f^{2}(x)\mu(x)dx<\infty\right\}.\]
If there exist constants \(0<\mu_{m},\mu_{M}<\infty\) such that \(0<\mu_{m}\leq\mu(x)\leq\mu_{M}\) almost everywhere, then the two spaces \(L^{2}(\mathcal{O})\) and \(L^{2}_{\mu}(\mathcal{O})\) are both algebraically and topologically equivalent. This implies, in particular, that a Riesz basis in \(L^{2}(\mathcal{O})\) is also a Riesz basis in \(L^{2}_{\mu}(\mathcal{O})\) and vice versa.
### Differential operator in divergence form
Let us denote by \(\mathcal{A}:H^{2}(\mathcal{O})\to L^{2}(\mathcal{O})\) the second order differential operator:
\[\mathcal{A}f=\sum_{i,j=1}^{d}a_{ij}(x)\partial_{ij}f+\sum_{i=1}^{d}b_{i}(x) \partial_{i}f+c(x)f. \tag{3}\]
**Assumption 1**: _We assume that there exists a multiplier \(\mu\in C^{2}(\overline{\mathcal{O}})\), with \(0<\mu_{m}\leq\mu(x)\leq\mu_{M}\) for all \(x\in\mathcal{O}\) for some constants \(0<\mu_{m},\mu_{M}<\infty\), such that \(\mu\mathcal{A}\) can be rewritten in divergence form:_
\[\mu\mathcal{A}f=-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}(x)\partial_{i}f)+ \tilde{c}(x)f \tag{4}\]
_with \(C^{1}(\overline{\mathcal{O}})\)-smooth coefficients \(\tilde{a}_{i},\ \tilde{c}\), for which there exists constants \(0<\tilde{a}_{m}<\tilde{a}_{M}\) and \(\tilde{c}_{m}<\tilde{c}_{M}\) such that \(0<\tilde{a}_{m}\leq\tilde{a}_{i}(x)\leq\tilde{a}_{M}\) and \(\tilde{c}_{m}\leq\tilde{c}(x)\leq\tilde{c}_{M}\) for all \(x\in\overline{\mathcal{O}}\) and all \(1\leq i\leq d\)._
The previous assumption implies that \(-\mathcal{A}_{0}:=-\mathcal{A}|_{H^{2}(\mathcal{O})\cap H^{1}_{0}(\mathcal{O})}\) is the generator of a \(C_{0}\)-analytic semigroup in \(L^{2}(\mathcal{O})\). To show this, we apply the well-known Hille-Yosida theorem. First, \(\mathcal{D}(\mathcal{A}_{0})=H^{2}(\mathcal{O})\cap H^{1}_{0}(\mathcal{O})\) is dense in \(L^{2}(\mathcal{O})\). Then, for \(\lambda>0\), we consider the equation
\[(\lambda+\mathcal{A}_{0})f=g.\]
It follows, by scalarly multiplying this equation by \(\mu f\) that
\[\left\langle g,\mu f\right\rangle =\left\langle(\lambda+\mathcal{A}_{0})f,\mu f\right\rangle\] \[=\lambda\left\langle\mu f,f\right\rangle+\left\langle\mu\mathcal{ A}_{0}f,f\right\rangle\] \[\geq(\lambda\mu_{m}+\tilde{c}_{m})\|f\|_{L^{2}(\mathcal{O})}^{2} +\tilde{a}_{m}\|\nabla f\|_{L^{2}(\mathcal{O})}^{2}.\]
Hence, for \(\lambda\) large enough, we have \(\lambda\mu_{m}+\tilde{c}_{m}>0\) and
\[(\lambda\mu_{m}+\tilde{c}_{m})\|f\|_{L^{2}(\mathcal{O})}^{2}\leq\mu_{M}\|g\|_ {L^{2}(\mathcal{O})}\|f\|_{L^{2}(\mathcal{O})}\]
or, equivalently
\[\|(\lambda+\mathcal{A}_{0})^{-1}g\|_{L^{2}(\mathcal{O})}=\|f\|_{L^{2}( \mathcal{O})}\leq\frac{\mu_{M}}{\lambda\mu_{m}+\tilde{c}_{m}}\|g\|_{L^{2}( \mathcal{O})}.\]
Thus, \(-\mathcal{A}_{0}\) is the generator of a \(C_{0}\)-analytic semigroup \(\left\{e^{-t\mathcal{A}_{0}}:\ t\geq 0\right\}\) in \(L^{2}(\mathcal{O})\).
Besides this, the above inequalities, together with the compact embedding of \(H^{1}_{0}(\mathcal{O})\) in \(L^{2}(\mathcal{O})\), imply the compactness of the resolvent of \(-\mathcal{A}_{0}\). Therefore, \(-\mathcal{A}_{0}\) has a countable set of eigenvalues, which accumulates at infinity, and for which the corresponding eigenfunctions form a Riesz basis in \(L^{2}(\mathcal{O}).\) More exactly, in view of the divergence form (4), we consider the following weighted eigenvalue problem (\(\lambda\in\mathbb{C}\)):
\[-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}\partial_{i}\varphi)+\tilde{c}\varphi =\mu\lambda\varphi,\ x\in\mathcal{O};\ \varphi=0\ \text{on}\ \partial\mathcal{O}.\]
Owing to classical theory on spectral properties of elliptic self-adjoint operators (see, e.g. [11, Chapter 8]) we know that the above problem has a countable set of solutions formed by an increasing sequence of real eigenvalues \(\left\{\lambda_{n}\right\}_{n\in\mathbb{N}^{*}}\) which accumulate to infinity and with corresponding eigenfunctions \(\left\{\varphi_{n}\right\}_{n\in\mathbb{N}^{*}}\) that form an orthonormal basis in \(L^{2}_{\mu}(\mathcal{O}).\) By the above discussions, we know that \(\left\{\varphi_{n}\right\}_{n\in\mathbb{N}^{*}}\) forms a Riesz basis in \(L^{2}(\mathcal{O})\) as well (but is not necessarily orthonormal). Let us define \(\psi_{n}:=\mu\varphi_{n},\ n\geq 1.\) Then \(\left\{\varphi_{n}\right\}_{n\in\mathbb{N}^{*}}\) and \(\left\{\psi_{n}\right\}_{n\in\mathbb{N}^{*}}\) are bi-orthonormal in \(L^{2}(\mathcal{O}),\) i.e.,
\[\left\langle\varphi_{i},\psi_{j}\right\rangle=\delta_{i,j},\ i,j\geq 1,\]
where \(\delta_{i,j}\) is the Kronecker symbol. In particular, there exist constants \(c_{1},c_{2}>0\) such that
\[c_{1}\sum_{n\geq 1}\left\langle f,\psi_{n}\right\rangle^{2}\leq\|f\|_{L^{2}( \mathcal{O})}^{2}\leq c_{2}\sum_{n\geq 1}\left\langle f,\psi_{n}\right\rangle^{2}, \quad\forall f\in L^{2}(\mathcal{O}). \tag{5}\]
The following lemma, which is the key ingredient for the introduction of the Lyapunov functional in the proof of our main result stated in Theorem 11, provides a direct relation between the \(H^{1}_{0}\)-norm and the coefficients of projection onto the Riesz basis \(\left\{\varphi_{n}\right\}_{n}\).
**Lemma 2**: _Let us fix \(\nu\geq 0\) so that \(\tilde{c}_{m}+\nu\mu_{m}>0\). Then, there exist constants \(c_{3},c_{4}>0\) such that_
\[c_{3}\|f\|_{H^{1}_{0}(\mathcal{O})}^{2}\leq\sum_{n\geq 1}(\lambda_{n}+\nu) \left\langle f,\psi_{n}\right\rangle^{2}\leq c_{4}\|f\|_{H^{1}_{0}(\mathcal{O })}^{2}, \tag{6}\]
_for all \(f\in H^{2}(\mathcal{O})\cap H^{1}_{0}(\mathcal{O})\)._
**Proof.** Recall that, thanks to the Poincaré inequality, we have that \(\|\nabla\cdot\|_{L^{2}(\mathcal{O})}\) is an equivalent norm in \(H^{1}_{0}(\mathcal{O})\). In particular, there exists a constant \(\mathcal{C}>0\) such that
\[\|\nabla f\|_{L^{2}(\mathcal{O})}^{2}\geq\mathcal{C}\|f\|_{H^{1}_{0}(\mathcal{ O})}^{2},\ \forall f\in H^{1}_{0}(\mathcal{O}).\]
Next, we observe for any \(f\in H^{2}(\mathcal{O})\cap H^{1}_{0}(\mathcal{O})\) the following:
\[\left\langle(\mathcal{A}_{0}+\nu)f,f\right\rangle_{L^{2}_{\mu}( \mathcal{O})} =\int_{\mathcal{O}}\mu\left\{(\mathcal{A}_{0}f)f+\nu f^{2}\right\}dx\] \[=-\sum_{i=1}^{d}\int_{\mathcal{O}}\partial_{i}(\tilde{a}_{i} \partial_{i}f)fdx+\int_{\mathcal{O}}(\tilde{c}+\nu\mu)f^{2}dx\] \[=\sum_{i=1}^{d}\int_{\mathcal{O}}\tilde{a}_{i}(\partial_{i}f)^{2} dx+\int_{\mathcal{O}}(\tilde{c}+\nu\mu)f^{2}dx\]
where we have applied integration by parts. This shows that
\[\left\langle(\mathcal{A}_{0}+\nu)f,f\right\rangle_{L^{2}_{\mu}(\mathcal{O})} \geq(\tilde{c}_{m}+\nu\mu_{m})\|f\|_{L^{2}(\mathcal{O})}^{2},\]
and so \(\lambda_{n}>-\nu\) for all \(n\geq 1\); and that
\[\mathcal{C}\tilde{a}_{m}\|f\|_{H^{1}_{0}(\mathcal{O})}^{2}\leq\|(\mathcal{A}_ {0}+\nu)^{1/2}f\|_{L^{2}_{\mu}(\mathcal{O})}^{2}\leq\max(\tilde{a}_{M},\tilde{c }_{M}+\nu\mu_{M})\|f\|_{H^{1}_{0}(\mathcal{O})}^{2}.\]
Then it can be seen from (5) that
\[\sum_{n\geq 1}(\lambda_{n}+\nu)\left\langle f,\psi_{n}\right\rangle^ {2} \leq\frac{1}{c_{1}}\|(\mathcal{A}_{0}+\nu)^{1/2}f\|_{L^{2}(\mathcal{O })}^{2}\] \[\leq\frac{1}{c_{1}\mu_{m}}\|(\mathcal{A}_{0}+\nu)^{1/2}f\|_{L^{2} _{\mu}(\mathcal{O})}^{2}\] \[\leq\frac{\max(\tilde{a}_{M},\tilde{c}_{M}+\nu\mu_{M})}{c_{1}\mu_ {m}}\|f\|_{H^{1}_{0}(\mathcal{O})}^{2}\]
while
\[\sum_{n\geq 1}(\lambda_{n}+\nu)\left\langle f,\psi_{n}\right\rangle^{2} \geq\frac{1}{c_{2}}\|(\mathcal{A}_{0}+\nu)^{1/2}f\|_{L^{2}(\mathcal{O})}^{2}\] \[\geq\frac{1}{c_{2}\mu_{M}}\|(\mathcal{A}_{0}+\nu)^{1/2}f\|_{L^{2}_{\mu}(\mathcal{O})}^{2}\] \[\geq\frac{\mathcal{C}\tilde{a}_{m}}{c_{2}\mu_{M}}\|f\|_{H^{1}_{0}(\mathcal{O})}^{2}.\]
This concludes the proof of Lemma 2.
We fix the integer \(N_{0}\in\mathbb{N}^{*}\) such that \(\lambda_{j}\geq\lambda_{N_{0}+1}>0\) for all \(j\geq N_{0}+1\); and let \(N\in\mathbb{N}\) be large enough such that \(N\geq N_{0}\) and \(\lambda_{n}\geq 1,\ \forall n\geq N+1\). A key element in the application of the control strategy reported in this paper relies on the asymptotic behavior of the eigenfunctions \(\varphi_{n}\) evaluated at the measurement locations \(\xi_{i}\). This asymptotic behavior is assessed through the following lemma.
**Lemma 3**: _The following holds:_
\[\sum_{n\geq N+1}\frac{\varphi_{n}^{2}(\xi_{i})}{\lambda_{n}^{2}}<\infty,\qquad \forall i\in\{1,2,\ldots,M\}. \tag{7}\]
**Proof.** Considering \(\mathcal{O}\) as a manifold with the Riemannian metric \(\mu dx\), it follows that \(\mathcal{A}\) is a self-adjoint operator in the Lebesgue \(L^{2}\) space associated to this manifold. Hence, one may argue as in [13] (or as in [26] for the particular case of the Laplace operator) to deduce the existence of a constant \(C>0\) such that, for any \(\lambda\geq 1\), we have
\[\sum_{\sqrt{\lambda_{j}}\in[\lambda,\lambda+1)}|\varphi_{j}(\xi)|^{2}\leq C \lambda^{d-1},\ \forall\xi\in\overline{\mathcal{O}}. \tag{8}\]
Let \(0<\frac{1}{5-d}<\beta<1\), where we recall that \(d\in\{1,2,3\}\). We note that \([1,\infty)=\cup_{m\geq 1}[m^{\beta},m^{\beta}+1).\) Therefore, taking \(\lambda=m^{\beta}\) in (8), we infer for any \(\xi\in\overline{\mathcal{O}}\) that
\[\sum_{n\geq N+1}\frac{\varphi_{n}^{2}(\xi)}{\lambda_{n}^{2}} \leq\sum_{m=1}^{\infty}\left(\sum_{j\geq N+1,\,\sqrt{\lambda_{j} }\in[m^{\beta},m^{\beta}+1)}\frac{\varphi_{j}^{2}(\xi)}{\lambda_{j}^{2}} \right)\leq\sum_{m=1}^{\infty}\frac{1}{m^{4\beta}}\left(\sum_{j\geq N+1,\, \sqrt{\lambda_{j}}\in[m^{\beta},m^{\beta}+1)}\varphi_{j}^{2}(\xi)\right)\] \[\leq C\sum_{m=1}^{\infty}\frac{1}{m^{4\beta}}m^{(d-1)\beta}=C \sum_{m=1}^{\infty}\frac{1}{m^{(5-d)\beta}}<\infty,\]
since \(\beta(5-d)>1\).
**Remark 4**: _It is worth being noted that for dimensions \(d\geq 4\) the quantities \(\sum_{n\geq N+1}\frac{\varphi_{n}^{2}(\xi_{i})}{\lambda_{n}^{2}},\ i\in\left\{1,2,...,M\right\},\) might not be finite, in general. This is the key point that restricts the application of the proposed control strategy for multi-D equations with \(d\geq 4.\)_
For later purposes, we need a unique continuation property of the eigenfunctions of the operator \(\mathcal{A},\) as stated in the following lemma.
**Lemma 5**: _Let \(\varphi\not\equiv 0\) satisfy_
\[-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}(x)\partial_{i}\varphi)+\tilde{c}(x) \varphi-\mu(x)\lambda\varphi=0\ \mbox{in}\ \mathcal{O};\ \varphi=0\ \mbox{on}\ \partial\mathcal{O}.\]
_Then, \(\sum_{i=1}^{d}n_{i}(\cdot)\tilde{a}_{i}(\cdot)\partial_{i}\varphi(\cdot)\) is not identically zero on \(\Gamma_{1}\). Here, \(n_{i}\) are the components of the unit outward normal to the boundary of \(\mathcal{O}\)._
**Proof.** This property holds true because the principal part of the differential operator is uniformly elliptic (which, usually, is called the elliptic continuation principle). Let us assume by contradiction that \(\sum_{i=1}^{d}n_{i}(\cdot)\tilde{a}_{i}(\cdot)\partial_{i}\varphi(\cdot)\equiv 0\) on \(\Gamma_{1}.\) Choose some \(x_{0}\in\Gamma_{1},\) and choose coordinates \(x=(x^{\prime},x_{d})\) so that \(x_{0}=0\) and for some \(r>0\)
\[\mathcal{O}\cap B(0,r)=\left\{x\in B(0,r);\ x_{d}>g(x^{\prime})\right\},\]
where \(g:\mathbb{R}^{d-1}\rightarrow\mathbb{R}\) is a \(C^{\infty}-\)function. We extend the domain near \(x_{0}\) by choosing \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d-1})\) with \(\psi=0\) for \(\|x^{\prime}\|_{d-1}\geq r/2\) and \(\psi=1\) for \(\|x^{\prime}\|_{d-1}\leq r/4,\) and by letting
\[\mathcal{O}^{*}=\mathcal{O}\cup\left\{x\in B(0,r);\ x_{d}>g(x^{\prime})- \varepsilon\psi(x^{\prime})\right\}.\]
Here, \(\varepsilon>0\) is chosen so small that \(\left\{(x^{\prime},x_{d});\ \|x^{\prime}\|_{d-1}\leq r/2,\ x_{d}=g(x^{\prime})-\varepsilon\psi(x^{\prime})\right\}\) is contained in \(B(0,r)\). Then, \(\mathcal{O}^{*}\) is a connected open set with smooth boundary.
Define the functions
\[\varphi^{*}(x)=\left\{\begin{array}{ll}\varphi(x)&\mbox{if }x\in\mathcal{O},\\ 0&\mbox{if }x\in\mathcal{O}^{*}\setminus\mathcal{O},\end{array}\right.\]
\[\tilde{a}_{i}^{*}(x)=\left\{\begin{array}{ll}\tilde{a}_{i}(x)&\mbox{if }x \in\mathcal{O},\\ 0&\mbox{if }x\in\mathcal{O}^{*}\setminus\mathcal{O},\end{array}\right.\]
\[(\tilde{c}-\mu\lambda)^{*}(x)=\left\{\begin{array}{ll}\tilde{c}(x)-\lambda\mu(x)&\mbox{if }x\in\mathcal{O},\\ 0&\mbox{if }x\in\mathcal{O}^{*}\setminus\mathcal{O}.\end{array}\right.\]
Then, \(\varphi^{*}\in H^{2}(\mathcal{O}^{*})\) is solution to
\[-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}^{*}(x)\partial_{i}\varphi^{*}(x))+(\tilde{c}-\mu\lambda)^{*}(x)\varphi^{*}(x)=0\ \mbox{a.e. in}\ \mathcal{O}^{*},\]
with \(\varphi^{*}\equiv 0\) in some open ball contained in \(\mathcal{O}^{*}\setminus\mathcal{O}.\) Invoking the result in [10], we immediately get that \(\varphi\equiv 0\) in \(\mathcal{O},\) which is in contradiction with the hypothesis. This concludes the proof of Lemma 5.
With all these elements in hand, we are now in position to introduce the output feedback control strategy that exponentially stabilizes the reaction-diffusion equation (1) based on the sole internal measurement (2). This is discussed in the next section.
## 3 Design of the control strategy
Let us denote by \(A_{0}:=\left(\left\langle-\mathcal{A}\varphi_{i},\psi_{j}\right\rangle\right)_{1\leq i,j\leq N_{0}}.\) It is seen that \(A_{0}=-\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N_{0}})\). In dimension \(d\geq 2\), eigenvalues of multiplicity greater than one are common. Note however that the first eigenvalue is always simple. Therefore, the number of scalar measurements \(M\) from the system output (2), as well as the structure of the control strategy, needs to be adapted according to the multiplicity of the unstable eigenvalues. To fix the ideas and to keep the presentation as concise as possible, we make the following choice for the spectrum structure (other possible configurations can be treated in a similar manner without any additional effort): the second eigenvalue has multiplicity two, while the rest of the first \(N_{0}\) eigenvalues are simple, i.e., we have
\[\lambda_{1}<\lambda_{2}=\lambda_{3}<\lambda_{4}<\ldots<\lambda_{N_{0}}.\]
This configuration leads us to consider \(M=2\) scalar outputs, i.e.,
\[y(t)=(z(\xi_{1},t),z(\xi_{2},t))^{\top}. \tag{9}\]
We then fix \(\eta>0\) so that \(\lambda_{2}+\eta\neq\lambda_{j},\ \forall j\in\{1,3,\ldots,N_{0}\}\).
**Remark 6**: _Note that the other configurations, in terms of multiplicity of the different eigenvalues, can be handled in a similar way by setting the number of scalar measurements \(M\) as the maximum of multiplicity for the eigenvalues \(\lambda_{1},\ldots,\lambda_{N_{0}}\)._
### Preliminary control design
For \(\gamma>0\) large enough and for each \(v\in L^{2}(\Gamma_{1})\), there exists a unique solution, \(D\), to the equation
\[\mathcal{A}D-2\sum_{i=1}^{N_{0}}\lambda_{i}\left\langle D,\psi_{ i}\right\rangle\varphi_{i}-\eta\left\langle D,\psi_{2}\right\rangle\varphi_{2}+ \gamma D=0\ \mathrm{in}\ \mathcal{O};\] \[D=v\ \mathrm{on}\ \Gamma_{1},\ D=0\ \mathrm{on}\ \Gamma_{2}. \tag{10}\]
Indeed, arguing as in [17], this is a direct consequence of the application of the Lax-Milgram theorem. This allows us to introduce the following:
**Definition 7**: _Let \(D_{\gamma}:L^{2}(\Gamma_{1})\to L^{2}(\mathcal{O})\) be defined by \(D_{\gamma}v:=D\) where, for any given \(v\in L^{2}(\Gamma_{1})\), \(D\in L^{2}(\mathcal{O})\) is the unique solution to (10)._
**Lemma 8**: _We have_
\[\left(\begin{array}{c}\left\langle D_{\gamma}v,\psi_{1}\right\rangle\\ \left\langle D_{\gamma}v,\psi_{2}\right\rangle\\ \vdots\\ \left\langle D_{\gamma}v,\psi_{N_{0}}\right\rangle\end{array}\right)=-\Lambda _{\gamma}\left(\begin{array}{c}\left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_ {i}\partial_{i}\varphi_{1}\right\rangle_{L^{2}(\Gamma_{1})}\\ \left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{2} \right\rangle_{L^{2}(\Gamma_{1})}\\ \vdots\\ \left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{N_{0}} \right\rangle_{L^{2}(\Gamma_{1})}\end{array}\right), \tag{11}\]
_where_
\[\Lambda_{\gamma}=\mathrm{diag}\left(\frac{1}{\gamma-\lambda_{1}},\frac{1}{ \gamma-\lambda_{2}-\eta},\frac{1}{\gamma-\lambda_{3}},\ldots,\frac{1}{\gamma -\lambda_{N_{0}}}\right), \tag{12}\]
_while_
\[\left\langle D_{\gamma}v,\psi_{k}\right\rangle=-\frac{1}{\gamma+\lambda_{k}} \left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{k}\right\rangle _{L^{2}(\Gamma_{1})},\ k\geq N_{0}+1. \tag{13}\]
**Proof.** Scalarly multiplying equation (10) by \(\psi_{k}=\mu\varphi_{k},\ k=1,2,\ldots,N_{0}\), while taking advantage of the bi-orthogonality of the sequences \(\{\varphi_{i}\}_{i\geq 1}\) and \(\{\psi_{i}\}_{i\geq 1}\), we get
\[\left\langle\mu\mathcal{A}D_{\gamma}v,\varphi_{k}\right\rangle-2\lambda_{k} \left\langle D_{\gamma}v,\psi_{k}\right\rangle-\eta\left\langle D_{\gamma}v, \psi_{2}\right\rangle\delta_{2,k}+\gamma\left\langle D_{\gamma}v,\psi_{k} \right\rangle=0. \tag{14}\]
Let us compute \(\left\langle\mu\mathcal{A}D_{\gamma}v,\varphi_{k}\right\rangle\). Using the integration by parts formula and invoking the boundary conditions of \(\varphi_{j}\) and \(D_{\gamma}v\), we obtain that
\[\left\langle\mu\mathcal{A}D_{\gamma}v,\varphi_{k}\right\rangle =\left\langle-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}\partial_{ i}D_{\gamma}v)+\tilde{c}D_{\gamma}v,\varphi_{k}\right\rangle\] \[=\sum_{i=1}^{d}\left\langle D_{\gamma}v,n_{i}\tilde{a}_{i} \partial_{i}\varphi_{k}\right\rangle_{L^{2}(\Gamma_{1})}+\left\langle D_{ \gamma}v,-\sum_{i=1}^{d}\partial_{i}(\tilde{a}_{i}\partial_{i}\varphi_{k})+ \tilde{c}\varphi_{k}\right\rangle\] \[=\left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i} \varphi_{k}\right\rangle_{L^{2}(\Gamma_{1})}+\lambda_{k}\left\langle D_{\gamma }v,\psi_{k}\right\rangle.\]
Substituting the right hand side of the latter identity into equation (14), we infer that
\[\left(-\lambda_{k}-\eta\delta_{2,k}+\gamma\right)\left\langle D_{\gamma}v, \psi_{k}\right\rangle=-\left\langle v,\sum_{i=1}^{d}n_{i}\tilde{a}_{i} \partial_{i}\varphi_{k}\right\rangle_{L^{2}(\Gamma_{1})},\ k\in\{1,\ldots,N_{ 0}\}.\]
This gives (11). Proceeding similarly, we infer that (13) holds.
We next fix \(N_{0}\) positive constants \(0<\gamma_{1}<\gamma_{2}<\ldots<\gamma_{N_{0}}\), selected sufficiently large such that, for each \(k\in\{1,2,\ldots,N_{0}\}\),
1. equation (10) is well-posed for \(\gamma=\gamma_{k}\);
2. \(\gamma_{k}\pm(\lambda_{i}+\eta\delta_{2,i})\neq 0\) for all \(1\leq k\leq N_{0}\) and \(i\in\mathbb{N}^{*}\).
Following Definition 7, we denote for \(k\in\{1,2,\ldots,N_{0}\}\) by \(D_{\gamma_{k}}\) the corresponding operators as defined by (10).
We now introduce the Gram matrix \(\mathbf{B}\), defined by:
\[\mathbf{B}:=\left(\left\langle\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i} \varphi_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{l}\right\rangle _{L^{2}(\Gamma_{1})}\right)_{1\leq k,l\leq N_{0}} \tag{15}\]
and we set
\[B_{k}:=\Lambda_{\gamma_{k}}\mathbf{B}\Lambda_{\gamma_{k}},\ k\in\{1,2,\ldots,N _{0}\}. \tag{16}\]
We also define
\[\mathcal{L}(x)=\left(\sum_{i=1}^{d}n_{i}(x)\tilde{a}_{i}(x)\partial_{i}\varphi _{1}(x),\ldots,\sum_{i=1}^{d}n_{i}(x)\tilde{a}_{i}(x)\partial_{i}\varphi_{N_{0 }}(x)\right)^{\top} \tag{17}\]
for all \(x\in\Gamma_{1}\). Invoking iteratively the unique continuation property of the eigenfunctions of the operator \(\mathcal{A}\), stated by Lemma 5, we observe that \(\mathcal{L}\) has non-zero entries for all \(x\) in a non-zero measure subset of \(\Gamma_{1}\). Then, because \(-\lambda_{k}-\eta\delta_{2,k}\) for \(k\in\{1,2,\ldots,N_{0}\}\) are pairwise distinct and owing to [22, Proposition 2.1], we have that \(B_{1}+B_{2}+\ldots+B_{N_{0}}\) is an invertible matrix. Therefore, we define:
\[A:=(B_{1}+B_{2}+\ldots+B_{N_{0}})^{-1}. \tag{18}\]
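To make the objects (15)-(18) concrete, here is a hypothetical numerical sketch in which a randomly generated positive definite matrix stands in for the boundary Gram matrix \(\mathbf{B}\) (computing the latter would require surface quadrature); it verifies \(\sum_{k}B_{k}A=I\) and anticipates, for large \(\gamma_{1}\), the Hurwitz property of \(-\sum_{k}\gamma_{k}B_{k}A+\Xi\) established in the proof of Theorem 11 below.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([-23/4, -11/4, -11/4])   # hypothetical unstable eigenvalues
eta, gam = 1.0, np.array([50.0, 60.0, 70.0])  # gamma_1 < gamma_2 < gamma_3

M = rng.standard_normal((3, 3))
Bgram = M @ M.T + 3 * np.eye(3)         # stand-in for the Gram matrix (15)

shift = lam + np.array([0.0, eta, 0.0])       # lambda_i + eta * delta_{2,i}
Bk = [np.diag(1/(g - shift)) @ Bgram @ np.diag(1/(g - shift)) for g in gam]
A = np.linalg.inv(sum(Bk))                    # definition (18)
assert np.allclose(sum(Bk) @ A, np.eye(3))    # sum_k B_k A = I

Xi = np.diag([0.0, eta, 0.0])
H = -sum(g * B @ A for g, B in zip(gam, Bk)) + Xi
assert np.linalg.eigvals(H).real.max() < 0    # Hurwitz for gamma_1 large
print("eigenvalues of -sum_k gamma_k B_k A + Xi:",
      np.linalg.eigvals(H).round(2))
```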
Next, introducing an auxiliary command input \(U:[0,\infty)\to\mathbb{R}^{N_{0}}\), that will be specified later in Subsection 3.3, we define
\[u_{k}(x,t):=\left\langle\Lambda_{\gamma_{k}}AU(t),\mathcal{L}(x)\right\rangle_{ N_{0}},\;x\in\Gamma_{1},\;t\geq 0, \tag{19}\]
for all \(k=1,2,\ldots,N_{0}\). Then, we define the boundary control \(u\) appearing in the plant (1) as
\[\begin{split} u(x,t):&=u_{1}(x,t)+u_{2}(x,t)+ \ldots+u_{N_{0}}(x,t)\\ &=\sum_{k=1}^{N_{0}}\left\langle\Lambda_{\gamma_{k}}AU(t), \mathcal{L}(x)\right\rangle_{N_{0}}.\end{split} \tag{20}\]
**Lemma 9**: _It holds, for all \(k=1,\ldots,N_{0}\),_
\[\left(\begin{array}{c}\left\langle D_{\gamma_{k}}u_{k},\psi_{1}\right\rangle \\ \left\langle D_{\gamma_{k}}u_{k},\psi_{2}\right\rangle\\ \vdots\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{N_{0}}\right\rangle\end{array}\right)= -B_{k}AU, \tag{21}\]
**Proof.**
Based on (11), we infer that
\[\left(\begin{array}{c}\left\langle D_{\gamma_{k}}u_{k},\psi_{1}\right\rangle\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{2}\right\rangle\\ \vdots\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{N_{0}}\right\rangle\end{array}\right)=-\Lambda_{\gamma_{k}}\left(\begin{array}{c}\left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{1}\right\rangle_{L^{2}(\Gamma_{1})}\\ \left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{2}\right\rangle_{L^{2}(\Gamma_{1})}\\ \vdots\\ \left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{N_{0}}\right\rangle_{L^{2}(\Gamma_{1})}\end{array}\right).\]
Taking into account (15) and (19), this yields
\[\left(\begin{array}{c}\left\langle D_{\gamma_{k}}u_{k},\psi_{1}\right\rangle \\ \left\langle D_{\gamma_{k}}u_{k},\psi_{2}\right\rangle\\ \vdots\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{N_{0}}\right\rangle\end{array}\right)= -\Lambda_{\gamma_{k}}\mathbf{B}\Lambda_{\gamma_{k}}AU\]
and so, owing to the definition of \(B_{k}\) given by (16), the claimed identity (21) is proved.
### Spectral reduction
Our objective is now to specify the auxiliary command input \(U\) that appears in (19). To do so, we first need to carry out a spectral reduction of the system formed by the plant (1) along with the preliminary control input (20). This is done in this subsection.
**Lemma 10**: _Consider the following change of variable :_
\[w:=z-\sum_{k=1}^{N_{0}}D_{\gamma_{k}}u_{k} \tag{22}\]
_and define the coefficients of projection \(w_{n}(t)=\left\langle w(t,\cdot),\psi_{n}\right\rangle\) and \(z_{n}(t)=\left\langle z(t,\cdot),\psi_{n}\right\rangle\). Then we have_
\[\frac{d}{dt}z_{n} =-\lambda_{n}z_{n}+\sum_{k=1}^{N_{0}}(\lambda_{n}+\gamma_{k}) \left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle\] \[\quad-2\sum_{k,i=1}^{N_{0}}\lambda_{i}\left\langle D_{\gamma_{k} }u_{k},\psi_{i}\right\rangle\delta_{i,n}-\eta\sum_{k=1}^{N_{0}}\left\langle D _{\gamma_{k}}u_{k},\psi_{2}\right\rangle\delta_{2,n} \tag{23}\]
_for all \(n\geq 1\). Moreover, we have_
\[\frac{d}{dt}w_{n}=-\lambda_{n}w_{n}+\sum_{k=1}^{N_{0}}\gamma_{k}\left\langle D _{\gamma_{k}}u_{k},\psi_{n}\right\rangle-\sum_{k=1}^{N_{0}}\left\langle D_{ \gamma_{k}}\frac{d}{dt}u_{k},\psi_{n}\right\rangle \tag{24}\]
_for all \(n\geq N_{0}+1\)._
**Proof.** We first equivalently rewrite (1) as an internal-type control problem. More precisely, invoking the change of variable (22), we have
\[\frac{d}{dt}w=\frac{d}{dt}z-\frac{d}{dt}\sum_{k=1}^{N_{0}}D_{\gamma_{k}}u_{k}\] \[\overset{\text{(1a)}}{=}-\mathcal{A}z-\frac{d}{dt}\sum_{k=1}^{N_{0}}D_{\gamma_{k}}u_{k}\] \[=-\mathcal{A}_{0}w-\sum_{k=1}^{N_{0}}\mathcal{A}D_{\gamma_{k}}u_{k}-\frac{d}{dt}\sum_{k=1}^{N_{0}}D_{\gamma_{k}}u_{k}\] \[\overset{\text{(10)}}{=}-\mathcal{A}_{0}w-2\sum_{k,i=1}^{N_{0}}\lambda_{i}\left\langle D_{\gamma_{k}}u_{k},\psi_{i}\right\rangle\varphi_{i}-\eta\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}u_{k},\psi_{2}\right\rangle\varphi_{2}\] \[\quad+\sum_{k=1}^{N_{0}}\gamma_{k}D_{\gamma_{k}}u_{k}-\frac{d}{dt}\sum_{k=1}^{N_{0}}D_{\gamma_{k}}u_{k},\ t>0. \tag{25}\]
Then, recalling that \(w_{n}(t)=\left\langle w(t,\cdot),\psi_{n}\right\rangle\) and \(z_{n}(t)=\left\langle z(t,\cdot),\psi_{n}\right\rangle\), the projection of (25) gives
\[\frac{d}{dt}w_{n} =-\lambda_{n}w_{n}-\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}} \frac{d}{dt}u_{k},\psi_{n}\right\rangle-2\sum_{k,i=1}^{N_{0}}\lambda_{i}\left \langle D_{\gamma_{k}}u_{k},\psi_{i}\right\rangle\delta_{i,n}\] \[\quad-\eta\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}u_{k}, \psi_{2}\right\rangle\delta_{2,n}+\sum_{k=1}^{N_{0}}\gamma_{k}\left\langle D_{ \gamma_{k}}u_{k},\psi_{n}\right\rangle \tag{26}\]
for all \(n\geq 1\). This gives (24) because \(\delta_{i,n}=0\) for all \(1\leq i\leq N_{0}\) and \(n\geq N_{0}+1\). Now, in view of the change of variable formula (22), we have
\[w_{n}=z_{n}-\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle. \tag{27}\]
Thus, combining (26)-(27), we infer that (23) holds.
Our objective is now to write an ODE describing the dynamics of a finite number of modes of the system composed of the plant (1) and the control input (20). To do so, we define
\[Z^{N_{0}}:=(\left\langle z,\psi_{1}\right\rangle,\left\langle z,\psi_{2} \right\rangle,\ldots,\left\langle z,\psi_{N_{0}}\right\rangle)^{\top}\]
and
\[\Xi:=\text{diag}(0,\eta,0,\ldots,0).\]
In view of equation (23) and taking into account the relation (21), we deduce that
\[\begin{split}\frac{d}{dt}Z^{N_{0}}(t)&=A_{0}Z^{N_{0}}(t)+\sum_{k=1}^{N_{0}}\left[A_{0}+\gamma_{k}I-\Xi\right]\left(\begin{array}{c}\left\langle D_{\gamma_{k}}u_{k},\psi_{1}\right\rangle\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{2}\right\rangle\\ \vdots\\ \left\langle D_{\gamma_{k}}u_{k},\psi_{N_{0}}\right\rangle\end{array}\right)\\ &\overset{\text{(21)}}{=}A_{0}Z^{N_{0}}(t)-\sum_{k=1}^{N_{0}}\left[A_{0}+\gamma_{k}I-\Xi\right]B_{k}AU(t)\\ &\overset{\text{(18)}}{=}A_{0}Z^{N_{0}}(t)+\left(-A_{0}-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\right)U(t).\end{split} \tag{28}\]
where we recall that \(A_{0}=-\text{diag}(\lambda_{1},\ldots,\lambda_{N_{0}})\). Introducing now a second integer \(N\geq N_{0}+1\) to be specified later, we define
\[Z^{N-N_{0}}:=(\left\langle z,\psi_{N_{0}+1}\right\rangle,\left\langle z,\psi_ {N_{0}+2}\right\rangle,\ldots,\left\langle z,\psi_{N}\right\rangle)^{\top}\]
along with \(A_{1}=-\text{diag}(\lambda_{N_{0}+1},\ldots,\lambda_{N})\) and \(H^{N-N_{0}}:\mathbb{R}^{N_{0}}\rightarrow\mathbb{R}^{N-N_{0}}\) defined by
\[H^{N-N_{0}}U=\left(\sum_{k=1}^{N_{0}}(\lambda_{N_{0}+1}+\gamma_{k})\left\langle D_{\gamma_{k}}u_{k},\psi_{N_{0}+1}\right\rangle,\ldots,\sum_{k=1}^{N_{0}}(\lambda_{N}+\gamma_{k})\left\langle D_{\gamma_{k}}u_{k},\psi_{N}\right\rangle\right)^{\top}\overset{\text{(13)}}{=}-\sum_{k=1}^{N_{0}}\left(\begin{array}{c}\left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{N_{0}+1}\right\rangle_{L^{2}(\Gamma_{1})}\\ \left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{N_{0}+2}\right\rangle_{L^{2}(\Gamma_{1})}\\ \vdots\\ \left\langle u_{k},\sum_{i=1}^{d}n_{i}\tilde{a}_{i}\partial_{i}\varphi_{N}\right\rangle_{L^{2}(\Gamma_{1})}\end{array}\right)\]
where \(u_{k}\) is expressed as a function of the auxiliary control input \(U\) based on (19). Then, using again (23), we deduce that
\[\frac{d}{dt}Z^{N-N_{0}}(t)=A_{1}Z^{N-N_{0}}(t)+H^{N-N_{0}}U(t),\;t>0. \tag{29}\]
### Observer design and definition of the auxiliary control input \(U\)
We are now in position to properly define in this subsection the auxiliary command input \(U\) that appears in (19). We select the measurement locations \(\xi_{1},\xi_{2}\in\mathcal{O}\) from (9) such that
\[|\varphi_{i}(\xi_{1})|+|\varphi_{i}(\xi_{2})|\neq 0,\;\forall i\in\{1,4,5,\ldots,N_{0}\};\qquad\det\begin{pmatrix}\varphi_{2}(\xi_{1})&\varphi_{3}(\xi_{1})\\ \varphi_{2}(\xi_{2})&\varphi_{3}(\xi_{2})\end{pmatrix}\neq 0. \tag{30}\]
Note that such a selection is always possible due to the fact that \(\varphi_{i}\), \(i\in\{1,2,\ldots,N_{0}\}\), cannot vanish on any open ball included in \(\mathcal{O}\) and the fact that \(\varphi_{2},\varphi_{3}\) correspond to the same eigenvalue and are linearly independent.
Next, introducing \(\tilde{y}(t)=(w(\xi_{1},t),w(\xi_{2},t))^{\top}\), we deduce from the change of variable formula (22) that
\[y(t)=\tilde{y}(t)+\sum_{k=1}^{N_{0}}\begin{pmatrix}D_{\gamma_{k}}u_{k}(\xi_{1 },t)\\ D_{\gamma_{k}}u_{k}(\xi_{2},t)\end{pmatrix}=\sum_{i\geq 1}\begin{pmatrix} \varphi_{i}(\xi_{1})\\ \varphi_{i}(\xi_{2})\end{pmatrix}w_{i}(t)+\sum_{k=1}^{N_{0}}\begin{pmatrix}D_{ \gamma_{k}}u_{k}(\xi_{1},t)\\ D_{\gamma_{k}}u_{k}(\xi_{2},t)\end{pmatrix}. \tag{31}\]
Defining \(C_{0}=\left(\begin{array}{cccc}\varphi_{1}(\xi_{1})&\varphi_{2}(\xi_{1})&\ldots&\varphi_{N_{0}}(\xi_{1})\\ \varphi_{1}(\xi_{2})&\varphi_{2}(\xi_{2})&\ldots&\varphi_{N_{0}}(\xi_{2})\end{array}\right)\) and \(C_{1}=\left(\begin{array}{cccc}\varphi_{N_{0}+1}(\xi_{1})&\varphi_{N_{0}+2}(\xi_{1})&\ldots&\varphi_{N}(\xi_{1})\\ \varphi_{N_{0}+1}(\xi_{2})&\varphi_{N_{0}+2}(\xi_{2})&\ldots&\varphi_{N}(\xi_{2})\end{array}\right)\), it immediately follows from (30) that the pair \((A_{0},C_{0})\) satisfies the Kalman condition. Hence, we can fix \(L\in M_{N_{0}\times 2}(\mathbb{R})\) such that \(A_{0}-LC_{0}\) is Hurwitz with arbitrary spectral abscissa.
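Computing such a gain \(L\) is a standard finite-dimensional pole-placement problem on the dual pair \((A_{0}^{\top},C_{0}^{\top})\). The sketch below uses scipy with placeholder values (the matrix \(C_{0}\) is hypothetical, not computed from actual eigenfunctions) and a double eigenvalue in \(A_{0}\), as in the configuration considered here.

```python
import numpy as np
from scipy.signal import place_poles

A0 = np.diag([23/4, 11/4, 11/4])   # A0 = -diag(lambda_1, lambda_2, lambda_3)
C0 = np.array([[0.9, 0.4, 0.3],    # hypothetical phi_i(xi_j) values; the
               [0.5, 0.2, 0.7]])   # 2x2 block of the double mode is invertible

# observer gain via pole placement on the dual system (A0^T, C0^T)
L = place_poles(A0.T, C0.T, [-6.0, -7.0, -8.0]).gain_matrix.T

eig = np.linalg.eigvals(A0 - L @ C0)
print("spectrum of A0 - L C0:", np.sort(eig.real))
assert eig.real.max() < -5.0   # spectral abscissa below -delta for delta <= 5
```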
In view of (27), (28), and (29), we now define the following observer dynamics:
\[\hat{w}_{n}=\hat{z}_{n}-\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle,\quad 1\leq n\leq N, \tag{32a}\]
\[\frac{d}{dt}\hat{Z}^{N_{0}}(t)=A_{0}\hat{Z}^{N_{0}}(t)+\left(-A_{0}-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\right)U(t)-L\left\{\sum_{i=1}^{N}\begin{pmatrix}\varphi_{i}(\xi_{1})\\ \varphi_{i}(\xi_{2})\end{pmatrix}\hat{w}_{i}(t)+\sum_{k=1}^{N_{0}}\begin{pmatrix}\left(D_{\gamma_{k}}u_{k}\right)(\xi_{1},t)\\ \left(D_{\gamma_{k}}u_{k}\right)(\xi_{2},t)\end{pmatrix}-y(t)\right\}, \tag{32b}\]
\[\frac{d}{dt}\hat{Z}^{N-N_{0}}(t)=A_{1}\hat{Z}^{N-N_{0}}(t)+H^{N-N_{0}}U(t), \tag{32c}\]
for \(t>0\). This allows us to complete the definition of the control strategy by setting the auxiliary control input \(U\) as:
\[U=\hat{Z}^{N_{0}}. \tag{33}\]
Overall, the control strategy is composed of (20), (32), and (33).
### Main stabilization result
We can now state the main result of this work.
**Theorem 11**: _Assume that Assumption 1 holds. Let \(\delta>0\) and \(N_{0}\geq 1\) be such that \(\lambda_{n}>\delta\) for all \(n\geq N_{0}+1\). Assume that the first \(N_{0}\) eigenvalues of the operator \(\mathcal{A}\) are simple except for the second and the third ones, which are equal. With corresponding measurement (9), pick \(\xi_{1},\xi_{2}\in\mathcal{O}\) so that (30) holds true. Let \(L\in M_{N_{0}\times 2}(\mathbb{R})\) be such that \(A_{0}-LC_{0}\) is Hurwitz with
eigenvalues that have a real part strictly less than \(-\delta\). Then, \(\gamma_{1}<\gamma_{2}<\ldots<\gamma_{N_{0}}\) can be selected large enough such that the matrix \(-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\) is Hurwitz with eigenvalues that have a real part strictly less than \(-\delta\). Furthermore, for \(N\geq N_{0}+1\) selected to be large enough, there exists a constant \(C>0\) such that, for any initial condition \(z_{o}\in H^{2}(\mathcal{O})\), the trajectory of the closed-loop system composed of the plant (1), the internal measurement (2) and the controller (20) with \(U\) given by (33), satisfies_
\[\|z(\cdot,t)\|_{H^{1}(\mathcal{O})}+\sum_{k=1}^{N}|\hat{z}_{k}(t)|\leq Ce^{-\delta t}\left(\|z(\cdot,0)\|_{H^{1}(\mathcal{O})}+\sum_{k=1}^{N}|\hat{z}_{k}(0)|\right). \tag{34}\]
**Example 12**: _Consider the plant (1) with \(d=2\), \(\mathcal{O}=(0,\pi)\times(0,\pi)\), \(\Gamma_{1}=\{x_{1}\in(0,\pi),\ x_{2}=0\}\), and \(\mathcal{A}f=-\Delta f-3(\partial_{1}f+\partial_{2}f)-10f\). Hence, introducing \(\mu=e^{3x_{1}+3x_{2}}\), which is positive and bounded, \(1\leq\mu(x)\leq e^{6\pi},\) Assumption 1 is fulfilled because:_
\[\mu\mathcal{A}f=-\sum_{i=1}^{2}\partial_{i}(e^{3x_{1}+3x_{2}}\partial_{i}f)-10 e^{3x_{1}+3x_{2}}f.\]
_The associated eigenfunctions are described by \(\varphi_{i,j}(x)=\frac{2}{\pi}e^{-\frac{3x_{1}+3x_{2}}{2}}\sin(ix_{1})\sin(jx_{2})\) with the corresponding eigenvalues \(\lambda_{i,j}=\frac{4i^{2}+4j^{2}-31}{4}\). Then, it is seen that the bi-orthogonal system is given by \(\psi_{i,j}=\frac{2}{\pi}e^{\frac{3x_{1}+3x_{2}}{2}}\sin(ix_{1})\sin(jx_{2})\), \(i,j\in\mathbb{N}\setminus\{0\}\)._
_In this setting, the open-loop system is unstable with a total of three modes that are not exponentially stable: \(-\lambda_{1,1}=23/4\) and \(-\lambda_{1,2}=-\lambda_{2,1}=11/4\) with the corresponding three eigenfunctions_
\[\left\{\frac{2}{\pi}e^{\frac{-3x_{1}-3x_{2}}{2}}\sin x_{1}\sin x_{2},\ \frac{2}{\pi}e^{\frac{-3x_{1}-3x_{2}}{2}}\sin 2x _{1}\sin x_{2},\ \frac{2}{\pi}e^{\frac{-3x_{1}-3x_{2}}{2}}\sin x_{1}\sin 2x_{2} \right\}.\]
_Hence, fixing \(N_{0}=3\), the assumption on the spectral multiplicity of the first three eigenvalues is verified. Moreover, (30) holds true as soon as \(\xi_{i}=(\xi_{i1},\ \xi_{i2})\in(0,\pi)^{2},\ i=1,2,\) are such that \(\cos\xi_{11}\cos\xi_{22}-\cos\xi_{12}\cos\xi_{21}\neq 0\). This allows the application of Theorem 11._
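A short numerical companion to this example (a sketch relying on the eigenvalue formula stated above): it enumerates \(\lambda_{i,j}\), recovers the three unstable modes, and tests the determinant condition for a hypothetical pair of sensor locations.

```python
import numpy as np
from itertools import product

# lambda_{i,j} = (4 i^2 + 4 j^2 - 31) / 4 on O = (0, pi)^2
unstable = [(i, j) for i, j in product(range(1, 6), repeat=2)
            if (4 * i**2 + 4 * j**2 - 31) / 4 < 0]
print("unstable modes (i, j):", unstable)   # [(1, 1), (1, 2), (2, 1)]

# measurement condition for two hypothetical sensor locations
xi1, xi2 = (1.0, 0.7), (0.5, 2.0)
test = np.cos(xi1[0]) * np.cos(xi2[1]) - np.cos(xi1[1]) * np.cos(xi2[0])
assert abs(test) > 1e-6                     # hence (30) holds for this pair
print("cos(xi_11)cos(xi_22) - cos(xi_12)cos(xi_21) =", round(float(test), 4))
```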
**Proof.** We first need to rewrite the dynamics of the closed-loop system formed by the plant (1) together with (20), (32), and (33) in a suitable format for the upcoming stability analysis. To do so, we first define the errors of observation as
\[E^{N_{0}}:=Z^{N_{0}}-\hat{Z}^{N_{0}},\quad\tilde{E}^{N-N_{0}}:=\Lambda^{N-N_{0 }}(Z^{N-N_{0}}-\hat{Z}^{N-N_{0}})\]
where \(\Lambda^{N-N_{0}}=\text{diag}(\lambda_{N_{0}+1},\ldots,\lambda_{N})\) while
\[\zeta:=\sum_{n\geq N+1}\begin{pmatrix}\varphi_{n}(\xi_{1})\\ \varphi_{n}(\xi_{2})\end{pmatrix}w_{n},\quad\tilde{C}_{1}:=C_{1}\left(\Lambda^ {N-N_{0}}\right)^{-1}.\]
Hence, we obtain from (20), (32), and (33) that
\[\frac{d}{dt}\hat{Z}^{N_{0}}=\left(-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\right)\hat{Z}^{N_{0}}+LC_{0}E^{N_{0}}+L\tilde{C}_{1}\tilde{E}^{N-N_{0}}+L\zeta, \tag{35a}\]
\[\frac{d}{dt}E^{N_{0}}=(A_{0}-LC_{0})E^{N_{0}}-L\tilde{C}_{1}\tilde{E}^{N-N_{0}}-L\zeta, \tag{35b}\]
\[\frac{d}{dt}\hat{Z}^{N-N_{0}}=A_{1}\hat{Z}^{N-N_{0}}+H^{N-N_{0}}\hat{Z}^{N_{0}}, \tag{35c}\]
\[\frac{d}{dt}\tilde{E}^{N-N_{0}}=A_{1}\tilde{E}^{N-N_{0}}. \tag{35d}\]
Introducing the finite-dimensional state vector \(X=(\hat{Z}^{N_{0}},E^{N_{0}},\tilde{E}^{N-N_{0}})^{\top}\) along with the matrices
\[F:=\begin{pmatrix}-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi&LC_{0}&L\tilde{C}_{1} \\ 0&A_{0}-LC_{0}&-L\tilde{C}_{1}\\ 0&0&A_{1}\end{pmatrix},\quad\mathcal{L}:=\begin{pmatrix}L\\ -L\\ 0\end{pmatrix}, \tag{36}\]
we infer that the closed-loop system dynamics is described by
\[\frac{d}{dt}X=FX+\mathcal{L}\zeta, \tag{37a}\]
\[\frac{d}{dt}\hat{Z}^{N-N_{0}}=A_{1}\hat{Z}^{N-N_{0}}+H^{N-N_{0}}\hat{Z}^{N_{0}}, \tag{37b}\]
\[\frac{d}{dt}w_{n}=-\lambda_{n}w_{n}+\sum_{k=1}^{N_{0}}\gamma_{k}\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle-\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}\frac{d}{dt}u_{k},\psi_{n}\right\rangle,\quad n\geq N+1. \tag{37c}\]
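Note that \(F\) in (36) is block upper triangular, so its spectrum is exactly the union of the spectra of its three diagonal blocks; this is the separation structure exploited in the remainder of the proof. A small numpy illustration with stand-in blocks of arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n0, nr = 3, 4                        # illustrative sizes for N0 and N - N0
H11 = -2*np.eye(n0) + 0.1*rng.standard_normal((n0, n0))  # -sum g_k B_k A + Xi
H22 = -3*np.eye(n0) + 0.1*rng.standard_normal((n0, n0))  # A0 - L C0
H33 = -np.diag([1.5, 4.0, 5.0, 6.0])                     # A1

F = np.block([
    [H11, rng.standard_normal((n0, n0)), rng.standard_normal((n0, nr))],
    [np.zeros((n0, n0)), H22, rng.standard_normal((n0, nr))],
    [np.zeros((nr, n0)), np.zeros((nr, n0)), H33],
])

blocks = np.concatenate([np.linalg.eigvals(B) for B in (H11, H22, H33)])
eigF = np.linalg.eigvals(F)
assert np.allclose(np.sort(blocks.real), np.sort(eigF.real), atol=1e-8)
assert eigF.real.max() < 0           # F is Hurwitz once each block is
print("real parts of spec(F):", np.sort(eigF.real).round(3))
```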
Let \(c>1\) be an arbitrarily given constant. Let us now show that we can fix the real numbers \(0<\gamma_{1}<\gamma_{2}<\ldots<\gamma_{N_{0}}<c\gamma_{1}\) large enough such that the matrix \(-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\) is Hurwitz with eigenvalues that have a real part strictly less than \(-\delta<0\). To do so, let \(\lambda\in\mathbb{C}\) and a non-zero vector \(Z\in\mathbb{C}^{N_{0}}\) be such that \(\left(-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\right)Z=\lambda Z\). Recalling that \(A\) is defined by (18), \(A\) is symmetric positive definite and \(\sum_{k=1}^{N_{0}}B_{k}A=I\). Hence, it follows that
\[\lambda\|A^{\frac{1}{2}}Z\|_{N_{0}}^{2} =\left\langle\lambda Z,AZ\right\rangle_{N_{0}}=-\sum_{k=1}^{N_{0 }}\gamma_{k}\left\langle B_{k}AZ,AZ\right\rangle_{N_{0}}+\left\langle\Xi Z,AZ \right\rangle_{N_{0}}\] \[=-\gamma_{1}\left\langle Z,AZ\right\rangle_{N_{0}}+\gamma_{1} \sum_{k=1}^{N_{0}}\left\langle B_{k}AZ,AZ\right\rangle_{N_{0}}-\sum_{k=1}^{N_ {0}}\gamma_{k}\left\langle B_{k}AZ,AZ\right\rangle_{N_{0}}\] \[\quad+\left\langle A^{\frac{1}{2}}\Xi A^{-\frac{1}{2}}A^{\frac{ 1}{2}}Z,A^{\frac{1}{2}}Z\right\rangle_{N_{0}}\] \[=-\gamma_{1}\|A^{\frac{1}{2}}Z\|_{N_{0}}^{2}+\sum_{k=2}^{N_{0}}( \gamma_{1}-\gamma_{k})\left\langle B_{k}AZ,AZ\right\rangle_{N_{0}}+\left\langle A ^{\frac{1}{2}}\Xi A^{-\frac{1}{2}}A^{\frac{1}{2}}Z,A^{\frac{1}{2}}Z\right\rangle _{N_{0}}.\]
We have for all \(k\in\{1,\ldots,N_{0}\}\) that \(\gamma_{1}\leq\gamma_{k}\) and \(B_{k}\) is positive semi-definite (in view of its definition (16)). Moreover, by the definition of \(A\), using the Landau notation, we have
\[\|A^{\frac{1}{2}}\|=O(\gamma_{1}),\quad\|A^{-\frac{1}{2}}\|=O\left(\frac{1}{ \gamma_{1}}\right)\]
as \(\gamma_{1}\to+\infty\). Hence \(\|A^{\frac{1}{2}}\Xi A^{-\frac{1}{2}}\|\leq C\eta\), for some constant \(C>0\) independent of \(0<\gamma_{1}<\gamma_{2}<\ldots<\gamma_{N_{0}}<c\gamma_{1}\). It then yields from the above that
\[\Re(\lambda)\|A^{\frac{1}{2}}Z\|_{N_{0}}^{2}\leq(-\gamma_{1}+C\eta)\|A^{\frac{ 1}{2}}Z\|_{N_{0}}^{2}.\]
Since \(\|A^{\frac{1}{2}}Z\|_{N_{0}}\neq 0\), we obtain for \(\gamma_{1}\) large enough that \(\Re(\lambda)<-\delta\), which proves our claim. This, in particular, implies that the matrix \(F\) defined by (36) is Hurwitz with eigenvalues that have a real part strictly less than \(-\delta\).
We now carry out a Lyapunov stability analysis. In view of (6), let us introduce the Lyapunov function, defined for all \((X,w)\in\mathbb{R}^{N+N_{0}}\times H^{1}(\mathcal{O})\) by
\[V(X,w)=X^{\top}PX+\sum_{n\geq N+1}(\lambda_{n}+\nu)w_{n}^{2}. \tag{38}\]
The computation of the time derivative of \(V\) along the system trajectories (37) gives
\[\dot{V} =2X^{\top}P(FX+\mathcal{L}\zeta)\] \[\quad+2\sum_{n\geq N+1}(\lambda_{n}+\nu)\left\{-\lambda_{n}w_{n}+ \sum_{k=1}^{N_{0}}\gamma_{k}\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle -\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}\frac{d}{dt}u_{k},\psi_{n} \right\rangle\right\}w_{n}\] \[=\tilde{X}^{\top}\begin{pmatrix}F^{\top}P+PF&P\mathcal{L}\\ \mathcal{L}^{\top}P&0\end{pmatrix}\tilde{X}-2\sum_{n\geq N+1}\lambda_{n}( \lambda_{n}+\nu)w_{n}^{2}\] \[\quad+2\sum_{n\geq N+1}(\lambda_{n}+\nu)\sum_{k=1}^{N_{0}}\gamma _{k}\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle w_{n}-2\sum_{n\geq N +1}(\lambda_{n}+\nu)\sum_{k=1}^{N_{0}}\left\langle D_{\gamma_{k}}\frac{d}{dt}u _{k},\psi_{n}\right\rangle w_{n}\]
where \(\tilde{X}:=\mathrm{col}(X,\zeta)\). Let us estimate the second-to-last term. To do so, let us note that
\[\left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle \stackrel{{\eqref{eq:V_T}}}{{=}}-\left\langle D_{ \gamma_{k}}\left\langle\Lambda_{\gamma_{k}}AU,L\right\rangle_{N_{0}},\psi_{n}\right\rangle\] \[=-\sum_{l=1}^{N_{0}}\tilde{\gamma}_{k,l}A_{L_{l}}U\left\langle D_ {\gamma_{k}}L_{l},\psi_{n}\right\rangle\]
where \(\tilde{\gamma}_{k,l}\) is the \(l\)th term on the diagonal of the matrix \(\Lambda_{\gamma_{k}}\) defined by (12), \(A_{L_{l}}\) is the \(l\)th line of the matrix \(A\), and \(L_{l}(x)\) is the \(l\)th component of \(L(x)\) defined by (17). Therefore, using Young's inequality, we deduce for any \(\epsilon>0\) that
\[2\sum_{n\geq N+1}(\lambda_{n}+\nu)\sum_{k=1}^{N_{0}}\gamma_{k} \left\langle D_{\gamma_{k}}u_{k},\psi_{n}\right\rangle w_{n}\] \[=-2\sum_{n\geq N+1}(\lambda_{n}+\nu)\sum_{k,l=1}^{N_{0}}\gamma_{ k}\tilde{\gamma}_{k,l}A_{L_{l}}U\left\langle D_{\gamma_{k}}L_{l},\psi_{n} \right\rangle w_{n}\] \[\leq 2\sum_{k,l=1}^{N_{0}}\sum_{n\geq N+1}|\gamma_{k}||\tilde{ \gamma}_{k,l}|\big{\|}A_{L_{l}}\big{\|}|\left\langle D_{\gamma_{k}}L_{l},\psi_{ n}\right\rangle|\|U\|\times(\lambda_{n}+\nu)|w_{n}|\] \[\leq\epsilon\sum_{k,l=1}^{N_{0}}\gamma_{k}^{2}\tilde{\gamma}_{k,l }^{2}\|A_{L_{l}}\|^{2}\left\{\sum_{n\geq N+1}\left\langle D_{\gamma_{k}}L_{l}, \psi_{n}\right\rangle^{2}\right\}\|U\|^{2}+\frac{1}{\epsilon}\sum_{k,l=1}^{N_{ 0}}\sum_{n\geq N+1}(\lambda_{n}+\nu)^{2}w_{n}^{2}\] \[\stackrel{{\eqref{eq:V_T}}}{{\leq}}\epsilon\underbrace{ c_{1}\sum_{k,l=1}^{N_{0}}\gamma_{k}^{2}\tilde{\gamma}_{k,l}^{2}\|A_{L_{l}}\|^{2}\| \mathcal{R}_{N}D_{\gamma_{k}}L_{l}\|^{2}}_{:=S_{1,N}}\|U\|^{2}+\frac{N_{0}^{2}} {\epsilon}\sum_{n\geq N+1}(\lambda_{n}+\nu)^{2}w_{n}^{2}\]
where \(\mathcal{R}_{N}f:=\sum_{n\geq N+1}\left\langle f,\psi_{n}\right\rangle\varphi_{n}\) and \(U=\hat{Z}^{N_{0}}=E_{1}X\) with \(E_{1}=\begin{bmatrix}I&0&0\end{bmatrix}\). Similarly, we have
\[2\sum_{n\geq N+1}(\lambda_{n}+\nu)\sum_{k=1}^{N_{0}}\left\langle D _{\gamma_{k}}\frac{d}{dt}u_{k},\psi_{n}\right\rangle w_{n}\] \[\stackrel{{(5)}}{{\leq}}\epsilon\underbrace{c_{1} \sum_{k,l=1}^{N_{0}}\tilde{\gamma}_{k,l}^{2}\|A_{L_{l}}\|^{2}\|\mathcal{R}_{N }D_{\gamma_{k}}L_{l}\|^{2}}_{:=S_{2,N}}\left\|\frac{d}{dt}U\right\|^{2}+\frac {N_{0}^{2}}{\epsilon}\sum_{n\geq N+1}(\lambda_{n}+\nu)^{2}w_{n}^{2}\]
where \(\frac{d}{dt}U=\frac{d}{dt}\hat{Z}^{N_{0}}=E_{2}\tilde{X}\) with \(E_{2}=\left[-\sum_{k=1}^{N_{0}}\gamma_{k}B_{k}A+\Xi\ \ LC_{0}\ \ L\tilde{C}_{1}\ \ L\right]\). Finally, we have by Cauchy-Schwarz inequality that
\[\|\zeta\|^{2} =\left(\sum_{n\geq N+1}\varphi_{n}(\xi_{1})w_{n}\right)^{2}+ \left(\sum_{n\geq N+1}\varphi_{n}(\xi_{2})w_{n}\right)^{2}\] \[\leq\underbrace{\sum_{n\geq N+1}\frac{\varphi_{n}(\xi_{1})^{2}+ \varphi_{n}(\xi_{2})^{2}}{(\lambda_{n}+\nu)^{2}}}_{:=S_{\varphi,N}}\times \sum_{n\geq N+1}(\lambda_{n}+\nu)^{2}w_{n}^{2}.\]
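As a side remark, the tail sum \(S_{\varphi,N}\) is easy to evaluate numerically once the eigenpairs are fixed. The following minimal Python sketch illustrates its decay in \(N\), assuming (this is only a guess consistent with \(\xi_{i}\in(0,\pi)^{2}\)) the Dirichlet Laplacian eigenpairs on the square \((0,\pi)^{2}\); the points \(\xi_{1},\xi_{2}\) and the shift \(\nu\) below are placeholders.

```python
import numpy as np

# Minimal sketch of the decay of the tail sum S_{phi,N}, assuming (a guess
# consistent with xi_i in (0,pi)^2) the Dirichlet Laplacian on (0,pi)^2 with
# phi_{k,l}(x) = (2/pi) sin(k x1) sin(l x2) and lambda_{k,l} = k^2 + l^2;
# xi1, xi2, and nu are placeholders.
nu = 1.0
xi1, xi2 = (0.9, 2.3), (1.7, 0.6)

K = 80  # truncation of the double index; large enough to resolve the tails
k, l = np.meshgrid(np.arange(1, K + 1), np.arange(1, K + 1), indexing="ij")
lam = (k**2 + l**2).astype(float)
phi = lambda xi: (2 / np.pi) * np.sin(k * xi[0]) * np.sin(l * xi[1])
terms = (phi(xi1) ** 2 + phi(xi2) ** 2) / (lam + nu) ** 2

order = np.argsort(lam, axis=None)                 # enumerate modes by eigenvalue
suffix = np.cumsum(terms.ravel()[order][::-1])[::-1]
for N in (10, 100, 1000):
    print(f"N = {N:4d}:  S_phi_N ~ {suffix[N]:.3e}")  # sum over n >= N + 1
```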
Gathering all the above estimates, we deduce that
\[\dot{V}+2\delta V\leq\tilde{X}^{\top}\Theta_{1}\tilde{X}+\sum_{n\geq N+1}( \lambda_{n}+\nu)\Psi_{n}w_{n}^{2} \tag{39}\]
for \(\delta>0\) fixed such that \(F+\delta I\) is Hurwitz and where
\[\Theta_{1} =\begin{pmatrix}F^{\top}P+PF+2\delta P+\epsilon S_{1,N}E_{1}^{ \top}E_{1}&P\mathcal{L}\\ \mathcal{L}^{\top}P&-\eta I\end{pmatrix}+\epsilon S_{2,N}E_{2}^{\top}E_{2},\] \[\Psi_{n} =\left[-2\left(1-\frac{N_{0}^{2}}{\epsilon}\right)+\eta S_{ \varphi,N}\right]\lambda_{n}+\left[\frac{2N_{0}^{2}}{\epsilon}+\eta S_{ \varphi,N}\right]\nu+2\delta,\quad n\geq N+1\]
for an arbitrary \(\eta>0\).
Assume for the moment that \(\Theta_{1}\preceq 0\) and \(\Psi_{n}\leq 0\) for all \(n\geq N+1\). Then, in view of (39), we get that \(\dot{V}+2\delta V\leq 0\). Combining this estimate with the definition (38) of the Lyapunov function \(V\), the direct integration of the dynamics \(\hat{Z}^{N-N_{0}}\) from (37), the use of the estimates (6), and invoking the change of variable formula (22), we directly infer the existence of a constant \(C>0\), independent of the initial condition, such that the claimed stability estimate (34) holds.
To conclude the proof, it thus remains to show that \(N\) can be selected so that \(\Theta_{1}\preceq 0\) and \(\Psi_{n}\leq 0\) for all \(n\geq N+1\). To do so, let us set \(\epsilon=2N_{0}^{2}\) and \(\eta=1/\sqrt{S_{\varphi,N}}\) if \(S_{\varphi,N}\neq 0\), \(\eta=N\) otherwise. This implies that \(\eta\to+\infty\) while \(\eta S_{\varphi,N}\to 0\) as \(N\to+\infty\). Hence, for \(N\) large enough we have \(\eta S_{\varphi,N}\leq 1/2\) and \(\lambda_{n}\geq\lambda_{N+1}>0\) for all \(n\geq N+1\), which implies that
\[\Psi_{n}\leq\Theta_{2}:=-\frac{1}{2}\lambda_{N+1}+\frac{3}{2}\nu+2\delta,\quad n \geq N+1\]
with \(\Theta_{2}\to-\infty\) as \(N\to+\infty\). Now, since \(F\) defined by (36) is Hurwitz, we define \(P\succ 0\) as the unique solution to the Lyapunov equation \(F^{\top}P+PF+2\delta P=-I\). Owing to Lemma 3, it can be seen that \(\|\tilde{C}_{1}\|=O(1)\), hence \(\|L\tilde{C}_{1}\|=O(1)\), as \(N\to+\infty\). Therefore, a result
similar to [19, Lemma in Appendix] shows that \(\|P\|=O(1)\) as \(N\to+\infty\). Hence, we have
\[\Theta_{1}=\underbrace{\begin{pmatrix}-I+\epsilon S_{1,N}E_{1}^{\top}E_{1}&P \mathcal{L}\\ \mathcal{L}^{\top}P&-\eta I\end{pmatrix}}_{:=\Theta_{1,p}}+\epsilon S_{2,N}E_{2 }^{\top}E_{2}. \tag{40}\]
Using the Schur complement for \(N\) sufficiently large so that \(\eta>1/2\), we see that \(\Theta_{1,p}\preceq-\frac{1}{2}I\) if and only if \(-\frac{1}{2}I+\epsilon S_{1,N}E_{1}^{\top}E_{1}+\frac{1}{\eta-\frac{1}{2}}P \mathcal{L}\mathcal{L}^{\top}P\preceq 0\). Noting that \(\|P\|=O(1)\) and \(S_{1,N}\to 0\) as \(N\to+\infty\) while \(\|E_{1}\|\) and \(\|\mathcal{L}\|\) are constants independent of \(N\), we deduce that \(\Theta_{1,p}\preceq-\frac{1}{2}I\) for all sufficiently large \(N\). In that case, \(\Theta_{1}\preceq-\frac{1}{2}I+\epsilon S_{2,N}E_{2}^{\top}E_{2}\) for all sufficiently large \(N\). Since \(\|E_{2}\|=O(1)\) and \(S_{2,N}\to 0\) as \(N\to+\infty\), we deduce that \(\Theta_{1}\preceq 0\) for \(N\) large enough.
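As a sanity check of the Lyapunov step above, the equation \(F^{\top}P+PF+2\delta P=-I\) can be solved numerically after rewriting it as \((F+\delta I)^{\top}P+P(F+\delta I)=-I\). The sketch below does this for a toy Hurwitz matrix standing in for the actual \(F\) from (36); all entries are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch of the Lyapunov step F^T P + P F + 2 delta P = -I, rewritten as
# (F + delta I)^T P + P (F + delta I) = -I, for a placeholder Hurwitz matrix F.
delta = 0.1
F = np.array([[-1.0, 2.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
assert np.all(np.real(np.linalg.eigvals(F)) < -delta)  # F + delta*I is Hurwitz

A = F + delta * np.eye(3)
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q; take a = A^T
P = solve_continuous_lyapunov(A.T, -np.eye(3))
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P is positive definite
print(np.round(A.T @ P + P @ A, 12))                   # recovers -I
```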
## 4 Conclusions
This paper discussed the design of an observer-based feedback stabilizing controller for multi-dimensional parabolic type equations governed by diagonalizable second order differential operators. To fix the ideas and to ease the presentation, we focused the developments on the case of three unstable eigenvalues: one of multiplicity one and one of multiplicity two. However, the approach reported in this paper easily extends to any other case with a finite number of unstable modes with arbitrary finite multiplicity.
To conclude, it is worth mentioning that, based on the technique presented in this work, a natural perspective is the study of non-linear multi-dimensional parabolic equations combining the present design method with [20], and the study of multi-dimensional parabolic equations with delays as done in e.g., [21, 18].
|
2302.04202 | Finite element approximation for uniformly elliptic linear PDE of second
order in nondivergence form | This paper proposes a novel technique for the approximation of strong
solutions $u \in C(\overline{\Omega}) \cap W^{2,n}_\mathrm{loc}(\Omega)$ to
uniformly elliptic linear PDE of second order in nondivergence form with
continuous leading coefficient in nonsmooth domains by finite element methods.
These solutions satisfy the Alexandrov-Bakelman-Pucci (ABP) maximum principle,
which provides an a posteriori error control for $C^1$ conforming
approximations. By minimizing this residual, we obtain an approximation to the
solution $u$ in the $L^\infty$ norm. Although discontinuous functions do not
satisfy the ABP maximum principle, this approach extends to nonconforming FEM
as well thanks to well-established enrichment operators. Convergence of the
proposed FEM is established for uniform mesh-refinements. The built-in
a posteriori error control (even for inexact solve) can be utilized in adaptive
computations for the approximation of singular solutions, which performs
superiorly in the numerical benchmarks in comparison to the uniform
mesh-refining algorithm. | Ngoc Tien Tran | 2023-02-08T17:19:53Z | http://arxiv.org/abs/2302.04202v2 | Finite element approximation for uniformly elliptic linear PDE of second order in nondivergence form
###### Abstract.
This paper proposes a novel technique for the approximation of strong solutions \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to uniformly elliptic linear PDE of second order in nondivergence form with continuous leading coefficient in nonsmooth domains by finite element methods. These solutions satisfy the Alexandrov-Bakelman-Pucci (ABP) maximum principle, which provides an a posteriori error control for \(C^{1}\) conforming approximations. By minimizing this residual, we obtain an approximation to the solution \(u\) in the \(L^{\infty}\) norm. Although discontinuous functions do not satisfy the ABP maximum principle, this approach extends to nonconforming FEM as well thanks to well-established enrichment operators. Convergence of the proposed FEM is established for uniform mesh-refinements. The built-in a posteriori error control (even for inexact solve) can be utilized in adaptive computations for the approximation of singular solutions, which performs superiorly in the numerical benchmarks in comparison to the uniform mesh-refining algorithm.
Key words and phrases: nondivergence, finite elements, a posteriori, a priori. 2010 Mathematics Subject Classification: 65N12, 65N15, 65N30. This project received funding from the European Union's Horizon 2020 research and innovation programme (project DAFNE, grant agreement No. 891734).
## 1. Introduction
### Background
Given an open bounded Lipschitz domain \(\Omega\subset\mathbb{R}^{n}\), we seek the strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to the Dirichlet problem
\[Lu\coloneqq-A:\mathrm{D}^{2}u+b\cdot\nabla u+c\,u=f\text{ in }\Omega\quad \text{and}\quad u=g\text{ on }\partial\Omega \tag{1.1}\]
with a uniformly elliptic second-order operator \(L\) in nondivergence form, a right-hand side \(f\in L^{n}(\Omega)\), and Dirichlet data \(g\in C(\partial\Omega)\). The existence of strong solutions is guaranteed under the following assumption.
**Assumption 1.1**.: _Let \(A\in C(\overline{\Omega};\mathbb{S})\) with \(\lambda\mathrm{I}_{n}\leq A\leq\Lambda\mathrm{I}_{n}\), \(b\in L^{\infty}(\Omega;\mathbb{R}^{n})\), and \(0\leq c\in L^{\infty}(\Omega)\)._
Here, \(0<\lambda\leq\Lambda\) are (fixed) ellipticity constants. We refer to Section 2 for further details on the PDE (1.1). Monotone finite difference methods (FDM) can approximate these solutions because they respect some maximum principle on the discrete level. General convergence theory has been established in [3] even for fully nonlinear PDE. However, FDM are restricted to low-order methods on structured meshes, and fixed-size finite difference stencils are generally not sufficient for a convergent scheme [19].
In contrast to PDE in divergence form, a variational formulation for (1.1) may not be available if \(A\) is not sufficiently smooth. In these cases, the design of finite element schemes for (1.1) becomes challenging. We point out several finite element methods (FEM) in the literature. By imitating the convergence analysis on the continuous level, a FEM has been proposed in [12]. The two-scale lowest-order
method in [21] relies on discrete maximum principles and allows for convergence of FEM under further assumptions on the mesh. Both previously mentioned papers require the regularity \(u\in W^{2,p}(\Omega)\) and therefore, \(C^{1,1}\) boundary for the domain \(\Omega\) as a sufficient condition. For uniformly elliptic operators \(L\) with (possibly) discontinuous coefficients that satisfy the so-called Cordes condition, [25, 24] prove that there exists a strong solution \(u\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) to (1.1) for all right-hand sides \(f\in L^{2}(\Omega)\) on convex domains \(\Omega\). In this case, (1.1) can be seen as a perturbation of the Laplace equation as the eigenvalues of \(A\) cannot leave a certain range depending on the dimension \(n\). This allows access to finite element discretization and adaptive computation with plain convergence in [13, 17]. In two space dimensions and without lower-order terms, the Cordes condition therein is equivalent to the uniform ellipticity of \(L\). The restrictions imposed on the coefficients are less practical in higher space dimensions or in the presence of lower-order terms.
### Motivation
Global regularity of strong solutions \(u\) to the PDE (1.1) can only be obtained under rather strict restrictions on the domain \(\Omega\), e.g., \(C^{1,1}\) boundary. On Lipschitz domains, however, we can only expect the local regularity \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\); that is, \(u\in W^{2,n}(\omega)\) for any open set \(\omega\Subset\Omega\). The goal of this paper is the design of finite element methods on nonsmooth domains. The main tool is the well-known Alexandrov-Bakelman-Pucci (ABP) maximum principle
\[\|u\|_{L^{\infty}(\Omega)}\leq\|u\|_{V}\coloneqq\|u\|_{L^{\infty}(\partial \Omega)}+C_{1}\|Lu\|_{L^{n}(\Omega)} \tag{1.2}\]
for a positive constant \(C_{1}\) independent of \(u\). The key observation is that, given any function \(v\in V\) in the Banach space
\[V\coloneqq\{v\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega):Lv \in L^{n}(\Omega)\} \tag{1.3}\]
endowed with the norm \(\|\bullet\|_{V}\) from (1.2), the ABP maximum principle (1.2) implies the stability estimate \(\|u-v\|_{L^{\infty}(\Omega)}\leq\Phi(v)\) with the function
\[\Phi(v)\coloneqq\|g-v\|_{L^{\infty}(\partial\Omega)}+C_{1}\|f-Lv\|_{L^{n}( \Omega)}, \tag{1.4}\]
which simultaneously provides an a posteriori error control for the error of \(u-v\) in the \(L^{\infty}\) norm. We will show that \(W^{2,n}(\Omega)\) is dense in \(V\) so that the infimum of \(\Phi\) among all functions in \(W^{2,n}(\Omega)\) vanishes. In particular, the sequence of discrete minimizers \(u_{h}\) of \(\Phi\) in a \(C^{1}\) conforming finite element space \(V_{h}\) converges uniformly to \(u\) as the mesh-size \(h\) tends to zero. Here, \(u_{h}\) can be understood as a (possibly non-unique) best-approximation of \(u\) in \(V_{h}\) with respect to the norm \(\|\bullet\|_{V}\). The main difficulty of this fairly simple approach is the practical realization because \(\Phi\) is a nonsmooth nonlinear functional. If \(g\) is the trace of a finite element function, it is possible to enforce the Dirichlet data pointwise onto \(V_{h}\). This leads to a quadratic minimization problem in the affine space \(W_{h}\coloneqq\{v_{h}\in V_{h}:v_{h}=g\text{ on }\partial\Omega\}\). However, we will demonstrate with the Laplace equation as an example that this approach will fail in the sense that the minimum of \(\Phi\) in \(W_{h}\) may not vanish as \(h\to 0\) and the sequence of minimizers of \(\Phi\) in \(W_{h}\) may not approximate \(u\). Instead, the boundary error \(\|g-\bullet\|_{L^{\infty}(\partial\Omega)}\) in (1.4) is enforced as linear side constraints. As a result, the proposed scheme requires solving a constrained convex minimization problem or, in two space dimensions \(n=2\), a quadratic program.
Due to their flexibility in terms of polynomial degree and simplicity of their practical realization, nonconforming discretizations outperform conforming ones for problems involving the Hessian. Although the ABP maximum principle (1.2) cannot be directly applied to discontinuous functions, an enrichment operator based on local averaging provides appropriate conforming approximations of these functions. Therefore, the convergence analysis of nonconforming FEM can be carried out as for conforming schemes. A welcome feature of the analysis of this paper is the built-in a posteriori error control that allows for adaptive mesh-refining strategies.
### Outline and Notation
The remaining parts of this paper are organized as follows. Section 2 recalls some classical results from PDE theory and proves the density of \(W^{2,n}(\Omega)\) in \(V\) with respect to a norm motivated by the ABP maximum principle. We demonstrate the design of FEM with this density result in Section 3 for conforming and nonconforming schemes. Numerical benchmarks in Section 4 conclude this paper.
Standard notation for function spaces applies throughout this paper. Let \(\mathbb{S}\subset\mathbb{R}^{n\times n}\) denote the set of all symmetric matrices with the identity matrix \(\mathrm{I}_{n}\). The notation \(A:B\) denotes the Euclidean scalar product of two matrices \(A,B\in\mathbb{R}^{n\times n}\), which induces the Frobenius norm \(|\bullet|\) in \(\mathbb{R}^{n\times n}\). The context-sensitive notation \(|\bullet|\) may also denote the absolute value of a scalar, the Euclidean norm of a vector, or the Lebesgue measure of a set. For any symmetric matrices \(A,B\in\mathbb{S}\), \(A\leq B\) means that all eigenvalues of \(B-A\in\mathbb{S}\) are nonnegative. The notation \(A\lesssim B\) abbreviates \(A\leq CB\) for a generic constant \(C\) independent of the mesh-size and \(A\approx B\) abbreviates \(A\lesssim B\lesssim A\). An open set \(\omega\subset\mathbb{R}^{n}\) with boundary \(\partial\omega\) satisfies a uniform exterior cone condition with the (closed) cone \(K\) if, for all \(x\in\partial\omega\), there exists a cone \(K_{x}\) with vertex \(x\) such that \(K_{x}\) is congruent to \(K\) and \(K_{x}\cap\overline{\omega}=\{x\}\).
## 2. Preliminary results from PDE theory
Throughout this paper, we always assume that \(L\) is a uniformly elliptic operator, i.e., there exist positive (ellipticity) constants \(0<\lambda\leq\Lambda\) such that the coefficient matrix \(A\) satisfies \(\lambda\mathrm{I}_{n}\leq A\leq\Lambda\mathrm{I}_{n}\) a.e. in \(\Omega\). The following maximum principle is a fundamental result in the analysis of strong solutions and plays a major role in the design and analysis of the finite element schemes of this paper.
**Theorem 2.1** (ABP maximum principle).: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open bounded set, \(A\in L^{\infty}(\Omega;\mathbb{S})\) with \(\lambda\mathrm{I}_{n}\leq A\leq\Lambda\mathrm{I}_{n}\), \(b\in L^{\infty}(\Omega;\mathbb{R}^{n})\), and \(0\leq c\in L^{\infty}(\Omega)\). There exists a constant \(C_{1}\) depending on \(n\), \(\lambda\), \(\|b\|_{L^{\infty}(\Omega)}\), and \(\mathrm{diam}(\Omega)\) such that any strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to (1.1) satisfies (1.2)._
Proof.: The proof can be found in [15, Section 9.1] with the constant
\[C_{1}^{n}\leq\mathrm{diam}(\Omega)^{n}\bigl{(}\exp\bigl{(}2^{n-2}\mathrm{ diam}(\Omega)^{n}(1+\|b\|_{L^{\infty}(\Omega)}^{n}/\mathcal{D})/(w_{n}n^{n}) \bigr{)}-1\bigr{)}/\mathcal{D}.\]
Here, \(w_{n}=\pi^{n/2}/\Gamma(n/2+1)\) with the gamma function \(\Gamma\) and \(\lambda^{n}\leq\mathcal{D}\leq\Lambda^{n}\) denotes the (essential) infimum of the determinant of \(A\) over \(\Omega\). In 2d, \(w_{2}=\pi\).
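For orientation, the bound on \(C_{1}\) is straightforward to evaluate numerically. A minimal sketch, with placeholder data \(n=2\), \(\operatorname{diam}(\Omega)=1\), \(b=0\), and \(\mathcal{D}=1\) (i.e., the Laplacian on a domain of unit diameter):

```python
import math

# Sketch: evaluating the upper bound for the ABP constant from the proof,
# C_1^n <= diam^n (exp(2^(n-2) diam^n (1 + ||b||^n / D) / (w_n n^n)) - 1) / D,
# for placeholder data n = 2, diam(Omega) = 1, ||b|| = 0, D = lambda^n = 1.
def abp_constant(n, diam, b_norm, D):
    w_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # volume of the unit ball
    expo = 2 ** (n - 2) * diam**n * (1 + b_norm**n / D) / (w_n * n**n)
    return (diam**n * math.expm1(expo) / D) ** (1 / n)

print(abp_constant(n=2, diam=1.0, b_norm=0.0, D=1.0))  # ~ 0.288 for the Laplacian
```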
Recall the norm \(\|\bullet\|_{V}\) from (1.2). While the ABP maximum principle states that \(\|u\|_{L^{\infty}(\Omega)}\leq\|u\|_{V}\), we cannot expect the reverse bound \(\|u\|_{V}\lesssim\|u\|_{L^{\infty}(\Omega)}\) in general. In fact, under additional assumptions, \(\|u\|_{V}\) is an upper bound for the \(H^{2}\) norm of \(u\).
_Remark 2.2_ (\(H^{2}\) error control).: Suppose that \(n=2\), \(b=0\), \(c=0\), and \(g=0\). If \(\Omega\) is convex or the boundary \(\partial\Omega\) of \(\Omega\) is of class \(C^{1,1}\), then the strong solution \(u\) to (1.1) satisfies \(u\in H^{2}(\Omega)\) with the estimate \(\|u\|_{H^{2}(\Omega)}\lesssim\|f\|_{L^{2}(\Omega)}=\|u\|_{V}\)[15, 25]. On the other hand, a Hölder inequality leads to \(\|u\|_{V}\leq\|A\|_{L^{\infty}(\Omega)}\|\mathrm{D}^{2}u\|_{L^{2}(\Omega)}\) and so, we have the equivalence \(\|u\|_{H^{2}(\Omega)}\approx\|u\|_{V}\) of norms.
Note that, in general, the assumption \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) in Theorem 2.1 cannot be relaxed to \(u\in C(\overline{\Omega})\cap W^{2,p}_{\mathrm{loc}}(\Omega)\) for some \(p<n\) due to a result by Alexandrov [2]. The ABP maximum principle leads to uniqueness of strong solutions, while existence can be rather involved. If \(A\) is continuous, then existence of strong solutions can be established following [15, Chapter 9]. (If the boundary \(\partial\Omega\) is additionally of class \(C^{1,1}\), then the global regularity \(u\in W^{2,n}(\Omega)\) is guaranteed.) Unfortunately, the situation is more complicated for merely bounded but possibly discontinuous
coefficient \(A\in L^{\infty}(\Omega;\mathbb{S})\) because there is currently no satisfactory existence and uniqueness theory in at least three space dimensions \(n\geq 3\). It appears that existence would require \(W^{2,p}\) a priori estimates for appropriate \(p>1\). This is only available under further assumptions, e.g., if \(A\) satisfies the Cordes condition and \(p=2\)[26]. In view of the counterexamples in [22, 27], it seems that Sobolev spaces are not sufficient to describe all solutions to the PDE (1.1) with discontinuous coefficients in arbitrary space dimensions. Therefore, the theory of this paper applies to the following case without a priori information on the solution \(u\).
**Theorem 2.3** (existence and uniqueness of strong solutions).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1. Given \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\), there exists a unique strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\rm loc}(\Omega)\) to (1.1). For any open subset \(\omega\Subset\Omega\), there exists a constant \(C_{2}\) depending on \(n\), \(\lambda\), \(\Lambda\), \(\|b\|_{L^{\infty}(\Omega)}\), \({\rm diam}(\Omega)\), and \({\rm dist}(\omega,\partial\Omega)\) such that_
\[\|u\|_{W^{2,n}(\omega)}\leq C_{2}(\|u\|_{L^{\infty}(\partial\Omega)}+\|f\|_{L^ {n}(\Omega)}). \tag{2.1}\]
Proof.: It is known that any Lipschitz domain \(\Omega\) satisfies an exterior cone condition [16, Theorem 1.2.2.2]. The existence of strong solutions is stated in [15, Theorem 9.30] even under the weaker assumption \(A\in C(\Omega;\mathbb{S})\cap L^{\infty}(\Omega;\mathbb{S})\) and the interior estimate (2.1) is given in [15, Theorem 9.11]. (Notice that the term \(\|u\|_{L^{p}(\Omega)}\) therein can be replaced by \(\|u\|_{L^{\infty}(\partial\Omega)}\) thanks to the ABP maximum principle from Theorem 2.1.)
An immediate consequence of Theorem 2.1-2.3 is that all strong solutions to (1.1) form a Banach space. Recall \(V\) from (1.3) and \(\|\bullet\|_{V}\) from (1.2).
**Proposition 2.4** (\(V\) is Banach space).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1. Then \(V\) is a Banach space endowed with the norm \(\|\bullet\|_{V}\)._
Proof.: We only prove completeness of \(V\). Given any Cauchy sequence \((v_{j})_{j\in\mathbb{N}_{0}}\) in \(V\), the definition of \(\|\bullet\|_{V}\) implies that \((v_{j}|_{\partial\Omega})_{j\in\mathbb{N}}\) resp. \((Lv_{j})_{j\in\mathbb{N}}\) are Cauchy sequences in the Banach space \(C(\partial\Omega)\) resp. \(L^{n}(\Omega)\). Therefore, there exist \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\) with \(\lim_{j\to\infty}\|g-v_{j}\|_{L^{\infty}(\partial\Omega)}=0\) and \(\lim_{j\to\infty}\|f-Lv_{j}\|_{L^{n}(\Omega)}=0\). Theorem 2.3 proves that there exists a unique strong solution \(v\in C(\overline{\Omega})\cap W^{2,n}_{\rm loc}(\Omega)\) to \(Lv=f\) in \(\Omega\) and \(v=g\) on \(\partial\Omega\). In particular, \(v\in V\) is the limit of the Cauchy sequence \((v_{j})_{j\in\mathbb{N}}\) with respect to the norm \(\|\bullet\|_{V}\).
The next result states Hölder continuity of strong solutions to (1.1).
**Theorem 2.5** (global Hölder regularity).: _Given \(A\in L^{\infty}(\Omega;\mathbb{S})\) with \(\lambda\mathrm{I}_{n}\leq A\leq\Lambda\mathrm{I}_{n}\), \(b\in L^{\infty}(\Omega;\mathbb{R}^{n})\), \(0\leq c\in L^{\infty}(\Omega)\), \(f\in L^{n}(\Omega)\), and \(g\in C^{0,\beta}(\partial\Omega)\) for some \(\beta\in(0,1)\), then any strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\rm loc}(\Omega)\) to (1.1) is Hölder continuous \(u\in C^{0,\alpha}(\overline{\Omega})\) with a positive parameter \(\alpha\in(0,1)\) that solely depends on \(n\), \(\beta\), \(\lambda\), \(\Lambda\), \(\|b\|_{L^{\infty}(\Omega)}\), and the cone condition of \(\Omega\). In other words,_
\[|u(x)-u(y)|\leq C_{3}|x-y|^{\alpha}\]
_for any \(x,y\in\overline{\Omega}\). Here, the constant \(C_{3}\) solely depends on \(n\), \(\lambda\), \(\Lambda\), \(\|b\|_{L^{\infty}(\Omega)}\), \(\|c\|_{L^{\infty}(\Omega)}\), \(\|f\|_{L^{n}(\Omega)}\), \(\|g\|_{C^{0,\beta}(\partial\Omega)}\), and the cone condition of \(\Omega\)._
Proof.: A proof of this result can be found in [18, Theorem 6.2] or, in a slightly different formulation, in [15, Chapter 9].
In finite element schemes, the coefficients of \(L\) are approximated whenever numerical integration is used. Well-known results from [20, 23] show that uniqueness may fail whenever we approximate a general discontinuous coefficient \(A\). This issue does not arise if strong solutions exist.
**Lemma 2.6** (approximation of differential operator).: _Let \(A\in L^{\infty}(\Omega;\mathbb{S})\) with \(\lambda\mathrm{I}_{n}\leq A\leq\Lambda\mathrm{I}_{n}\), \(b\in L^{\infty}(\Omega;\mathbb{R}^{n})\), \(0\leq c\in L^{\infty}(\Omega)\), \(f\in L^{n}(\Omega)\), \(g\in C^{0,\beta}(\partial\Omega)\) and \((A_{j})_{j}\subset C(\Omega;\mathbb{S})\cap L^{\infty}(\Omega;\mathbb{S})\), \((b_{j})_{j}\subset L^{\infty}(\Omega;\mathbb{R}^{n})\), \(0\leq(c_{j})_{j}\subset L^{\infty}(\Omega)\), \((f_{j})_{j}\subset L^{n}(\Omega)\), \((g_{j})_{j}\subset C^{0,\beta}(\partial\Omega)\) for some \(\beta\in(0,1)\) be given such that_
1. \(\|A-A_{j}\|_{L^{\infty}(\Omega)}\to 0\)_,_ \(\|b-b_{j}\|_{L^{\infty}(\Omega)}\to 0\)_,_ \(\|c-c_{j}\|_{L^{\infty}(\Omega)}\to 0\)_,_ \(\|f-f_{j}\|_{L^{n}(\Omega)}\to 0\)_, and_ \(\|g-g_{j}\|_{C^{0,\beta}(\partial\Omega)}\to 0\) _as_ \(j\to\infty\)_;_
2. \(\lambda\mathrm{I}_{n}\leq\{A_{j},A\}\leq\Lambda\mathrm{I}_{n}\) _a.e. in_ \(\Omega\) _for all_ \(j\in\mathbb{N}\) _for some constants_ \(0<\lambda\leq\Lambda\)_._
_Suppose that there exists a strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to (1.1), then the sequence of strong solutions \(u_{j}\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to_
\[L_{j}u_{j}\coloneqq-A_{j}:\mathrm{D}^{2}u_{j}+b_{j}\cdot\nabla u_{j}+c_{j}u_{j}=f_{j}\text{ in }\Omega\quad\text{and}\quad u_{j}=g_{j}\text{ on }\partial\Omega\]
_converges uniformly to \(u\), i.e., \(\lim_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}=0\)._
We note that the existence of \(u_{j}\) in Lemma 2.6 follows from Theorem 2.3 because the leading coefficient \(A_{j}\) is continuous. For strong solutions \(u\in W^{2,n}(\Omega)\), the assertion of Lemma 2.6 can be found in [23, Corollary 2.2]. A proof under the assumption \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) is given in [5] even for fully nonlinear partial differential operators. For the convenience of the reader, we provide an elementary proof for the linear case below.
Proof.: Let \(v_{j}\coloneqq u-u_{j}\) be the strong solution to
\[L_{j}v_{j}=L_{j}(u-u_{j})\text{ in }\Omega\quad\text{and}\quad v_{j}=g-g_{j} \text{ on }\partial\Omega.\]
From Theorem 2.5 and the assumptions (a)-(b), we deduce that \(v_{j}\) is Hölder continuous with \(\|v_{j}\|_{C^{0,\alpha}(\overline{\Omega})}\leq C_{4}\) for some exponent \(\alpha\in(0,1)\) and constant \(C_{4}\) independent of the index \(j\). Given \(\varepsilon>0\), define the open subset \(\Omega_{\varepsilon}\coloneqq\{x\in\Omega:\mathrm{dist}(x,\partial\Omega)>( \varepsilon/C_{4})^{1/\alpha}\}\Subset\Omega\). For any \(x\in\Omega\) and \(z\in\partial\Omega\), the Hölder regularity of \(v_{j}\) proves \(|v_{j}(x)|\leq|v_{j}(x)-v_{j}(z)|+|v_{j}(z)|\leq C_{4}|x-z|^{\alpha}+|g(z)-g_{j}(z)|\). This and the definition of \(\Omega_{\varepsilon}\) imply
\[\|v_{j}\|_{L^{\infty}(\Omega\setminus\Omega_{\varepsilon})}=\|u-u_{j}\|_{L^{\infty}( \Omega\setminus\Omega_{\varepsilon})}\leq\|g-g_{j}\|_{L^{\infty}(\partial \Omega)}+\varepsilon. \tag{2.2}\]
The ABP maximum principle from Theorem 2.1 provides
\[\|v_{j}\|_{L^{\infty}(\Omega_{\varepsilon})}=\|u-u_{j}\|_{L^{\infty}(\Omega_ {\varepsilon})}\leq\|v_{j}\|_{L^{\infty}(\partial\Omega_{\varepsilon})}+C_{1} \|L_{j}(u-u_{j})\|_{L^{n}(\Omega_{\varepsilon})}. \tag{2.3}\]
A triangle, the Hölder, and a Cauchy inequality lead to
\[\|L_{j}(u-u_{j})\|_{L^{n}(\Omega_{\varepsilon})}\leq\|f-f_{j}\|_ {L^{n}(\Omega)}+\|(L-L_{j})u\|_{L^{n}(\Omega_{\varepsilon})}\leq\|f-f_{j}\|_{L^ {n}(\Omega)}\] \[\quad+\big{(}\|A-A_{j}\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|b-b_{j }\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|c-c_{j}\|_{L^{\infty}(\Omega)}^{n/(n-1)} \big{)}^{(n-1)/n}\|u\|_{W^{2,n}(\Omega_{\varepsilon})}.\]
The combination of this with (2.2)-(2.3) leads to
\[\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\varepsilon+\|f-f_{j}\|_{L^ {n}(\Omega)}+\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}\] \[\quad+\big{(}\|A-A_{j}\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|b-b_{j }\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|c-c_{j}\|_{L^{\infty}(\Omega)}^{n/(n-1)} \big{)}^{(n-1)/n}\|u\|_{W^{2,n}(\Omega_{\varepsilon})}.\]
Taking the limit of this as \(j\to\infty\) concludes \(\limsup_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\varepsilon\) for arbitrary \(\varepsilon>0\), whence \(\lim_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}=0\).
We note that the assumption \(g\in C^{0,\beta}(\partial\Omega)\) in Lemma 2.6 can be replaced by \(g\in C(\partial\Omega)\) if \(g_{j}=g\) for all \(j\), i.e., if the Dirichlet data is not approximated. The following density result is the foundation for the convergence analysis of this paper. Recall the vector space \(V\) from (1.3).
**Lemma 2.7** (density).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1. For any \(v\in V\), there exists a sequence \((v_{j})_{j}\) of functions \(v_{j}\in W^{2,n}(\Omega)\) such that \(Lv_{j}=Lv\) and \(\lim_{j\to\infty}\|v-v_{j}\|_{L^{\infty}(\partial\Omega)}=0\). In particular, \(W^{2,n}(\Omega)\) is dense in \(V\) (with respect to the norm \(\|\bullet\|_{V}\))._
Proof.: In the first step, the assertion is proven for any function \(v\in V\) with homogeneous boundary data \(v=0\) on \(\partial\Omega\). Since \(\Omega\) is Lipschitz, the set
\[\Omega(\delta)\coloneqq\{x\in\mathbb{R}^{n}:\operatorname{dist}(x,\Omega)<\delta\}\]
is a Lipschitz domain for sufficiently small \(0<\delta\leq\delta_{0}\). In fact, the boundary of \(\Omega(\delta)\) can be represented locally by the graph of some Lipschitz continuous function with the same Lipschitz constant in the same local coordinates as for \(\Omega\)[9, Theorem 4.1]. It is observed in [16, p. 11] that the cone condition of a Lipschitz domain solely depends on these parameters. Hence, the sequence \((\Omega_{j})_{j}\) of Lipschitz domains
\[\Omega_{j}\coloneqq\{x\in\mathbb{R}^{n}:\operatorname{dist}(x,\Omega)<\delta_ {0}/j\}\]
approximates \(\Omega\) with \(\lim_{j\to\infty}\operatorname{dist}(\Omega,\partial\Omega_{j})=0\) and \(\Omega_{j}\) satisfies an exterior cone condition with a fixed cone \(K\) independent of \(j\). Let \(A\in C(\mathbb{R}^{n};\mathbb{S})\) be a (not relabelled) continuous extension of the coefficient \(A\). In particular, \(A\) is uniformly continuous in the compact set \(\overline{\Omega}_{1}\). Therefore, there exists a \(\gamma>0\) such that \(|A(x)-A(z)|\leq\lambda/2\) whenever \(|x-z|\leq\gamma\) for all \(x,z\in\overline{\Omega}_{1}\). The min-max principle shows, for any \(x\in\Omega_{1}\) with \(\operatorname{dist}(x,\partial\Omega)\leq\gamma\), that
\[\min_{y\in S(\mathbb{R}^{n})}y\cdot A(x)y\geq\min_{y\in S(\mathbb{R}^{n})}y \cdot A(z)y-\max_{y\in S(\mathbb{R}^{n})}y\cdot(A(z)-A(x))y\geq\lambda/2, \tag{2.4}\]
where \(z\) denotes the best-approximation of \(x\) onto \(\overline{\Omega}\). This shows \(\lambda\mathrm{I}_{n}/2\leq A\) and, by a similar argument, \(A\leq(\Lambda+\lambda/2)\mathrm{I}_{n}\) in \(\{z\in\overline{\Omega}_{1}:\operatorname{dist}(z,\Omega)\leq\gamma\}\). Without loss of generality we can assume that \(\delta_{0}\leq\gamma\) so that \(\lambda\mathrm{I}_{n}/2\leq A\leq(\Lambda+\lambda/2)\mathrm{I}_{n}\) holds pointwise in \(\Omega_{1}\). For any \(j\in\mathbb{N}\), let \(v_{j}\in C(\overline{\Omega}_{j})\cap W^{2,n}_{\mathrm{loc}}(\Omega_{j})\) be the unique strong solution to
\[Lv_{j}=f\text{ in }\Omega_{j}\quad\text{and}\quad v_{j}=0\text{ on }\partial\Omega_{j},\]
where the functions \(b\), \(c\), and \(f\) are extended by zero outside \(\Omega\). By design, \(v_{j}|_{\Omega}\in W^{2,n}(\Omega)\) and we claim that \(\lim_{j\to\infty}\|v-v_{j}\|_{V}=0\). In fact,
\[\|v-v_{j}\|_{V}=\|v-v_{j}\|_{L^{\infty}(\partial\Omega)}=\|v_{j}\|_{L^{\infty} (\partial\Omega)}. \tag{2.5}\]
From Theorem 2.5, we deduce that \(v_{j}\in C^{0,\alpha}(\overline{\Omega}_{j})\) with \(\|v_{j}\|_{C^{0,\alpha}(\overline{\Omega}_{j})}\leq C_{5}\). The parameter \(\alpha\in(0,1)\) and the constant \(C_{5}\) are independent of \(j\) because the cone condition of \(\Omega_{j}\) is independent of \(j\). This Hölder continuity of \(v_{j}\) and \(v_{j}=0\) on \(\partial\Omega_{j}\) provide
\[|v_{j}(x)|\leq C_{5}\operatorname{dist}(x,\partial\Omega_{j})^{\alpha}\quad \text{for any }x\in\partial\Omega.\]
This and (2.5) result in \(\|v-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq C_{5}\operatorname{dist}(\Omega, \partial\Omega_{j})^{\alpha}\), which tends to \(0\) as \(j\to\infty\). We thus proved that any \(v\in V\) with \(v=0\) on \(\partial\Omega\) can be approximated by functions in \(W^{2,n}(\Omega)\). In the general case, let some \(g_{j}\in C^{\infty}(\overline{\Omega})\) with \(\|v-g_{j}\|_{L^{\infty}(\Omega)}\leq 1/(2j)\) for any \(j\in\mathbb{N}\) be given. Then the strong solution \(w\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to \(Lw=f-Lg_{j}\) in \(\Omega\) and \(w=0\) on \(\partial\Omega\) satisfies \(\|v-(w+g_{j})\|_{V}=\|v-(w+g_{j})\|_{L^{\infty}(\partial\Omega)}\leq 1/(2j)\). From the first step, there exists a \(w_{j}\in W^{2,n}(\Omega)\) such that \(Lw_{j}=Lw\) in \(\Omega\) and \(\|w-w_{j}\|_{L^{\infty}(\partial\Omega)}\leq 1/(2j)\). This and a triangle inequality conclude, for \(v_{j}\coloneqq w_{j}+g_{j}\in W^{2,n}(\Omega)\), that \(Lv_{j}=f\) in \(\Omega\) and \(\|v-v_{j}\|_{V}=\|v-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq\|v-(w+g_{j})\|_{L^ {\infty}(\partial\Omega)}+\|w-w_{j}\|_{L^{\infty}(\partial\Omega)}\leq 1/j\to 0\) as \(j\to\infty\).
The following counterexample shows that the density result in Lemma 2.7 may fail if we enforce Dirichlet boundary data pointwise onto the spaces therein.
**Proposition 2.8** (Laplace equation).: _Let \(u\in C(\overline{\Omega})\cap H^{2}_{\mathrm{loc}}(\Omega)\) denote the strong solution to the Laplace problem \(-\Delta u=1\) in the two-dimensional \(L\)-shaped domain \(\Omega\coloneqq(-1,1)^{2}\setminus([0,1]\times[-1,0])\) with homogeneous boundary data \(u=0\) on \(\partial\Omega\). Then \(u\) cannot be the uniform limit of any bounded sequence \((u_{j})_{j}\subset V\) of functions
\(u_{j}\in H^{2}(\Omega)\) with homogeneous boundary data \(u_{j}=0\) on \(\partial\Omega\). (Here, the boundedness of \((u_{j})\) is understood with respect to the norm \(\|\bullet\|_{V}\) from (1.2).)_
Proof.: Since \(f\) is smooth, \(u\in C(\overline{\Omega})\cap C^{\infty}(\Omega)\)[15, Theorem 6.17]. However, \(u\notin H^{2}(\Omega)\) due to the reentrant corner of the domain \(\Omega\). We recall the \(H^{2}\) a priori estimate \(\|u_{j}\|_{H^{2}(\Omega)}\lesssim\|u_{j}\|_{L^{2}(\Omega)}+\|\Delta u_{j}\|_{L ^{2}(\Omega)}\) for any \(j\in\mathbb{N}\) on polygons from [16, Theorem 4.3.1.4]. This and the ABP maximum principle provide
\[\|u_{j}\|_{H^{2}(\Omega)}\lesssim\|\Delta u_{j}\|_{L^{2}(\Omega)}.\]
In particular, \((u_{j})\) is a bounded sequence with respect to the \(H^{2}\) norm. The Banach-Alaoglu theorem proves that \(u_{j}\) converges, up to some not relabelled subsequence, weakly to a \(v\in H^{2}(\Omega)\). Since \(u\notin H^{2}(\Omega)\), \(v\neq u\). From the compact embedding \(H^{2}(\Omega)\Subset C^{0,\alpha}(\overline{\Omega})\) for any \(0<\alpha<1\)[1, Theorem 6.3 III], we deduce that \(u_{j}\) converges uniformly to \(v\) up to another subsequence. Hence, the solution \(u\) does not coincide with any accumulation point of \((u_{j})_{j}\) with respect to the maximum norm. We note that this also holds for accumulation points with respect to the norm \(\|\bullet\|_{W^{1,p}(\Omega)}\) for any \(1\leq p<\infty\) thanks to the compact embedding \(H^{2}(\Omega)\Subset W^{1,p}(\Omega)\)[1, Theorem].
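The obstruction behind Proposition 2.8 is quantitative: near the reentrant corner, \(u\) contains a multiple of the singular function \(s=\operatorname{Im}(z^{2/3})=r^{2/3}\sin(2\theta/3)\), which vanishes on both edges of the corner. Since \(s=\operatorname{Im}f\) for the holomorphic \(f(z)=z^{\alpha}\) with \(\alpha=2/3\), one computes \(|\mathrm{D}^{2}s|^{2}=2|f''|^{2}=2\alpha^{2}(\alpha-1)^{2}r^{2\alpha-4}\), so the squared \(H^{2}\) seminorm over \(\{r_{0}<r<1\}\) grows like \(r_{0}^{-2/3}\). A short numerical sketch of this divergence (not part of the original argument):

```python
import numpy as np

# Sketch: divergence of the H^2 seminorm of s = Im(z^(2/3)) near the reentrant
# corner. For s = Im(f) with f(z) = z^a holomorphic, |D^2 s|^2 = 2|f''|^2
# = 2 a^2 (a-1)^2 r^(2a-4); integrate over the sector {r0 < r < 1,
# 0 < theta < 3 pi / 2} and let r0 -> 0.
a = 2.0 / 3.0
for r0 in (1e-1, 1e-2, 1e-3, 1e-4):
    r = np.geomspace(r0, 1.0, 100_000)
    integrand = 2 * a**2 * (a - 1) ** 2 * r ** (2 * a - 4) * r  # polar measure
    val = 1.5 * np.pi * np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))
    print(f"r0 = {r0:.0e}:  seminorm^2 over annulus ~ {val:8.2f}")  # ~ r0^(-2/3)
```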
## 3. Finite element approximation
Before the density result from Lemma 2.7 is applied to the design of FEM, we fix some notation on the discrete level. Throughout the remaining parts of this paper, let \(\Omega\) be a bounded polyhedral Lipschitz domain.
### Discrete spaces
Let a quasi-uniform sequence \((\mathcal{T}_{j})_{j}\) of regular triangulations of \(\Omega\) into closed simplices or rectangles with the maximal mesh-size \(h_{j}\coloneqq\max_{T\in\mathcal{T}_{j}}h_{T}\), where \(h_{T}\coloneqq\operatorname{diam}(T)\) is the diameter of \(T\in\mathcal{T}_{j}\), be given such that \(\lim_{j\to\infty}h_{j}=0\). The set of all (resp. interior and boundary) sides of \(\mathcal{T}_{j}\) is denoted by \(\mathcal{F}_{j}\) (resp. \(\mathcal{F}_{j}^{i}\) and \(\mathcal{F}_{j}^{b}\)). For any interior side \(F\in\mathcal{F}_{j}^{i}\), there exist two cells \(T_{+},T_{-}\in\mathcal{T}_{j}\) with \(F=T_{+}\cap T_{-}\). The jump \([v]_{F}\) of any function \(v\in W^{1,1}(T_{\pm})\) is defined by \([v]_{F}\coloneqq v_{T_{+}}-v_{T_{-}}\). Given any \(T\in\mathcal{T}_{j}\) with sides \(\mathcal{F}_{j}(T)\), \(P_{k}(T)\) is the space of polynomials of degree at most \(k\in\mathbb{N}\). The piecewise version of this reads \(P_{k}(\mathcal{T}_{j})\coloneqq\{v_{h}\in L^{\infty}(\Omega):v_{h}|_{T}\in P_{ k}(T)\text{ for all }T\in\mathcal{T}_{j}\}\). Let \(W^{\ell,p}(\mathcal{T}_{j})\coloneqq\{v\in L^{p}(\Omega):v|_{T}\in W^{\ell,p}(T)\text { for all }T\in\mathcal{T}_{j}\}\), \(\ell\geq 1\), \(p\in[1,\infty]\), denote the space of piecewise \(W^{\ell,p}\) functions, endowed with the norm
\[\|v\|_{W^{\ell,p}(\mathcal{T}_{j})}\coloneqq\Big{(}\sum\nolimits_{T\in \mathcal{T}_{j}}\|v\|_{W^{\ell,p}(T)}^{p}\Big{)}^{1/p},\]
and \(\nabla_{\text{pw}}v\) (resp. \(\mathrm{D}_{\text{pw}}^{2}\)) denotes the piecewise gradient (resp. Hessian) of \(v\in W^{1,1}(\mathcal{T}_{j})\) (resp. \(v\in W^{2,1}(\mathcal{T}_{j})\)) without explicit reference to the triangulation \(\mathcal{T}_{j}\).
### Conforming FEM
In this section, let \(V(\mathcal{T}_{j})\subset W^{2,\infty}(\Omega)\) be a \(C^{1}\) conforming finite element space, e.g., the Argyris or Bogner-Fox-Schmit (BFS) finite element. We assume that any \(v\in W^{2,n}(\Omega)\) can be approximated by a sequence \((v_{j})_{j}\) of discrete functions \(v_{j}\in V(\mathcal{T}_{j})\) such that \(\lim_{j\to\infty}\|v-v_{j}\|_{W^{2,n}(\Omega)}=0\). The following result is an immediate consequence of Lemma 2.7.
**Corollary 3.1** (convergence of idealized FEM).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1. Given \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\), any sequence \((u_{j})_{j}\) of discrete minimizers \(u_{j}\) of the functional \(\Phi\) from (1.4) in \(V(\mathcal{T}_{j})\) converges uniformly to the strong solution \(u\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to (1.1) as \(j\to\infty\)._
Proof.: Since \(\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\|u-u_{j}\|_{V}=\Phi(u_{j})\) from the ABP maximum principle in Theorem 2.1, it suffices to show \(\lim_{j\to\infty}\Phi(u_{j})=0\) for the convergence of FEM. Given \(\varepsilon>0\), Lemma 2.7 proves that there exists a \(v\in W^{2,n}(\Omega)\) such that \(Lv=f\)
and \(\|u-v\|_{L^{\infty}(\partial\Omega)}\leq\varepsilon/2\). Let \(v_{j}\) be the best-approximation of \(v\) in \(V(\mathcal{T}_{j})\) with respect to the \(W^{2,n}\) norm. The triangle inequality provides
\[\Phi(u_{j})\leq\Phi(v_{j})\leq\|u-v\|_{V}+\|v-v_{j}\|_{V}\leq\varepsilon/2+\|v- v_{j}\|_{V}. \tag{3.1}\]
Due to the Sobolev embedding [1, Theorem 4.12 II], there exists a constant \(C_{6}\) depending on the domain \(\Omega\) such that \(\|w\|_{L^{\infty}(\Omega)}\leq C_{6}\|w\|_{W^{2,n}(\Omega)}\) for any \(w\in W^{2,n}(\Omega)\). This, the Hölder, and a Cauchy inequality lead to
\[\|v-v_{j}\|_{V} =\|v-v_{j}\|_{L^{\infty}(\Omega)}+C_{1}\|L(v-v_{j})\|_{L^{n}( \Omega)}\] \[\leq C_{6}\|v-v_{j}\|_{W^{2,n}(\Omega)}+\,C_{1}C_{7}\|v-v_{j}\|_{W ^{2,n}(\Omega)} \tag{3.2}\]
with the constant \(C_{7}\coloneqq\left(\|A\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|b\|_{L^{\infty}( \Omega)}^{n/(n-1)}+\|c\|_{L^{\infty}(\Omega)}^{n/(n-1)}\right)^{(n-1)/n}\). Since \(\lim_{j\to\infty}\|v-v_{j}\|_{W^{2,n}(\Omega)}=0\), the index \(j\) can be chosen sufficiently large so that \(\|v-v_{j}\|_{V}\leq\varepsilon/2\). This and (3.1) result in \(\Phi(u_{j})\leq\varepsilon\) for sufficiently large \(j\), which concludes the assertion.
Notice that \(u_{j}\) from Corollary 3.1 is a best-approximation of \(u\) in the discrete space \(V(\mathcal{T}_{j})\) with respect to the norm \(\|\bullet\|_{V}\) (although the uniqueness of \(u_{j}\) cannot be guaranteed). However, the computation of \(u_{j}\) involves a non-smooth nonlinear minimization problem. We avoid this by enforcing the boundary residual as side constraints. (Recall that, in general, the side constraints cannot be avoided by enforcing appropriate boundary data on the finite element functions as shown in Proposition 2.8.) For simplicity, suppose that there exists a set of points \(\mathcal{L}_{j,b}\subset\partial\Omega\) on the boundary such that the following norm equivalence holds
\[\|v_{j}\|_{L^{\infty}(F)}\leq C_{8}\max_{z\in\mathcal{L}_{j,b}\cap F}|v_{j}(z)| \tag{3.3}\]
for any discrete function \(v_{j}\in V(\mathcal{T}_{j})\) and \(F\in\mathcal{F}_{j}^{b}\) with a constant \(C_{8}\) independent of the index \(j\). (For example, if \(V(\mathcal{T}_{j})=P_{5}(\mathcal{T}_{j})\cap W^{2,\infty}(\Omega)\) is the Argyris finite element, then we can choose \(\mathcal{L}_{j,b}\) as the set of all Lagrange points associated with \(P_{5}(F)\) for some \(F\in\mathcal{F}_{j}^{b}\).) From (3.3), we deduce that
\[\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq C_{8}\max_{z\in\mathcal{L}_{ j,b}}|g_{j}(z)-v_{j}(z)| \tag{3.4}\]
Given a positive parameter \(\varepsilon>0\) and an approximation \(g_{j}\in V(\mathcal{T}_{j})\) of the boundary data \(g\), we define the set
\[\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\coloneqq\{v_{j}\in V( \mathcal{T}_{j}):-\varepsilon\leq g_{j}(z)-v_{j}(z)\leq\varepsilon\text{ for all }z\in\mathcal{L}_{j,b}\}\]
of admissible discrete functions. The proposed finite element scheme minimizes the functional
\[\Psi(v_{j})\coloneqq\|f-Lv_{j}\|_{L^{n}(\Omega)}\quad\text{among }v_{j}\in \mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j}). \tag{3.5}\]
By definition, \(\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\subset V(\mathcal{T}_{j})\) is a closed, nonempty, convex subset of \(V(\mathcal{T}_{j})\). Therefore, the minimum of \(\Psi\) in \(\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\) is attained.
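In practice, after fixing a basis of \(V(\mathcal{T}_{j})\), the minimization of (3.5) becomes a least-squares problem with linear inequality constraints: with \(M\) the (quadrature-weighted) matrix of \(L\) applied to the basis functions, \(b\) the corresponding samples of \(f\), \(C\) the evaluation of the basis at the boundary Lagrange points, and \(g_{b}\) the data there, one minimizes \(\|Mx-b\|_{2}\) subject to \(\|Cx-g_{b}\|_{\infty}\leq\varepsilon\). A minimal sketch with random placeholder matrices (not an assembled FEM system; any QP solver can replace the generic SLSQP call for \(n=2\)):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of (3.5) as a linearly constrained least-squares problem; M, b, C, g_b
# are random placeholders for the quadrature matrix of L, the load vector, the
# boundary evaluations at the Lagrange points, and the boundary data there.
rng = np.random.default_rng(1)
ndof, nquad, nbdry, eps = 12, 60, 8, 1e-2
M, b = rng.standard_normal((nquad, ndof)), rng.standard_normal(nquad)
C, g_b = rng.standard_normal((nbdry, ndof)), rng.standard_normal(nbdry)

res = minimize(
    fun=lambda x: 0.5 * np.sum((M @ x - b) ** 2),
    x0=np.zeros(ndof),
    jac=lambda x: M.T @ (M @ x - b),
    constraints=[{"type": "ineq", "fun": lambda x: eps - (C @ x - g_b)},
                 {"type": "ineq", "fun": lambda x: eps + (C @ x - g_b)}],
    method="SLSQP",
)
u_h = res.x  # discrete minimizer; res.fun is half the squared residual
print(res.success, np.max(np.abs(C @ u_h - g_b)) <= eps + 1e-6)
```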
**Theorem 3.2** (convergence of conforming FEM).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1. Given \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\), let \((g_{j})_{j}\) with \(g_{j}\in V(\mathcal{T}_{j})\) approximate \(g\) on the boundary with \(\lim_{j\to\infty}\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}=0\). For any \(\varepsilon>0\), the minimum of the functional \(\Psi\) from (3.5) in \(\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\) vanishes in the limit as \(j\to\infty\) and any sequence \((u_{j})_{j}\) of discrete minimizers satisfies_
\[\limsup_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq C_{8}\varepsilon. \tag{3.6}\]
Proof.: Fix \(\varepsilon>0\). Lemma 2.7 provides a \(v\in W^{2,n}(\Omega)\) such that \(Lv=f\) and \(\|u-v\|_{L^{\infty}(\partial\Omega)}<\varepsilon/2\). Let \(v_{j}\) be the best-approximation of \(v\) onto the discrete space \(V(\mathcal{T}_{j})\) with respect to the \(W^{2,n}\) norm. Since \(\lim_{j\to\infty}\|v-v_{j}\|_{W^{2,n}(\Omega)}=0\) and
\(\Psi(v_{j})=\|L(v-v_{j})\|_{L^{\infty}(\Omega)}\lesssim\|v-v_{j}\|_{W^{2,n}(\Omega)}\) from (3.2), \(\Psi(v_{j})\to 0\) as \(j\to\infty\). The triangle inequality and the Sobolev embedding [1, Theorem 4.12 II] provide
\[\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)} \leq\|g_{j}-g\|_{L^{\infty}(\partial\Omega)}+\|g-v\|_{L^{\infty} (\partial\Omega)}+\|v-v_{j}\|_{L^{\infty}(\partial\Omega)}\] \[\leq\|g_{j}-g\|_{L^{\infty}(\partial\Omega)}+\varepsilon/2+C_{6} \|v-v_{j}\|_{W^{2,n}(\Omega)}.\]
The right-hand side of this converges to \(\varepsilon/2\) in the limit as \(j\to\infty\) and so, \(\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq\varepsilon\) for sufficient large indices \(j\). Hence, \(\max_{x\in\mathcal{L}_{j,b}}|g_{j}(x)-v_{j}(x)|\leq\|g_{j}-v_{j}\|_{L^{\infty}( \partial\Omega)}\leq\varepsilon\) and, therefore, \(v_{j}\in\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\) for sufficient large \(j\). This concludes the assertion \(\lim_{j\to\infty}\Psi(u_{j})\leq\lim_{j\to\infty}\Psi(v_{j})=0\). The ABP maximum principle from Theorem 2.1 and the triangle inequality prove
\[\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\|u-u_{j}\|_{V}\leq\|g-g_{j}\|_{L^{\infty} (\partial\Omega)}+\|g_{j}-u_{j}\|_{L^{\infty}(\partial\Omega)}+C_{1}\Psi(u_{j }).\]
This, \(\lim_{j\to\infty}(\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}+\Psi(u_{j}))=0\), and (3.4) conclude (3.6).
Although \(\Psi(u_{j})\) vanishes in the limit as \(j\to\infty\), the \(L^{\infty}\) error of \(u-u_{j}\) may not fall below a certain threshold depending on the choice of the parameter \(\varepsilon\). Under an additional smoothness assumption on the exact solution \(u\), we can obtain the following a priori estimate for \(\Psi(u_{j})\).
**Corollary 3.3** (a priori for conforming FEM).: _In the setting of Theorem 3.2, suppose that \(u\in W^{2,n}(\Omega)\). Then there exists a \(j_{0}\in\mathbb{N}\) depending on \(\varepsilon\) and \((g_{j})_{j}\) such that, for any \(j\geq j_{0}\),_
\[\Psi(u_{j})\leq C_{7}\min_{v_{j}\in V(\mathcal{T}_{j})}\|u-v_{j}\|_{W^{2,n}( \Omega)}. \tag{3.7}\]
Proof.: Recall from the proof of Theorem 3.2 that, for sufficiently large \(j\), the best approximation \(v_{j}\) of \(u\) in \(V(\mathcal{T}_{j})\) with respect to the \(W^{2,n}\) norm is an admissible discrete function, i.e., \(v_{j}\in\mathcal{A}^{\varepsilon}(g_{j},\mathcal{T}_{j})\). The bound (3.7) follows from \(\Psi(u_{j})\leq\Psi(v_{j})\) and (3.2).
### Nonconforming FEM
This subsection proposes a nonconforming FEM on simplicial meshes in two or three space dimensions \(n=2,3\). Given \(k\geq 2\), let \(V_{\rm nc}(\mathcal{T}_{j})\coloneqq P_{k}(\mathcal{T}_{j})\) denote the discrete ansatz space. As outlined in the proof of Theorem 3.2, any boundary residual arising from the ABP maximum principle in Theorem 2.1 will be enforced as side constraints. Let \(\mathcal{L}_{j}^{k}\) denote the set of all Lagrange points associated with the splines \(P_{k}(\mathcal{T}_{j})\cap W^{1,\infty}(\Omega)\)[11, Proposition 7.12] and \(\mathcal{L}_{j,b}^{k}\coloneqq\mathcal{L}_{j}^{k}\cap\partial\Omega\). For any \(\varepsilon>0\) and \(g_{j}\in V_{\rm nc}(\mathcal{T}_{j})\), we define the set
\[\mathcal{A}_{\rm nc}^{\varepsilon}(g_{j},\mathcal{T}_{j})\coloneqq \{v_{j}\in V_{\rm nc}(\mathcal{T}_{j}):-\varepsilon\leq(g_{j}|_{T}-v_{j}|_{T })(z)\leq\varepsilon\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{ for all }T\in\mathcal{T}_{j}\text{ and }z\in\mathcal{L}_{j,b}^{k}\cap T\} \tag{3.8}\]
of admissible discrete functions. The equivalence of norms in finite dimensional spaces leads to a piecewise version of (3.4),
\[\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq C_{9}\max_{T\in\mathcal{T}_{j }}\max_{z\in\mathcal{L}_{j,b}^{k}\cap T}|(g_{j}|_{T}-v_{j}|_{T})(z)|\quad\text{ for any }v_{j}\in V_{\rm nc}(\mathcal{T}_{j}) \tag{3.9}\]
with a positive constant \(C_{9}\) that solely depends on the dimension \(n\) and the polynomial degree \(k\). Given a fixed positive parameter \(\sigma>0\), the proposed nonconforming FEM minimizes the functional
\[\Psi_{\rm nc}(v_{j})\coloneqq\left(\|f-L_{\rm pw}v_{j}\|_{L^{n}(\Omega)}^{n}+ \sigma\mathrm{s}_{j}(v_{j})\right)^{1/n}\quad\text{among }v_{j}\in\mathcal{A}_{\rm nc}^{ \varepsilon}(g_{j},\mathcal{T}_{j}), \tag{3.10}\]
where \(L_{\rm pw}v_{j}\coloneqq-A:\mathrm{D}_{\rm pw}^{2}v_{j}+b\cdot\nabla_{\rm pw}v_{j }+cv_{j}\) is the piecewise application of the differential operator \(L\) to \(v_{j}\) and \(\mathrm{s}_{j}(v_{j})\coloneqq\sum_{T\in\mathcal{T}_{j}}\mathrm{s}_{j}(v_{j};T)\) with
\[\mathrm{s}_{j}(v_{j};T)\coloneqq\sum_{F\in\mathcal{F}_{j}^{i}\cap\mathcal{F}_{j}( T)}\left(h_{F}^{1-2n}\|[v_{j}]_{F}\|_{L^{n}(F)}^{n}+h_{F}^{1-n}\|[\nabla_{\rm pw }v_{j}]_{F}\|_{L^{n}(F)}^{n}\right) \tag{3.11}\]
for all \(v_{j}\in V_{\mathrm{nc}}(\mathcal{T}_{j})\) denotes the stabilization. Since discontinuous functions do not satisfy the ABP maximum principle from Theorem 2.1, we require a connection between the discrete space \(V_{\mathrm{nc}}(\mathcal{T}_{j})\) and \(W^{2,n}(\Omega)\). This is provided by a local averaging operator \(\mathcal{J}_{j}:V_{\mathrm{nc}}(\mathcal{T}_{j})\to P_{m}(\widehat{ \mathcal{T}}_{j})\cap W^{2,\infty}(\Omega)\) that maps \(v_{j}\in V_{\mathrm{nc}}(\mathcal{T}_{j})\) onto a \(C^{1}\) conforming piecewise polynomial function \(\mathcal{J}_{j}v_{j}\) of degree \(m\geq k\) in a subtriangulation \(\widehat{\mathcal{T}}_{j}\) of \(\mathcal{T}_{j}\). These spaces are known as the Hsieh-Clough-Tocher (HCT) macro element [8, 28] and are (currently) available for arbitrary polynomial degree \(m\geq 3\) in 2d [10], but - to the best knowledge of the author - only for \(m=3\) in 3d. To establish the key estimate (3.12) below, it is vital that \(k\leq m\). This leads, at least in theory, to the restriction \(k\in\{2,3\}\) in 3d. While it is possible to replace the HCT macro element by other \(C^{1}\) conforming finite elements, e.g., from [29], in the construction of \(\mathcal{J}_{j}\) to include higher polynomial degrees, the local averaging may involve higher derivatives. This leads to additional jumps of higher derivatives in the stabilization \(\mathrm{s}_{j}\) from (3.11). We mention that convergence of FEM in the sense of Theorem 3.5 can still be guaranteed in this case.
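To make the jump terms in (3.11) concrete, the following sketch evaluates the contribution of a single interior edge \(F=\{0\}\times(0,1)\) (with \(h_{F}=1\) and \(n=2\)) for two placeholder \(P_{2}\) pieces; the polynomials are arbitrary and only serve to illustrate the trace and gradient jumps.

```python
import sympy as sp

# Sketch: contribution of one interior edge F = {0} x (0,1), shared by two
# triangles, to the stabilization (3.11) with n = 2 and h_F = |F| = 1; the two
# P2 pieces v_plus (on x > 0) and v_minus (on x < 0) are placeholders.
x, y = sp.symbols("x y")
v_plus = 1 + x + y**2 + x * y        # placeholder P2 polynomial on T_+
v_minus = 2 * x + y**2 - x * y       # placeholder P2 polynomial on T_-
h_F, n = sp.Integer(1), 2

diff_v = v_plus - v_minus
jump_v = diff_v.subs(x, 0)                                    # [v_j]_F
jump_grad = [sp.diff(diff_v, s).subs(x, 0) for s in (x, y)]   # [grad v_j]_F
s_F = (h_F ** (1 - 2 * n) * sp.integrate(jump_v**n, (y, 0, 1))
       + h_F ** (1 - n) * sp.integrate(sum(c**2 for c in jump_grad), (y, 0, 1)))
print(s_F)  # exact edge contribution: 4/3 for these placeholder pieces
```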
**Lemma 3.4** (enrichment operator).: _Suppose that \(k\geq 2\) if \(n=2\) and \(k\in\{2,3\}\) if \(n=3\). There exists a linear operator \(\mathcal{J}_{j}:V_{\mathrm{nc}}(\mathcal{T}_{j})\to P_{m}(\widehat{ \mathcal{T}}_{j})\cap W^{2,\infty}(\Omega)\) for some \(m\geq k\) such that, for all \(v_{j}\in V_{\mathrm{nc}}(\mathcal{T}_{j})\), \(T\in\mathcal{T}_{j}\), and \(p\in(1,\infty)\),_
\[h_{T}^{-2p}\|v_{j} -\mathcal{J}_{j}v_{j}\|_{L^{p}(T)}^{p}+h_{T}^{-p}\|\nabla(v_{j}- \mathcal{J}_{j}v_{j})\|_{L^{p}(T)}^{p}+\|\mathrm{D}^{2}(v_{j}-\mathcal{J}_{j}v _{j})\|_{L^{p}(T)}^{p}\] \[\leq C_{10}\sum_{F\in\mathcal{F}_{j}^{i};F\cap \partial T\neq\emptyset}\left(h_{F}^{1-2p}\|[v_{j}]_{F}\|_{L^{p}(F)}^{p}+h_{F} ^{1-p}\|[\nabla_{\mathrm{pw}}v_{j}]_{F}\|_{L^{p}(F)}^{p}\right) \tag{3.12}\]
_with a constant \(C_{10}\) that solely depends on \(n\), \(p\), \(k\), \(m\), and the shape regularity of \(\mathcal{T}_{j}\). Here, \(\widehat{\mathcal{T}}_{j}\) denotes a subtriangulation of \(\mathcal{T}_{j}\) such that \(h_{\widehat{\mathcal{T}}_{j}}\approx h_{\mathcal{T}_{j}}\) a.e. in \(\Omega\) and the shape regularity of \(\widehat{\mathcal{T}}_{j}\) depends exclusively on the shape regularity of \(\mathcal{T}_{j}\)._
Proof.: Local averaging techniques with the estimate (3.12) are well understood in the literature [4, 14, 6]; we refer to the aforementioned articles for a precise definition of \(\mathcal{J}_{j}\) with \(m=3\) and omit further details.
We state the main result of this subsection.
**Theorem 3.5** (convergence of dG FEM).: _Suppose that the coefficients of \(L\) satisfy Assumption 1.1, \(k\geq 2\) if \(n=2\), and \(k\in\{2,3\}\) if \(n=3\). Given \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\), let \((g_{j})_{j}\) with \(g_{j}\in V_{\mathrm{nc}}(\mathcal{T}_{j})\) approximate \(g\) on the boundary with \(\lim_{j\to\infty}\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}=0\). For any \(\varepsilon>0\) and \(\sigma>0\), the minimum of the functional \(\Psi_{\mathrm{nc}}\) from (3.10) in \(\mathcal{A}_{\mathrm{nc}}^{\varepsilon}(g_{j},\mathcal{T}_{j})\) vanishes in the limit as \(j\to\infty\) and any sequence \((u_{j})_{j}\) of discrete minimizers satisfies_
\[\limsup_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq C_{9}\varepsilon. \tag{3.13}\]
Proof.: Fix \(\varepsilon>0\) and \(\sigma>0\). As in the proof of Theorem 3.2, we select a \(v\in W^{2,n}(\Omega)\) such that \(Lv=f\) and \(\|u-v\|_{L^{\infty}(\partial\Omega)}\leq\varepsilon/2\) from Lemma 2.7. Let \(v_{j}\coloneqq\Pi_{\mathcal{T}_{j}}^{k}v\in V_{\mathrm{nc}}(\mathcal{T}_{j})\) denote the \(L^{2}\) projection of \(v\) onto \(V_{\mathrm{nc}}(\mathcal{T}_{j})\). The remainder of the proof is divided into four steps.
_Step 1:_ Prove \(v_{j}\in\mathcal{A}_{\mathrm{nc}}^{\varepsilon}(g_{j},\mathcal{T}_{j})\) for sufficiently large \(j\). The proof of this departs from the split
\[\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq\|g_{j}-g\|_{L^{\infty}( \partial\Omega)}+\|g-v\|_{L^{\infty}(\partial\Omega)}+\|v-v_{j}\|_{L^{\infty}( \partial\Omega)}. \tag{3.14}\]
We claim that \(\lim_{j\to\infty}\|v-v_{j}\|_{L^{\infty}(\Omega)}=0\). This follows from standard density arguments, outlined below for the sake of completeness. Given any \(\delta>0\), choose \(w\in C^{\infty}(\overline{\Omega})\)
such that \(\|v-w\|_{L^{\infty}(\Omega)}\leq\delta\). The triangle inequality implies
\[\|v-v_{j}\|_{L^{\infty}(\Omega)}\leq\|v-w\|_{L^{\infty}(\Omega)}+\|(1-\Pi^{k}_{ \mathcal{T}_{j}})w\|_{L^{\infty}(\Omega)}+\|\Pi^{k}_{\mathcal{T}_{j}}(w-v)\|_{L ^{\infty}(\Omega)}.\]
This, the approximation property \(\|(1-\Pi^{k}_{\mathcal{T}_{j}})w\|_{L^{\infty}(\Omega)}\lesssim\|h_{\mathcal{T }_{j}}\nabla w\|_{L^{\infty}(\Omega)}\) and the \(L^{\infty}\) stability \(\|\Pi^{k}_{\mathcal{T}_{j}}(w-v)\|_{L^{\infty}(\Omega)}\lesssim\|v-w\|_{L^{ \infty}(\Omega)}\) of the \(L^{2}\) projection [11, Lemma 11.18] result in \(\limsup_{j\to\infty}\|v-v_{j}\|_{L^{\infty}(\Omega)}\leq C_{11}\delta\), where the constant \(C_{11}\) is independent of \(\delta\). Since \(\delta\) can be chosen arbitrarily small, this provides \(\lim_{j\to\infty}\|v-v_{j}\|_{L^{\infty}(\Omega)}=0\). In combination with \(\lim_{j\to\infty}\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}=0\) and \(\|g-v\|_{L^{\infty}(\partial\Omega)}=\|u-v\|_{V}\leq\varepsilon/2\), we deduce from (3.14) that \(\|g_{j}-v_{j}\|_{L^{\infty}(\partial\Omega)}\leq\varepsilon\) and so, \(v_{j}\in\mathcal{A}^{\varepsilon}_{\mathrm{nc}}(g_{j},\mathcal{T}_{j})\) for sufficiently large \(j\).
_Step 2:_ Prove \(\lim_{j\to\infty}\Psi_{\mathrm{nc}}(u_{j})=0\). The Hölder and a Cauchy inequality show
\[\|f-L_{\mathrm{pw}}v_{j}\|_{L^{n}(\Omega)}=\|L_{\mathrm{pw}}(v-v_{j})\|_{L^{n}(\Omega)} \tag{3.15}\] \[\leq\big{(}\|A\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|b\|_{L^{\infty}(\Omega)}^{n/(n-1)}+\|c\|_{L^{\infty}(\Omega)}^{n/(n-1)}\big{)}^{(n-1)/n}\|(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{W^{2,n}(\mathcal{T}_{j})}.\]
Since \(\lim_{j\to\infty}\|(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{W^{2,n}(\mathcal{T}_{j})}=0\), this implies
\[\lim_{j\to\infty}\|f-L_{\mathrm{pw}}v_{j}\|_{L^{n}(\Omega)}=0 \tag{3.16}\]
and it remains to prove that \(\lim_{j\to\infty}\mathrm{s}_{j}(v_{j})=0\) for the stabilization \(\mathrm{s}_{j}\) from (3.11) of \(v_{j}\). For any interior side \(F\in\mathcal{F}_{j}^{*}\) with the neighbouring cells \(T_{+},T_{-}\in\mathcal{T}_{j}\) and \(F=T_{+}\cap T_{-}\), \([v]_{F}=0\) and \([\nabla v]_{F}=0\) (in the sense of traces). A triangle and a trace inequality imply \(\|[v_{j}]_{F}\|_{L^{n}(F)}\leq h^{-1/n}\|v-v_{j}\|_{L^{n}(\omega_{F})}+h^{(n-1)/n}\|\nabla(v-v_{j})\|_{L^{n}(\omega_{F})}\) and \(\|[\nabla_{\mathrm{pw}}v_{j}]_{F}\|_{L^{n}(F)}\leq h^{-1/n}\|\nabla_{\mathrm{pw}}(v-v_{j})\|_{L^{n}(\omega_{F})}+h^{(n-1)/n}\|\mathrm{D}_{\mathrm{pw}}^{2}(v-v_{j})\|_{L^{n}(\omega_{F})}\) with \(\omega_{F}\coloneqq\mathrm{int}(T_{+}\cup T_{-})\). This and the approximation property of the \(L^{2}\) projection \(\Pi^{k}_{\mathcal{T}_{j}}\) [11, Lemma 11.18] verify
\[\mathrm{s}_{j}(v_{j})\lesssim\|h_{\mathcal{T}_{j}}^{-2}(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{L^{n}(\Omega)}^{n}+\|h_{\mathcal{T}_{j}}^{-1}\nabla_{\mathrm{pw}}(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{L^{n}(\Omega)}^{n}\] \[\qquad\qquad+\|\mathrm{D}_{\mathrm{pw}}^{2}(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{L^{n}(\Omega)}^{n}\lesssim\|\mathrm{D}_{\mathrm{pw}}^{2}(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{L^{n}(\Omega)}^{n}. \tag{3.17}\]
Since \(\lim_{j\to\infty}\|\mathrm{D}_{\mathrm{pw}}^{2}(1-\Pi^{k}_{\mathcal{T}_{j}})v\|_{L^{n}(\Omega)}=0\), we deduce from (3.16) and the definition of \(\Psi_{\mathrm{nc}}\) in (3.10) that \(\lim_{j\to\infty}\Psi_{\mathrm{nc}}(v_{j})=0\). Because \(v_{j}\in\mathcal{A}^{\varepsilon}_{\mathrm{nc}}(g_{j},\mathcal{T}_{j})\) for sufficiently large \(j\), this implies \(\lim_{j\to\infty}\Psi_{\mathrm{nc}}(u_{j})\leq\lim_{j\to\infty}\Psi_{\mathrm{nc}}(v_{j})=0\).
_Step 3:_ Prove \(\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\|g-u_{j}\|_{L^{\infty}(\partial\Omega)}+C_{12}\Psi_{ \mathrm{nc}}(u_{j})\) for some positive constant \(C_{12}\) independent of \(j\). Recall the local averaging operator \(\mathcal{J}_{j}\) from Lemma 3.4. The point of departure is the split
\[\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\|u-\mathcal{J}_{j}u_{j}\|_{L^{\infty}( \Omega)}+\|\mathcal{J}_{j}u_{j}-u_{j}\|_{L^{\infty}(\Omega)}. \tag{3.18}\]
The application of the ABP maximum principle from Theorem 2.1 to the difference \(u-\mathcal{J}_{j}u_{j}\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) and a triangle inequality lead to
\[\|u-\mathcal{J}_{j}u_{j}\|_{L^{\infty}(\Omega)}\leq\|g-\mathcal{J}_{j}u_{j}\|_{L^ {\infty}(\partial\Omega)}+C_{1}\|f-L\mathcal{J}_{j}u_{j}\|_{L^{n}(\Omega)}\leq \|g-u_{j}\|_{L^{\infty}(\partial\Omega)} \tag{3.19}\] \[+\|u_{j}-\mathcal{J}_{j}u_{j}\|_{L^{\infty}(\partial\Omega)}+C_{1} \|f-L_{\mathrm{pw}}u_{j}\|_{L^{n}(\Omega)}+C_{1}\|L_{\mathrm{pw}}(u_{j}- \mathcal{J}_{j}u_{j})\|_{L^{n}(\Omega)}.\]
The Hölder and a Cauchy inequality as in (3.2) provide \(\|L_{\mathrm{pw}}(u_{j}-\mathcal{J}_{j}u_{j})\|_{L^{n}(\Omega)}\lesssim\|u_{j}-\mathcal{J}_{j}u_{j}\|_{W^{2,n}(\mathcal{T}_{j})}\). This and Lemma 3.4 result in
\[\|f-L_{\mathrm{pw}}u_{j}\|_{L^{n}(\Omega)}+\|L_{\mathrm{pw}}(u_{j}-\mathcal{J}_{j }u_{j})\|_{L^{n}(\Omega)}\lesssim\Psi_{\mathrm{nc}}(u_{j}). \tag{3.20}\]
The function \(u_{j}-\mathcal{J}_{j}u_{j}\) is a piecewise polynomial in \(\widehat{\mathcal{T}}_{j}\). Since the shape regularity of \(\widehat{\mathcal{T}}_{j}\) only depends on the shape regularity of \(\mathcal{T}_{j}\) (cf. [6] for the three-dimensional case), a scaling argument and Lemma 3.4 provide
\[\|u_{j}-\mathcal{J}_{j}u_{j}\|_{L^{\infty}(\Omega)}\lesssim\|h_{\mathcal{T}_{j}}^{-1}(u_{j}-\mathcal{J}_{j}u_{j})\|_{L^{n}(\Omega)}\lesssim h\mathrm{s}_{j}(u_{j})^{1/n}.\]
The combination of this with (3.18)-(3.20) concludes Step 3.
_Step 4:_ Conclusion of the proof. From Step 3, we deduce that \(\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}+\|g_ {j}-u_{j}\|_{L^{\infty}(\partial\Omega)}+C_{12}\Psi_{\mathrm{nc}}(u_{j})\). Since \(\lim_{j\to\infty}\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}=0\) by assumption, \(\|g_{j}-u_{j}\|_{L^{\infty}(\partial\Omega)}\leq C_{9}\varepsilon\) from (3.9), and \(\lim_{j\to\infty}\Psi_{\mathrm{nc}}(u_{j})=0\) from Step 2, this implies (3.13).
As for the conforming FEM of Subsection 3.2, we obtain a priori error estimates for \(\Psi_{\mathrm{nc}}(u_{j})\) under additional regularity assumptions on the exact solution \(u\).
**Corollary 3.6** (a priori for nonconforming FEM).: _In the setting of Theorem 3.5, suppose that \(u\in W^{2,n}(\Omega)\). Then there exists a \(j_{0}\in\mathbb{N}\) depending on \(k\), \(\varepsilon\), \((g_{j})_{j}\), and the shape regularity of \((\mathcal{T}_{j})_{j}\) such that, for any \(j\geq j_{0}\),_
\[\Psi_{\mathrm{nc}}(u_{j})\lesssim\|(1-\Pi_{\mathcal{T}_{j}}^{k})u\|_{W^{2,n}( \Omega)}. \tag{3.21}\]
Proof.: The proof is essentially included in the proof of Theorem 3.5. For sufficiently large \(j\), the \(L^{2}\) approximation \(v_{j}\coloneqq\Pi_{\mathcal{T}_{j}}^{k}u\) of \(u\) in \(V_{\mathrm{nc}}(\mathcal{T}_{j})\) is an admissible discrete function \(v_{j}\in\mathcal{A}_{\mathrm{nc}}^{\varepsilon}(g_{j},\mathcal{T}_{j})\). The bound follows from \(\Psi_{\mathrm{nc}}(u_{j})\leq\Psi_{\mathrm{nc}}(v_{j})\), (3.15), and (3.17).
The following remark on another application of the density result from Lemma 2.7 concludes this section.
_Remark 3.7_ (least-squares).: Suppose that \(\|\bullet\|\) is a norm in the Banach space \(V\) from (1.3) such that \(\|v\|\lesssim\|v\|_{V}\) for any \(v\in V\), i.e., \(\|\bullet\|\) is a weaker norm than \(\|\bullet\|_{V}\) from (1.2). Recall the \(C^{1}\) conforming finite element space \(V(\mathcal{T}_{j})\) from Subsection 3.2. We deduce from the proof of Corollary 3.1 that any sequence \((u_{j})_{j}\) of best approximations \(u_{j}\) of \(u\) in \(V(\mathcal{T}_{j})\) with respect to the norm \(\|\bullet\|\) satisfies \(\lim_{j\to\infty}\|u-u_{j}\|=0\). The choice \(\|v\|\coloneqq\left(\|v\|_{L^{2}(\partial\Omega)}^{2}+\|Lv\|_{L^{2}(\Omega)}^{2}\right)^{1/2}\) is of particular interest because the best approximation \(u_{j}\) of \(u\) in \(V(\mathcal{T}_{j})\) with respect to \(\|\bullet\|\) is the minimizer of the quadratic functional
\[\widetilde{\Psi}^{2}(v_{j})\coloneqq\|g-v_{j}\|_{L^{2}(\partial\Omega)}^{2}+ \|f-Lv_{j}\|_{L^{2}(\Omega)}^{2}\text{ among }v_{j}\in V(\mathcal{T}_{j}). \tag{3.22}\]
Since \(\widetilde{\Psi}\) is strongly convex, the minimizer \(u_{j}\) of \(\widetilde{\Psi}\) in \(V(\mathcal{T}_{j})\) is unique and satisfies the discrete Euler-Lagrange equations
\[\int_{\Omega}Lu_{j}Lv_{j}\,\mathrm{d}x+\int_{\partial\Omega}u_{j}v_{j}\, \mathrm{d}s=\int_{\Omega}fLv_{j}\,\mathrm{d}x+\int_{\partial\Omega}gv_{j}\, \mathrm{d}s. \tag{3.23}\]
While the previously proposed FEM need to solve a quadratic program in 2d or a nonlinear convex minimization problem in 3d, this least-squares approach leads to a linear system of equations (3.23). However, convergence can only be established in the nonstandard norm \(\|\bullet\|\) and control over the maximum norm is forfeited. Nevertheless, we can compute \(\Phi(u_{j})\) with \(\Phi\) from (1.4) to check for uniform convergence a posteriori. Thanks to the enrichment operator from Lemma 3.4, we can extend the least-squares approach to nonconforming FEM as well. This leads to the minimization of the functional
\[\widetilde{\Psi}_{\mathrm{nc}}^{2}(v_{j})\coloneqq\|g-v_{j}\|_{L^{2}(\partial \Omega)}^{2}+\|f-L_{\mathrm{pw}}v_{j}\|_{L^{2}(\Omega)}^{2}+\widetilde{s}_{j}( v_{j})\text{ among }v_{j}\in V_{\mathrm{nc}}(\mathcal{T}_{j}),\]
where \(\widetilde{s}_{j}\) is the quadratic version of \(\mathrm{s}_{j}\) from (3.11). The minimizer \(u_{j}\) of \(\widetilde{\Psi}_{\mathrm{nc}}\) in \(V_{\mathrm{nc}}(\mathcal{T}_{j})\) is unique and convergence holds in the sense that \(\lim_{j\to\infty}\widetilde{\Psi}_{\mathrm{nc}}(u_{j})=0\) as well as \(\lim_{j\to\infty}\|u-\mathcal{J}_{j}u_{j}\|=0\).
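In practice, once the stiffness-type matrix of \(\int_{\Omega}L\varphi_{i}\,L\varphi_{j}\,\mathrm{d}x\), the boundary mass matrix of \(\int_{\partial\Omega}\varphi_{i}\varphi_{j}\,\mathrm{d}s\), and the two load vectors have been assembled for a basis \((\varphi_{i})_{i}\) of \(V(\mathcal{T}_{j})\), the discrete Euler-Lagrange equations (3.23) reduce to a single linear solve. The following minimal Python sketch illustrates this; the array names are hypothetical placeholders for assembled data, not part of any FEM library used in this paper.

```python
import numpy as np

def least_squares_solve(A, B, F, G):
    """Solve the Euler-Lagrange system of (3.23): (A + B) u = F + G,
    where A[i, j] = int_Omega L(phi_j) L(phi_i) dx,
          B[i, j] = int_bdry phi_j phi_i ds,
          F[i]    = int_Omega f L(phi_i) dx,
          G[i]    = int_bdry g phi_i ds."""
    return np.linalg.solve(A + B, F + G)
```

This makes the advantage over the constrained minimization of \(\Psi\) concrete: no quadratic programming solver is needed, at the price of controlling the error only in the weaker norm \(\|\bullet\|\).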
## 4. Numerical examples
This section presents results for three numerical benchmarks in two-dimensional non-convex domains with \(b=0\) and \(c=0\).
### Implementation
We implement the following FEM. The first method is the conforming BFS-FEM from Subsection 3.2 with the BFS finite element space \(V(\mathcal{T}_{h})\coloneqq Q_{3}(\mathcal{T}_{h})\cap W^{2,\infty}(\Omega)\)[7] as ansatz space; this is the space of all global \(C^{1}(\overline{\Omega})\) functions that are bicubic when restricted to any rectangle \(T\in\mathcal{T}_{h}\). The second method is the conforming least-squares (LS) FEM from Remark 3.7 with the same ansatz space. The third method is the nonconforming (NC) FEM from Theorem 3.5 with the default parameter \(\sigma=1\). The adaptive computations utilize the local contributions of the functionals \(\Psi\) from (3.5), \(\widetilde{\Psi}\) from (3.22) and \(\Psi_{\mathrm{nc}}\) from (3.10) as refinement indicators
\[\eta(T)\coloneqq\begin{cases}\|f-Lu_{h}\|_{L^{2}(T)}^{2}&\text{for the BFS-FEM,}\\ \|f-Lu_{h}\|_{L^{2}(T)}^{2}+\sum_{F\in\mathcal{F}_{h}(T)\cap\mathcal{F}_{h}( \partial\Omega)}\|g-u_{h}\|_{L^{2}(F)}^{2}&\text{for the LS-FEM,}\\ \|f-L_{\mathrm{pw}}u_{h}\|_{L^{2}(T)}^{2}+\mathrm{s}_{j}(u_{h};T)&\text{for the NC-FEM,}\end{cases}\]
where \(u_{h}\) is the discrete solution to the corresponding finite element scheme, and the Dörfler marking strategy, i.e., at each refinement step, a subset \(\mathcal{M}\subset\mathcal{T}\) with minimal cardinality is selected such that
\[\frac{1}{2}\sum\nolimits_{T\in\mathcal{T}}\eta(T)\leq\sum\nolimits_{T\in\mathcal{M}}\eta(T).\]
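For concreteness, the marking step admits a short greedy implementation: sorting the cells by indicator size and taking the largest until the bulk criterion is met yields a set of minimal cardinality. The following Python sketch is only an illustration (the list `eta` of per-cell indicators and the bulk parameter `theta`, here \(1/2\) as above, are its inputs).

```python
def doerfler_mark(eta, theta=0.5):
    """Return a minimal-cardinality index set M with
    sum(eta[T] for T in M) >= theta * sum(eta)."""
    order = sorted(range(len(eta)), key=lambda T: eta[T], reverse=True)
    target, acc, marked = theta * sum(eta), 0.0, []
    for T in order:
        if acc >= target:
            break
        marked.append(T)
        acc += eta[T]
    return marked

# toy usage: the indicator concentrates at a single (corner) cell
print(doerfler_mark([0.9, 0.05, 0.03, 0.02]))  # -> [0]
```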
The convergence history plots display the quantities of interest against the number of degrees of freedom ndof. (Notice that ndof \(\approx h_{\mathrm{max}}^{-2}\) for uniform meshes.) Solid lines in the convergence history plots indicate adaptive mesh-refinements, while dashed lines are associated with uniform mesh-refinements. We recall from Theorem 2.1 that \(\Phi(u_{h})=\|g-u_{h}\|_{L^{\infty}(\partial\Omega)}+C_{1}\Psi(u_{h})\geq\|u-u_{h}\|_{L^{\infty}(\Omega)}\) is a guaranteed upper bound (GUB) of the error \(\|u-u_{h}\|_{L^{\infty}(\Omega)}\) for conforming FEM and from Step 3 in the proof of Theorem 3.5 that \(\|g-u_{h}\|_{L^{\infty}(\partial\Omega)}+C_{12}\Psi_{\mathrm{nc}}(u_{h})\geq\|u-u_{h}\|_{L^{\infty}(\Omega)}\) is an a posteriori error estimate for nonconforming FEM. Both error estimates hold for arbitrary discrete \(u_{h}\), so they are applicable to inexact solves.
### First experiment
In this benchmark, we approximate the exact solution
\[u(r,\varphi)\coloneqq r^{2/3}\sin(2\varphi/3)\]
in polar coordinates to (1.1) in the L-shaped domain \(\Omega=(-1,1)^{2}\setminus([0,1]\times[-1,0])\) with the coefficient \(A(x)\coloneqq(1+|x+(1,1)|^{1/3})\mathrm{I}_{2}\) and vanishing right-hand side \(f\equiv 0\). The solution belongs to \(H^{5/3-\delta}(\Omega)\) for any \(\delta>0\).
Figure 1. Convergence history of the BFS-FEM for the first experiment with different parameters \(\varepsilon\).
Figure 1 displays the convergence history of the \(L^{\infty}\) error and its GUB \(\Phi(u_{h})\) for the BFS-FEM. Uniform mesh-refinements only lead to marginal improvement of these two errors for the parameters \(\varepsilon=10^{-3}\) and \(\varepsilon=10^{-4}\), while a convergence rate of at least \(1/3\) is observed for \(\varepsilon=10^{-2}\) (although we expect that the \(L^{\infty}\) error and \(\Phi(u_{h})\) stagnate at \(10^{-2}\)). For all three FEM, the adaptive algorithm refines towards the reentrant corner as displayed in Figure 2(a). This leads to significant improvements for the \(L^{\infty}\) error and \(\Phi(u_{h})\) in Figure 1. We observe a preasymptotic range depending on \(\varepsilon\), where both errors only improve insignificantly. This is observed in all benchmarks of this section. The \(L^{\infty}\) error then drops and stagnates at \(\varepsilon\), while \(\Psi(u_{h})\), a contribution of the GUB \(\Phi(u_{h})\), converges with the rate \(1\). A similar behaviour to the \(L^{\infty}\) error is observed for the \(L^{2}\) error and the \(H^{1}\) error as well. We also apply the LS-FEM proposed in Remark 3.7 to this benchmark. Figure 2(b) shows convergence of all displayed errors on uniform meshes. We recall that this is not guaranteed a priori, but convergence can be checked a posteriori with the GUB \(\Phi(u_{h})\). Adaptive computation improves the convergence rates of all displayed errors (with \(3/4\) for the \(L^{2}\) error, \(2/5\) for the \(L^{\infty}\) error, \(H^{1}\) error, and \(\Psi(u_{h})\)). However, the \(L^{\infty}\) error remains at a high level in comparison to the BFS-FEM.
Figure 3. Convergence history of the NC-FEM for the first experiment with \(\varepsilon=10^{-3}\).
Figure 2. Adaptive mesh of the BFS-FEM (\(\varepsilon=10^{-3}\)) with \(2565\) rectangles (left) and convergence history of the LS-FEM (right) for the first experiment.
The results for the NC-FEM with \(\varepsilon=10^{-3}\) are displayed in Figure 3. On uniform meshes, convergence of the \(L^{\infty}\) error and \(\Psi_{\mathrm{nc}}\) is not observed within the computation range, although it is expected as the mesh-size \(h\) tends to zero. On adaptive meshes, the \(L^{\infty}\) error behaves similarly to the conforming case with a preasymptotic range and stagnation at \(\varepsilon\). The preasymptotic range becomes larger for smaller \(\varepsilon\) (observed in undisplayed experiments) and is reduced by higher polynomial degree \(k\), which also leads to improved convergence rates for \(\Psi_{\mathrm{nc}}\).
### Second experiment
In this benchmark, we approximate the unknown solution to (1.1) in the L-shaped domain \(\Omega=(-1,1)^{2}\setminus([0,1]\times[-1,0])\) with the coefficient \(A\coloneqq(1+|x+(1,1)|^{1/3})\mathrm{I}_{2}\), right-hand side \(f\equiv 1\), and homogeneous Dirichlet data \(g\equiv 0\). Conforming methods can provide unconditional information on the \(L^{\infty}\) error by evaluation of the GUB \(\Phi(u_{h})\). Figure 4(a) displays the convergence rate \(1/5\) for \(\Psi(u_{h})\) and \(\Phi(u_{h})\) on uniform meshes. The adaptive algorithm refines towards the reentrant corner. This leads to improved convergence of \(\Phi(u_{h})\), which stagnates at \(\varepsilon=10^{-3}\), and the optimal convergence rate \(1\) for \(\Psi(u_{h})\). Although GUB are available for the NC-FEM as well by computation of the constant \(C_{12}\) from Step 3 of the proof of Theorem 3.5 or by evaluation of the averaging operator \(\mathcal{J}_{h}\) from Lemma 3.4, this may lead to significant overestimation or implementation efforts. The results for NC-FEM are displayed in Figure 4(b). On uniform meshes, we do not observe convergence of \(\Psi_{\mathrm{nc}}(u_{h})\) within the computation range. In view of Corollary 3.6, adaptive computation recovers the optimal rates \((k+1)/2\) for \(\Psi_{\mathrm{nc}}(u_{h})\).
### Third experiment
In this benchmark, we approximate the exact solution
\[u(r,\varphi)\coloneqq r^{1/2}\sin(\varphi/2)-r^{2}\sin^{2}(\varphi)\]
to (1.1) in the slit domain \(\Omega\coloneqq(-1,1)^{2}\setminus([0,1]\times\{0\})\) with the discontinuous coefficient
\[A(x)\coloneqq\begin{cases}\mathrm{I}_{2}&\text{if }x_{1}\geq x_{2},\\ (1+|x-(-1,1)|^{1/3})\mathrm{I}_{2}&\text{otherwise}\end{cases}\]
and right-hand side \(f(x)=1\) if \(x_{1}\geq x_{2}\) and \(f(x)=(1+|x-(-1,1)|^{1/3})\) otherwise. The function \(u\) belongs to \(H^{3/2-\delta}(\Omega)\) for any \(\delta>0\). The convergence analysis of this paper does not apply to this example because \(A\) is discontinuous and \(\Omega\) is not a Lipschitz domain. Nevertheless, the ABP maximum principle from Theorem 2.1 applies to this example as well, so \(\Phi(u_{h})\) is a guaranteed upper bound for
Figure 4. Convergence history of the BFS-FEM (left) and of the NC-FEM (right) for the second experiment with \(\varepsilon=10^{-2}\).
\(\|u-u_{h}\|_{L^{\infty}(\Omega)}\) provided a discrete function \(u_{h}\in W^{2,n}(\Omega)\) is given. The results for the BFS-FEM and NC-FEM displayed in Figure 5 match the observations of previous experiments, although this example is not covered by the theory. Figure 6 shows that the adaptive algorithm refines towards the reentrant corner, but not along the set of discontinuity of the coefficient \(A\).
### Conclusion
In all computer experiments, we observed a preasymptotic behaviour, where the error in the \(L^{\infty}\), \(L^{2}\), and \(H^{1}\) norm only improves insignificantly or may not improve at all. A smaller parameter \(\varepsilon\) leads to a larger preasymptotic
Figure 5. Convergence history of the BFS-FEM (left) and of the NC-FEM (right) for the third experiment with \(\varepsilon=10^{-2}\).
Figure 6. Adaptive mesh of the BFS-FEM into 3454 rectangles for the third experiment with \(\varepsilon=10^{-2}\).
range but a better approximation in the limit. The computer experiments provide empirical evidence that the \(L^{\infty}\) error will not converge to zero for a fixed parameter \(\varepsilon\), in line with Theorem 3.2 and Theorem 3.5. Adaptive computation leads to improved convergence rates of the minimizing functional and provides significant improvements to the convergence of \(u_{h}\) towards \(u\) in comparison to uniform mesh-refinements. The ABP maximum principle can be utilized a posteriori to quantify the \(L^{\infty}\) error without any information on the exact solution \(u\), provided a \(C^{1}\) conforming approximation is given.
|
2308.11132 | Isogeny classes of non-simple abelian surfaces over finite fields | Let $A=E \times E_{ss}$ be an abelian surface over a finite field
$\mathbb{F}_{q}$, where $E$ is an ordinary elliptic curve and $E_{ss}$ is a
supersingular elliptic curve. We give a lower bound on the size of isomorphism
classes of principally polarized abelian surfaces defined over
$\mathbb{F}_{q^{n}}$ that are $\overline{\mathbb{F}}_{q}$-isogenous to $A$ by
studying classification of certain kind of finite group schemes. | Yu Fu | 2023-08-22T02:26:55Z | http://arxiv.org/abs/2308.11132v1 | # Isogeny classes of non-simple abelian surfaces over finite fields
###### Abstract
Let \(A=E\times E_{ss}\) be an abelian surface over a finite field \(\mathbb{F}_{q}\), where \(E\) is an ordinary elliptic curve and \(E_{ss}\) is a supersingular elliptic curve. We give a lower bound on the size of isomorphism classes of principally polarized abelian surfaces defined over \(\mathbb{F}_{q^{n}}\) that are \(\overline{\mathbb{F}}_{q}\)-isogenous to \(A\) by studying classification of certain kind of finite group schemes.
+
Footnote †: _Key words and phrases_: abelian varieties, isogeny class, finite fields.
## 1 Introduction
Many fundamental problems on Shimura varieties pertain to the behavior of isogeny classes, for example, the Hecke orbit conjecture and specific questions related to unlikely intersections. In [9, Theorem 4.1], Shankar and Tsimerman proved an asymptotic formula for the size of the isogeny class of ordinary elliptic curves over finite fields. As an application, they proved the existence of a hypersurface in the moduli space \(X(1)^{270}\), which intersects every isogeny class.
A few common strategies exist to obtain asymptotic formulas for the size of isogeny classes of abelian varieties over finite fields. In particular, when the abelian variety is ordinary and simple, the inspiring work of Deligne [4] explicitly classified such abelian varieties over finite fields. Using the classification, one can get bounds for the isogeny classes of ordinary abelian varieties, for example, [9, Theorem 3.3]. A handful of studies in this flavor have been performed in more general
settings. For example, one may refer to [7] when the abelian variety is almost-ordinary and geometrically simple and to [3] for a setting of Hilbert modular varieties. All of the results above depend on the existence of canonical lifting and classification of abelian varieties over finite fields.
A second way of doing this is to interpret isogeny classes in terms of orbital integrals. For example, in [2], Achter and Cunningham proved an explicit formula for the size of the isogeny class of a Hilbert-Blumenthal abelian variety over a finite field. They express the size of the isogeny class as a product of local orbital integrals on \(GL(2)\) and then evaluate all the relevant orbital integrals. See also [1] where Achter and Williams proved that for a particular class of simple, ordinary abelian surfaces over \(\mathbb{F}_{q}\) given by a \(q\)-Weil polynomial \(f\), the number of principally polarized abelian surfaces over \(\mathbb{F}_{q}\) with Weil polynomial \(f\) can be computed as an infinite product of local factors, each of which can be evaluated by the method of orbital integrals.
Throughout this article, let \((A,\lambda_{A})\) be a principally polarized non-simple abelian surface defined over \(\mathbb{F}_{q}\), with the polarization given by \(\lambda_{A}\). Moreover, assume that \(A\) has the form \(A=E\times E_{ss}\), where \(E\) is an ordinary elliptic curve and \(E_{ss}\) is a supersingular elliptic curve. The endomorphism algebra \(\operatorname{End}^{\circ}(A)\) is non-commutative, and there is no canonical lifting of \(A\). Therefore, we cannot interpret the question as estimating the size of class groups by using the classification of abelian varieties over finite fields. Instead, we measure the size of the isogeny class of \(A\) defined over \(\mathbb{F}_{q}\) and describe how this cardinality is affected by the base change to finite extensions of \(\mathbb{F}_{q}\) by using group-theoretical methods.
Before introducing the main theorem, we introduce some notation. Let \(I(q^{n},A)\) be the set of principally polarized abelian varieties defined over \(\mathbb{F}_{q^{n}}\) that are isogenous to \(A\) over \(\overline{\mathbb{F}}_{q}\). Let \(N(q^{n},A)\) denote the cardinality of \(I(q^{n},A)\). By interpreting the question as a classification of finite subgroup schemes, we obtain a lower bound on the number of principally polarized abelian varieties over \(\mathbb{F}_{q^{n}}\) that are isogenous to \(A\) over \(\overline{\mathbb{F}}_{q}\). Our main result is the following.
**Theorem 1.1**.: _Let \((A,\lambda_{A})\) be a principally polarized abelian variety over \(\mathbb{F}_{q}\) such that \(A=E\times E_{ss}\). Let \(K\) be the quadratic number field such that \(K=\operatorname{End}^{\circ}(E)\). Let \(n\) be an integer such that \((n,\ell)=1\) for all primes \(\ell\) ramified in \(\mathcal{O}_{K}\). Then_
\[N(q^{n},A)\gg q^{n+o(1)}.\]
Also, we provide a different approach to counting the size of isogeny classes of ordinary elliptic curves over finite fields, for which an upper bound is known by Lenstra [5, Proposition 1.19] and Shankar-Tsimerman [9, Theorem 3.3].
**Theorem 1.2**.: _Let \(E\) be an elliptic curve defined over \(\mathbb{F}_{q}\). For a positive density set of \(n\), we have_
\[N(q^{n},E)=(q^{n})^{1/2+o(1)}.\]
There is a general conjecture regarding the size of the isogeny class of abelian varieties over finite fields. Let \(N(W)\) be the open Newton stratum of \(\mathcal{A}_{g}\) consisting of all abelian varieties whose Newton polygon is \(W\) and let \(A\) be a principally polarized abelian variety in \(\mathcal{A}_{g}\). Recall that the _central leaf_ through \(A\) consists of all abelian varieties in \(N(W)\) whose \(p\)-divisible group is isomorphic to \(A[p^{\infty}]\). The _isogeny leaf_ through \(A\) is a maximal irreducible subscheme of \(\mathcal{A}_{g}\) consisting of abelian varieties \(A^{\prime}\) in \(N(W)\) such that \(A^{\prime}\) is isogenous to \(A\) through an isogeny whose kernel is an iterated extension of the group scheme \(\alpha_{p}\). Let \(\dim(CL)\) be the dimension of the central leaf through \(A\) and let \(\dim(IL)\) be the dimension of the isogeny leaf through \(A\).
**Conjecture 1.3**.: _We have_
\[N(q^{n},A)=q^{n(\frac{\dim(CL)}{2}+\dim(IL))+o(1)}.\]
All the results stated above satisfy Conjecture 1.3. When \(A\) is a non-simple abelian surface, it is easy to see that the dimension of the central leaf through \(A\) is \(2\), by the lattice-point counting formula of Shankar and Tsimerman [9, Section 5.2]. The dimension of the isogeny leaf through \(A\) is \(0\). Therefore the conjecture is true in this case.
## 2 The Isogeny Classes and Maximal Isotropic Subgroups
A classical way to construct abelian varieties isogenous to a fixed abelian variety \(A\) is to take quotients of \(A\) by finite subgroup schemes. A theorem of Mumford [6, II.7 Theorem 4] addresses that one can construct isogenies from finite subgroups of an abelian variety and vice versa.
**Theorem 2.1**.: _[_6_, II.7 Theorem 4]_ _Let \(X\) be an abelian variety. There is a one-to-one correspondence between the two sets of objects:_
* _finite subgroups_ \(K\subset X\)_,_
* _separable isogenies_ \(f:X\to Y\)_, where two isogenies_ \(f_{1}:X\to Y_{1}\)_,_ \(f_{2}:X\to Y_{2}\) _are considered equal if there is an isomorphism_ \(h:Y_{1}\to Y_{2}\) _such that_ \(f_{2}=h\circ f_{1}\)_, which is set up by_ \(K=\ker(f)\)_, and_ \(Y=X/K\)_._
**The maximal isotropic subgroups.** In order to count the number of abelian varieties isogenous to \(A\), a natural way is to look at all its finite subgroups \(G\subset A\). Let \(A[m]\) be the \(m\)-torsion subgroup of \(A\). When \(p\nmid m\), \(A[m]=(\mathbb{Z}/m\mathbb{Z})^{4}\). Without loss of generality, let \(m\) be a prime integer. Recall that for a symplectic \(F\)-vector space \(V\) equipped with the symplectic bilinear form \(\omega:V\times V\to F\), a subspace \(H\) is called _isotropic_ if for any \(h_{1},h_{2}\in H\), \(\omega(h_{1},h_{2})=0\). It is a standard
fact that for a symplectic vector space of dimension \(2g\), each of the maximal isotropic subspaces is of dimension \(g\).
**Definition 2.2**.: Let \((A,\lambda_{A})\) be a principally polarized abelian surface, and \(\ell\) be a prime such that \(\ell\neq p\). Define the _\((\ell^{m},\ell^{m})\)-isogeny_ to be any isogeny \(f:A\to B\) whose kernel is a maximal isotropic subgroup of \(A[\ell^{m}]\), with respect to the Weil pairing induced by the polarization \(\lambda_{A}\).
We claim that for an \((\ell^{m},\ell^{m})\)-isogeny \(f:A\to B\), there is a unique principal polarization on \(B\), denoted by \(\lambda_{B}\), such that \(f^{*}\lambda_{B}=\ell^{m}\times\lambda_{A}\). This is a consequence of Grothendieck's descent, and we omit the proof here. See [8, Proposition 2.4.7] for detailed proof. This fact allows us to compute a lower bound of \(N(q^{n},A)\) first by counting the number of maximal isotropic subgroups of \(A\) that are defined over \(F_{q^{n}}\), then by computing the number of the subgroups that give the same quotient up to isomorphism.
**Lemma 2.3**.: _For \(\ell\neq p\), there are_
\[\ell^{3m}+\ell^{3m-1}+\ell^{3m-2}+\ell^{3m-3}\]
_maximal isotropic subgroups of \(A[\ell^{m}]\) with respect to the principal polarization \(\lambda_{A}\)._
Proof.: Without loss of generality, one can assume that \(\lambda_{A}\) is given by the matrix
\[\lambda_{A}=\left[\begin{array}{cccc}0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{array}\right]\]
up to a proper choice of basis for the \(\ell\)-adic Tate module \(T_{\ell}A\). Then the corresponding symplectic form is \(\psi(x,y)=x^{T}\lambda_{A}y\). It is easy to see that any cyclic subgroup \(H\) of order \(\ell^{m}\), which we call an _isotropic line_, is an isotropic subgroup. Any maximal isotropic subgroup has the form \((\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}\). These can be viewed as the _isotropic planes_ inside \(A[\ell^{m}]\). Let \(H^{\perp}\) denote the orthogonal complement of \(H\). A direct computation shows that
\[H\subset H^{\perp},dim(H^{\perp})=3.\]
Since any maximal isotropic subgroup has dimension two, the number of isotropic planes containing \(H\) equals the number of lines in \(H^{\perp}/H\), which is \(\ell^{m}+\ell^{m-1}\). The number of lines \(L\) in \(A[\ell^{m}]\) is \(\ell^{3m}+\ell^{3m-1}+\ell^{3m-2}+\ell^{3m-3}\), and any maximal isotropic plane contains \(\ell^{m}+\ell^{m-1}\) lines. The result follows by double counting.
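Lemma 2.3 can be checked by brute force for \(m=1\) and small \(\ell\). The Python sketch below (an illustration, not part of the argument) enumerates the totally isotropic planes in \((\mathbb{Z}/\ell\mathbb{Z})^{4}\) for one fixed symplectic Gram matrix; the count does not depend on this choice of basis.

```python
import itertools

def isotropic_plane_count(l):
    # Gram matrix of a symplectic form on (Z/lZ)^4
    M = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
    form = lambda x, y: sum(x[i] * M[i][j] * y[j]
                            for i in range(4) for j in range(4)) % l
    vecs = [v for v in itertools.product(range(l), repeat=4) if any(v)]
    planes = set()
    for v, w in itertools.combinations(vecs, 2):
        if form(v, w) != 0:
            continue  # the span of v, w would not be isotropic
        span = frozenset(tuple((a * v[i] + b * w[i]) % l for i in range(4))
                         for a in range(l) for b in range(l))
        if len(span) == l * l:  # v, w independent: a genuine plane
            planes.add(span)
    return len(planes)

for l in (2, 3):
    print(l, isotropic_plane_count(l), l**3 + l**2 + l + 1)  # 15, 15 and 40, 40
```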
We introduce a criterion by Waterhouse [12, Proposition 3.6], which enables us to rule out maximal isotropic subgroups that give the same quotient variety up to isomorphism. We investigate the \(\ell\)-power subgroups of \(A\), where \(\ell\neq p\). Let \(H_{1},H_{2}\cong(\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}\) be isotropic planes in \(A[\ell^{m}]\).
**Definition 2.4**.: \(H_{1}\) is equivalent to \(H_{2}\) if they define the same quotient up to isomorphism
\[A/H_{1}\cong A/H_{2}.\]
**Theorem 2.5**.: _[_12_, Proposition 3.6]_ _Let \(G_{1}\) and \(G_{2}\) be two finite subgroups of \(A\), not necessarily etale. Then \(A/G_{1}\cong A/G_{2}\) if and only if for some isogeny \(\rho\in\operatorname{End}(A)\) and some non-zero \(N\in\mathbb{Z}\), \(\rho^{-1}(G_{1})=[N]^{-1}G_{2}\)._
Proof.: See [12, Proposition 3.6]. We include the proof here for completeness.
Suppose \(A/G_{1}\simeq A/G_{2}\). Then we have \(\varphi_{i}:A\to B\) with \(\ker\varphi_{i}=G_{i},i=1,2\). For \(N_{1}\) large (e.g., \(N_{1}=\operatorname{rank}G_{1}\)), we have \([N_{1}]^{-1}G_{2}\supseteq G_{1}\). Now \([N_{1}]^{-1}G_{2}=\ker(N_{1}\varphi_{2})\), so by the definition of quotient there is a \(\sigma:B\to B\) such that \(\sigma\varphi_{1}=N_{1}\varphi_{2}\). For \(N_{2}\) large enough there is a \(\rho:A\to A\) with \(\varphi_{1}\rho=[N_{2}]\sigma\varphi_{1}\) (choose an \(i_{A}\) and look at the two lattices in \(A\)). Thus \(\varphi_{1}\rho=N_{1}N_{2}\varphi_{2}\). Set \(N=N_{1}N_{2}\), then
\[\rho^{-1}(G_{1})=\ker(\varphi_{1}\rho)=\ker([N]\varphi_{2})=[N]^{-1}G_{2}.\]
Conversely, \(A\stackrel{{\rho}}{{\to}}A\to A/G_{1}\) shows that
\[A/G_{1}\simeq A/\rho^{-1}(G_{1});\]
likewise
\[A/G_{2}\simeq A/[N]^{-1}G_{2},\]
so the condition is sufficient.
## 3 Counting inequivalent maximal isotropic planes
In this section, we prove a lower bound for the number of inequivalent maximal isotropic planes. The main results are Proposition 3.3 and Proposition 3.5. For any prime \(\ell\neq p\), fix a basis \(\{e_{1},e_{2},f_{1},f_{2}\}\) for \(T_{\ell}A\), such that for \(i,j=1,2\), \(\omega(e_{i},f_{j})=1\) only when \(i\neq j\). For the rest of the paper, \(H\) will denote a maximal isotropic subgroup of \(A[\ell^{m}]\).
Let \(\phi:A\to B\) be an isogeny defined over \(\mathbb{F}_{q^{n}}\). Since there is no isogeny between ordinary and supersingular elliptic curves, the endomorphism ring decomposes as \(\operatorname{End}(A)=\operatorname{End}(E)\times\operatorname{End}(E_{ss})\). Therefore there is a decomposition of \(\phi\) into ordinary and supersingular part, namely \(\phi=\phi_{\operatorname{ord}}\times\phi_{\operatorname{ss}}\), accordingly a decomposition of the kernel: \(\ker(\phi)=K_{\operatorname{ord}}\times K_{ss}\), where \(K_{\operatorname{ord}}\subset E\) and \(K_{ss}\subset E_{ss}\). We have the following theorems on the number of endomorphisms of elliptic curves over finite fields whose kernel is cyclic:
**Proposition 3.1**.: _Let \(E\) be an ordinary elliptic curve defined over \(\mathbb{F}_{q^{n}}\). For any positive integer \(d\), the number of endomorphisms in \(\mathrm{End}(E)\) with cyclic kernel \(\mathbb{Z}/d\mathbb{Z}\) is bounded above by \(O(d^{\epsilon})\)._
Proof.: Let \(K=\mathrm{End}^{\circ}(E)\); since \(E\) is ordinary, \(K\) is an imaginary quadratic field. Let \(\mathcal{O}_{K}\) denote the ring of integers of \(K\). Assume that \(d\) has prime decomposition \(d=p_{1}^{e_{1}}\cdots p_{r}^{e_{r}}q_{1}^{f_{1}}\cdots q_{s}^{f_{s}}d_{1}\), such that each \(p_{i}\) splits in \(\mathcal{O}_{K}\), each \(q_{j}\) is inert in \(\mathcal{O}_{K}\), and every prime factor of \(d_{1}\) ramifies in \(\mathcal{O}_{K}\). The number of endomorphisms in \(\mathrm{End}(E)\) with cyclic kernel \(\mathbb{Z}/d\mathbb{Z}\) is, up to units, the number of elements in \(\mathcal{O}_{K}\) with norm \(d\). Therefore it is \((e_{1}+1)\cdots(e_{r}+1)\) if all \(f_{j}\), \(1\leq j\leq s\), are even, or zero otherwise. The divisor bound is \(O(d^{\epsilon})\), which is a standard fact.
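As a sanity check (an illustration, not part of the proof), take \(\mathcal{O}_{K}=\mathbb{Z}[i]\); the elements of a given norm \(d\) can be tabulated directly, and their number is \(4(e_{1}+1)\cdots(e_{r}+1)\) under the parity condition above, where the factor \(4\) counts the units \(\pm 1,\pm i\).

```python
from math import isqrt

def norm_count_Zi(d):
    """Number of x = a + b*i in Z[i] with N(x) = a^2 + b^2 = d."""
    count = 0
    for a in range(-isqrt(d), isqrt(d) + 1):
        b2 = d - a * a
        b = isqrt(b2)
        if b * b == b2:
            count += 1 if b == 0 else 2  # b and -b
    return count

# 5 splits, 3 is inert, and 2 ramifies in Z[i]
for d in (5, 25, 3, 9, 45):
    print(d, norm_count_Zi(d))  # 8, 12, 0, 4, 8
```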
**Proposition 3.2**.: _Let \(E_{ss}\) be a supersingular elliptic curve defined over \(\mathbb{F}_{q^{n}}\). For \(\ell\nmid D\), where \(D\) is the determinant of the norm form on \(\mathrm{End}(E_{ss})\), there are \(O(\ell^{m})\) endomorphisms whose kernel is the cyclic group \(\mathbb{Z}/\ell^{m}\mathbb{Z}\)._
Proof.: Let \(E_{ss}\) be a supersingular elliptic curve defined over \(\mathbb{F}_{q^{n}}\) of characteristic \(p\); then \(O_{E_{ss}}=\mathrm{End}(E_{ss})\) is a maximal order in the quaternion algebra ramified exactly at \(p\) and \(\infty\). Endomorphisms whose kernel is a cyclic subgroup of order \(m\), i.e., of degree \(m\), correspond to elements in \(O_{E_{ss}}\) with norm \(m\). For a quaternion algebra \(F=\mathbb{Q}+\mathbb{Q}\alpha+\mathbb{Q}\beta+\mathbb{Q}\alpha\beta\) where \(\alpha^{2}=a,\beta^{2}=b,a<0,b<0,\beta\alpha=-\alpha\beta\), and \(x=x_{0}+x_{1}\alpha+x_{2}\beta+x_{3}\alpha\beta\in F\), the norm \(N(x)=x\bar{x}=x_{0}^{2}-ax_{1}^{2}-bx_{2}^{2}+abx_{3}^{2}\) is a quaternary quadratic form. The question boils down to counting the number of representations
\[r(n)=r_{N}(n)=\#\{x\in\mathbb{Z}^{4},x=(x_{0},x_{1},x_{2},x_{3});N(x)=n\}\]
This can be solved making use of the theta series
\[\vartheta(z)=\sum_{\alpha\in\mathbb{Z}^{4}}e(zN(\alpha))=\sum_{n\geq 0}r(n)e(nz)\]
where \(e(z)=e^{2\pi iz}\), so that \(\vartheta\) is a generating series for \(r(n)\). The theta series satisfies
\[\vartheta\Big{(}\frac{az+b}{cz+d}\Big{)}=\chi(\gamma)(cz+d)^{r/2}\vartheta(z)\]
for \(\gamma\) in a suitable congruence subgroup of \(SL_{2}(\mathbb{Z})\), and therefore \(\vartheta\) is a modular form of weight \(r/2\), where \(r=4\) is the number of variables of the form. So it can be written as the sum of an Eisenstein series
\[E(z)=\sum_{n\geq 0}\rho(n)e(nz),\rho(0)=1\]
and a cusp form
\[f(z)=\vartheta(z)-E(z)=\sum_{n\geq 1}a(n)e(nz).\]
Thus we can write \(r(n)=\rho(n)+a(n)\); estimating the Fourier coefficients \(a(n)\) of the cusp form bounds \(r(n)\) from above, while estimating \(\rho(n)\) gives a bound from below. In our case \(r=4\); assume that \(\ell\nmid D\). We have \(\frac{r}{2}-1=1\), so that
\[\rho(n)\gg n.\]
One way to get a nontrivial upper bound for \(a(n)\) is to use the Rankin-Selberg method. For even \(r\), Deligne [1] proved that \(a(n)\ll n^{\frac{r}{4}-\frac{1}{2}+\epsilon}\). In our case, it turns out that
\[a(n)\ll n^{\frac{1}{2}+\epsilon}.\]
Putting these together, we get
\[r(\ell^{m})\gg\ell^{m}.\]
Since \(E_{ss}\) is simple, every non-zero endomorphism is an isogeny, and we have
\[\ell^{m}+\ell^{m-1}=O(\ell^{m})\]
cyclic subgroups of order \(\ell^{m}\) in \(E_{ss}[\ell^{m}]=\mathbb{Z}/\ell^{m}\mathbb{Z}\times\mathbb{Z}/\ell^{m}\mathbb{Z}\). Thus
the number of endomorphisms of \(E_{ss}\) with cyclic kernel of order \(\ell^{m}\) is \(O(\ell^{m})\).
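The two-sided growth \(r(\ell^{m})\asymp\ell^{m}\) can be observed numerically. As an illustration only (the prime \(2\) is excluded by the hypothesis \(\ell\nmid D\)), the form \(x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\) is the norm form of the Lipschitz order in the quaternion algebra ramified at \(2\) and \(\infty\), and Jacobi's formula \(r(n)=8\sum_{d\mid n,\,4\nmid d}d\) exhibits the linear growth explicitly.

```python
from math import isqrt

def r4(n):
    """Brute-force count of integer solutions to x0^2+x1^2+x2^2+x3^2 = n."""
    count = 0
    for x0 in range(-isqrt(n), isqrt(n) + 1):
        m1 = n - x0 * x0
        for x1 in range(-isqrt(m1), isqrt(m1) + 1):
            m2 = m1 - x1 * x1
            for x2 in range(-isqrt(m2), isqrt(m2) + 1):
                m3 = m2 - x2 * x2
                x3 = isqrt(m3)
                if x3 * x3 == m3:
                    count += 1 if x3 == 0 else 2
    return count

def jacobi_r4(n):
    # Jacobi: r4(n) = 8 * (sum of divisors d of n with 4 not dividing d)
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4)

for n in (3, 9, 27, 81):  # powers of ell = 3: both counts grow linearly in n
    print(n, r4(n), jacobi_r4(n))
```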
There are two types of maximal isotropic planes in \(A[\ell^{m}]\) that we consider, with respect to our choice of basis:
* **Type 1:**\(H\) is a product \(H_{1}\times H_{2}\) where \(H_{1}\subset E\), \(H_{2}\subset E_{ss}\).
* **Type 2:**\(H\) cannot be written as a product \(H_{1}\times H_{2}\) where \(H_{1}\subset E\), \(H_{2}\subset E_{ss}\).
### \(H\) is of product type
In the case where \(H\) is of type 1, we write \(H=\langle ae_{1}+be_{2},cf_{1}+df_{2}\rangle\), where \(a,b,c,d\in\mathbb{Z}/\ell^{m}\mathbb{Z}\). Here \(\{e_{1},e_{2},f_{1},f_{2}\}\) denote a basis of \(A[\ell^{m}]\). We claim that:
**Proposition 3.3**.: _Let \(N_{1}\) be the number of inequivalent maximal isotropic planes of type \(1\). We have_
\[N_{1}\asymp\ell^{m}\]
Proof.: For an elliptic curve, either ordinary or supersingular, there are \(O(\ell^{m})\) cyclic subgroups of order at most \(\ell^{m}\). Therefore, there are \(O(\ell^{2m})\) such \(H\) in total. Let \(H_{1}\) and \(H_{1}^{\prime}\) be cyclic subgroups (isotropic lines) of \(E[\ell^{m}]\). By Theorem 2.5, \(A/H_{1}\cong A/H_{1}^{\prime}\) if and only if there exists \(\phi\in\mathrm{End}(E)\), \(N\in\mathbb{Z}\) such that \(\phi^{-1}H_{1}^{\prime}=[N]^{-1}H_{1}\). For such a \(\phi\) with prime-to-\(\ell\)
kernel, we have \(\ell\nmid N\) and \([N]^{-1}H_{1}=(\mathbb{Z}/N\mathbb{Z})\times(\mathbb{Z}/N\mathbb{Z})\times(\mathbb{Z}/\ell^{m}\mathbb{Z})\). Since \(\operatorname{Ker}(N)\subset\operatorname{Ker}(\phi)\), \(\phi\) factors through the multiplication by \(N\) map as \(\phi=i\circ N\) where \(i\in\operatorname{Aut}(E)\). But an ordinary elliptic curve has only finitely many units in \(\operatorname{End}(E)\), hence only finitely many possible choices of \(\phi\).
The same argument also works when \(\phi\) has \(\ell\)-power kernel. Indeed, for a positive integer \(k\), we have \([\ell^{k}]^{-1}H_{1}=(\mathbb{Z}/\ell^{k}\mathbb{Z})\times(\mathbb{Z}/\ell^{k+m}\mathbb{Z})\) and the possible choices of \(\operatorname{Ker}(\phi)\) are \((\mathbb{Z}/\ell^{k+i}\mathbb{Z})\times(\mathbb{Z}/\ell^{k-i}\mathbb{Z})\) for \(0\leq i\leq m\). Proposition 3.1 implies that the number of inequivalent isotropic lines \(H_{1}\subset E\) is \(\gg\ell^{m-\epsilon}\).
We assumed that \(H_{2}\) comes from the supersingular elliptic curve \(E_{ss}\). Since the number of supersingular elliptic curves up to \(\bar{\mathbb{F}}_{q}\)-isomorphism is finite (see, for instance, [10, V.4 Theorem 4.1]), we have finitely many inequivalent \(H_{2}\subset E_{ss}\). Putting these arguments together, we conclude that the number of inequivalent \(H\) of type 1 is asymptotically \(\ell^{m}\).
### \(H\) is not a product
In the second case, we assume that \(H\) is not a product of ordinary and supersingular subgroups. To be explicit, we write \(H=\langle e_{1}+af_{1}+bf_{2},e_{2}+cf_{1}+df_{2}\rangle\), with the assumption that
\[det(\begin{bmatrix}a&b\\ c&d\end{bmatrix})=-1.\]
By Lemma 2.3 and Proposition 3.3, the total number of maximal isotropic planes of non-product form is \(O(\ell^{3m})\).
Fix an \(H\) of this form. We count the number of isotropic planes \(H^{\prime}\) that are equivalent to \(H\). By work of Waterhouse [12],
\[\phi^{-1}H^{\prime}=[N]^{-1}H\]
for some \(\phi\in\operatorname{End}(A)\) and some positive integer \(N\). Before proving Proposition 3.5, we introduce the following lemma.
For any \(\phi\in\operatorname{End}(A)\), we can write \(\phi=\phi_{\operatorname{ord}}\times\phi_{\operatorname{ss}}\). As a consequence, the kernel of \(\phi\) decomposes as \(\operatorname{Ker}(\phi)=K_{\operatorname{ord}}\times K_{\operatorname{ss}}\). Therefore, to bound the number of endomorphisms \(\phi\) once we fix \(N\), we need to bound the number of possible \(\phi_{\operatorname{ord}}\) and \(\phi_{\operatorname{ss}}\) separately. By Proposition 3.1, the number of endomorphisms of an ordinary elliptic curve of a fixed degree \(d\) is \(O\left(d^{\epsilon}\right)\). Therefore, we only have to determine how many possible choices of \(K_{\operatorname{ss}}\) we can have under the assumption on \(H\).
**Lemma 3.4**.: _Assume that \(H\) is of Type \(2\), and take \(N=\ell^{m}\). Then there are at most \(O(\ell^{m})\) supersingular endomorphisms which we denote by \(\phi_{ss}\), such that_
\[\phi_{ss}=\ell^{a}\circ\phi_{cyc},\]
_for some_ \(0\leq a\leq m\)_, and there exists an endomorphism_ \(\phi=\phi_{\mathrm{ord}}\times\phi_{\mathrm{ss}}\)_, such that_
\[\phi^{-1}H^{\prime}=[N]^{-1}H.\]
_Moreover,_ \(\phi_{cyc}\) _has cyclic kernel of order at most_ \(\ell^{m}\)_._
Proof.: First of all, we prove that the degree of \(\phi_{\mathrm{cyc}}\) is at most \(\ell^{m}\). This is equivalent to the statement that we cannot have an element
\[x\in[\ell^{m}]^{-1}H\cap E_{\mathrm{ss}}\]
whose order is greater than or equal to \(\ell^{m+1}\). We prove this by contradiction. Suppose such an element \(x\) exists, and denote its order by \(|x|\). Since \(|x|\geq\ell^{m+1}\), \([\ell^{m}](x)\) is nontrivial. By definition we have \([\ell^{m}](x)\in H\) and \([\ell^{m}](x)\in E_{\mathrm{ss}}\). Therefore
\[[\ell^{m}](x)\in H\cap E_{\mathrm{ss}}.\]
Since \(H\) has the form \(H=\langle e_{1}+af_{1}+bf_{2},e_{2}+cf_{1}+df_{2}\rangle\), one concludes that \(E_{\mathrm{ss}}\cap H=\{\mathrm{id}\}\), a contradiction.
By Proposition 3.2, we conclude that there are at most \(O(\ell^{m})\) such \(\phi_{\mathrm{cyc}}\), hence at most \(O(\ell^{m})\) such \(\phi_{\mathrm{ss}}\).
**Proposition 3.5**.: _Let \(N_{2}\) be the number of inequivalent maximal isotropic planes of type 2. We have_
\[N_{2}\gg\ell^{2m-\epsilon}\]
Proof.: For a fixed \(H\), a lower bound on the number of inequivalent isotropic planes is equivalent to an upper bound on the number of maximal isotropic planes equivalent to \(H\). We obtain the latter by bounding the number of endomorphisms \(\phi\in\mathrm{End}(A)\) such that \(\phi([N]^{-1}H)\) is a maximal isotropic plane for each fixed \(N\), as \(N\) ranges over the positive integers.
First, suppose that \(\ell\nmid N\). In this case, the pullback of an isotropic plane under \(\phi\) has the form
\[\phi^{-1}(H)\simeq(\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}\times\ker(\phi).\]
On the other hand, we have
\[[N]^{-1}H\simeq(\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}\times(\mathbb{Z}/N\mathbb{ Z})^{4}.\]
By Theorem 2.5, \(\mathrm{Ker}(\phi)\simeq(\mathbb{Z}/N\mathbb{Z})^{4}\). Therefore we have \(\phi=i\circ N\), where \(i\in\mathrm{Aut}(A)\) is an automorphism. Since principally polarized abelian varieties have finitely many automorphisms, a number independent of \(n\), we get finitely many \(H^{\prime}\) equivalent to \(H\) with \(H^{\prime}=\phi([N]^{-1}H)\) a maximal isotropic plane inside \(A[\ell^{m}]\).
When \(\ell\mid N\), we may write \(N=N_{0}\cdot\ell^{a}\) for some \(a\geq 1\), where \(N_{0}\) is coprime to \(\ell\). Then
\[[N]^{-1}H\simeq(\mathbb{Z}/\ell^{a}\mathbb{Z})^{2}\times(\mathbb{Z}/\ell^{m+a}\mathbb{Z})^{2}\times(\mathbb{Z}/N_{0}\mathbb{Z})^{4}.\]
Therefore
\[\operatorname{Ker}(\phi)\simeq(\mathbb{Z}/N_{0}\mathbb{Z})^{4}\times G_{\ell},\]
where \(G_{\ell}\) is some \(\ell\)-power subgroup which we will specify below, and \(\phi\) can be written as a decomposition
\[\phi=i\circ N_{0}\circ\phi_{\ell}\]
where \(\phi_{\ell}\) is an \(\ell\)-power isogeny with kernel \(G_{\ell}\). So without loss of generality, we can assume that \(N=\ell^{k}\) for some \(k\geq 1\) and treat the following subcases depending on the power of \(\ell\).

If \(k<m\), we have
\[[\ell^{k}]^{-1}H=(\mathbb{Z}/\ell^{k}\mathbb{Z})^{2}\times(\mathbb{Z}/\ell^{k+m}\mathbb{Z})^{2}.\]
Let \(G\subset[\ell^{k}]^{-1}H\) be a subgroup such that
\[[\ell^{k}]^{-1}H/G\simeq(\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}.\]
Then
\[G\simeq(\mathbb{Z}/\ell^{k}\mathbb{Z})^{2}\times(\mathbb{Z}/\ell^{k+i}\mathbb{Z})\times(\mathbb{Z}/\ell^{k-i}\mathbb{Z})\]
for some \(0\leq i\leq\min(k,m)\). Hence the possible choices of \(\operatorname{Ker}(\phi)\) have the above form.

For \(k=m\), we have
\[[\ell^{m}]^{-1}H=(\mathbb{Z}/\ell^{m}\mathbb{Z})^{2}\times(\mathbb{Z}/\ell^{2m}\mathbb{Z})^{2}.\]
Similarly, the possible choices for \(\operatorname{Ker}(\phi)\) are subgroups of the form
\[(\mathbb{Z}/\ell^{m+i}\mathbb{Z})\times(\mathbb{Z}/\ell^{m-i}\mathbb{Z})\times(\mathbb{Z}/\ell^{m+j}\mathbb{Z})\times(\mathbb{Z}/\ell^{m-j}\mathbb{Z})\]
for \(0\leq i\leq m\) and \(0\leq j\leq m\).

For \(k>m\), the possible choices for \(\operatorname{Ker}(\phi)\) are
\[(\mathbb{Z}/\ell^{k+i}\mathbb{Z})\times(\mathbb{Z}/\ell^{k+j}\mathbb{Z})\times(\mathbb{Z}/\ell^{k+m-n}\mathbb{Z})\times(\mathbb{Z}/\ell^{k+m-w}\mathbb{Z})\]
where \(i,j,n,w\geq 0\) and \(i+j+n+w=2m\). However, since \(\mathbb{Z}/\ell^{k-m}\mathbb{Z}\) is a common factor, \(\operatorname{Ker}(\phi)\) contains \(\ker([\ell^{k-m}])=(\mathbb{Z}/\ell^{k-m}\mathbb{Z})^{4}\). Therefore \(\phi\) factors through the multiplication by \(\ell^{k-m}\) map, and we return to the case \(k=m\).

Now we can bound the number of endomorphisms and hence the number of maximal isotropic planes equivalent to a given \(H\). Proposition 3.1 asserts that the number of endomorphisms of an ordinary elliptic curve of a fixed degree \(d\) is \(O\left(d^{\epsilon}\right)\), and Lemma 3.4 states that
there are at most \(O(\ell^{m})\) supersingular endomorphisms that serve as the supersingular part of \(\phi\). Hence an upper bound on the number of maximal isotropic planes equivalent to a fixed \(H\) is \(O(\ell^{m+\epsilon})\). Therefore the total number of inequivalent maximal isotropic planes of type 2 in \(A[\ell^{m}]\) is at least of order
\[\ell^{2m-\epsilon}=\ell^{3m}/\ell^{m+\epsilon}.\]
_Remark 3.6_.: We note that improving the bound by removing the \(\epsilon\) term is plausible.
**Corollary 3.7**.: _Let \(\ell_{1},\cdots,\ell_{n}\) be \(n\) primes different from \(p\) and let \(m_{1},\cdots,m_{n}\) be positive integers. Let \(N_{0}\) be the total number of inequivalent maximal isotropic planes. We have_
\[N_{0}\gg(\ell_{1}^{m_{1}}\cdots\ell_{n}^{m_{n}})^{2-\epsilon}\]
Proof.: The proof is a generalization of the proof of Proposition 3.5. Since the majority of the inequivalent maximal isotropic planes come from products of isotropic planes of Type 2 as \(\ell\) varies, we fix a subgroup \(G\)
\[G\simeq(\mathbb{Z}/\ell_{1}^{m_{1}}\mathbb{Z})^{2}\times\cdots\times(\mathbb{ Z}/\ell_{n}^{m_{n}}\mathbb{Z})^{2}\]
such that for each \(\ell_{i}\), \(1\leq i\leq n\), \((\mathbb{Z}/\ell_{i}^{m_{i}}\mathbb{Z})^{2}\) is an isotropic plane of Type 2. Let \(\{e_{1}^{1},e_{2}^{1},f_{1}^{1},f_{2}^{1}\},\)\(\cdots,\)\(\{e_{1}^{n},e_{2}^{n},f_{1}^{n},f_{2}^{n}\}\) be bases for \(T_{\ell_{1}}(A),\cdots,T_{\ell_{n}}(A)\), respectively.
We count the maximal number of maximal isotropic planes \(G^{\prime}\) that are equivalent to \(G\). By Theorem 2.5, \(G\) and \(G^{\prime}\) are equivalent if there is \(\phi\in\operatorname{End}(A)\) and a positive integer \(N\) such that \(\phi^{-1}G^{\prime}=[N]^{-1}G\). We split the argument into different cases based on the choice of \(N\).
**Case I: \(N\) is coprime to \(\ell_{1},\cdots,\ell_{n}\).**
If \(N\) is coprime to \(\ell_{1}\cdots\ell_{n}\),
\[[N]^{-1}G=(\mathbb{Z}/N\mathbb{Z})^{4}\times(\mathbb{Z}/\ell_{1}^{m_{1}}\mathbb{Z})^{2}\times\cdots\times(\mathbb{Z}/\ell_{n}^{m_{n}}\mathbb{Z})^{2}.\]
Therefore we have \(\phi=i\circ N\), where \(i\in\operatorname{Aut}(A)\) is an automorphism. Since principally polarized abelian varieties have finitely many automorphisms, we get finitely many \(G^{\prime}\) equivalent to \(G\) under the assumption that \(G^{\prime}=\phi([N]^{-1}G)\).
**Case II: \(N=\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}\) for some \(0<k\leq n\) and \(1\leq j_{1}<\cdots<j_{k}\leq n.\)**
Similar to Proposition 3.5, if \(N\) is not coprime to some of the \(\{\ell_{1},\cdots,\ell_{n}\}\), we may restrict ourselves to this case, for the same reason as explained in the proof of Proposition 3.5.
The pullback of \(G\) under \(\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}\) is isomorphic to
\[(\mathbb{Z}/\ell_{j_{1}}^{m_{j_{1}}}\mathbb{Z})^{2}\times\cdots\times(\mathbb{Z}/ \ell_{j_{k}}^{m_{j_{k}}}\mathbb{Z})^{2}\times(\mathbb{Z}/\ell_{j_{1}}^{2m_{j_{1 }}}\mathbb{Z})^{2}\times\cdots\times(\mathbb{Z}/\ell_{j_{k}}^{2m_{j_{k}}} \mathbb{Z})^{2}\times\prod_{j\neq j_{1},\cdots,j_{k}}(\mathbb{Z}/\ell_{j}^{m_{j }}\mathbb{Z})^{2}.\]
Recall that for each \(1\leq i\leq n\) we assume that
\[G_{i}:=(\mathbb{Z}/\ell_{i}^{m_{i}}\mathbb{Z})^{2}=\langle e_{1}^{i}+a_{i}f_{ 1}^{i}+b_{i}f_{2}^{i},e_{2}^{i}+c_{i}f_{1}^{i}+d_{i}f_{2}^{i}\rangle,\]
with the assumption that
\[\det(\begin{bmatrix}a_{i}&b_{i}\\ c_{i}&d_{i}\end{bmatrix})=-1.\]
Similar to the proof of Lemma 3.4, an endomorphism \(\phi\) that satisfies Waterhouse's criterion can be realized as an endomorphism with kernel \(\operatorname{Ker}(\phi)=K_{\operatorname{ord}}\times K_{\operatorname{ss}}\). Moreover, we can factor out the \(\ell\)-power scalar multiple from each part and consider those supersingular endomorphisms whose kernels are cyclic subgroups. We claim that the cyclic part of \(K_{\operatorname{ss}}\) has order at most \(\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}\). Suppose this is not the case, i.e., there is an element \(x\in[\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}]^{-1}G\cap E_{\operatorname{ss}}\) with order \(|x|>\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}\); then
\[\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}}\circ(x)\in\prod_{i}G_{ i}\cap E_{\operatorname{ss}}\]
is nontrivial. But by the definition of \(G_{i}\) for each \(1\leq i\leq n\), the intersection \(\prod_{i}G_{i}\cap E_{\operatorname{ss}}\) is trivial, a contradiction. Therefore the claim follows.
By Proposition 3.2, there are at most \(O(\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}})\) such \(K_{\operatorname{ss}}\). For the ordinary component, Proposition 3.1 implies that there are at most \((\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}})^{\epsilon}\) possible choices for \(K_{\operatorname{ord}}\). We conclude that at most \((\ell_{j_{1}}^{m_{j_{1}}}\cdots\ell_{j_{k}}^{m_{j_{k}}})^{1+\epsilon}\) maximal isotropic planes \(G^{\prime}\) are equivalent to a given \(G\). The result follows.
## 4 Proof of the Theorems
### Semisimplicity assumption on the Frobenius action
Let \(E\) be an ordinary elliptic curve over \(\mathbb{F}_{q}\) with \(\operatorname{End}^{\circ}(E)=K\) and let \(\pi\) be the Frobenius endomorphism. Here \(K\) is the imaginary quadratic field generated by \(\pi\): \(K=\mathbb{Q}(\pi)\). We fix a basis \(\{e_{1},e_{2}\}\) of \(T_{\ell}(E)\). The characteristic polynomial \(\chi_{\pi}\) is the unique polynomial such that for every \(n\) prime to \(p\), the characteristic polynomial of the action of the Frobenius \(\pi\) on \(E[n]\) is \(\chi_{\pi}\bmod n\). Let \(\Delta_{\pi^{n}}\) denote the discriminant of \(\pi^{n}\). The characteristic polynomial of \(\pi^{n}\) is the quadratic polynomial
\[\chi_{\pi^{n}}=x^{2}-tx+q^{n}.\]
We have \(\Delta_{\pi^{n}}=t^{2}-4q^{n}\).
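For a toy illustration (numbers chosen here, not taken from a specific curve in the paper): if \(q=5\) and \(t=2\), then \(\chi_{\pi}=x^{2}-2x+5\) and \(\Delta_{\pi}=4-20=-16\), while over \(\mathbb{F}_{25}\) the trace of \(\pi^{2}\) is \(t^{2}-2q=-6\), so
\[\chi_{\pi^{2}}=x^{2}+6x+25,\qquad\Delta_{\pi^{2}}=36-100=-64.\]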
_Remark 4.1_.: For a degree \(n\) extension \(\mathbb{F}_{q^{n}}\), the Frobenius of \(E_{\mathbb{F}_{q^{n}}}\) is \(\pi^{n}\).
An isogeny \(\phi\colon E\to E^{\prime}\) whose kernel is a cyclic subgroup of \(E\) can be understood by looking at the Frobenius action on the torsion subgroups. If \(\phi\) is defined over \(\mathbb{F}_{q^{n}}\), then \(\ker\phi\) is stabilized by the Frobenius action. For \(\ell\neq p\), the number of \(\ell\)-power isogenies defined over \(\mathbb{F}_{q^{n}}\) is determined by the action of \(\pi^{n}\) on \(\ell\)-power torsions. Moreover, the action of Frobenius can be realized as a \(2\times 2\) matrix with coefficients in \(\mathbb{Z}/\ell^{m}\mathbb{Z}\).
Now we state the semisimplicity assumption on the Frobenius action, which helps us narrow down cases that we should focus on.
Recall that our goal is to compute the number of \(\ell\)-power isogenies from \(E\) that are defined over \(\mathbb{F}_{q^{n}}\), as \(\ell\) ranges over all prime integers. The semisimplicity of the Frobenius action depends on whether \(\ell\) is ramified in \(\mathcal{O}_{K}\) or not:
* If \(\ell\) is unramified in \(\mathcal{O}_{K}\), then \(\pi^{n}\) is semisimple modulo \(\ell^{m}\) for all \(m\geq 1\). We prove the following lemma:
**Lemma 4.2**.: _Let \(m\) be the maximal number such that \(\Delta_{\pi^{n}}\equiv 0\) mod \(\ell^{2m}\), then_
\[\pi^{n}\equiv\begin{bmatrix}\lambda&0\\ 0&\lambda\end{bmatrix}\text{ mod }\ell^{m}.\]
Proof.: Let \(\lambda_{1}\) and \(\lambda_{2}\) be the eigenvalues of \(\chi_{\pi^{n}}\). We have
\[\Delta_{\pi^{n}}=(\lambda_{1}-\lambda_{2})^{2}\]
and \(\ell^{2m}\) divides \(\Delta_{\pi^{n}}\). Therefore \(\ell^{m}\mid(\lambda_{1}-\lambda_{2})\).
Since \(\ell\) is unramified in \(\mathcal{O}_{K}\), the action of \(\pi^{n}\) is semisimple modulo \(\ell^{m}\). Working over \(\mathbb{Z}_{\ell}\), if \(\lambda_{1},\lambda_{2}\in\mathbb{Z}_{\ell}\) we are done. Otherwise, \(\lambda_{1},\lambda_{2}\in\mathcal{O}_{\ell}\), where \(\mathcal{O}_{\ell}\) is unramified of degree \(2\) over \(\mathbb{Z}_{\ell}\). We now prove that \(\lambda_{1},\lambda_{2}\) mod \(\ell^{m}\) lie in \(\mathbb{Z}/\ell^{m}\mathbb{Z}\). By the semisimplicity assumption, the action of \(\pi^{n}\) is diagonalizable over \(\mathcal{O}_{\ell}/\ell^{m}\mathcal{O}_{\ell}\) for any \(m\geq 1\). This is equivalent to saying there exists \(X\in\operatorname{GL}_{2}(\mathcal{O}_{\ell}/\ell^{m}\mathcal{O}_{\ell})\) such that
\[\pi^{n}=X\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{bmatrix}X^{-1}\text{ mod }\ell^{m}.\]
But we proved that \(\lambda_{1}\equiv\lambda_{2}\) mod \(\ell^{m}\). Therefore
\[\pi^{n}=XX^{-1}\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{1}\end{bmatrix}=\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{1}\end{bmatrix}\text{ mod }\ell^{m}.\]
Since \(\pi^{n}\) mod \(\ell^{m}\in\operatorname{GL}_{2}(\mathbb{Z}/\ell^{m}\mathbb{Z})\), the lemma follows.
* If \(\ell\) is ramified in \(\mathcal{O}_{K}\), then it is possible that the Frobenius action \(\pi^{n}\) is not semisimple. But recall that we assumed that \(n\) satisfies \((n,\ell)=1\) for all \(\ell\) ramified in \(\mathcal{O}_{K}\). This implies that for all such \(\ell\mid\Delta_{K}\), the power of \(\ell\) dividing \(\Delta_{\pi^{n}}\) is bounded independently of \(n\). Therefore we have the following corollary:
**Corollary 4.3**.: _Let \(n\) be an integer such that for all primes \(\ell\mid\Delta_{K}\), \((n,\ell)=1\). Let \(S\) be the set of all primes that divide \(\Delta_{K}\). Then the number of \(\ell\)-power isogenies with \(\ell\in S\) is bounded independently of \(n\). In other words, this number does not grow with \(n\)._
We classify the Frobenius action under the semisimplicity assumption as follows (a small computational sketch follows the list):
1. Assume \(\ell,m,n\) are such that \(\chi_{\pi^{n}}\) is irreducible modulo \(\ell^{m}\). In this case \(\pi^{n}\) acts on \(E[\ell^{m}]\) as a distortion map, i.e., no subgroup \(\mathbb{Z}/\ell^{m}\mathbb{Z}\) is stabilized by \(\pi^{n}\). Therefore \(E\) has no \(\ell^{m}\)-isogenies defined over \(\mathbb{F}_{q^{n}}\).
2. Assume \(\ell,m,n\) are such that \(\pi^{n}\) is diagonalizable mod \(\ell^{m}\) and \(\chi_{\pi^{n}}\) has distinct eigenvalues \(\lambda\) and \(\mu\) modulo \(\ell^{m}\). In this case, the Frobenius acts on \(E[\ell^{m}]\) as a matrix conjugate to \(\begin{bmatrix}\lambda&0\\ 0&\mu\end{bmatrix}\) and there are two isogenies of degree \(\ell^{m}\) from \(E\) which are defined over \(\mathbb{F}_{q^{n}}\), given by \(E\) modulo the cyclic subgroups generated by the eigenvectors of \(\lambda\) and \(\mu\), respectively.
3. Assume \(\ell,m,n\) such that \(\pi^{n}\) is diagonalizable mod \(\ell^{m}\). Moreover, the Frobenius \(\chi_{\pi^{n}}\) modulo \(\ell^{m}\) has one eigenvalue \(\lambda\) of multiplicity two. In this case, the Frobenius acts on \(E[\ell^{m}]\) as a scalar multiple by \(\begin{bmatrix}\lambda&0\\ 0&\lambda\end{bmatrix}\) and every \(\ell^{m}\) subgroup is stable under \(\pi^{n}\). Therefore, there are \(\ell^{m}+\ell^{m-1}+\cdots+1\)\(\ell\)-power isogenies of degree less than or equal to \(\ell^{m}\) from \(E\) which are defined over \(\mathbb{F}_{q^{n}}\).
4. Assume \(\ell,m,n\) are such that \(\pi^{n}\) is diagonalizable mod \(\ell^{m}\), and assume that \(\chi_{\pi^{n}}\) has distinct eigenvalues \(\lambda\) and \(\mu\) modulo \(\ell^{m}\) that are congruent modulo \(\ell^{r}\) for some \(1<r<m\). In this case, the Frobenius acts on \(E[\ell^{r}]\) as a matrix conjugate to \(\begin{bmatrix}\lambda&0\\ 0&\lambda\end{bmatrix}\) and there are \(\ell^{r}+\ell^{r-1}+\cdots+1\)\(\ell\)-power isogenies of degree less than or equal to \(\ell^{m}\) from \(E\) which are defined over \(\mathbb{F}_{q^{n}}\).
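The case distinction above is mechanical once the trace of \(\pi^{n}\) is known. The following Python sketch (an illustration with toy inputs, not code from the paper) computes \(t_{n}=\operatorname{tr}(\pi^{n})\) from the recursion \(t_{k}=t\,t_{k-1}-q\,t_{k-2}\) with \(t_{0}=2\), \(t_{1}=t\), and lists the roots of \(\chi_{\pi^{n}}\) modulo \(\ell^{m}\); an empty list corresponds to case 1, a double root to case 3, and so on.

```python
def trace_of_frobenius_power(t, q, n):
    """Trace t_n of pi^n via t_k = t*t_{k-1} - q*t_{k-2}, t_0 = 2, t_1 = t."""
    a, b = 2, t
    for _ in range(n - 1):
        a, b = b, t * b - q * a
    return b

def frobenius_roots_mod(t, q, n, l, m):
    """Roots of x^2 - t_n*x + q^n modulo l^m by direct search."""
    tn, N = trace_of_frobenius_power(t, q, n), l**m
    return sorted(x for x in range(N) if (x * x - tn * x + q**n) % N == 0)

# toy curve with q = 5 and t = 2, as in the illustration above
print(frobenius_roots_mod(2, 5, 1, 3, 1))  # []    : case 1 for l = 3
print(frobenius_roots_mod(2, 5, 1, 2, 2))  # [1, 3]: distinct mod 4, congruent mod 2
```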
### Horizontal isogenies
As one may notice, many prime power isogenies from \(E\) have targets with the same endomorphism ring as \(E\). By the theory of complex multiplication, the number of such isogenies is bounded by the class number of \(\mathcal{O}_{K}\); see Theorem 4.6. We give information about when such isogenies arise and use this information to bound the total number of isogenies defined over \(\mathbb{F}_{q^{n}}\).
We use the classification of the Frobenius action on the \(\ell\)-torsion subgroups to compute each case's horizontal prime power isogenies.
**Definition 4.4**.: Let \(f\colon E\to E^{\prime}\) be an isogeny of degree \(\ell^{m}\). We say \(f\) is _horizontal_ if \(\operatorname{End}(E)=\operatorname{End}(E^{\prime})\).
Let \(\mathfrak{a}\) be an invertible ideal in \(\operatorname{End}(E)\). Define the \(\mathfrak{a}\)-torsion subgroup of \(E\) as
\[E[\mathfrak{a}]:=\left\{P\in E\left(\overline{\mathbb{F}}_{q}\right)\,|\,\, \sigma(P)=0\text{ for all }\sigma\in\mathfrak{a}\right\}.\]
Let \(\phi_{\mathfrak{a}}\) be an isogeny whose kernel is \(E[\mathfrak{a}]\). Then the codomain \(E/E[\mathfrak{a}]\) is a well-defined elliptic curve. The isogeny \(\phi_{\mathfrak{a}}\) is horizontal, and its degree equals the ideal norm of \(\mathfrak{a}\). We write \(\mathfrak{a}\cdot E\) for the isomorphism class of the image of \(\phi_{\mathfrak{a}}\).
**Lemma 4.5**.: _Let \(E_{q^{n}}\) be an ordinary elliptic curve over \(\mathbb{F}_{q^{n}}\) with the Frobenius action by \(\pi^{n}\). Let \(H(\ell^{m})\) denote the number of horizontal \(\ell^{m}\)-isogenies._
\[H(\ell^{m})=\begin{cases}0,\text{ if }\pi^{n}\text{ is irreducible}\\ 1,\text{ if }\pi^{n}\text{ is diagonalizable with one eigenvalue modulo }\ell^{m}\\ 2,\text{ if }\pi^{n}\text{ is diagonalizable with two eigenvalues modulo }\ell^{m}\end{cases}.\]
Proof.: If \(\Delta_{\pi^{n}}\) is not a square modulo \(\ell^{m}\), we are in case (a) where no subgroup of order \(\ell^{m}\) is stabilized by the action of \(\pi^{n}\). Therefore no \(\ell^{m}\)-isogeny is defined over \(\mathbb{F}_{q^{n}}\).
Suppose \(\pi^{n}\) is diagonalizable with one eigenvalue modulo \(\ell^{m}\). In that case, we are in case (b.2) (and (b.3)), and there is one horizontal isogeny given by \(\mathfrak{a}=(\pi^{n}-\lambda,\ell^{m})\) with norm \(\ell^{m}\). Moreover, \(\phi_{\mathfrak{a}}\) is self-dual.
If \(\pi^{n}\) is diagonalizable with two eigenvalues modulo \(\ell^{m}\), we are in case (b.1). There are two torsion subgroups of order \(\ell^{m}\), generated by the eigenvectors for \(\lambda\) and \(\mu\) respectively. The two horizontal isogenies are given by the ideals \(\mathfrak{a}=(\pi^{n}-\lambda,\ell^{m})\) and \(\hat{\mathfrak{a}}=(\pi^{n}-\mu,\ell^{m})\). Furthermore, \(\mathfrak{a}\hat{\mathfrak{a}}=(\ell^{m})\), implying that \(\mathfrak{a}\) and \(\hat{\mathfrak{a}}\) are inverses of one another in the class group; thus \(\phi_{\hat{\mathfrak{a}}}\) is the dual isogeny of \(\phi_{\mathfrak{a}}\).
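As a sanity check on this case analysis, for odd \(\ell\) with \(\ell\nmid\Delta_{\pi^{n}}\) the value of \(H(\ell^{m})\) reduces to a quadratic-residue test on \(\Delta_{\pi^{n}}\). The following toy Python check is ours, not the paper's; it simply brute-forces the unit squares modulo \(\ell^{m}\):

```python
# Toy check of Lemma 4.5 for odd l with l not dividing Delta = a^2 - 4q:
# Delta a unit square mod l^m     -> two eigenvalues  -> H = 2;
# Delta a unit non-square mod l^m -> chi irreducible  -> H = 0.
def H_lm(delta, l, m):
    N = l ** m
    unit_squares = {x * x % N for x in range(N) if x % l != 0}
    return 2 if delta % N in unit_squares else 0

print(H_lm(-19, 5, 2))  # hypothetical example: Delta = -19 modulo 25
```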
Recall that for an elliptic curve \(E\) with CM by an order \(\mathcal{O}\), horizontal \(\ell\)-isogenies correspond to the CM action of an invertible \(\mathcal{O}\)-ideal of norm \(\ell\). Moreover, let \(\operatorname{Ell}_{q}(\mathcal{O})\) be the set
\[\operatorname{Ell}_{q}\left(\mathcal{O}\right):=\left\{E/\mathbb{F}_{q}: \operatorname{End}(E)\simeq\mathcal{O}\right\}.\]
Because elliptic curves in \(\operatorname{Ell}_{q}(\mathcal{O})\) are connected exclusively by horizontal cyclic isogenies, the theory of complex multiplication tells us:
**Theorem 4.6**.: _Let \(E\) be an elliptic curve with endomorphism ring \(\mathcal{O}\), and assume the set \(\operatorname{Ell}_{q}(\mathcal{O})\) is non-empty. Then \(\operatorname{Ell}_{q}(\mathcal{O})\) is a principal homogeneous space for the class group \(\mathcal{C}\ell(\mathcal{O})\), under the action_
\[\mathcal{C}\ell(\mathcal{O})\times\operatorname{Ell}_{q}(\mathcal{O})\longrightarrow\operatorname{Ell}_{q}(\mathcal{O}),\qquad(\mathfrak{a},E)\longmapsto\mathfrak{a}\cdot E, \tag{4.1}\]
_with cardinality equal to the class number \(h(\mathcal{O})\)._
Proof.: See for example [11, Chapter II].
### Proof of Theorem 1.2
Fix an ordinary elliptic curve \(E\) over \(\mathbb{F}_{q}\) as in the previous context. Assume \(\operatorname{End}(E)=\mathcal{O}\). We compute the size of \(I(q^{n},E)\), which can be interpreted as the number of certain cyclic subgroups. We consider two kinds of subgroups: those that are kernels of horizontal isogenies, and those that define non-horizontal isogenies.
Let \(\{\ell_{i}\}\), \(1\leq i\leq k\), be the set of prime divisors of \(\Delta_{\pi^{n}}\) which are unramified in \(\mathcal{O}_{K}\). For each \(i\), let \(m_{i}\) be the _maximal_ integer such that \(\ell_{i}^{2m_{i}}\mid\Delta_{\pi^{n}}\). By Lemma 4.2, the \(n\)-Frobenius action is diagonalizable, and the classification (b.2) tells us that every cyclic subgroup of order \(\ell_{i}^{j}\), \(1\leq j\leq m_{i}\), is defined over \(\mathbb{F}_{q^{n}}\). For ordinary elliptic curves, \(\Delta_{\pi^{n}}\neq 0\), so only finitely many primes divide \(\Delta_{\pi^{n}}\).
**Lemma 4.7**.: _We have_
\[N(q^{n},E)\asymp(\prod_{i=1}^{k}\ell_{i}^{m_{i}})^{1-\epsilon}.\]
Proof.: Denote by \(\operatorname{Ell}_{q,\operatorname{as/ds}}(\mathcal{O})\) the isomorphism classes of elliptic curves that admit an ascending/descending isogeny to \(E\). Thus
\[N(q^{n},E)=|\operatorname{Ell}_{q,\operatorname{as/ds}}(\mathcal{O})|+| \operatorname{Ell}_{q}(\mathcal{O})|.\]
Under our assumptions on \(n\), Lemma 4.2 shows that \(\pi^{n}\) is diagonalizable modulo any power of \(\ell_{1},\dots,\ell_{k}\), and Corollary 4.3 shows that the number of ramified prime-power isogenies does not grow with \(n\). Thus we only have to consider isogenies from the cases where the Frobenius action is diagonal, i.e., those whose number grows with \(n\). By Theorem 4.6, the number of horizontal isogenies \(|\operatorname{Ell}_{q}(\mathcal{O})|=h(\mathcal{O})\) is fixed once we fix \(E\).
The number of non-horizontal isogenies is roughly the number of cyclic subgroups of order at most \(\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}}\), minus at most \(h(\mathcal{O})\). This is because Lemma 4.5 implies that if \(\Delta_{\pi^{n}}\equiv 0\) mod \(\ell^{m}\), there is always a horizontal \(\ell\)-power isogeny, and Theorem 4.6 tells us that at most \(h(\mathcal{O})\) horizontal isogenies arise in this way. By Theorem 2.5, for any cyclic subgroup \(G\) of \(\mathbb{Z}/\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}}\mathbb{Z}\), there are at most \((\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}})^{\epsilon}\) cyclic subgroups that give quotient curves isomorphic to \(E/G\). Therefore
\[N(q^{n},E)=\prod_{i=1}^{k}\left(\ell_{i}^{m_{i}}+\ell_{i}^{m_{i}-1}+\cdots+\ell_{i}\right)\Big/\left(\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}}\right)^{\epsilon}\asymp\left(\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}}\right)^{1-\epsilon}. \tag{4.3}\]
Proof of Theorem 1.2.: Lemma 4.7 expresses \(N(q^{n},E)\) in terms of the product \(\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}}\); on the other hand, for large \(n\), this product well approximates the square root of \(\Delta_{\pi^{n}}\):
\[\prod_{i=1}^{k}\ell_{i}^{m_{i}}\asymp\Delta_{\pi^{n}}^{\frac{1}{2}}\asymp q^{ \frac{n}{2}}.\]
The theorem follows.
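The asymptotics above are easy to observe numerically. The short Python sketch below is ours, not the paper's (the curve data are hypothetical): it computes the trace \(t_{n}\) of \(\pi^{n}\) via the recurrence \(t_{n}=at_{n-1}-qt_{n-2}\) with \(t_{0}=2\), \(t_{1}=a\), and the discriminant \(\Delta_{\pi^{n}}=t_{n}^{2}-4q^{n}\); writing \(\pi=\sqrt{q}\,e^{i\theta}\), one has \(|\Delta_{\pi^{n}}|=4q^{n}\sin^{2}(n\theta)\), so \(|\Delta_{\pi^{n}}|^{1/2}\asymp q^{n/2}\) for typical \(n\).

```python
# Ours, not the paper's code; illustrates |Delta_{pi^n}| ~ q^n for an
# ordinary curve over F_q with Frobenius trace a.
def frobenius_discriminants(a, q, n_max):
    t_prev, t = 2, a  # t_0, t_1
    rows = []
    for n in range(1, n_max + 1):
        rows.append((n, t, t * t - 4 * q ** n))
        t_prev, t = t, a * t - q * t_prev
    return rows

# hypothetical example: q = 7, a = 3, so Delta_pi = 9 - 28 = -19
for n, t_n, disc in frobenius_discriminants(3, 7, 6):
    print(n, t_n, disc, abs(disc) / 7 ** n)  # last column is 4*sin(n*theta)^2
```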
### Proof of Theorem 1.1
Proof of Theorem 1.1.: Let \(A=E\times E_{ss}\) be an abelian surface defined over \(\mathbb{F}_{q}\), with the assumption that \(E\) is the same ordinary elliptic curve as in the previous section. The Frobenius \(\pi_{A}^{n}\) acts on the \(\ell\)-adic Tate modules of \(A\) by a matrix conjugate to
\[\begin{pmatrix}\pi^{n}&0\\ 0&q^{n/2}I_{2}\end{pmatrix}\]
where \(\pi^{n}\) is the Frobenius of \(E\) over \(\mathbb{F}_{q^{n}}\). As before, take the set of prime divisors of \(\Delta_{\pi^{n}}\) which are unramified in \(\mathcal{O}_{K}\), and restrict to positive integers \(n\) such that \((n,\ell)\neq 1\); we want to count the number of inequivalent maximal isotropic planes defined over \(\mathbb{F}_{q^{n}}\). By definition of \(m_{i}\), for each \(1\leq i\leq k\), \(\pi_{A}^{n}\) acts as a scalar on \(A[\ell_{i}^{m_{i}}]\).
Corollary 3.7 together with the equality
\[\prod_{i=1}^{k}\ell_{i}^{m_{i}}\asymp q^{\frac{n}{2}}\]
indicate that for a positive density set of \(n\), we have
\[N(q^{n},A)\gg(\ell_{1}^{m_{1}}\cdots\ell_{k}^{m_{k}})^{2-\epsilon}=q^{n+\epsilon}\]
for some \(\epsilon>0\).
## Acknowledgements
The author wishes to thank Ananth Shankar for encouraging her to think about this question and also for helpful conversations throughout this project. She is also very grateful to Tonghai Yang for many helpful discussions.
Y.F. is partially supported by the NSF grant DMS\(-\)2100436. |
2310.04013 | A Survey of Mathematical Models on Somitogenesis | This paper presents a comprehensive survey of various established
mathematical models pertaining to Somitogenesis, a biological process. The
study begins by revisiting and replicating the findings from prominent research
papers in this domain, subsequently offering a critical evaluation of the
strengths and weaknesses inherent in each approach. By synthesizing this
knowledge, the paper aims to contribute to a deeper understanding of
Somitogenesis, and pave the way for further advancements in the development of
enhanced mathematical models for this intricate biological process. The
concluding section offers valuable insights and directions for prospective
research in this field. | Hanyu Song | 2023-10-06T04:50:55Z | http://arxiv.org/abs/2310.04013v1 | # A Survey of Mathematical Models on Somitogenesis
###### Abstract
This paper presents a comprehensive survey of various established mathematical models pertaining to Somitogenesis, a biological process. The study begins by revisiting and replicating the findings from prominent research papers in this domain, subsequently offering a critical evaluation of the strengths and weaknesses inherent in each approach. By synthesizing this knowledge, the paper aims to contribute to a deeper understanding of Somitogenesis, and pave the way for further advancements in the development of enhanced mathematical models for this intricate biological process. The concluding section offers valuable insights and directions for prospective research in this field.
###### Contents
* 1 Introduction to Somitogenesis
* 1.1 Unsolved questions
* 2 Clock and Wavefront Model
* 2.1 Summary
* 2.2 Mathematical Equations
* 2.3 Analysis
* 3 Oscillatory-based Model
* 3.1 Summary of the PORD Model
* 3.2 Mathematical Equation
* 3.3 Analysis
* 4 Excitable Model
* 4.1 Summary of the one-dimensional RD Model
* 4.2 FhN-type system and excitability
* 4.3 Analysis
* 5 Conclusion
* Appendices: Code
* Clock and Wavefront
* Nagahara Discrete
## Introduction to Somitogenesis
Somites are blocks of cells that lie along the anterior-posterior (AP) axis of the developing vertebrate embryo.
Somitogenesis is the process by which somites form, segmenting the axis into similar morphological units that later give rise to repeated structures such as the vertebrae. Somitogenesis is a key biological process in the embryo, since it is responsible for segmenting the vertebrate axis and generating the prepattern that guides the formation of the tendons, ribs, muscles, and other associated features of the body trunk. Figure 1 illustrates the form of somites in an embryo and how segmentation works along the AP axis.
Although many details about somitogenesis are still debated, some scientific facts serve as the foundation for further research. Somites segment from the presomitic mesoderm (PSM): thick bands of tissue that lie on either side of the AP axis. The segmentation begins with the establishment of a prepattern of gene expression, characterized by periodic activations in regions where future somites will segment. Early scanning microscope images show that the posterior PSM displays a series of cell groupings similar in size and structure, known as _somitomeres_, which appear to be the precursors of the somites [1]. The existence of this prepattern was confirmed by microsurgical experiments in which isolated parts of the PSM formed somites in strict isolation [7]. Figure 2 demonstrates the wave-like gene expression in a mouse embryo.
Figure 1: Embryonic somites and the AP axis. The left picture is a human embryo [8], where the somites are already in shape when the embryo is still very immature. The right picture is an anterior-posterior axis where somites segment from the PSM cells that lie on both sides of the AP axis [3]. From the posterior end to the anterior, the cells transform from undetermined, to determined, then finally to somites.
Another fact is that the PSM is not a homogeneous tissue [13]. This is supported by microsurgical experiments conducted by Dubrulle and co-workers: AP inversions of somite-length regions of the posterior PSM resulted in normal segmentation, whilst inversions of the anterior PSM resulted in somites with reversed polarity [10]. This suggests that the anterior-most part of the PSM is already determined with regard to its segmentation program, whilst the posterior-most part is still susceptible in this respect. This establishes the PSM's heterogeneity, which is a key feature of models of somitogenesis.
The different regions of the PSM were found to correspond to regions of varying FGF signaling, in particular a gradient of FGF8 (Fibroblast Growth Factor 8), the product of the gene _fgf8_. _fgf8_ is dynamically expressed in the PSM, peaking at the posterior end of the embryo and decreasing toward the anterior end [6]. See Figure 3. The function of FGF8 is to down-regulate the cells: a higher concentration of FGF8 prevents segmentation of the PSM, whilst its decrease makes segmentation possible, and once FGF8 drops below a certain threshold, the cells become able to segment into somites. We call that threshold "the determination front" [10]. The uneven distribution of FGF8 implies that the positional information of the PSM cells is crucial. However, the role of positional information is a controversial issue in mathematical biology, and it is typically not possible to build robust biological structures without additional mechanisms, such as diffusion [15].
#### Unsolved questions
We know that the down-regulation of _fgf8_ heavily affects the somitogenesis process, and we seem to understand the logic behind it, but it is difficult to conclude which specific type of model accurately recapitulates the process. Many questions remain open: are the PSM cells oscillatory or excitable with respect to FGF8 levels? Are the cells globally controlled by the gene gradient, or do they also have local interactions among themselves? Does the global _fgf8_ down-regulation even matter? If _fgf8_ is kept constant, can a reaction-diffusion model that emphasizes local interactions between cells explain the process accurately? In the later sections of this paper, we look into several kinds of mathematical models, each with distinct answers to the above questions. Admittedly, none of them is "perfect", and each has its own drawbacks. Still, understanding these models' mechanisms is important, as it could accelerate the development of better ones in the future.
Moreover, to date nobody has provided a systematic comparison of these models, and most papers on this topic do not even reference each other,
Figure 2: Gene expression in a mouse embryo, where the gene is marked green [2]. The gene has a wave-like propagation from the posterior end of the PSM to the anterior side.
as they come from different fields: mathematical biology, developmental biology, physics, etc. Therefore, this paper's goal is not to rank the models and find a perfect one, but to identify each model's distinctive advantages, synthesize them where possible, and avoid their drawbacks when attempting to create new models in the future.
## Clock and Wavefront Model
### Summary
One of the most famous and widely studied models is the clock and wavefront (C & W) model. As its name implies, the model proposes the existence of a segmentation clock and a wavefront of FGF8 along the AP axis of vertebrate embryos. This idea was first proposed in 1975 by Cooke and Zeeman, with the gist that there is longitudinal, global positional information (the above-mentioned FGF8 gradient) that interacts with a smooth cellular oscillator, the so-called clock, to govern when the PSM cells segment and develop into somites. The idea was then revised by Pourquie and co-workers, who proposed more specifically that the clock sets the times at which new somite boundaries form, whilst the position of the determination front sets where they form [10]. For a cell at a particular point, they assume that competence to segment is achieved only once FGF8 signaling has decreased below a certain threshold, whose position is known as the determination front.
Figure 3: PSM cells and gene gradients, FGF included [11]. The PSM elongates posteriorly as the somites are formed, whilst the gradient of FGF8 always peaks at the very posterior end of the PSM. As it decreases to a certain level, the prepattern arises (the blocks that are not fully green), and then somites are formed. In contrast to the FGF gradient, the RA gradient, produced by another gene, peaks at the anterior end of the PSM, but it is less relevant to the overall process than FGF8.
Therefore, according to this model, the somitogenesis process is divided into stages. Before reaching the determination front, a cell gains the ability to segment by becoming able to produce a "somitic factor", which could correspond to one of several genes. One clock oscillation after reaching the determination front, cells become able to produce the "signaling molecule". Once a cell can produce the somitic factor and respond to the signaling molecule, it is specified as somitic and becomes refractory to FGF8 signaling [16].
#### Mathematical Equations
C & W mathematical equations were first proposed by Collier _et al._ (2000) and were developed by McInerney _et al._ in 2004, then by Baker and colleagues in 2006. One of the most important features of this model is that local mechanisms, controlled by time points and positional information, trigger segmentation, which fits the C & W assumption perfectly. After segmentation, cells adhere to each other, creating distinct somites. When creating this model, Collier made some further assumptions [12]: (1) The AP axis can be seen as fixed with respect to the cells. The PSM's length is constant and the segmentation pattern progresses at a constant speed. (In reality, the posterior end is elongating.) (2) The signals emitted by specified cells when they reach certain points are pulse-like. The signaling molecule disperses quickly and diffuses rapidly. This is a key assumption, since rapid diffusion ensures that only cells at certain positions respond to the signal; otherwise, all cells would segment at the same time. (3) Somites
Figure 4: Representation of the vertebrate body plan during somite formation [4]. The top part of the diagram shows the FGF8 wavefront, with a peak in the posterior and a decrease in the direction of the anterior. When the FGF8 decreases to a certain level, it reaches the determination front. The middle section of the diagram shows the AP axis of the embryo with the somites (dark grey blocks), determined region (light grey blocks), and the undetermined region (light grey band) clearly marked. The bottom is a visualization of a segmentation clock, which shows the time needed for cells to gain segmentation.
are formed continually, and the beginning or end of this process is not considered, which means it is assumed that signals emitted from cells exist at all times.
This model can be explained with Figure 5. In this diagram, x denotes distance and t denotes time. There are two key components: u(x,t), the concentration of somitic factor a cell is exposed to at a given x and t, and v(x,t), the concentration of the diffusive signaling molecule. A cell with a high concentration of u is specified as somitic, while one with a low concentration of u is non-somitic.
The mathematical equations proposed by Collier and colleagues are also based on these two components [12]:
\[\partial_{t}u(x,t)=\frac{(u+\mu v)^{2}}{\gamma+\kappa u^{2}}\chi_{u}(x,t)- \frac{u}{k} \tag{1}\]
\[\partial_{t}v(x,t)=\frac{\chi_{v}(x,t)}{\epsilon+u}-v+D\frac{\partial^{2}v}{\partial x^{2}} \tag{2}\]
\(\chi_{u}\) and \(\chi_{v}\) are controlled by two Heaviside step functions:
\[\chi_{u}=H(ct-x+x_{1}) \tag{3}\] \[\chi_{v}=H(ct-x+x_{2}) \tag{4}\]
note that the Heaviside step function is defined as:
\[H(x)=\begin{cases}1&x\geq 0\\ 0&x<0\end{cases} \tag{5}\]
As mentioned above, u and v represent the concentration of "somitic growth factor" and "signaling molecule" respectively, while other variables in these equations are all positive constants. This model uses a zero flux boundary condition. It
Figure 5: Representation of the C & W model illustrating the two time points P1 and P2 and the three key stages within the model [12]. Cells at the posterior end of the PSM (Region I) are less mature than those in other regions, since the somitogenesis process starts from the anterior part. As cells become more mature in Region II, they become capable of responding to the signaling molecule v, emitted by cells at point P2. In Region III they begin to form somites and are no longer able to emit any signals.
prevents anything from entering or leaving the system, which may be seen as an application of the third assumption made by Collier mentioned above.
The Heaviside functions \(\chi_{u}\) and \(\chi_{v}\) play an important role in this model. They can be seen as switches: their arguments, combining the time t and the position x, determine whether the production terms in u and v are turned on or off. In Figure 5, the Heaviside functions are shown along with the regions where the somitic growth factor u is, respectively, high (\(x<x_{2}+ct\)) and low (\(x>x_{1}+ct\)). The somitic growth factor and the signaling molecule jointly drive the somitogenesis process and affect each other: \(\partial_{t}u\) depends on v and \(\partial_{t}v\) depends on u. Specifically, u inhibits v while v activates u.
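To make the structure of equations (1)-(4) concrete, here is a minimal method-of-lines sketch in Python. It is not the authors' appendix code: the decay constant k, the switch offsets \(x_{1}\) and \(x_{2}\), the domain, and the initial data are illustrative assumptions (the other parameters follow the Figure 8 caption), and, as the Analysis section stresses, the resulting pattern is sensitive to such choices.

```python
# Minimal method-of-lines sketch of equations (1)-(4); assumed values are
# marked, everything else follows the Figure 8 caption.
import numpy as np
from scipy.integrate import solve_ivp

mu, gam, kap, c, eps, D = 1e-1, 0.2, 10.0, 5e-3, 1e-3, 100.0
k = 5.0                 # assumed decay constant (not listed in the caption)
x1, x2 = 0.5, 0.0       # assumed switch offsets with x1 > x2
N, L = 300, 5.0
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def rhs(t, y):
    u, v = y[:N], y[N:]
    chi_u = np.heaviside(c * t - x + x1, 1.0)   # equation (3)
    chi_v = np.heaviside(c * t - x + x2, 1.0)   # equation (4)
    vp = np.pad(v, 1, mode='edge')              # zero-flux boundaries
    lap_v = (vp[2:] - 2.0 * v + vp[:-2]) / dx**2
    du = (u + mu * v) ** 2 / (gam + kap * u ** 2) * chi_u - u / k
    dv = chi_v / (eps + u) - v + D * lap_v
    return np.concatenate([du, dv])

u0 = np.where(x < x1, 0.5, 0.0)                 # anterior cells already somitic
sol = solve_ivp(rhs, (0.0, 300.0), np.concatenate([u0, np.zeros(N)]),
                method='BDF', t_eval=np.linspace(0.0, 300.0, 61))
u_xt = sol.y[:N]        # somitic factor over space (rows) and time (columns)
```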
The model was further expanded by Baker and colleagues in 2006. They revised the two equations above and added a third equation to the system, \(\frac{\partial w}{\partial t}\), which represents the changing gradient of FGF8 that down-regulates the somitogenesis process [4]:
\[\frac{\partial u}{\partial t} =\frac{(u+\mu v)^{2}}{\gamma+u^{2}}\chi_{u}-u \tag{6}\] \[\frac{\partial v}{\partial t} =k(\frac{\chi_{v}}{\epsilon+u}-v)+D_{v}\frac{\partial^{2}v}{ \partial x^{2}}\] (7) \[\frac{\partial w}{\partial t} =\chi_{w}-\eta w+D_{w}\frac{\partial^{2}w}{\partial x^{2}} \tag{8}\]
and \(\chi_{w}=H(x-x_{n}-c_{n}t)\), where \(x_{n}\) and \(c_{n}\) are constants. Building on the previous system, this system reproduces three important behaviors: (1) the increase of the somitic factor u is activated by the signaling molecule and is self-regulating; (2) the somitic factor inhibits the signaling molecule; in other words, the signaling molecule is produced rapidly in areas where the somitic factor concentration is low; (3) FGF8 is produced in the tail and regresses along the x-axis.
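Extending the sketch above toward equations (6)-(8) only requires a right-hand side for the FGF8 field w; a hedged fragment follows (reusing x and dx from the earlier sketch, with \(\eta\), \(D_{w}\), \(x_{n}\), and \(c_{n}\) taken from the Figure 9 caption):

```python
# Fragment for equation (8); eta, Dw, xn, cn follow the Figure 9 caption.
eta, Dw, xn, cn = 1.0, 20.0, 0.0, 0.5

def rhs_w(t, w):
    chi_w = np.heaviside(x - xn - cn * t, 1.0)  # FGF8 produced in the tail
    wp = np.pad(w, 1, mode='edge')              # zero-flux boundaries
    lap_w = (wp[2:] - 2.0 * w + wp[:-2]) / dx**2
    return chi_w - eta * w + Dw * lap_w
```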
#### Analysis
The C & W mathematical model proved effective in producing a qualitatively reasonable match to reality. We recapitulated and reproduced some of the results of the above mathematical equations shown in McInerney's and Baker's papers, and they support the gist of the C & W theory. We first analyzed the qualitative behavior of this model; the result can be explained with Figure 6, which is derived from Figure 5:
In Region I, since the switches \(\chi_{u}=\chi_{v}=0\), \((u,v)\rightarrow(0,0)\), while in Regions II and III, as \(\chi_{u}\) and \(\chi_{v}\) change with respect to t, the qualitative behavior of these two regions differs. Below are the phase planes of u and v in Regions II and III respectively, reproducing Fig 2 and Fig 3 of McInerney's paper [12] using XPP:
Figure 6: The three stages of somitogenesis from the posterior to the anterior end of the PSM.
In Region II, there are three steady states: two stable equilibria, with u close to 0 and 1, and a saddle in the middle. Region II carries the pulse of the signaling molecule. In the phase plane, we can see that after the cells pass the determination front and before they finish one clock cycle, they gain the ability to respond to signals: the somitic factor concentration u is always above 0, while they cannot yet generate the signaling molecule themselves, as v remains 0. In Region III, after cells undergo one cycle of the segmentation clock, \(\chi_{u}=\chi_{v}=1\). There is only one stable equilibrium in the phase plane, meaning that whatever the initial values of u and v, they arrive at that specific point and remain there; the cells are then identified as somitic.
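The Region II structure is easy to verify numerically. With \(\chi_{u}=1\), \(\chi_{v}=0\) and diffusion ignored, the steady states satisfy \(v=0\) together with \(u=0\) or \(\kappa u^{2}-ku+\gamma=0\), and the Jacobian eigenvalues give the node-saddle-node pattern of Figure 7. In this sketch (ours), \(k=5\) is an assumed value chosen so that the quadratic has two positive roots; the other parameters follow the Figure 8 caption.

```python
# Steady states and linear stability of the Region II kinetics
# (chi_u = 1, chi_v = 0, no diffusion): dv/dt = -v, so v = 0 at steady state.
import numpy as np

mu, gam, kap, k = 1e-1, 0.2, 10.0, 5.0          # k = 5 is assumed
roots = np.roots([kap, -k, gam])                # nonzero steady states of u
for u in np.concatenate([[0.0], roots]):
    f_u = 2 * u * gam / (gam + kap * u**2)**2 - 1.0 / k  # d(du/dt)/du at v = 0
    f_v = 2 * u * mu / (gam + kap * u**2)                # d(du/dt)/dv at v = 0
    J = np.array([[f_u, f_v], [0.0, -1.0]])
    print(round(float(u), 4), np.linalg.eigvals(J))      # stable, saddle, stable
```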
We then reproduced the numerical solutions of equations (1) and (2) from McInerney's Figure 11(a) and (b) [12], shown in Figure 8.
We also reproduced the numerical solution of the C & W model in one spatial dimension, given by equations (6), (7), and (8), from Baker's Fig 3 [16], using the code provided in the appendix. Figure 9 contains the numerical solutions for \(u(x,t)\), \(v(x,t)\), and \(w(x,t)\) respectively:
Figure 8: Numerical solution given by equations (1) and (2) for \(0\leq t\leq 300\). Parameter values: \(\mu=10^{-1},\gamma=0.2,\kappa=10,c=5*10^{-3},\epsilon=10^{-3},D=100\). This set of parameter values violates one of the conditions mentioned in the paper, so this solution is not a perfect one, since for \(\mu=10^{-5}\) and \(\gamma=0.2\), a high level of v fails to activate u production [12].
Figure 7: The phase planes of u and v in Region II (left) and Region III (right).
However, the fact that the above results can be verified does not prove the models to be flawless, and some issues need to be considered before constructing a better model. The equation set (1)-(2) is not robust, because the somites depend sensitively on many factors, such as the mesh, the speed c, and the initial conditions; any slight interference in those factors prevents successful results. For equations (6), (7), and (8), although the results in Figure 9 show a clear and consistent pattern of pulses of the somitic factor and the signaling molecule, they rely heavily on a very smooth gradient w. Admittedly, the idea that u and v rely on a smooth gene gradient is not, in itself, problematic. However, in this model it is simply assumed that a _generic_ FGF8 molecule makes up the gradient controlling the position of the determination front [16]; that is, although the gene is called FGF8, it in fact represents the aggregate influence of multiple genes that may affect the somitogenesis process. In other words, the gradient is modeled at a very phenomenological level. Requiring such a gradient to be perfectly smooth therefore becomes a drawback of the system: a stochastic FGF8 gradient or a random _fgf8_ pulse easily ruins the result. This problem is demonstrated in Fig 3 of Baker's paper [16]. Also, in this model the position of the determination front is prescribed, yet in reality it is subject to many factors, such as the gradient slope.
Figure 9: Numerical solution given by equations (6), (7), and (8), showing the spatiotemporal dynamics of the somitic factor (a), the signaling molecule (b), and _fgf8_ (c). The regression of the FGF8 wavefront is accompanied by a series of pulses in the signaling molecule and coherent rises in the level of the somitic factor. Parameter values: \(\mu=10^{-4},\gamma=10^{-3},\kappa=10,\epsilon=10^{-3},\eta=1.0,D_{v}=50,D_{w}=20,x_{n}=0,c_{n}=0.5,D=100\).
## Oscillatory-based Model
### Summary of the PORD Model
In the C & W model above, the key is that long-range molecular gradients control the movement of the front and therefore the placement of the stripes in the embryo. In this section, we introduce a fundamentally different system: the progressive oscillatory reaction-diffusion (PORD) model, which does not rely on global gradient control but is driven by short-range interactions.
In the first section of this paper, we introduced several "facts" and "unsolved questions" about the somitogenesis process. Although the oscillatory model's mechanism is very different from C & W, the two share many similarities; it is their interpretations of those facts that differ. The PORD model accepts the existence of the posterior movement of the determination front, yet it proposes that the front is controlled not by global positional information but by interactions between cells. Cotterell's paper argues that the PORD model can also explain other important features of somitogenesis, such as size regulation, which previous reaction-diffusion models fail to explain. However, we did find that perturbing the FGF8 gradient, for example by adding a random pulse in the C & W model, results in larger somites, which strengthens the argument that the amount of somitic factor controls the size of somites.
The PORD model argues that a molecular patterning process sequentially produces stripes of gene expression along the PSM, resulting in the segmentation of the PSM. Figure 10 shows this mechanism and its comparison with the C & W model. Two dynamical features are involved in this process. First, cells of the PSM exhibit oscillations of gene expression. These oscillations are organized into traveling waves and are locally well synchronized:
Figure 10: The comparison between the C & W model's and the PORD model's mechanisms [5]. The left figure shows that the C & W model relies on global gradient control while the PORD model relies on short-range interactions between cells. The bottom figure shows the oscillation of gene expression in the PSM cells, forming the stripes; each stripe of gene expression will, in the future, correspond to a subsequent somite boundary. The right figure compares the sensitivity of the stripe positions: the positional accuracy of the arrest front is more sensitive to noise if defined by long-range gradients (top) than if defined by the distance from the last-formed expression stripe (bottom).
neighboring cells are in very similar phases of the cycle [5]. Second, these oscillations are arrested in an anterior-to-posterior progression: the position where the oscillations are frozen travels posteriorly through the PSM, and that position is called the arrest front. Note that the arrest front is similar but not equivalent to the determination front mentioned above, as addressed in the discussion section of Cotterell's paper [5].
Despite being locally self-organizing, the PORD model involves both molecular oscillations in the PSM and a traveling wavefront. Yet it continues to create stripes even in the absence of a moving FGF gradient, and thus does not rely on positional information along the PSM. In this reaction-diffusion model, the distance between stripes is set by the local diffusion of a repressor molecule secreted from the stripes themselves (see the right of Figure 10). However, the fact that the model behaves the same with and without the gradient seems like a potential problem, since PSM cells have been studied without a gradient and their behavior appears to be very different [9].
Overall, the PORD model challenges the existing clock and wavefront models by providing a fundamentally alternative theory based on local self-organization. It can explain some size scaling and shows higher robustness of somite size regulation. Some of the PORD model's predictions also stand the test in chick embryos, which supports its validity [5].
#### Mathematical Equation
The derivation of the mathematical equations for the PORD model is refreshing. Cotterell and colleagues enumerated all possible topologies for a gene regulatory network of three genes. Of the 9710 possible networks, 210 produced a multi-stripe pattern for at least one parameter set. Among the stalactites in the topological tree containing successful topologies, they found two versions of the C & W model and several versions of the oscillatory PORD model.
The simplest design of the oscillatory model is a network that contains only two nodes ((A) of Figure 11), comprising a cell-autonomous activator (A), which is itself activated by the FGF signal, and a diffusible repressor (R). A and R are governed by the following equations:
\[\frac{\partial A}{\partial t} =\Phi\left(\frac{\kappa_{1}A-\kappa_{2}R+F+\beta}{1+\kappa_{1}A+\kappa_{2}R+F+\beta}\right)-\mu A \tag{9}\] \[\frac{\partial R}{\partial t} =\frac{\kappa_{3}A}{1+\kappa_{3}A}+D\nabla^{2}R-\mu R \tag{10}\]
where \(\kappa_{1},\kappa_{2},\kappa_{3}\) define the strengths of the regulatory interactions between A and R, D is the diffusion constant for R, \(\mu\) is a fixed decay constant, and F is the regulatory input of the FGF gradient onto A. \(\beta\) is the background regulatory input of A. To prevent negative morphogen values, the function \(\Phi(x)=xH(x)\) is used, where \(H(x)\) is the Heaviside function.
Together they form a reaction-diffusion mechanism in which R-mediated inhibition is responsible for the spacing of adjacent stripes. Since the PORD model does not rely on global positional information, it does not spontaneously generate segments everywhere but rather progresses from anterior to posterior, similar to the real biological phenomenon. (B) and (C) of Figure 11 show the wave of gene propagation and its oscillatory mechanism.
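A sketch of the non-diffusing kinetics (equations (9)-(10) with D set to 0) is easy to set up. Note that all parameter values below are illustrative assumptions, since the fitted values are not quoted in this survey, so sustained oscillation is obtained only for suitable choices:

```python
# Non-diffusing PORD kinetics; parameter values are assumed, not Cotterell's.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, beta, mu, F = 10.0, 10.0, 10.0, 0.0, 1.0, 0.5

def phi(z):                       # Phi(x) = x * H(x), preventing negative rates
    return z * np.heaviside(z, 0.0)

def rhs(t, y):
    A, R = y
    dA = phi((k1*A - k2*R + F + beta) / (1 + k1*A + k2*R + F + beta)) - mu * A
    dR = k3 * A / (1 + k3 * A) - mu * R
    return [dA, dR]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0], max_step=0.05)
A_t, R_t = sol.y                  # plot A_t against R_t to compare with Figure 12
```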
Figure 11: The PORD mechanisms [5]. **(A)** The minimum somite-patterning circuit that implements the PORD mechanism. It contains an activator molecule (green) and a diffusible repressor (red). \(\kappa_{1},\kappa_{2}\), and \(\kappa_{3}\) are strengths of interactions between A and R. It shows that the system relies on the FGF to activate but does not need FGF to perpetuate, which fits the PORD character. **(B)** Gene expressions are initiated at the posterior end and they travel to the anterior end. The white stripe is the formed somite, which is the position where the last gene expression stopped. When the next wave propagation arrives at a certain distance from the last one, it will stop and form the next stripe. **(C)** A snapshot of gene expression oscillation along the PSM. The oval dashed arrows indicate the oscillation directions. The blue line is the FGF gradient. It's not directly related to stripe formation. The Buffer Region is generated by the diffusion of the repressor from the last formed stripe, which inhibits oscillations. Therefore, cells cannot go beyond the Buffer Region and will exit oscillations in the Oscillating Edge Region to form new stripes. Newly formed stripes act as the next source of repressor to prevent oscillations. They will form new Buffer Regions and push the arrest front posteriorly.
### Analysis
The PORD model proves to be a typical oscillatory model: both its wave-propagation theory and its mathematical equations exhibit an oscillatory nature. We used XPP and recapitulated panel E of Figure 2 in the Cotterell paper [5]; see the left of Figure 12.
Ignoring the diffusion term and treating the system as stationary reveals that
Figure 12: XPP analysis of the PORD model. The left figure is the phase portrait for the non-diffusing case of equations (9) and (10), i.e., with the diffusion term D ignored. The green and red lines are the nullclines for the activator A and inhibitor R respectively. The right figure is the bifurcation analysis for the same case; it shows the bifurcation of the activator A with respect to the FGF gradient.
oscillations are the natural dynamic state for most cells in the PSM. The bifurcation analysis (right of Figure 12) also reveals this oscillatory nature. The activator undergoes a Hopf bifurcation when the _fgf8_ level drops to a certain value: when _fgf8_ is high, the activator is stimulated, and when _fgf8_ decreases to a certain level, the activator interacts with the inhibitor and begins to oscillate. The region between the two green boundaries is where oscillation exists, and the oscillation stops once the cells reach the arrest front, which in this case is where _fgf8_ decreases to 0.
However, the PORD model has received some criticism. For one, although the paper claims multiple times that the model does not require the moving FGF gradient, the gradient nevertheless acts to couple the rate of embryo growth with the integral level of FGF8 signaling in the PSM [5]; that is, the model cannot ignore the fact that FGF8 plays an important role in controlling somite size, with higher levels of FGF signaling resulting in smaller somites. Also, from the bifurcation analysis shown above, the character of the PORD system is somewhat similar to the C & W system in that the FGF8 gradient can control cell activity in both cases. The position where the activator starts to oscillate can be seen as the determination front of the C & W model, and the new term "arrest front" in the PORD system is also located close to where the FGF8 gradient drops to a very low level. Simply put, although the PORD system introduces new terms such as the "buffer region" and "the arrest front", its behavior, like that of the C & W model, can still be explained by FGF8 gradient control. Another problem, shared by the PORD model and several other oscillatory-based models, is that the specific positions of the spatial stripes are controlled "manually", by defining thresholds or piecewise functions [14]. Although this may help create beautiful results, it is in contrast to the principle of self-organization of biological systems.
Nonetheless, the PORD model, as one of the most famous oscillatory models for somitogenesis, does present a very different perspective. It reveals the possibility that cells themselves carry an oscillatory nature in the absence of diffusion. The authors also produced several clear movies of the oscillation process. We tried to reproduce these in Matlab but did not succeed.
## Excitable Model
### Summary of the one-dimensional RD Model
Both the C & W model and the PORD model share an important feature not mentioned in the previous sections: both set spatial continuity as a key requirement. Spatial continuity here means that both models ignore the size of cells and treat the PSM as a single spatial continuum. However, although spatial continuity is acceptable in most chemical reaction systems, Nagahara and colleagues argue that it is not always valid in biological systems, simply because cells in a multicellular organism have a finite size [14]. In the initial stages of an organism's development, when important biological structures first emerge, the number of cells is usually small, and the size of a single cell cannot be ignored, since the size of the field where the phenomena occur is comparable to that of a cell. Given that spatial continuity is not always met, one might consider the spatial variations between cells directly. Instead, Nagahara and colleagues propose that it is better to treat cells as "interacting discrete nodes" in a network: since diffusion inside a cell is much faster than diffusion across a membrane, treating cells as individual nodes composing a large network is suitable.
Nagahara also criticized the complexity and difficulty of other models, since models based on a continuum have difficulty producing a narrow boundary between
distinctive behaviors, as sharp as two or three cells. The fact that most models assume two or more different interactions (activator, inhibitor, etc.) among neighboring cells also adds complexity. Therefore, they created a simple, one-dimensional reaction-diffusion model that focuses on three things: (1) no diffusion of the inhibitor, (2) cells that are discrete instead of continuous, and (3) spatial inhomogeneity, where (1) and (2) are new ideas, while (3) is an old one. See Figure 13.
#### FhN-type system and excitability
Below is a hypothetical model that describes the above features of gene expression in somitogenesis:
\[\frac{\partial u}{\partial t} =f(u,v)+D\mathcal{L}u \tag{11}\] \[\frac{\partial v}{\partial t} =g(u,v) \tag{12}\]
where u and v are concentrations of the activator and inhibitor respectively. \(D\mathcal{L}u\) is the diffusion term for the activator while f and g are the reaction terms of the activator and the inhibitor. f and g are given as follows:
\[f(u,v) =\frac{1}{\tau_{1}}(\frac{1}{\gamma}u(u-a)(1-u)-v+\beta) \tag{13}\] \[g(u,v) =\frac{1}{\tau_{2}}(u-v) \tag{14}\]
where \(\tau_{1}\) and \(\tau_{2}\) represent the time scales of the local reaction kinetics of u and v respectively. \(\gamma\) represents the spatial gradient and temporal change in the concentration of a certain substance [14], and depends on space x and time t. In reality, \(\gamma\)'s biological counterpart in the case of somitogenesis could be the FGF8 gradient in the PSM; \(\gamma\) thus plays an important role in this model.
We call this the "FhN-type" model, because the reaction terms (13) and (14) resemble the "Fitzhugh-Nagumo" model, named after Richard Fitzhugh, who suggested the model in 1961, and J. Nagumo, who created the equivalent circuit the following year. The Fitzhugh-Nagumo model is a generic model for _excitable systems_; because of its simple two-variable form and generality, it has been used widely. The Fitzhugh-Nagumo prototype model has the following
Figure 13: An illustration of the one-dimensional reaction-diffusion system proposed by Nagahara and colleagues [14]. The major difference between this one and the PORD model is that the diffusion of the inhibitor v is excluded. There’s only one interaction between neighboring cells, which is the activator u.
form:
\[\dot{v}=v-\frac{v^{3}}{3}-w+I_{ext} \tag{15}\] \[\tau\dot{w}=v+a-bw \tag{16}\]
The reason we say this model represents _excitable systems_ is that when \(I_{ext}\), the external stimulus, exceeds a certain threshold, the system exhibits a characteristic excursion in the phase plane before the variables \(v\) and \(w\) relax back to their rest values. We then say the system is excited, and it remains refractory for a period of time. When \(I_{ext}\) does not exceed that threshold, there is no excursion, and the system remains quiescent, or excitable. Besides the excursion, the phase plane contains two nullclines: one is linear and the other is cubic, or sigmoid-like. The excitability of the system can be read off from the spatial relationship between the two nullclines: the closer the linear nullcline is to the peak of the sigmoid, the more excitable the system. See Figure 14 for the phase plane.
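A minimal numerical sketch of the prototype, equations (15)-(16), uses the Figure 14 parameters; the time-scale separation \(\tau=12.5\) is an assumed value, as the caption does not list it:

```python
# Prototype FitzHugh-Nagumo trajectory; tau is assumed, the rest follows the
# Figure 14 caption. With these values the rest state is linearly unstable and
# the trajectory settles onto repeated large excursions (a relaxation
# oscillation); lowering I_ext below threshold restores a stable rest state.
import numpy as np
from scipy.integrate import solve_ivp

I_ext, a, b, tau = 0.5, 0.8, 0.7, 12.5

def fhn(t, y):
    v, w = y
    return [v - v**3 / 3.0 - w + I_ext, (v + a - b * w) / tau]

sol = solve_ivp(fhn, (0.0, 200.0), [-1.0, -0.5], max_step=0.05)
# nullclines for the phase plane: w = v - v^3/3 + I_ext and w = (v + a) / b
```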
#### Analysis
In this model, \(\gamma\) is defined as a linear gradient function:
\[\gamma(x)=0.21-0.20x,\hskip 28.452756ptx\in[0,1] \tag{17}\]
With the given information, we used XPP to create bifurcation diagrams and find the model's qualitative features; see Figure 15. \(\gamma\) has a Hopf bifurcation: for \(\gamma\) in the range given by \(x\in[0,1]\), the system is stable, yet as x increases beyond this range the system tends to become oscillatory. The bifurcation in \(a\) is a saddle-node bifurcation. Together with the stable region of \(\gamma\)'s bifurcation, the system exhibits a bistable region when \(\gamma\) is small: two stable states coexist, and the system tends to become oscillatory as \(\gamma\) increases. The right figure is a cusp bifurcation; note that the cusp bifurcation has a normal form resembling equation (13).
Figure 14: The phase plane for the prototype Fitzhugh-Nagumo model. \(I_{ext}=0.5\), a = 0.8, b = 0.7. The blue line is the trajectory of the FhN model in the phase space. The pink and yellow lines are two nullclines where the pink line is a cubic nullcline and the yellow line is the linear nullcline.
The Nagahara paper utilizes the idea of bistability. By changing the parameter \(\gamma\), which adjusts the amplitude of the cubic function, we can vary the local kinetics from oscillatory to bistable [14]. Figure 16 shows the changes in the nullclines \(f(u,v)=0\) and \(g(u,v)=0\) as \(\gamma\) is manipulated. The system gains excitability when \(\gamma\) is decreased, since decreasing \(\gamma\) elongates the sigmoid, or cubic nullcline, vertically; as the linear nullcline does not change, this elongation closes the distance between the linear nullcline and the sigmoid, making the system more easily excited. Furthermore, using plotting techniques in Mathematica, we found that manipulating \(a\) and \(\beta\) in equation (13) also changes the excitability: increasing \(a\) makes the system less excitable, while increasing \(\beta\) makes it more excitable.
A spatiotemporal diagram is shown in Figure 17, where (a) \(\gamma\) is time-invariant: a single pulse triggered from the left boundary propagates to the right and generates a stationary band at a specific position. (b) If we take into account the posterior growth of the PSM, \(\gamma\) becomes a function of both space and time, and the pulses lead to a static, periodic structure. (c) If \(\gamma\) is set to decrease as the wave passes, while the growth of the PSM is ignored, the pulses again create a static, periodic structure, but with a much thicker bandwidth.
Figure 16: Phase planes for equations (13) and (14) [14]. They correspond to (a) the oscillatory state for large \(\gamma\) and (b) the bistable state for small \(\gamma\).
Figure 15: The bifurcation diagrams for the one-dimensional RD model. The left figure is a Hopf bifurcation in \(\gamma\). The middle figure is a saddle-node bifurcation in \(a\). The right figure is a co-dimension-two bifurcation diagram, a cusp bifurcation in \(\gamma\) and \(a\); a cusp bifurcation occurs where two branches of a saddle-node bifurcation curve meet tangentially, forming a semi-cubic parabola.
However, the Nagahara model is not a typical RD model: the structures in Figure 17 will not form unless the model is discrete, as opposed to continuous. Spatial discreteness is an important feature, and also the basis of this model. Normally, in the continuous case, the wave propagation triggered from the left will not stop and generate a stationary band; rather, it will propagate across the field without stopping [14]. The paper proposes that when D is small enough, the propagation of the wave is blocked and stable steady solutions exist. This phenomenon is called "wave propagation failure". We used Matlab to simulate what happens to the system as D varies and obtained the result shown in Figure 18, using the code provided in the appendix.
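A simplified Python translation of that experiment is sketched below (the appendix's Matlab code is not reproduced here; \(\tau_{1}\), \(\tau_{2}\), a, \(\beta\), the lattice size, the time step, and the initial pulse are all assumed values, while \(\gamma(x)\) follows equation (17)):

```python
# Discrete-lattice sketch of equations (11)-(14); assumed parameters marked.
import numpy as np

Ncell, T, dt = 100, 400.0, 0.005
tau1, tau2, a, beta, D = 1.0, 5.0, 0.25, 0.05, 1e-4    # assumed values
x = np.linspace(0.0, 1.0, Ncell)
gam = 0.21 - 0.20 * x                                   # equation (17)

u = np.zeros(Ncell)
v = np.zeros(Ncell)
u[:3] = 1.0                                             # pulse at the left end
for _ in range(int(T / dt)):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u      # nearest-neighbor coupling
    lap[0] = u[1] - u[0]                                # zero-flux ends
    lap[-1] = u[-2] - u[-1]
    du = (u * (u - a) * (1.0 - u) / gam - v + beta) / tau1 + D * lap
    dv = (u - v) / tau2
    u, v = u + dt * du, v + dt * dv
# For sufficiently small D the pulse stalls partway across the lattice
# ("wave propagation failure"), leaving a stationary band of high u,
# as in Figures 17(a) and 18(b).
```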
Overall, this one-dimensional RD model, the FhN-type model, is the least mature of the three. Unlike the previous two, which have already been confronted with in vitro experiments, this model is highly theoretical and leans more toward physics than biology. However, the simple, fresh idea of excitability opens a new perspective on the whole process. Excitability in this model, as we found in Mathematica, can vary with respect to \(\gamma\), \(a\), and \(\beta\). Depending on three variables may seem capricious, but the simple philosophy behind excitability, namely how easy it is to cross the threshold, makes it straightforward to manipulate the excitability of a system, or to make it "excited". We look forward to exploring this feature further and to implementing it in an effort to understand somitogenesis effectively.
Figure 17: The numerical simulations for the one-dimensional RD model illustrate the manner of wave propagation, depending on the geometrical distribution of \(\gamma\) [14].
Figure 18: The spatiotemporal diagram for the system as D varies. D decreases from (a) to (c). In (a), \(D=10^{-3}\). The system is not yet discrete and the wave propagates through the field non-stop. In (b), \(D=10^{-4}\). The system becomes discrete, and the wave propagation leaves a steady state solution, just like what’s shown in Fig 17 (a). In (c), \(D=10^{-5}\). The system is discrete but the diffusion is so weak that the signal fails to reach the bistable region.
## Conclusion
As they represent several of the mainstream ideas in the field, it is not surprising that all three models provide abundant insight into the hidden mechanisms behind somitogenesis. While their core ideas differ in one way or another, and each has its own flaws, each model greatly improves our understanding of this field and motivates new experiments.
The Clock and Wavefront model proposes a prescribed determination front, set by the level of the FGF8 gradient, that controls the positioning of somites. It segments the PSM into different regions and explains the somitogenesis process systematically. The results are easy to recapitulate and show the characteristics expected by the theory. Its biggest drawback is its hard-coded outcome: for example, the model cannot explain experiments in which mutant embryo determination fronts change, since the front's position is prescribed in the model.
The PORD model proposes a local reaction-diffusion oscillatory mechanism that can generate stripes of somites without global gradient information. The system is simple, and the published movies are impressive. However, in contrast to its claims, the PORD model still relies on the FGF8 gradient to control the size of somites, and many of its mechanisms can still be explained by _fgf8_, as if the theory were another perspective on _fgf8_'s effects. Many oscillatory models also control the specific positions of the stripes manually, using thresholds or piecewise functions, which violates their self-organizing nature.
The excitable model is a one-dimensional reaction-diffusion model that resembles the Fitzhugh-Nagumo model, an excitable, generic model widely applied in physics. Discreteness plays an important role in the system: the model decreases the diffusion level until cells are no longer effectively continuous, which blocks the wave propagation generated by excitability and creates fixed stripes of somites, a refreshing idea. This model is theoretical and has not been confronted with in vitro experiments, and its result is not robust, as it requires a finely tuned diffusion level, which is hard to achieve in reality; but the idea of an excitable system has latent potential and much to be exploited.
The C & W and PORD models yield results that match our expectations. However, their strict conditions are often not met in real biology, and this problem cannot simply be solved within the existing frameworks of mathematical equations. In my view, excitability is what needs to be studied most rigorously in order to understand somitogenesis. Dr. Hubaud and his colleagues' research [9], which proposed excitability as a general framework for oscillations in PSM cells, is a great start. While referring to other models for inspiration, we should boldly explore the direction of excitable models instead of sticking to past experience and techniques from models that seem more successful at the moment, since moving beyond the currently prevailing mindset is necessary to create a better model.
|
2305.09959 | Warm Molecular Gas in the Central Parsecs of the Buried Nucleus of NGC
4418 Traced with the Fundamental CO Ro-vibrational Absorptions | We investigated the inner buried nucleus of a nearby luminous infrared galaxy
NGC 4418 using high-resolution spectroscopy of fundamental carbon monoxide (CO)
ro-vibrational absorptions around $4.67 \mu$m for the first time. This method
allowed us to examine the physical and kinematical properties in the hot inner
region of this nucleus. We detected a series of both very deep (partly
saturated) $^{12}$CO and moderately deep (optically thin) $^{13}$CO absorption
lines and inferred a large column density ($N_\mathrm{H2}=(5\pm3)\times10^{23}$
cm$^{-2}$ in front of the $5 \mu$m photosphere) of warm
($T_\mathrm{ex}\simeq170$ K) molecular gas by assuming an isothermal
plane-parallel slab illuminated by a compact background MIR-emitting source. We
modeled that the warm CO absorber almost covers the central heating source and
that it is an inner layer around the $5 \mu$m photosphere (at $r=$several pc)
of a compact shroud of gas and dust ($d\sim100$ pc). The width of the
absorption lines ($110$ km s$^{-1}$) and their small deviation from the
systemic velocity ($<10$ km s$^{-1}$) are consistent with a warm and turbulent
layer with little bulk motion in the radial direction. | Youichi Ohyama, Shusuke Onishi, Takao Nakagawa, Kosei Matsumoto, Naoki Isobe, Mai Shirahata, Shunsuke Baba, Kazushi Sakamoto | 2023-05-17T05:35:56Z | http://arxiv.org/abs/2305.09959v2 | Warm Molecular Gas in the Central Parsecs of the Buried Nucleus of NGC 4418 Traced with the Fundamental CO Rovibrational Absorptions1
###### Abstract
We investigated the inner buried nucleus of a nearby luminous infrared galaxy NGC 4418 using high-resolution spectroscopy of fundamental carbon monoxide (CO) rovibrational absorptions around 4.67 \(\mu\)m for the first time. This method allowed us to examine the physical and kinematical properties in the hot inner region of this nucleus. We detected a series of both very deep (partly saturated) \({}^{12}\)CO and moderately deep (optically thin) \({}^{13}\)CO absorption lines and inferred a large column density (\(N_{\rm H2}=(5\pm 3)\times 10^{23}\) cm\({}^{-2}\) in front of the 5 \(\mu\)m photosphere) of warm (\(T_{\rm ex}\simeq 170\) K) molecular gas by assuming an isothermal plane-parallel slab illuminated by a compact background MIR-emitting source. We modeled that the warm CO absorber almost covers the central heating source and that it is an inner layer around the 5 \(\mu\)m photosphere (at \(r=\)several pc) of a compact shroud of gas and dust (\(d\sim 100\) pc). The width of the absorption lines (110 km s\({}^{-1}\)) and their small deviation from the systemic velocity (\(<10\) km s\({}^{-1}\)) are consistent with a warm and turbulent layer with little bulk motion in the radial direction.
Infrared spectroscopy (2285) -- Luminous infrared galaxies (946)
## 1 Introduction
NGC 4418 is a nearby luminous infrared galaxy (LIRG) notable for its compact and luminous but highly obscured nucleus at a distance of 34 Mpc (\(V_{\rm sys}=2117\) km s\({}^{-1}\) in the heliocentric system; Ohyama et al., 2019; note that at this distance, 1'' corresponds to 165 pc). This galaxy hosts a compact bright mid-infrared (MIR) emitting core at the nucleus, which cannot be resolved at \(\sim 0\farcs 3\) resolution (Evans et al., 2003; Siebenmorgen et al., 2008; Roche et al., 2015). The corresponding submillimeter and far-infrared (FIR) cores (Sakamoto et al., 2013, 2021; Lutz et al., 2016) emit most of its large bolometric luminosity (\(L_{\rm bol}\simeq 1\times 10^{11}\)\(L_{\odot}\); e.g., Gonzalez-Alfonso et al., 2012). It displays a prominent red spectral energy distribution (SED) at MIR beyond \(\sim 5\)\(\mu\)m, but only a stellar SED at the shorter wavelengths (Spoon et al., 2001; Evans et al., 2003; Imanishi et al., 2004; Siebenmorgen et al., 2008). This object is located at the end of the deepest 9.7 \(\mu\)m absorption by amorphous silicate dust (hereafter, the 9.7 \(\mu\)m absorption) and the smallest equivalent width of the polycyclic aromatic hydrocarbon (PAH) 6.2 \(\mu\)m emission on the so-called Spoon diagram and is classified as 3A (Spoon et al., 2007). The very deep 9.7 \(\mu\)m absorption and the prominent ice features at MIR are often discussed in the context of obscured heating sources such as buried
young stellar objects (e.g., Dudley & Wynn-Williams, 1997; Spoon et al., 2001, 2004). All of these characteristics are very different from typical active galactic nuclei (AGNs) and starburst galaxies, and the nucleus of NGC 4418 is sometimes referred to as a compact obscured nucleus (CON; Costagliola et al., 2013; Falstad et al., 2021 and references therein).
Whether the CONs are powered by very compact young star clusters or AGNs is an open question, and the CON of NGC 4418 has been extensively studied in this regard due to its proximity and large FIR luminosity. NGC 4418 has often been classified as an AGN on the basis of indirect observational evidence such as a compact radio core (\(\lesssim 1^{\prime\prime}\); e.g., Kewley et al., 2000; but see below for the very long baseline interferometry (VLBI) result), the extremely large MIR luminosity surface density (Evans et al., 2003; Siebenmorgen et al., 2008), and the very deep 9.7 \(\mu\)m and prominent ice absorptions at MIR (e.g., Dudley & Wynn-Williams, 1997; Spoon et al., 2001). However, more rigorous searches for an obscured AGN have not yet provided clear answers. In X-rays, NuSTAR found no hard X-ray emission, although an AGN still cannot be ruled out because an extremely large hydrogen column density (\(N_{\rm H}>10^{25}\) cm\({}^{-2}\); Gonzalez-Alfonso et al., 2012; Costagliola et al., 2013; Sakamoto et al., 2021) can obscure even the hard X-rays below the current detection limit (Yamada et al., 2021). At (sub)millimeter wavelengths, a molecular emission line ratio of HCN/HCO\({}^{+}\) has been proposed as a powerful diagnostic of the central heating source (e.g., Kohno et al., 2001; Imanishi et al., 2004) on the basis of the XDR (X-ray dominated region) models (e.g., Meijerink & Spaans, 2005). This technique has been applied to this galaxy nucleus, but the results are still controversial (e.g., Imanishi et al., 2004; Aalto et al., 2007; Imanishi et al., 2018; see also Gonzalez-Alfonso et al., 2012; Costagliola et al., 2013). In radio, VLBI observations have resolved the radio core into multiple very compact blobs, clearly indicating that a single AGN alone cannot explain its nuclear activity (Varenius et al., 2014).
The main goal of this study is to investigate the physical and kinematical properties of the molecular gas within the CON of NGC 4418. We utilize a novel method to analyze the fundamental (\(v=0\to 1\)) rovibrational absorption lines of carbon monoxide (CO) centered at 4.67 \(\mu\)m with high-resolution spectroscopy. Imanishi et al. (2010) reported the deep CO absorption in NGC 4418 with the low-resolution AKARI spectrum. This method has some advantages over the abovementioned methods. Thanks to the compact hot dust distribution around the central heating source (\(r\sim 3.6\) pc or \(0\farcs 04\) FWHM; Section 6.2), we can use a pencil beam to effectively eliminate contamination from, e.g., circumnuclear star formation. In addition, thanks to the many rovibrational transitions of CO within a small wavelength interval, we can analyze many transitions of the same molecular species at once, eliminating ambiguity regarding the abundance effect on the line ratio. Finally, absorption line analysis with transitions from the vibrational ground state is much simpler than emission line analysis when complex excitation such as IR pumping is involved.
Spoon et al. (2003) detected resolved rovibrational CO absorption lines in NGC 4945 and pioneered the analysis of the CO absorption to derive physical properties of the CO-absorbing gas in an extragalactic environment. Spoon et al. (2004) detected a blended broad CO absorption feature in the low-resolution Spitzer spectrum of IRAS F00183\(-\)7111, an ultra-luminous infrared galaxy (ULIRG) hosting an AGN, and investigated the warm dense gas in the vicinity of the nucleus for the first time. Geballe et al. (2006) and Shirahata et al. (2013) applied a similar CO absorption analysis to IRAS 08572+3915, another nearby ULIRG hosting an AGN, using high-resolution spectroscopy, and carried out a detailed study of the physical and kinematical conditions of the AGN dusty torus. Onishi et al. (2021) further studied this galaxy to examine the inflow and outflow of warm CO in detail, and Matsumoto et al. (2022) used a theoretical model to demonstrate that such flow signatures can be observed. Baba et al. (2018) studied a sample of nearby obscured AGNs with low-resolution AKARI and Spitzer spectra to systematically examine the properties of the hot gas in AGNs (see also Baba et al., 2022). We will compare NGC 4418 with these galaxies in detail in Section 6.5.
## 2 Observation and Data Reduction
The M-band Echelle spectrum of NGC 4418 around 4.8 \(\mu\)m was obtained with the IRCS spectrograph (Tokunaga et al., 1998; Kobayashi et al., 2000) at the Subaru telescope on 2010 February 27 and 28 (UT). We adjusted the grisms to include as many \({}^{12}\)CO \(P\) branch (\(J\to J-1\)) transitions as possible and a few \(R\) branch (\(J\to J+1\)) transitions spanning over two wavelength coverages with a small wavelength overlap. Combined with a \(0\farcs 54\)-wide slit, this setup provided a spectral resolution of \(R=5300\) (or \(dV=57\) km s\({}^{-1}\)). We took many short-exposure frames while dithering along the slit direction every 60 seconds, following a standard "ABBA" sequence. Total exposure times were 68 and 76 minutes for the short- and long-wavelength coverage, respectively. The standard star HR 5685 was observed every night for each spectral coverage.
Data reduction was performed in a standard way for Echelle spectral images obtained in the ABBA sequence. Sky subtraction was performed by subtracting A-B and B-A pairs of the frames and the sky-subtracted frames were stacked after correcting for dithering offsets. The extracted one-dimensional spectrum of the standard star for each night was used to calibrate the wavelength (by referencing narrow telluric absorption features) and to remove the telluric absorption features of the similarly extracted spectrum of NGC 4418 taken on the same night. A small flux scaling was applied to stitch the two spectra together using the wavelength overlap between them (between 4.766 \(\mu\)m and 4.780 \(\mu\)m around \(P(8)\) and \(P(9)\) of \({}^{12}\)CO; Section 3). The noise of the spectrum was estimated from the standard deviation of individual short-exposure frames. The heliocentric velocity correction was applied to the wavelength and velocity of the final spectrum.
## 3 The Spectrum
The spectrum displays a series of deep absorption lines of \({}^{12}\)CO \(v=0\to 1\) rovibrational transitions (hereafter, the \({}^{12}\)CO absorptions; Figure 1 top and third panels). The noise spectrum (bottom panel) exhibits many narrow wavelength regions with higher noise, which are caused by strong telluric absorptions. Their width corresponds to the instrumental resolution. The \({}^{12}\)CO absorption lines are much broader than these noise spikes and do not follow the pattern of the noise spikes, thereby confirming the robust detection of all of the \({}^{12}\)CO absorptions. Most of the narrow positive and negative spikes in the spectrum (top and third panels) that have similar widths to the noise spikes, on the other hand, are likely artifacts due to these strong telluric features. Our wavelength coverage includes the \(P\) branch up to \(P(18)\) and the \(R\) branch up to \(R(3)\) of \({}^{12}\)CO. Many \({}^{12}\)CO absorptions are deep, but display nonzero flux at the bottom of the absorption profiles. The spectrum around \({}^{12}\)CO \(P(6)\)-\(P(8)\) (and some other wavelength regions) looks more complex than other spectral regions that simply display periodic \({}^{12}\)CO absorptions, indicating additional absorption features there. These features also do not follow the pattern of the noise spikes, as in the case of the \({}^{12}\)CO absorptions. As we will demonstrate below, they are explained by the \({}^{13}\)CO \(v=0\to 1\) rovibrational absorption lines (hereafter, the \({}^{13}\)CO absorptions). Our wavelength coverage includes the \(P\) branch up to \(P(7)\) and the \(R\) branch up to \(R(17)\) of \({}^{13}\)CO. The \({}^{13}\)CO absorptions, as some isolated ones (e.g., \(P(2)\) and \(R(5)\)) indicate, are much shallower than the \({}^{12}\)CO absorptions. We also detected a pure-rotational H\({}_{2}\) \(S(9)\) emission line, but no hydrogen Pf\(\beta\) emission line.
### Spectral fitting
We performed spectral model fitting to measure the optical depths, the recession velocity, and the line width of the \({}^{12}\)CO and \({}^{13}\)CO absorptions. We adopted a realistic but as simple as possible model in this work, while S. Onishi et al. (2023, in preparation) will discuss the multicomponent models. We assumed a single velocity component, but fitted the \({}^{12}\)CO and \({}^{13}\)CO absorptions independently to measure \(\tau_{\rm 12CO}(J)\) and \(\tau_{\rm 13CO}(J)\). We also assumed a Gaussian optical depth profile for each of the CO absorption lines with the common recession velocity (\(V_{\rm CO}\)) and the velocity width (\(\sigma_{\rm CO}\)) for the \({}^{12}\)CO and \({}^{13}\)CO absorptions. We obtained their transition parameters from HITRAN (Gordon et al., 2022) and the Leiden Atomic and Molecular Database (van der Tak et al., 2020). We fitted the spectrum with a second-order polynomial continuum without the continuum normalization. We assumed an isothermal plane-parallel slab illuminated by a compact background MIR-emitting source. We modeled the observed spectrum by the background continuum with the CO absorption lines and a scaled continuum, as \(f_{\rm obs}\equiv(1-S)\times f_{\rm cont}\times\exp\bigl(-\sum_{J}\tau_{{\rm CO},J}\bigr)+S\times f_{\rm cont}\), where \(S\) represents the scaling constant. The scaled continuum in the second term (the continuum "floor") is to approximate the nonzero fluxes at the bottom of the deep \({}^{12}\)CO absorptions. We caution that, in general, both CO line emission and dust continuum emission from various sources contribute to the floor2, and we cannot distinguish them without detailed physical models of gas, dust, and their emission. The floor can be simplified as the leaking background continuum not covered by the absorber in front, as \((1-C_{\rm f})\times f_{\rm cont}\), where \(C_{\rm f}\) is the covering factor of the absorber over the background light when seen from the observer, if we neglect all other contributions. In the following, we adopt \(S=1-C_{\rm f}\) for simplicity. We note that \(C_{\rm f}\) measured this way is a lower limit of the true covering factor in general, and the less-than-unity \(C_{\rm f}\) we obtain below does not necessarily indicate the presence of uncovered
light from the background source. We added the H\({}_{2}\,S(9)\) emission line by assuming a Gaussian profile at a recession velocity (\(V_{\rm H2S9}\)) different from \(V_{\rm CO}\). Because this line appears as narrow as the instrumental width (see below), we assumed it to be unresolved. Possible ice absorptions within the wavelength coverage, two CO ice features near \({}^{12}\)CO \(P(1)\), are much broader than the CO gas absorption lines (see Onishi et al., 2021 for the case of IRAS 08572+3915), and the spectrum does not display such features. Therefore, we omitted them from the model. We call the model described so far the full model to distinguish it from the simplified model described below.
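For illustration, the model equation above can be evaluated as in the following sketch. It is a minimal reimplementation under the stated assumptions (Gaussian optical-depth profiles with common \(V_{\rm CO}\) and \(\sigma_{\rm CO}\), and a floor \(S=1-C_{\rm f}\)); the function and argument names are ours, not the actual fitting code.

```python
import numpy as np

def model_spectrum(wav, f_cont, lines, v_co, sigma_co, c_f):
    """Partial-covering model: f_obs = C_f * f_cont * exp(-sum_J tau_J) + (1 - C_f) * f_cont.

    wav      : wavelength grid in micron (float array)
    f_cont   : background continuum evaluated on `wav`
    lines    : iterable of (rest_wavelength_micron, peak_tau), one per CO transition
    v_co     : common recession velocity of the absorber (km/s)
    sigma_co : Gaussian velocity width of each line (km/s)
    c_f      : covering factor of the absorber (the floor is S = 1 - C_f)
    """
    c_kms = 299792.458  # speed of light (km/s)
    tau_total = np.zeros_like(wav, dtype=float)
    for lam0, peak_tau in lines:
        lam_c = lam0 * (1.0 + v_co / c_kms)   # redshifted line center
        dv = c_kms * (wav - lam_c) / lam_c    # velocity offset from the line center
        tau_total += peak_tau * np.exp(-0.5 * (dv / sigma_co) ** 2)
    return c_f * f_cont * np.exp(-tau_total) + (1.0 - c_f) * f_cont
```

Because the optical depths add before the exponential, overlapping \({}^{12}\)CO and \({}^{13}\)CO lines are absorbed jointly rather than multiplied as independent line profiles.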
We simultaneously fitted the continuum, the floor, and all spectral features, with 55 free parameters in total, using the full model. We utilized the Bayesian fitting code "emcee", which incorporates the Markov Chain Monte Carlo technique (Foreman-Mackey et al., 2013). We adopted flat priors with the following simple range constraints for physically likely solutions and better convergence.3 For some \({}^{12}\)CO and \({}^{13}\)CO pairs whose absorption profiles closely overlap each other, we set \(\tau_{\rm 12CO}>\tau_{\rm 13CO}\).
Figure 1: The observed and the best-fit model spectra with the full model. Top: the observed spectrum (black) with the best-fit model spectra without \({}^{13}\)CO (red), the continuum only (orange), and the floor only (blue) overlaid. The detected \({}^{12}\)CO and H\({}_{2}\,S(9)\) lines and the expected position of Pf\(\beta\) for the CO velocity are marked. Second: the residual spectrum with the best-fit model spectrum without \({}^{13}\)CO (black\(-\)red in the top panel; red). Third: the observed spectrum (black) with the best-fit model spectra including all components (magenta), without \({}^{12}\)CO (blue), and the continuum only (orange) overlaid. The modeled \({}^{13}\)CO absorptions are marked. Fourth: the residual spectrum including all components (black\(-\)magenta in the third panel; magenta). Bottom: the noise spectrum (black). The second, third, fourth, and bottom panels are shifted by \(-0.5\), \(-2.5\), \(-3.0\), and \(-4.0\), respectively, as indicated by the horizontal dashed lines. All spectra are shown only where the signal-to-noise ratio (S/N) per pixel of the observed spectrum is greater than 1.5, while the entire spectrum along with the noise was used for the model fit. An emission line-like feature at 4.8515 \(\mu\)m (or at 4.8173 \(\mu\)m in the rest frame of NGC 4418) is located at one of the strong telluric absorption features and is an artifact caused by the telluric correction. Each model spectrum has a shade corresponding to the 16th–84th percentile range, which is very narrow except around some \({}^{13}\)CO absorptions that overlap the nearby \({}^{12}\)CO absorptions.
We adopted as the criterion of close overlap that the central wavelengths of a line pair differ by less than 15 km s\({}^{-1}\) (\(\simeq 1/4\times\sigma_{\rm CO}\)). For H\({}_{2}\,S(9)\), we constrained its recession velocity to avoid the nearby CO absorption lines. In the emcee run, we used 120 walkers, each with \(5\times 10^{5}\) steps, thinned every 500 steps, with the first \(1\times 10^{5}\) burn-in steps removed, for a typical autocorrelation length of 2000.
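The sampler setup can be sketched as follows with a scaled-down toy problem (3 parameters instead of 55); the walker count and the thinning/burn-in choices mirror the numbers quoted above, and the `emcee` calls use the standard API of Foreman-Mackey et al. (2013). The data and prior ranges here are placeholders.

```python
import numpy as np
import emcee

# Toy stand-in data; the actual fit uses the observed spectrum and 55 parameters.
rng = np.random.default_rng(0)
flux = rng.normal(1.0, 0.05, size=200)
err = np.full(200, 0.05)

ndim, nwalkers = 3, 120                           # 3-parameter toy (full model: 55)
lo = np.array([0.0, -5.0, -5.0])                  # flat-prior range constraints
hi = np.array([2.0, 5.0, 5.0])

def log_prob(theta):
    if np.any(theta < lo) or np.any(theta > hi):  # flat prior: reject outside the ranges
        return -np.inf
    model = np.full_like(flux, theta[0])          # stand-in for the full model spectrum
    return -0.5 * np.sum(((flux - model) / err) ** 2)

p0 = np.array([1.0, 0.0, 0.0]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5_000)
# The actual run: 5e5 steps per walker, thinned every 500, first 1e5 discarded as burn-in.
chain = sampler.get_chain(discard=1_000, thin=50, flat=True)
```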
We also performed a similar spectral model fitting, but with a simplified model assuming a single component in the local thermodynamic equilibrium (LTE) conditions (hereafter, the LTE model), to test the robustness of the fitting with the full model above. This is because the full model, with a very large number of free parameters to fit (55), could overfit the complicated spectrum and/or be severely affected by degeneracy among the fitting parameters, and could provide unreliable results, although the full model results indicate a simple LTE-like population distribution on the rotation diagrams (Section 4). In the LTE model, we linked the optical depths of different transitions according to the Boltzmann distribution and derived the excitation temperatures and the column densities directly, without fitting the optical depths on the rotation diagrams afterward. We adopted the same assumptions and model equation for the kinematics and the line profile of the CO absorptions, the continuum shape, the floor, and the H\({}_{2}\) emission line as for the full model. This simplified model requires only 12 parameters to fit (six parameters regarding the CO absorption lines--one set of column density and excitation temperature for \({}^{12}\)CO, another set for \({}^{13}\)CO, their common recession velocity and the velocity width--along with six parameters regarding other components--the continuum, the floor, and the H\({}_{2}\) emission--that are common to the full model), ensuring much more robust results. We used the same software in a similar manner as for the full model. The LTE model results and comparison with the full model results are shown in the Appendix. Because we found consistent results between the two models, we adopt the full model results in the following sections.
### Fit results
With the full model, we successfully obtained a good fit with reduced-\(\chi^{2}\simeq 1.1\) with 1985 degrees of freedom (Figure 1; Table 1). Most parameters, except those with the closely overlapping \({}^{12}\)CO and \({}^{13}\)CO absorptions, were well constrained. The best-fit model reasonably reproduced some spectral portions between the CO absorption lines that display an unabsorbed continuum (e.g., \(R(0)\)-\(P(1)\) and \(P(10)\)-\(P(12)\) of \({}^{12}\)CO). This suggests that the true continuum is not far above the best-fit model continuum and that the CO absorption depths were not significantly underestimated due to the inaccurate continuum placement. The \({}^{12}\)CO absorptions are deep and mostly saturated (over a floor with a large \(C_{\rm f}=0.86\pm 0.01\)) with \(\tau\gtrsim 3\) at the line centers (top panel). Even higher-\(J\) absorption lines (up to \(P(18)\) with \(E_{\rm lower}(J)/k=945\) K, where \(k\) is the Boltzmann constant) are deep although they may not be as heavily saturated as the lower-\(J\) ones. The residual spectrum after subtracting the \({}^{12}\)CO absorptions (second panel) reveals a series of moderately deep (\(\tau\lesssim 1\) at the line centers) absorption lines whose wavelengths exactly match those of the \({}^{13}\)CO absorptions (third and fourth panels). After removing uncertain detections due to deep nearby \({}^{12}\)CO absorptions or low signal-to-noise ratio (S/N), the \({}^{13}\)CO absorptions were unambiguously detected up to \(J=13\) (\(R(13)\)) with \(E_{\rm lower}(J)/k=481\) K. The recession velocity for all CO absorptions is very close to the systemic one (\(dV_{\rm CO}=+4\pm 1\) km s\({}^{-1}\)). The absorption profiles are broad (\(110\pm 3\) km s\({}^{-1}\) FWHM after subtracting the instrumental width in quadrature), as some isolated unsaturated \({}^{13}\)CO absorptions (e.g., \(P(2)\) and \(R(5)\)) indicate. The H\({}_{2}\,S(9)\) emission line is well reproduced by an unresolved line (FWHM\(=57\) km s\({}^{-1}\)) at a slightly lower velocity than the systemic one (\(dV_{\rm H2S9}=-21\pm 7\) km s\({}^{-1}\)).
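For reference, the intrinsic width quoted above follows from removing the instrumental broadening (\(dV=57\) km s\({}^{-1}\); Section 2) in quadrature; the implied observed width of \(\simeq 124\) km s\({}^{-1}\) is our back-calculation for illustration, not a number quoted in the text:

\[
{\rm FWHM}_{\rm intrinsic}=\sqrt{{\rm FWHM}_{\rm obs}^{2}-{\rm FWHM}_{\rm inst}^{2}}\,,\qquad \sqrt{124^{2}-57^{2}}\simeq 110\ {\rm km\ s^{-1}}.
\]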
**Table 1**. Parameters of the CO Absorption Lines (\(v=0\to 1\), \(J\to J^{\prime}\)) toward NGC 4418
| \(J\) | \(\lambda\)\({}^{a}\) (Å) | \(E_{\rm lower}(J)/k\)\({}^{b}\) (K) | \(F_{JJ^{\prime}}\)\({}^{c}\) (\(10^{-6}\)) | \(\tau\)\({}^{d}\) (16th / 50th / 84th percentile) | \(N_{\rm CO}(J)\)\({}^{e}\) (\(10^{17}\) cm\({}^{-2}\)) (16th / 50th / 84th percentile) | Rotation Diagram\({}^{f}\) |
| --- | --- | --- | --- | --- | --- | --- |
| _\({}^{12}\)CO \(R\) branch (\(J^{\prime}=J+1\))_ | | | | | | |
| 3 | 46333 | 33.1917 | 6.5668 | 82.9 / 92.8 / 104.4 | 6.64 / 7.44 / 8.36 | ✓ |
| 2 | 46412 | 16.5962 | 6.8814 | 70.7 / 80.6 / 92.3 | 5.39 / 6.14 / 7.03 | ✓ |
| 1 | 46493 | 5.5321 | 7.6270 | 95.8 / 115.9 / 139.5 | 6.57 / 7.94 / 9.56 | |
| 0 | 46575 | 0.0000 | 11.4173 | 44.5 / 57.8 / 71.2 | 2.03 / 2.64 / 3.25 | |
| _\({}^{12}\)CO \(P\) branch (\(J^{\prime}=J-1\))_ | | | | | | |

[MISSING_PAGE_POST]
## 4 Rotation diagrams and hydrogen column density
We analyzed the rotational-level populations in the \(v=0\) vibrational energy state on the rotation diagrams (Figure 2). We calculated the column density of the \(J\)th rotational state (\(N_{\rm CO}(J)\)) using the optical depth of that transition with equations (2)-(5) of Onishi et al. (2021). We removed the optical depth measurements of \({}^{12}\)CO \(P(3)\) and \({}^{13}\)CO \(R(9)\) immediately adjacent to the H\({}_{2}\,S(9)\) emission, as well as some closely overlapping \({}^{12}\)CO and \({}^{13}\)CO pairs (Section 3.1; see the last column of Table 1 for the transitions used). When the gas is in the LTE conditions, the level populations are distributed according to the Boltzmann distribution, as in Equation (12) of Onishi et al. (2021). Under such conditions, the (natural) logarithms of the level populations (divided by the statistical weight \(2J+1\)) align along a straight line determined by the excitation temperature (\(T_{\rm ex}\)) and the total column density.
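Written out, this standard LTE rotation-diagram relation (cf. Equation (12) of Onishi et al. 2021), with \(Q(T_{\rm ex})\) the rotational partition function, is

\[
\ln\!\left[\frac{N_{\rm CO}(J)}{2J+1}\right]=\ln\!\left[\frac{N_{\rm CO}}{Q(T_{\rm ex})}\right]-\frac{E_{\rm lower}(J)}{k\,T_{\rm ex}},
\]

so a linear fit of \(\ln[N_{\rm CO}(J)/(2J+1)]\) against \(E_{\rm lower}(J)/k\) yields \(-1/T_{\rm ex}\) as the slope and the total column density from the intercept.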
We found that linear functions fit both \({}^{12}\)CO and \({}^{13}\)CO reasonably well and derived \(T_{\rm ex,12CO}=286\pm 30\) K and \(T_{\rm ex,13CO}=173\pm 17\) K, and the total column densities \(N_{\rm 12CO}=(1.5\pm 0.3)\times 10^{19}\) cm\({}^{-2}\) and \(N_{\rm 13CO}=(4.3\pm 0.5)\times 10^{18}\) cm\({}^{-2}\). However, the results seem unreasonable; the two excitation temperatures do not match, and the \({}^{12}\)CO column density is only \(\simeq 3.5\)
Figure 2: The rotation diagrams of \({}^{12}\)CO (top) and \({}^{13}\)CO (bottom) in log10 scale with 16th–84th percentile ranges around the median as a function of \(E_{\rm lower}(J)/k\). The red and blue solid straight lines represent the best-fit LTE populations with the full model result for the \({}^{12}\)CO and \({}^{13}\)CO, respectively. The red dashed line represents the scaled (by a factor of 29) \({}^{13}\)CO best-fit model (blue solid line). The red and blue dotted lines with shaded areas represent the best-fit populations with the LTE model (see the Appendix) for \({}^{12}\)CO and \({}^{13}\)CO, respectively. The shaded areas represent the 16th–84th percentile ranges of the populations.
times that of \({}^{13}\)CO, while the Galactic [\({}^{12}\)C]/[\({}^{13}\)C] abundance ratio is 20-100 (e.g., Yan et al., 2023). We interpreted the results as an underestimation of the \({}^{12}\)CO columns of the lower-\(J\) levels due to the saturation of the absorptions; the deep absorption profiles change little as a function of the optical depth near the bottoms, where the S/N is very low due to the absorptions, and the fit cannot exclude solutions with moderate optical depths. The posterior probability distribution functions of the lower-\(J\)\({}^{12}\)CO columns indeed show asymmetric tails toward larger columns, while those of the \({}^{13}\)CO columns do not. Therefore, we scaled the \({}^{13}\)CO model-level populations to match the three highest-\(J\)\({}^{12}\)CO columns by assuming \(T_{\rm ex,12CO}=T_{\rm ex,13CO}\) and obtained \(N_{\rm 12CO}/N_{\rm 13CO}=29\pm 15\) and \(N_{\rm 12CO}\sim(6\pm 3)\times 10^{19}\) cm\({}^{-2}\). Adopting \(\log(N_{\rm H2}/N_{\rm CO})=3.9\), a typical value in the photon dominated region (PDR) and XDR models of Meijerink and Spaans (2005) for large hydrogen column densities, we obtained the H\({}_{2}\) column density \(N_{\rm H2}=(5\pm 3)\times 10^{23}\) cm\({}^{-2}\) for the warm CO absorber. This column density is only up to the 5 \(\mu\)m photosphere that emits the background light (Section 6.2; see also Section 6.3).
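As an explicit check of the quoted numbers, the adopted CO-to-H\({}_{2}\) conversion gives

\[
N_{\rm H2}=N_{\rm 12CO}\times 10^{3.9}\approx(6\times 10^{19}\ {\rm cm^{-2}})\times(7.9\times 10^{3})\approx 5\times 10^{23}\ {\rm cm^{-2}}.
\]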
The largest systematic uncertainty in deriving the hydrogen column density is in the [\({}^{12}\)CO]/[\({}^{13}\)CO] abundance ratio. This ratio varies by more than a factor of 5 within our Galaxy (e.g., Yan et al., 2023), and our adopted ratio (\(N_{\rm 12CO}/N_{\rm 13CO}=29\pm 15\)) is close to the carbon isotope ratio of [\({}^{12}\)C]/[\({}^{13}\)C] = \(21\pm 5\) near our Galactic center (Yan et al., 2023). The lower abundance ratio there is attributed to more star formation in the past than in the outer Galactic radius. Because the nucleus of NGC 4418 has likely experienced active star formation in the recent past (Ohyama et al., 2019), our column estimate seems reasonable.
## 5 Comparison with the Akari low-resolution spectrum
Our high-resolution measurement of the CO absorption lines is consistent with the low-resolution (\(R\simeq 160\) at 4.7 \(\mu\)m) AKARI CO measurement of the blended CO absorption feature. Imanishi et al. (2010) reported the AKARI CO absorption depth \(\tau(\rm CO)=0.5\). Here, \(\tau(\rm CO)\) is the apparent depth of the blended CO feature measured at the bottom of the blended \(R\)-branch feature4, and is often used as an indicator of the absorbing CO column density without careful analysis using LTE spectral modeling (Spoon et al., 2004; Baba et al., 2018, 2022). We note, however, that \(\tau(\rm CO)\) is proportional to the CO column density only until individual CO absorption lines are saturated; once saturated, it increases with the line (velocity) width (see Baba et al., 2018 for the demonstration). We adopted our best-fit LTE model (see the Appendix) to simulate the AKARI spectrum; we first synthesized the full CO absorption spectrum including the \(J>3\)\(R\)-branch transitions below our wavelength coverage and then applied spectral smoothing. We obtained \(\tau(\rm CO)=0.5\), in good agreement with the actual AKARI CO measurement (Imanishi et al., 2010).
Footnote 4: Many lines in the \(P\)- and \(R\)-branches are blended in the low-resolution AKARI spectra to form a broad two-horned absorption feature centered at \(\simeq 4.7\)\(\mu\)m. The blended \(R\)-branch feature is generally deeper than the blended \(P\)-branch feature due to larger oscillator strengths and smaller wavelength intervals between the individual absorption lines in the \(R\)-branch (Table 1).
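The smoothing step of this simulation can be sketched as follows; the function is an illustrative reimplementation (not the code actually used), assuming a uniform wavelength grid and a Gaussian kernel set by the AKARI resolving power.

```python
import numpy as np

def smooth_to_resolution(wav, norm_flux, R=160):
    """Convolve a continuum-normalized spectrum down to resolving power R.

    Assumes a uniform wavelength grid `wav` (micron). The kernel FWHM is
    lambda / R, with FWHM = 2.3548 * sigma for a Gaussian.
    """
    dlam = wav[1] - wav[0]
    sigma_pix = np.mean(wav) / R / 2.3548 / dlam  # kernel sigma in pixels
    half = int(4 * sigma_pix) + 1
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()
    return np.convolve(norm_flux, kernel, mode="same")

# tau(CO) is then the apparent depth at the bottom of the blended R-branch feature:
# tau_co = -np.log(smoothed.min()) evaluated over the R-branch wavelength range.
```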
## 6 Discussion
### Physical Conditions of the CO Gas
The level populations on the population diagram are roughly distributed along a straight line (Section 4), suggesting the possibility of the LTE conditions where only collisions determine the level populations. Mashian et al. (2015) studied the high-\(J\)(14-19) level populations using the CO spectral line energy distribution (SLED) with the Herschel data and found that a simple large velocity gradient model with warm (\(T_{\rm kin}=63\)-125 K) and dense (\(n_{\rm H2}=4\times 10^{5}\) cm\({}^{-3}\)) gas, with no background radiation field, can explain the SLED. A recent high-resolution Atacama Large Millimeter/submillimeter Array (ALMA) study revealed a sharp CO concentration in (\(J=2\)-1) - (\(J=6\)-5) rotational transitions at the nucleus (with \(0\farcs 15\)-\(0\farcs 3\) beams; Sakamoto et al., 2021), strongly suggesting that the warm dense gas is associated with the nuclear region (see also Section 6.3). Gonzalez-Alfonso et al. (2012) estimated the gas density \(n_{\rm H2}=3\times 10^{6}\) cm\({}^{-3}\) in their "core" region (\(T_{\rm d}=140\)-150 K, \(r=10\) pc)5 on the basis of the detailed spectral analysis of multiple FIR and submillimeter lines. For comparison, the highest \({}^{13}\)CO rotational level we firmly detected in rovibrational absorption is \(J=13\) (in \(R(13)\)), and the critical density for the collisional excitation to this level is \(n_{\rm cr}(J=13)\sim 2\times 10^{6}\) cm\({}^{-3}\).6 This is of the same order as the density measured by Gonzalez-Alfonso et al. (2012). Therefore, the CO excitation is likely dominated by collisional excitation, and the CO molecules there are warm (\(T_{\rm kin}\simeq T_{\rm ex}\)). We note that despite the ambiguity of the CO gas temperature,
the measured columns are roughly correct unless the higher-\(J\) absorption lines beyond our spectral coverage are much stronger or weaker than simple extrapolation from the current coverage. S. Onishi et al. (2023, in preparation) will present a more detailed analysis of the excitation.
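For reference, the critical density quoted above follows the standard definition; one common convention is the Einstein \(A\) coefficient of the transition divided by the sum of collisional de-excitation rate coefficients (with rates taken from databases such as the Leiden Atomic and Molecular Database):

\[
n_{\rm cr}(J)\equiv\frac{A_{J\to J-1}}{\sum_{J^{\prime}<J}\gamma_{J\to J^{\prime}}(T_{\rm kin})},
\]

so that levels with \(n_{\rm H2}\gtrsim n_{\rm cr}(J)\) are thermalized by collisions.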
### 5 \(\mu\)m Dust Photosphere
Most of the background 5 \(\mu\)m emission originates from a 5 \(\mu\)m dust photosphere because the hotter interior cannot be seen behind an optically thick dust shroud with a unity covering factor around the nucleus (Sections 6.3, 6.6). Dudley and Wynn-Williams (1997) measured the temperature and the effective size of the photosphere for the \(\sim 10\)\(\mu\)m light (\(T_{\rm d}=280\) K, \(r=0.6\) pc) using their continuum flux measurements at 8 \(\mu\)m and 14 \(\mu\)m; observers see the same photosphere (with a dust opacity of \(\simeq 1\)) at these two wavelengths on either side of the opacity bump around 9.7 \(\mu\)m. Gonzalez-Alfonso et al. (2012) estimated the temperature and the size of their innermost "hot" region (\(T_{\rm d}=350\) K, \(r=2.6\) pc) and the foreground absorption (\(A_{\rm V}=70\) mag) to reproduce the Spitzer MIR SED. Utilizing the better Spitzer SED of Gonzalez-Alfonso et al. (2012) and employing the more robust method of Dudley and Wynn-Williams (1997) that is insensitive to the absorption modeling and fitting, we obtained the updated parameters (\(T_{\rm d}=300\) K, \(r=3.6\) pc assuming \(\tau_{8\mu{\rm m}}=\tau_{14\mu{\rm m}}=1\)). For comparison, according to a simple model of spherical dust enshrouding a central heating source with \(L_{\rm bol}=10^{11}\)\(L_{\odot}\)(Scoville, 2013), \(T_{\rm d}=200\)-\(350\) K at \(r=3.6\) pc for the optically thin and thick limits, respectively.7 Therefore, the 10 \(\mu\)m background light is likely from an optically thick dust sphere with \(T_{\rm d}\simeq 300\) K at \(r\sim 4\) pc. We note that the \(N\)-band (\(\sim 10\)\(\mu\)m) emission is pointlike with a \(\simeq 0\farcs 2\) (\(r\simeq 16\) pc) FWHM beam (Evans et al., 2003; Siebenmorgen et al., 2008; Roche et al., 2015), in agreement with this picture. The dust emission has a nuclear core of FWHM\(\sim 60\) mas (10 pc) and peak \(T_{\rm b}\sim 400\) K at 440 and 860 \(\mu\)m, where one penetrates the nucleus deeper than in MIR (Sakamoto et al., 2021). For our qualitative discussion below, we assume that the size of the 5 \(\mu\)m photosphere is approximately the same as that of the 10 \(\mu\)m photosphere (\(r_{5\mu{\rm m}}\sim r_{10\mu{\rm m}}\sim 4\) pc). This size defines our \(\sim 0\farcs 04\) beam (for \(r\sim 3.6\) pc) to sample CO in absorption.
Footnote 7: The dust is heated by the central heating source and the neighboring heated dust in the optically thin and thick limits, respectively.
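A schematic of this two-band photosphere estimate is given below; it assumes that both bands probe the same \(\tau\simeq 1\) surface radiating as a blackbody, uniform-disk geometry, and flux densities in cgs units. The function and its inputs are illustrative, not the published calculation.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck constant, c, k_B (cgs)

def planck_nu(lam_um, T):
    """Planck intensity B_nu (erg s^-1 cm^-2 Hz^-1 sr^-1) at wavelength lam_um (micron)."""
    nu = C / (lam_um * 1e-4)
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def photosphere(f8, f14, d_cm):
    """Temperature (K) and radius (pc) of a tau~1 photosphere from flux densities
    (erg s^-1 cm^-2 Hz^-1) at 8 and 14 micron, for a source at distance d_cm.
    The bracket assumes a plausible MIR color between the Wien and
    Rayleigh-Jeans limits."""
    t_d = brentq(lambda T: planck_nu(8.0, T) / planck_nu(14.0, T) - f8 / f14,
                 20.0, 2000.0)
    # Uniform disk: f_nu = B_nu(T) * Omega, with Omega = pi * (r / d)^2.
    r_cm = d_cm * np.sqrt(f8 / (np.pi * planck_nu(8.0, t_d)))
    return t_d, r_cm / 3.086e18
```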
### Where Is the Warm CO-absorbing Region?
The distribution of the warm CO-absorbing gas can be constrained by the high-resolution CO emission line study with ALMA (Sakamoto et al., 2021). The CO (\(J=3\)-2) peak brightness temperature of the nuclear molecular-gas concentration measured with a \(0\farcs 15\) FWHM (\(r_{0\farcs 15}^{\prime}=12\) pc) beam is \(T_{\rm b}=144\) K (without the correction for the possible beam dilution and with nominal continuum subtraction; the true peak \(T_{\rm b}\) is likely higher for these reasons; Sakamoto et al. 2021). Although this brightness temperature is lower than the excitation temperature we measured (\(T_{\rm ex}=173\pm 17\) K), the warm CO-absorbing gas must be located within this nuclear concentration, because no other circum-nuclear regions display \(T_{\rm b}>100\) K. If the CO excitation condition is close to LTE, the similarity between the dust photosphere temperature and the CO excitation and brightness temperatures suggests that the CO-absorbing layer lies close to, and just outside, the photosphere (\(r=\)several parsecs); gas in this layer, seen in front of the photosphere, is what produces the absorption.
The warm CO-absorbing gas is most likely located between \(r_{5\mu{\rm m}}\) and \(r_{0\farcs 15}^{\prime}\) (4-12 pc) and, thus, within a compact dust shroud around the central heating source. The dust shrouds of NGC 4418 and some other sources in the same class 3A (NGC 1377 and IRAS 08572+3915; Spoon et al. 2007) have been modeled with a compact (with the inner radius \(r_{\rm in}\) at the dust sublimation radius \(r_{\rm sub}\)) but geometrically thick (\(r_{\rm out}/r_{\rm in}\geq 100\), where \(r_{\rm out}\) is the outer radius) shroud with smoothly distributed dust (Dudley and Wynn-Williams, 1997; Rowan-Robinson and Efstathiou, 2009; Roussel et al., 2006; Siebenmorgen et al., 2008).8 Levenson et al. (2007) demonstrated, using IRAS 08572+3915 as an extreme example, that the very deep 9.7 \(\mu\)m absorption requires such a geometrically thick dust shroud, because a large radial gradient in dust temperature is needed. NGC 4418 displays a similar but even more extreme SED (deeper 9.7 \(\mu\)m absorption and redder MIR color9), and such characteristics can be explained by more extreme shroud parameters according to the model of Levenson et al. (2007).10
For NGC 4418, \(r_{\rm sub}(\simeq r_{\rm in})\sim 0.2\) pc assuming an AGN SED and dust sublimation temperature of 1500 K (Nenkova et al., 2008), and the shroud extends out to \(r_{\rm out}\gtrsim 20\) pc well beyond the 5 \(\mu\)m photosphere. The gas and dust concentration around the nucleus actually extends out to \(r=\) 50-100 pc, according to the ALMA molecular-gas observation of Sakamoto et al. (2021) and the FIR and submillimeter spectral modeling of Gonzalez-Alfonso et al. (2012). Scoville (2013) and Marshall et al. (2018) demonstrated that the dust shroud should not be clumpy; otherwise, characteristic dust spectral features are easily erased, because the emission from the inner hot dust or the hotter surface of the internal dust clumps contributes significantly to the emergent SED. The large \(C_{\rm f}\) suggests a large covering factor of the warm CO-absorbing region in front of the background light source (Section 3.1), and this can be naturally explained if the warm CO is located within, but in a very inner region of, such a dust shroud. In contrast to the radially extended dust shroud, the CO-absorbing region is likely to be a thin layer, as \(dr_{\rm CO}=N_{\rm H2}/(n_{\rm H2}\,\phi_{\rm V})=(5\times 10^{23}\ {\rm cm^{-2}})/(3\times 10^{6}\ {\rm cm^{-3}})/\phi_{\rm V}\sim 0.06/\phi_{\rm V}\ {\rm pc}=0.06\)-\(2\) pc, where \(\phi_{\rm V}\) (volume filling factor) is likely to be between 0.03 (Onishi et al., 2021) and 1. We note that this radial size corresponds to the warm CO, which is only a fraction of the total CO and H\({}_{2}\). The H\({}_{2}\) column density we measured (\(N_{\rm H2}=(5\pm 3)\times 10^{23}\) cm\({}^{-2}\); Section 4) traces only this thin layer in front of the 5 \(\mu\)m photosphere at \(r\sim 4\) pc (Section 6.2) and, therefore, the total columns to the central heating source should be larger than our estimate, more significantly so for a thinner absorbing layer.
Footnote 10: An alternative idea involving an extended diffuse cold dust screen has been proposed to explain the deep 9.7 \(\mu\)m absorption, e.g., by González-Martin et al. (2013) and Roche et al. (2015). However, we prefer the model involving a compact hot shroud, which is required to explain many other observed characteristics of this galaxy nucleus (e.g., González-Alfonso et al., 2012; Costagliola et al., 2013; Sakamoto et al., 2021, 2021), while the corresponding extremely large optical absorption (\(A_{\rm V}>100\) mag) at the nucleus is not known (Scoville et al., 2000; Evans et al., 2003; Ohyama et al., 2019).
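Numerically, with 1 pc \(=3.086\times 10^{18}\) cm, the layer thickness above evaluates to

\[
dr_{\rm CO}=\frac{N_{\rm H2}}{n_{\rm H2}\,\phi_{\rm V}}=\frac{5\times 10^{23}\ {\rm cm^{-2}}}{(3\times 10^{6}\ {\rm cm^{-3}})\,\phi_{\rm V}}\approx\frac{1.7\times 10^{17}\ {\rm cm}}{\phi_{\rm V}}\approx\frac{0.054\ {\rm pc}}{\phi_{\rm V}},
\]

i.e., \(\sim\)0.05-2 pc for \(\phi_{\rm V}=1\)-0.03.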
### Dynamical Implications for Warm CO Region
The dynamical conditions of the CO-absorbing region measured with the \(0\farcs 04\) beam in front of the 5 \(\mu\)m photosphere are characterized by both a broad width (110 km s\({}^{-1}\) FWHM) and an absence of significant radial bulk motion along our line of sight (\(|dV|<10\) km s\({}^{-1}\); Section 3.2). A disk-like rotational velocity field around the nucleus contributes little to the observed CO absorption profile due to the relatively small projected velocity gradient (\(\sim 50\) km s\({}^{-1}\) over \(0\farcs 2\); Sakamoto et al. 2021). In addition, any radial velocity gradient within the absorbing layer is likely small if the layer is as thin as we estimated above (of the order of 0.1-1 pc). Therefore, we expect that the broad CO absorption profile is mainly due to a large turbulent motion in the CO-absorbing region.
### Comparison with Galaxies with Previous Rovibrational CO Measurements
In an attempt to further characterize the circumnuclear warm dense gas in NGC 4418, we compiled the rovibrational CO absorptions and other observed properties that are likely to similarly trace the warm dense gas, as we discuss in detail later, for a number of sources (Table 2). To allow for a meaningful comparison across the sample, we utilize low-resolution (\(R\sim 100\)) CO measurements with AKARI and Spitzer, along with the synthetic low-resolution spectrum based on the high-resolution measurement. The AKARI spectroscopy is particularly efficient for detecting the CO absorptions in low-redshift galaxies due to its wavelength coverage below 5 \(\mu\)m. Our sample comes mainly from Baba et al. (2022), who compiled the AKARI CO measurements, and two others with other CO measurements. We specifically compare galaxies with either deep CO absorption, enhanced vibrationally-excited HCN emission at submillimeter wavelengths, or both. See also Table 2 for the references.
#### 6.5.1 Ngc 4945
NGC 4945 hosts an obscured AGN although its MIR emission is dominated by starburst. Spoon et al. (2003) detected resolved gaseous rovibrational CO absorption lines (Section 1) as well as deep broad ice (mostly OCN\({}^{-}\) and polar CO ice) features around \(\simeq 4.7\)\(\mu\)m using a high-resolution (\(R=3000\)) ISAAC spectrum. They derived \(T_{\rm ex}=35^{+7.5}_{-2.5}\) K, log \(N_{\rm 12CO}=18.3\pm 0.1\) cm\({}^{-2}\), and FWHM\(=50\pm 5\) km s\({}^{-1}\), i.e., the CO-absorbing gas in NGC 4945 is much cooler, lower in column density, and less turbulent when compared to NGC 4418 and other galaxies displaying deep CO absorption within the sample of Baba et al. (2022) (Section 6.5.4). This result seems reasonable because the CO-absorbing gas in NGC 4945 is associated with an extended (\(r=120\) pc) star-forming disk (Spoon et al., 2003).
Although the low-resolution (\(R\simeq 160\)) AKARI spectrum of NGC 4945 displays a deep absorption at \(\simeq 4.7\)\(\mu\)m (Castro et al., 2014), it is mainly due to the OCN\({}^{-}\) and polar CO ice absorptions Spoon et al. (2003) detected. By using the best-fit parameters of the LTE
**Table 2**. Comparison of MIR Rovibrational Absorptions, HCN-vib Enhancement, and Other MIR Characteristics for Galaxies Displaying Rovibrational CO Absorption

| Object (1) | \(\tau\)(CO) (2) | C\({}_{2}\)H\({}_{2}\) and HCN Absorption (3) | HCN-vib Enhancement (4) | Spectral Class (5) | Continuum Slope (6) | Refs for (3) (7) | Refs for (6) (8) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| _AKARI Sample with CO Absorption of Baba et al. (2022)_ | | | | | | | |
| _Deep CO Absorption_ | | | | | | | |
| IRAS 15250+3609 | \(>1.0\) | Y | N | 3A | red (\(>4\) \(\mu\)m) | 1 | 7 |
| IRAS 08572+3915 NW | 0.7 | Y | N | 3A | red | 1 | 8 |
| UGC 5101 | 0.7 | Y (shallow\({}^{a}\)) | Y | 2B | red | 1 | 8 |
| IRAS 20551-4250 | \(>0.5\) | Y (C\({}_{2}\)H\({}_{2}\)\({}^{b}\)) | N | 3B | red (\(>3.5\) \(\mu\)m) | 2, 3, 4 | 7 |
| NGC 4418 | 0.5 | Y | Y | 3A | red (\(>4\) \(\mu\)m) | 1 | 7 |
| _Moderate CO Absorption_ | | | | | | | |
| Arp 220 | 0.4 | Y | Y | 3A | flat | 1 | 7 |
| IRAS 17208-0014 | 0.4 | Y (shallow) | Y | 2B | flat | 1 | 9 |
| _Shallow CO Absorption_ | | | | | | | |
| IC 860 | 0.3 | Y | Y | 2B | blue | 1 | 7 |
| Zw 049.057 | 0.3 | N | Y | 1C | blue | 5 | 7 |
| _Others_ | | | | | | | |
| IRAS F00183-7111 | \(0.4^{c}\) | \(\ldots^{d}\) | \(\ldots^{e}\) | 3B | red | 6 | 7, 10 |
| NGC 4945 | \(0.15^{f}\) | \(\ldots\) | \(\ldots\) | 3A | red | \(\ldots\) | 11 |

Note. -- Column 1: object names. Column 2: AKARI CO optical depths from Baba et al. (2022) in decreasing order of the optical depths, and others with known CO absorption properties (see the text). Column 3: rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions. Column 4: enhancement of the HCN-vib luminosity relative to the total IR luminosity (\(L_{\rm HCN-vib}/L_{\rm IR}>1\times 10^{-8}\)) from Falstad et al. (2019). Column 5: MIR spectral class of Spoon et al. (2007) taken from Spoon et al. (2022). Column 6: continuum slope at 3-5 \(\mu\)m in janskys. Column 7: references for the rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions (Column 3). Column 8: references for the continuum slope (Column 6).
model of Spoon et al. (2003) and applying spectral smoothing to simulate the AKARI spectrum, we found \(\tau(\mathrm{CO})\) to be \(\simeq 0.17\) after removing the contributions from these ice features. Therefore, NGC 4945 displays only shallow gaseous CO absorption in the sample of Baba et al. 2022 (Section 6.5.4).
#### 6.5.2 Iras F00183-7111
IRAS F00183-7111 is a well-known ULIRG (\(L_{\mathrm{IR}}\simeq 7\times 10^{12}~{}L_{\odot}\); Spoon et al., 2004) for its deep ice absorptions, but it also displays deep gaseous rovibrational CO absorption (Section 1). Spoon et al. (2004) presented its low-resolution (\(R\simeq 80\) at 5 \(\mu\)m) Spitzer IRS spectrum and analyzed its blended CO absorption feature to find \(T_{\mathrm{ex}}\sim 720\) K and log \(N_{\mathrm{CO}}=19.5\) cm\({}^{-2}\).11 They inferred an optically thick, warm, dense (\(>3\times 10^{6}\) cm\({}^{-3}\)), and thin (0.03 pc) CO-absorbing layer in the vicinity of the nucleus. Therefore, the CO-absorbing region in this ULIRG is similar to those in IRAS 08572+3915 (see below) and NGC 4418.
Footnote 11: Baba et al. (2018) also analyzed the same spectrum and derived \(T_{\mathrm{ex}}\simeq 330\) K and log \(N_{\mathrm{CO}}=21.2\) cm\({}^{-2}\). Although their results appear different from those of Spoon et al. 2004, the differences can be explained by degeneracy between \(N_{\mathrm{CO}}\) and \(T_{\mathrm{ex}}\) when analyzing the low-resolution spectra in general, as demonstrated by Baba et al. (2018) (their Figure 12).
The apparent CO optical depth at the IRS resolution, measured in a very similar way to the AKARI CO measurement, is \(\tau(\mathrm{CO},\mathrm{IRS})\simeq 0.4\). Because the blended absorption feature is relatively flat near its absorption peak, mainly due to the high excitation temperature (Baba et al., 2018), the CO optical depth at the AKARI resolution (\(R\simeq 160\)) is expected to be similar to that at the IRS resolution. Therefore, IRAS F00183-7111 displays relatively deep CO absorption when compared to NGC 4418 and other galaxies displaying deep CO absorption in the sample of Baba et al. (2022) (Section 6.5.4). Unfortunately, neither vibrationally excited HCN emission at submillimeter wavelengths nor absorptions of rovibrational C\({}_{2}\)H\({}_{2}\) and HCN transitions around 14 \(\mu\)m (see below) are detected (Spoon et al., 2009; Imanishi and Nakanishi, 2014), making a meaningful comparison with the Baba et al. (2022) sample difficult.
#### 6.5.3 Iras 08572+3915
IRAS 08572+3915 is the first ULIRG in which the rovibrational CO absorption lines were studied in detail using high-resolution spectroscopy (Section 1). Although the physical properties of the CO-absorbing gas (e.g., excitation temperature, column density; Onishi et al., 2021) in this galaxy are similar to those in NGC 4418, the kinematics and the excitation mechanism of the gas, as well as the inferred circum-nuclear structure, are quite different from those in NGC 4418. Onishi et al. (2021) found that each of the deep \({}^{12}\)CO absorption lines shows a complex broad profile (\(\sim 300\) km s\({}^{-1}\) in total; see also Shirahata et al., 2013), and identified three kinematical components. They argued that all of these components are associated with the clouds near the surface of the AGN torus along the line of sight of the observer at different radial distances from the AGN. Each of these clouds displays either radial outflow or inflow motion along the line of sight. The very warm (\(\simeq 720\) K) innermost component is in the LTE conditions, while the two outer components are likely radiatively excited. The background light comes from hot (\(\sim 1500\) K) dust near the dust sublimation radius (0.5 pc). They proposed an XDR heating model to explain the very large column of the warm CO-absorbing gas. Matsumoto et al. (2022) demonstrated that all of these characteristics can be reproduced by a hydrodynamic radiation-driven fountain model of AGN tori in low-luminosity AGNs.
Apart from the CO absorption, one notable difference between the two galaxies is that NGC 4418 displays enhanced rotational HCN emission at the vibrationally excited state (\(v_{2}=1\)) at submillimeter wavelengths (hereafter, HCN-vib emission), while IRAS 08572+3915 does not. We will discuss its implications below using the AKARI sample with CO absorption.
#### 6.5.4 AKARI Sample with CO Absorption of Baba et al. (2022)
Baba et al. (2022) compiled the published AKARI CO measurements for a sample of galaxies with HCN-vib emission measurements, including NGC 4418 and IRAS 08572+3915, and investigated the conditions for deep rovibrational CO absorptions. They suggested that deep CO absorptions are often found in objects with enhanced HCN-vib emission (\(J=3\)-2) relative to the infrared luminosity (\(L_{\mathrm{HCN-vib}}/L_{\mathrm{IR}}>1\times 10^{-8}\); Falstad et al., 2019; hereafter, we consider the HCN-vib emission to be enhanced when \(L_{\mathrm{HCN-vib}}/L_{\mathrm{IR}}>1\times 10^{-8}\)). For example, NGC 4418 is one of their sample galaxies with deep CO absorption (see below) and enhanced HCN-vib emission (Sakamoto et al., 2010, 2021), following the proposed trend of Baba et al. (2022). Baba et al. (2022) argued that the CO absorption is deep when the circum-nuclear conditions are similar to those expected in the CONs, which are defined to show \(L_{\mathrm{HCN-vib}}/L_{\mathrm{IR}}>1\times 10^{-8}\)
(Falstad et al., 2019).12 This is because HCN is vibrationally excited in the vicinity of compact MIR-emitting sources and the HCN-vib emission is enhanced when the HCN-vib-emitting gas encloses most of the nucleus before a wide-angle outflow develops and disrupts the obscuration (Falstad et al., 2019, 2021; see also Section 6.6).
Footnote 12: Falstad et al. (2021) revised the CON definition to have larger surface HCN-vib brightness (\(\Sigma_{\rm HCN-vib}\)) than 1 \(L_{\odot}\) pc\({}^{-2}\) and found that the CONs defined with large \(\Sigma_{\rm HCN-vib}\) also display \(L_{\rm HCN-vib}/L_{\rm IR}>1\times 10^{-8}\).
Because the HCN-vib emission is the transition from the state that is almost exclusively radiatively excited by absorbing rovibrational HCN lines at 14.0 \(\mu\)m (hereafter, rovibrational HCN absorption), the enhancement of the HCN-vib emission and the depth of the rovibrational HCN absorption are likely to be physically closely related (Sakamoto et al., 2010; Gonzalez-Alfonso & Sakamoto, 2019; Sakamoto et al., 2021). In fact, NGC 4418 also displays the deep rovibrational HCN absorption as well as the rovibrational C\({}_{2}\)H\({}_{2}\) (13.7 \(\mu\)m) absorption (Lahuis et al., 2007; hereafter, rovibrational C\({}_{2}\)H\({}_{2}\) absorption). Lahuis et al. (2007) argued that both absorptions originate from warm (200-700 K) dense (\(\gtrsim 1\times 10^{7}\) cm\({}^{-3}\)) gas, and that these absorptions likely trace the same gas as the CO absorptions. For NGC 4418, Lahuis et al. (2007) derived \(T_{\rm ex,C2H_{2},HCN}=300\) K with an uncertainty of up to 30% using a similar analysis as for the CO absorption on the rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions, in rough agreement with our \(T_{\rm ex,CO}\) measured with the CO rotation diagram (Section 4). With such a possible physical relationship in mind, below we further characterize NGC 4418 by combining MIR information (rovibrational C\({}_{2}\)H\({}_{2}\), HCN, and CO absorptions and spectral classification of Spoon et al. (2007) that is based on the strengths of the PAH 6.2 \(\mu\)m emission and the 9.7 \(\mu\)m absorption) and the HCN-vib enhancement.
Five objects in the sample of Baba et al. (2022) display deep CO absorptions (\(\tau\)(CO) \(\gtrsim 0.5\); Table 2). All of them show characteristic red SEDs between 5-20 \(\mu\)m without any sign of strong starburst contamination. Among them, three objects (NGC 4418, IRAS 08572+3915 (Section 6.5.3), and IRAS 15250+3609) are classified as 3A (with the smallest PAH 6.2 \(\mu\)m equivalent width and the deepest 9.7 \(\mu\)m absorption), suggesting that warm dust emission from the compact region around the nucleus dominates their 5-20 \(\mu\)m SEDs. The other two objects are classified as 3B (with a slightly larger PAH 6.2 \(\mu\)m equivalent width when compared to 3A; IRAS 20551-4250) and 2B (also with a slightly shallower 9.7 \(\mu\)m absorption; UGC 5101). These classes are often interpreted as a mixture of the starburst (1C) and the 3A SEDs (e.g., Spoon et al., 2007). In fact, UGC 5101 displays signs of starburst contamination at 8-15 \(\mu\)m as revealed by the high-spatial-resolution ground-based observation of Martinez-Paredes et al. (2015). Two (out of all five) objects (NGC 4418 and UGC 5101) display enhanced HCN-vib emission. NGC 4418 also displays deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions as mentioned above. UGC 5101, on the other hand, displays moderate rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions, and it is also likely due to the starburst contamination (Martinez-Paredes et al., 2015). It is noteworthy that the remaining three objects do not display enhanced HCN-vib emission but moderate to deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions instead; IRAS 15250+3609 and IRAS 08572+3915 display both deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions, and IRAS 20551-4250 displays deep rovibrational C\({}_{2}\)H\({}_{2}\) and marginal HCN absorptions.
Six objects in the sample of Baba et al. (2022) display enhanced HCN-vib emission (Table 2). Apart from NGC 4418 and UGC 5101 that display deep CO absorptions as mentioned above, two (out of four) objects display moderate \(\tau\)(CO) (\(=0.4\); Arp 220, IRAS 17208-0014). Both are likely contaminated by starbursts because they are classified as 3B (for the whole eastern and western nuclei; Arp 220) and 2B (IRAS 17208-0014). Another sign of such contamination is also found at \(\lesssim 5\)\(\mu\)m, where the continuum is almost flat and is much bluer than in the 3A objects, suggesting stellar continuum contribution. This probably underestimates their CO depths. Arp 220 displays deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions, while IRAS 17208-0014 displays the shallower absorptions. The remaining two objects (IC 860, Zw 049.057) display shallower CO absorptions (\(\tau\)(CO) \(=0.3\)). They are classified as 1C (almost pure starburst; Zw 049.057) and 2B (between 3A and 1C; IC 860). Their CO depths are likely severely underestimated due to the stellar continuum contamination, as suggested also by the much bluer continuum at \(\lesssim 5\)\(\mu\)m. IC 860 displays deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions despite the contamination. Zw 049.057 displays little sign of rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions most likely due to the severe starburst contamination (and perhaps also due to the lower S/N of the spectrum; Pereira-Santaella et al., 2010).
In summary, we found that objects with deep rovibrational CO absorptions (\(\tau\)(CO) \(\gtrsim 0.5\); IRAS 15250+3609, IRAS 08572+3915 NW, UGC 5101, IRAS 20551-4250, and NGC 4418) often display deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions as well.
Objects with shallower CO absorptions (\(\tau\)(CO) \(<0.5\)) but with enhanced HCN-vib emission (Arp 220, IRAS 17208-0014, IC 860, and Zw 049.057) often display moderate to deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions. Their CO depths are likely underestimated due to contamination. This explains, at least in part, the large scatter of \(\tau\)(CO) for similarly enhanced HCN-vib emission. Taking such a possible CO underestimation into account, the deep rovibrational CO absorption seems to be more closely associated with the deep rovibrational C\({}_{2}\)H\({}_{2}\) and HCN absorptions than with the enhanced HCN-vib emission.
### The Buried Nucleus of NGC 4418
The close associations among the rovibrational C\({}_{2}\)H\({}_{2}\), HCN, and CO absorptions above indicate that all of these absorption lines trace the warm gas along the line of sight toward the MIR-emitting compact source, whereas enhancement of the HCN-vib emission, which is powered by the MIR radiation (e.g., Sakamoto et al., 2010), increases as the volume of the warm gas optically thick to the MIR emission increases (Gonzalez-Alfonso and Sakamoto, 2019). The HCN-vib is particularly enhanced when the circum-nuclear region is in a greenhouse condition (Gonzalez-Alfonso and Sakamoto, 2019; see below). The volume of such warm gas likely changes depending on whether the circum-nuclear obscuring structure is in the form of a torus with a large opening angle toward its pole directions or a fully enclosed shroud (Baba et al., 2022; see their Figure 13 for a schematic illustration), or depending on the different evolutionary stages of the wide-angle outflow from the nucleus (before or after it disrupts the shroud; Falstad et al., 2019; see their Figure 2 for a schematic illustration; and references therein). In this scenario, objects with both deep MIR rovibrational absorptions and large enhancement of the HCN-vib emission are likely to be buried in a compact dusty envelope of warm dense gas. NGC 4418 most likely belongs to this class. We note that NGC 4418 indeed exhibits large enough column density (log \(N_{\rm H2}\gtrsim 25\) cm\({}^{-2}\)) to cause the HCN-vib enhancement (Sakamoto et al., 2021). IRAS 08572+3915, on the other hand, belongs to the case of an AGN torus seen from an almost edge-on viewing angle, as proposed by Onishi et al. (2021) and Matsumoto et al. (2022).
As for NGC 4418, we argued that both warm dust and CO coexist within the compact nucleus despite considerable uncertainties in their radial distributions (Section 6.3). In such a situation, the so-called greenhouse effect will likely work efficiently (Gonzalez-Alfonso and Sakamoto, 2019; Sakamoto et al., 2021). In contrast to typical PDR/XDR models (e.g., Meijerink and Spaans, 2005), gas cooling in greenhouse is inefficient due to high opacity at MIR and FIR, and CO can be efficiently heated over a larger volume by gas-dust collisions due to higher gas density and hotter dust temperature. This mechanism creates an extended warm CO region beyond the limits of the typical models (up to \(N_{\rm CO}\sim 10^{16}\) cm\({}^{-2}\) and \(\sim 10^{18}\) cm\({}^{-2}\) for warm, \(>100\) K, CO in PDR and XDR, respectively; Baba et al., 2018). To test the greenhouse effect for NGC 4418, we need a better understanding of the dust distribution within the presumed 5 \(\mu\)m photosphere, in particular, whether the dust shroud fully encloses the central heating source (or how efficient the greenhouse effect is) and whether its covering factor is unity (or whether the shroud has small holes through which observers can see inside).
To obtain more insights into the compact dusty gaseous shroud around the central heating source, we need a self-consistent modeling of the entire gas, dust, and radiation field in a more realistic environment for NGC 4418 to simultaneously reproduce the CO excitation and its absorption, the deep 9.7 \(\mu\)m absorption, and the steep drop of the SED below \(\sim\) 5 \(\mu\)m. Our analysis of the warm CO absorptions (Sections 3.1, 3.2) assumed a simple foreground layer, despite that the warm CO is most likely embedded in a geometrically thick dust shroud with the radial gradients in temperature and density. Ideally, such a model should also include FIR and submillimeter information (Gonzalez-Alfonso et al., 2012; Costagliola et al., 2013; Mashian et al., 2015; Sakamoto et al., 2021, 2021). In particular, the deep (with an apparent optical depth of \(\sim\) 0.9) and broad (spanning over 250 km s\({}^{-1}\) in total including its multiple unresolved transitions spanning over 126 km s\({}^{-1}\)) CN (\(J\) = 6-5) absorption at 680 GHz measured with a 0\(\farcs\)15 beam toward the center (Sakamoto et al., 2021) is noteworthy for its similarity to the rovibrational CO absorptions in terms of both the depth and the line width. The CN lines are often used as tracers of highly excited regions and, in principle, can explore even deeper than the CO absorptions due to their lower opacity at submillimeter wavelengths than at MIR.
## 7 Summary
We investigated the buried nucleus of the nearby LIRG NGC 4418 using fundamental CO rovibrational absorptions and inferred a large column density (\(N_{\rm H2}\sim 5\times 10^{23}\) cm\({}^{-2}\) in front of the 5 \(\mu\)m photosphere) of warm (\(T_{\rm kin}\simeq T_{\rm ex}\simeq 170\) K) molecular gas by assuming an isothermal plane-parallel slab illuminated by a compact background MIR-emitting source. The very deep and partly saturated \({}^{12}\)CO absorptions indicate a large covering factor (\(>0.86\)). The absorption profiles are broad (110 km s\({}^{-1}\) FWHM) and centered near the systemic velocity (\(|dV|<10\) km s\({}^{-1}\)), suggesting a large turbulent motion with little bulk radial motion within the warm CO gas. Our modeling indicates that the warm CO absorber almost fully covers the central heating source and that it is an inner thin layer (of the order of 0.1-1 pc) around the 5 \(\mu\)m photosphere (at \(r=\) several parsecs) of a compact shroud of gas and dust (\(d\sim 100\) pc).
We thank Drs. T. Usuda and S. Oyabu for their support in preparing and performing the observations. Y.O. and K.S. acknowledge the support from the Ministry of Science and Technology (MOST) of Taiwan through the grants MOST 109-2112-M-001-021- (Y.O.) and MOST 111-2112-M-001-039- (K.S.). T.N. and S.B. are supported by JSPS KAKENHI grant Nos. 21H04496 (T.N. and S.B.) and 23H05441 (T.N.). K.M. is a Ph.D. Fellow of the Flemish Fund for Scientific Research (FWO-Vlaanderen) and acknowledges the financial support provided through grant No. 1169822N.

Facility: Subaru (IRCS). Software: emcee (Foreman-Mackey et al., 2013).
|
2302.14011 | Causal isotonic calibration for heterogeneous treatment effects | We propose causal isotonic calibration, a novel nonparametric method for
calibrating predictors of heterogeneous treatment effects. Furthermore, we
introduce cross-calibration, a data-efficient variant of calibration that
eliminates the need for hold-out calibration sets. Cross-calibration leverages
cross-fitted predictors and generates a single calibrated predictor using all
available data. Under weak conditions that do not assume monotonicity, we
establish that both causal isotonic calibration and cross-calibration achieve
fast doubly-robust calibration rates, as long as either the propensity score or
outcome regression is estimated accurately in a suitable sense. The proposed
causal isotonic calibrator can be wrapped around any black-box learning
algorithm, providing robust and distribution-free calibration guarantees while
preserving predictive performance. | Lars van der Laan, Ernesto Ulloa-Pérez, Marco Carone, Alex Luedtke | 2023-02-27T18:07:49Z | http://arxiv.org/abs/2302.14011v2 | # Causal isotonic calibration
###### Abstract
We propose causal isotonic calibration, a novel nonparametric method for calibrating predictors of heterogeneous treatment effects. In addition, we introduce a novel data-efficient variant of calibration that avoids the need for hold-out calibration sets, which we refer to as cross-calibration. Causal isotonic cross-calibration takes cross-fitted predictors and outputs a single calibrated predictor obtained using all available data. We establish under weak conditions that causal isotonic calibration and cross-calibration both achieve fast doubly-robust calibration rates so long as either the propensity score or outcome regression is estimated well in an appropriate sense. The proposed causal isotonic calibrator can be wrapped around any black-box learning algorithm to provide strong distribution-free calibration guarantees while preserving predictive performance.
## 1 Introduction
Estimation of causal effects via both randomized experiments and observational studies is critical to understanding the effects of interventions and informing policy. Moreover, it is often the case that understanding treatment effect heterogeneity can provide more insights than overall population effects (Obermeyer and Emanuel, 2016; Athey, 2017). For instance, a study of treatment effect heterogeneity can help elucidate the mechanism of an intervention, design policies targeted to subpopulations who can most benefit (Imbens and Wooldridge, 2009), and predict the effect of interventions in populations other than the ones in which they were developed. These necessities have arisen in a wide range of fields, such as marketing (Devriendt et al., 2018), the social sciences (Imbens and Wooldridge, 2009), and the health sciences (Kent et al., 2018). For example, in the health sciences, heterogeneous treatment effects (HTEs) are of high importance to understanding and quantifying how certain exposures or interventions affect the
health of various subpopulations (Dahabreh et al., 2016; Lee et al., 2020). Potential applications include prioritizing treatment to certain sub-populations when treatment resources are scarce, or individualizing treatment assignments when the treatment can have no effect (or even be harmful) in certain subpopulations (Dahabreh et al., 2016). As an example, treatment assignment based on risk scores has been used to provide clinical guidance in cardiovascular disease prevention (Lloyd-Jones et al., 2019) and to improve decision-making in oncology (Collins and Varmus, 2015; Cucchiara et al., 2018).
A wide range of statistical methods are available for assessing HTEs, with recent examples including Wager and Athey (2018), Carnegie et al. (2019), Lee et al. (2020) and Nie and Wager (2021), among others. In particular, many methods, including Imbens and Wooldridge (2009) and Dominici et al. (2020), scrutinize HTEs via conditional average treatment effects (CATEs). The CATE is the difference in the conditional mean of the counterfactual outcome corresponding to treatment versus control given covariates, which can be defined at a group or individual level. When interest lies in predicting treatment effect, the CATE can be viewed as the oracle predictor of the individual treatment effect that can feasibly be learned from data. Optimal treatment rules have been derived based on the sign of the CATE estimator (Murphy, 2003; Robins, 2004), with more recent works incorporating the use of flexible CATE estimators (Luedtke and van der Laan, 2016). Thus, due to its wide applicability and scientific relevance, CATE estimation has been of great interest in statistics and data science.
Regardless of its quality as a proxy for the true CATE, it is generally accepted that predictions from a given treatment effect predictor can still be useful for decision-making. However, theoretical guarantees for rational decision-making using a given treatment effect predictor typically hinge on the predictor being a good approximation of the true CATE. Accurate CATE estimation can be challenging because the nuisance parameters involved can be non-smooth, high-dimensional, or otherwise difficult to model correctly. Additionally, a CATE estimator obtained from samples of one population, regardless of its quality, may not generalize well to different target populations (Frangakis, 2009). Usually, CATE estimators (often referred to as learners) build upon estimators of the conditional mean outcome given covariates and treatment level (i.e., outcome regression), the probability of treatment given covariates (i.e., propensity score), or both. For instance, plug-in estimators such as those studied in Kunzel et al. (2019) -- so-called T-learners -- are obtained by taking the difference between estimators of the outcome regression obtained separately for each treatment level. T-learners can suffer in performance because they rely on estimation of nuisance parameters that are at least as non-smooth or high-dimensional as the CATE, and are prone to the misspecification of involved outcome regression models; these issues can result in slow convergence or inconsistency of the CATE estimator. Doubly-robust CATE estimation strategies (Wager and Athey, 2018; Nie and Wager, 2021; Kennedy, 2020) mitigate some of these issues by allowing for comparatively fast CATE estimation rates even when nuisance parameters are estimated at slow rates. However, their predictive accuracy still relies on potentially strong smoothness conditions on the data-generating distribution that may not hold in practice. Even when the CATE is estimated consistently, predictions based on statistical learning methods often produce biased predictions that overestimate or underestimate the true CATE in the extremes of the predicted values (van Klaveren et al., 2019). For example, the 'pooled cohort equations' (Goff et al., 2014) risk model used to predict cardiovascular disease has been found to underestimate risk in patients with lower socioeconomic status or chronic inflammatory diseases (Lloyd-Jones
et al., 2019). The implications of biased treatment effect predictors are profound when used to guide treatment decisions and can range from harmful use to withholding of treatment (van Calster et al., 2019).
Due to the consequence of treatment decision-making, it is essential to guarantee, under minimal assumptions, that treatment effect predictions are representative in magnitude and sign of the actual effects. A desirable property of a treatment effect predictor is for the average treatment effect among individuals with identical predictions to be close to their shared prediction value. This property is commonly known as _calibration_ in prediction settings. The aims of calibration and prediction are fundamentally different. For instance, a constant treatment effect predictor can be well-calibrated even though it is a poor predictor of treatment effect heterogeneity (Gupta et al., 2020).
In the machine learning literature, calibration has been widely used to enhance prediction models for classification and regression (Bella et al., 2010). However, due to the comparatively little research on calibration of treatment effect predictors, such benefits have not been realized in the context of heterogeneous treatment effect prediction. Several works have contributed to addressing this gap in the literature. Zhang et al. (2016) and Josey et al. (2022) consider calibration of marginal treatment effect estimates for new populations but do not consider HTEs. Leng and Dimmery (2021) proposed a CATE calibration method based on parametric maximum likelihood estimation. However, their method requires that the calibration error is approximately linear in the predictions and the availability of randomized experimental data. Xu and Yadlowsky (2022) proposed a nonparametric doubly-robust estimator of the calibration error of a given treatment effect predictor. However, while their work enables the detection of uncalibrated predictors, it does not offer guidance on obtaining calibrated predictors. Our work builds upon their contributions by providing a nonparametric doubly-robust method for calibrating treatment effect predictors.
This paper is organized as follows. In Section 2, we introduce our notation and formally define calibration. There we also provide an overview of current calibration methods. In Section 3, we outline our proposed approach, and we describe its theoretical properties in Section 4. Specifically, we derive the convergence rate of its calibration measure and provide a bound on its predictive accuracy. In Section 5, we examine the performance of our method via a variety of simulation studies. We conclude with a discussion of our findings and provide practical guidance in Section 6.
## 2 Statistical setup
### Notation and definitions
Suppose we observe \(n\) independent and identically distributed realizations of data unit \(O:=(W,A,Y)\) drawn from a distribution \(P\), where \(W\in\mathcal{W}\subset\mathbb{R}^{d}\) is a vector of baseline covariates, \(A\in\{0,1\}\) is a binary indicator of treatment, and \(Y\in\mathcal{Y}\subset\mathbb{R}\) is an outcome. For instance, \(W\) can include a patient's demographic characteristics and medical history, \(A\) can indicate whether an individual is treated (1) or not (0), and \(Y\) could be a binary indicator of a successful clinical outcome. We denote by \(\mathcal{D}_{n}:=\{O_{1},O_{2},\ldots,O_{n}\}\) the observed dataset, with \(O_{i}:=(W_{i},A_{i},Y_{i})\) representing the observation on the \(i^{th}\) study unit.
For covariate value \(w\in\mathcal{W}\) and treatment level \(a\in\{0,1\}\), we denote by \(\pi_{0}(w):=P(A=1|W=w)\) the propensity score and by \(\mu_{0}(a,w):=E(Y\,|\,A=a,W=w)\) the outcome regression. The individual treatment effect is \(Y_{1}-Y_{0}\), where \(Y_{a}\) represents the potential outcome obtained by setting \(A=a\). Without loss of generality, we assume that higher values of \(Y_{1}-Y_{0}\) are desirable. We also assume that the contrast \(\tau_{0}(w):=\mu_{0}(1,w)-\mu_{0}(0,w)\) equals the true CATE, \(E(Y_{1}-Y_{0}\,|\,W=w)\), which holds under certain causal assumptions (Rubin, 1974). Throughout, we denote by \(\|\cdot\|\) the \(L^{2}(P)\) norm, that is, \(\|f\|^{2}=\int[f(w)]^{2}dP_{W}(w)\) for any given \(P_{W}\)-square integrable function \(f:\mathcal{W}\to\mathbb{R}\), where \(P_{W}\) is the marginal distribution of \(W\) implied by \(P\). We deliberately take as convention that the median \(\operatorname{median}\{x_{1},x_{2},\ldots,x_{k}\}\) of a set \(\{x_{1},x_{2},\ldots,x_{k}\}\) equals the \(\lfloor k/2\rfloor^{th}\) order statistic of this set, where \(\lfloor k/2\rfloor:=\max\{z\in\mathbb{N}:z\leq k/2\}\).
Let \(\tau:\mathcal{W}\to\mathbb{R}\) be a treatment effect predictor, that is, a function that maps a realization \(w\) of \(W\) to a treatment effect prediction \(\tau(w)\). In practice, \(\tau\) can be obtained using any black-box algorithm. Below, we first consider \(\tau\) to be fixed, though we later address situations in which \(\tau\) is learned from the data used for subsequent calibration. We denote by \(\gamma_{0}(\tau,w):=E[Y_{1}-Y_{0}|\tau(W)=\tau(w)]\) the conditional mean of the individual treatment effect given treatment effect score value \(\tau(w)\). By the tower property, \(\gamma_{0}(\tau,w)=E[\tau_{0}(W)\,|\,\tau(W)=\tau(w)]\), and so, expectations only involving \(\gamma_{0}(\tau,W)\) and other functions of \(W\) can be taken with respect to \(P_{W}\).
The solution to an isotonic regression problem is typically nonunique. Throughout this text, we follow Groeneboom and Lopuhaa (1993) in taking the unique cadlag piece-wise constant solution of the isotonic regression problem that can only take jumps at observed values of the predictor.
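For concreteness, here is a minimal sketch (our own, not from the paper) of how this cadlag piecewise-constant solution can be extracted in Python. It assumes scikit-learn \(\geq\) 0.24, whose `IsotonicRegression` exposes the fitted thresholds; this matters because the default `predict` interpolates linearly between thresholds rather than returning a step function:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_step_isotonic(x, y):
    """Fit isotonic regression and return the cadlag piecewise-constant
    solution that only jumps at observed values of the predictor."""
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(x, y)
    xt, yt = iso.X_thresholds_, iso.y_thresholds_

    def predict(x_new):
        # value at the largest observed threshold <= x_new (right-continuous)
        idx = np.clip(np.searchsorted(xt, x_new, side="right") - 1,
                      0, len(xt) - 1)
        return yt[idx]

    return predict
```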
### Measuring calibration and the calibration-distortion decomposition
Various definitions of risk predictor calibration have been proposed in the literature -- see Gupta and Ramdas (2021) and Gupta et al. (2020) for a review. Here, we outline our definition of calibration and its rationale. Given a treatment effect predictor \(\tau\), the best predictor of the individual treatment effect in terms of mean-squared error is \(w\mapsto\gamma_{0}(\tau,w):=E[Y_{1}-Y_{0}\,|\,\tau(W)=\tau(w)]\). By the law of total expectation, this predictor has the property that, for any interval \([a,b)\),
\[E\left\{\left[\tau_{0}(W)-\gamma_{0}(\tau,W)\right]I(\gamma_{0}(\tau,W)\in[a, b))\right\}=0. \tag{1}\]
Equation 1 indicates that \(\gamma_{0}(\tau,\cdot)\) is perfectly calibrated in the interval \([a,b)\). Therefore, when a given predictor \(\tau\) is such that \(\tau(W)=\gamma_{0}(\tau,W)\) with \(P\)-probability one, \(\tau\) is said to be perfectly calibrated (Gupta et al., 2020).
In general, perfect calibration cannot realistically be achieved in finite samples. A more modest goal is for the predictor \(\tau\) to be approximately calibrated in that \(\tau(w)\) is close to \(\gamma_{0}(\tau,w)\) across all covariate values \(w\in\mathcal{W}\). This naturally suggests the calibration measure:
\[\operatorname{CAL}(\tau):=\int\left[\gamma_{0}(\tau,w)-\tau(w)\right]^{2}dP_{W }(w). \tag{2}\]
This measure, referred to as the \(\ell_{2}\)-expected calibration error, arises both in prediction (Gupta et al., 2020) and in assessment of treatment effect heterogeneity (Xu and Yadlowsky, 2022). We note that \(\operatorname{CAL}(\tau)\) is zero if \(\tau\) is perfectly calibrated. Additionally, averaging in \(\operatorname{CAL}(\tau)\) with respect to measures other than \(P_{W}\) could be more relevant in certain applications; such cases
can occur, for instance, when there is a change of population that results in covariate shift and we are interested in measuring how well \(\tau\) is calibrated in the new population.
Interestingly, the above calibration measure plays a role in a decomposition of the mean squared error between the treatment predictor and the true CATE, in that
\[\text{MSE}(\tau)\ :=\ \|\tau_{0}-\tau\|^{2}=\text{CAL}(\tau)+\text{DIS}(\tau)\, \tag{3}\]
with \(\text{DIS}(\tau):=E\{var[\tau_{0}(W)\,|\,\tau(W)]\}\) a quantity we term the distortion of \(\tau\). We refer to the above as a _calibration-distortion_ decomposition of the mean-squared error. To interpret \(\text{DIS}(\tau)\), we find it helpful to envision a scenario in which a distorted message is passed between two persons. The goal is for Person 2 to discern the value of \(\tau_{0}(w)\), where the value of \(w\in\mathcal{W}\) is only known to Person 1. Person 1 transmits \(w\), which is then distorted through a function \(\tau\) and received by Person 2. Person 2 knows the functions \(\tau\) and \(\tau_{0}\), and may use this information to try to discern \(\tau_{0}(w)\). If \(\tau\) is one-to-one, \(\tau_{0}(w)\) can be discerned by simply applying \(\tau_{0}\circ\tau^{-1}\) to the received message \(\tau(w)\). More generally, whenever there exists a function \(f\) such that \(\tau_{0}=f\circ\tau\), Person 2 can recover the value of \(\tau_{0}(w)\). For example, if \(\tau=\tau_{0}\) then \(f\) is the identity function. If no such function \(f\) exists, it may not be possible for Person 2 to recover the value of \(\tau_{0}(w)\). Instead, they may predict \(\tau_{0}(w)\) based on \(\tau(w)\) via \(\gamma_{0}(\tau,w)\). Averaged over \(W\sim P_{W}\), the mean-squared error of this approach is precisely \(\text{DIS}(\tau)\). See Equation 3 in Kuleshov and Liang (2015) for a related decomposition of \(E\left[\{Y-\tau(X)\}^{2}\right]=\text{MSE}(\tau)+E\left[\{Y-\tau_{0}(X)\}^{2}\right]\) derived in the context of probability forecasting.
The calibration-distortion decomposition shows that, at a given level of distortion, better-calibrated treatment effect predictors have lower mean-squared error for the true CATE function. We will explore this fact later in this work when showing that, in addition to improving calibration, our proposed calibration procedure can improve the mean-squared error of many treatment effect predictors.
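As a quick numerical illustration of decomposition (3) (a toy example of our own, with hypothetical choices of \(\tau_{0}\) and \(\tau\)): take \(W\sim\mathrm{Unif}(-1,1)\), \(\tau_{0}(w)=w+w^{2}\), and \(\tau(w)=2w^{2}\). Then \(\gamma_{0}(\tau,w)=w^{2}\), so \(\mathrm{CAL}(\tau)=E[W^{4}]=1/5\), \(\mathrm{DIS}(\tau)=E[W^{2}]=1/3\), and \(\mathrm{MSE}(\tau)=8/15\):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=1_000_000)

tau0 = W + W**2      # true CATE (toy choice)
tau = 2 * W**2       # a miscalibrated predictor
gamma0 = W**2        # E[tau0 | tau(W)] = E[W | W^2] + W^2 = W^2

cal = np.mean((gamma0 - tau) ** 2)   # ~ E[W^4] = 1/5
dis = np.mean((tau0 - gamma0) ** 2)  # ~ E[W^2] = 1/3
mse = np.mean((tau0 - tau) ** 2)     # ~ 8/15

print(cal, dis, mse, cal + dis)      # cal + dis matches mse up to MC error
```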
### Calibrating predictors: desiderata and classical methods
In most calibration methods, the key goal is to find a function \(\theta:\mathbb{R}\rightarrow\mathbb{R}\) of a given predictor \(\tau\) such that \(\text{CAL}(\theta\circ\tau)<\text{CAL}(\tau)\), where \(\theta\circ\tau\) refers to the composed predictor \(w\mapsto\theta(\tau(w))\). A mapping \(\theta\) that pursues this objective is referred to as a _calibrator_. Ideally, a calibrator \(\theta_{n}\) for \(\tau\) constructed from the dataset \(\mathcal{D}_{n}\) should satisfy the following desiderata:
1. \(\text{CAL}(\theta_{n}\circ\tau)\) tends to zero quickly as \(n\) grows;
2. \(\theta_{n}\circ\tau\) and \(\tau\) are comparably predictive of \(\tau_{0}\).
Property 1 states the primary objective of a calibrator, that is, to yield a well-calibrated predictor. Property 2 requires that the calibrator not destroy the predictive power of the initial predictor in the pursuit of Property 1, which would occur if the calibration term in decomposition (3) were made small at the cost of dramatic inflation of the distortion term.
In classification and regression settings Huang et al. (2020), the most commonly used calibration methods include Platt's scaling (Platt et al., 1999), histogram binning Zadrozny and Elkan (2001), Bayesian binning into quantiles (Naeini et al., 2015), and isotonic calibration
(Niculescu-Mizil and Caruana, 2005). Broadly, Platt's scaling is designed for binary outcomes and uses the estimated values of the treatment effect predictor to fit the logistic regression model
\[\operatorname{logit}P(Y=1\,|\,\tau(W)=t)=\alpha+\beta t\]
with \(\alpha,\beta\in\mathbb{R}\). While it typically satisfies Property 2, Platt's scaling is based on strong parametric assumptions and, as a consequence, may lead to predictions with significant calibration error, even asymptotically (Gupta et al., 2020). Nevertheless, Platt's scaling may be preferred when there are limited data available. Histogram binning, also known as quantile binning, involves partitioning the sorted values of the predictor into a fixed number of bins. Given an initial prediction, the calibrated prediction is given by the empirical mean of the observed outcome values within the corresponding prediction bin. A major limitation of histogram binning is that it requires a priori specification of the number of bins. Selecting too few bins can significantly degrade the predictive power of the calibrated predictor, whereas selecting too many bins can lead to poor calibration. Bayesian binning improves upon histogram binning by considering multiple binning models and their combinations; nevertheless, it still suffers from the need to pre-specify binning models and prior distributions. Finally, isotonic calibration is a histogram binning method that learns the bins from data using isotonic regression (Barlow and Brunk, 1972). Specifically, the bins are selected by minimizing an empirical mean-squared error criterion under the constraint that the calibrated predictor is a nondecreasing transformation of the uncalibrated predictor. This monotonicity constraint is natural as it ensures that the calibrator preserves the (non-strict) ranking of the uncalibrated predictions. Despite its popularity and strong performance in practice (Zadrozny and Elkan, 2002; Niculescu-Mizil and Caruana, 2005; Gupta and Ramdas, 2021), to date whether isotonic calibration satisfies distribution-free calibration guarantees remains an open question (Gupta, 2022).
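To make the first of these methods concrete, a minimal Platt-scaling sketch (ours; it assumes binary outcomes and scikit-learn's logistic regression) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_calibrator(scores, y):
    """Fit logit P(Y = 1 | score) = alpha + beta * score on held-out data
    and return the calibrated probability map; y must be binary 0/1."""
    lr = LogisticRegression().fit(np.asarray(scores).reshape(-1, 1), y)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]
```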
In this work, we will show that isotonic calibration satisfies a distribution-free calibration guarantee in the sense of Property 1. We further establish that Property 2 holds, in the sense that the isotonic selection criterion ensures that the calibrated predictor is at least as predictive as the original predictor up to negligible error.
## 3 Causal isotonic calibration
Inspired by isotonic calibration, we propose a doubly-robust calibration method for treatment effects, which we refer to as _causal isotonic calibration_. Causal isotonic calibration takes a given predictor trained on some dataset and performs calibration using an independent (or hold-out) dataset. Mechanistically, causal isotonic calibration first automatically learns uncalibrated regions of the given predictor. Calibrated predictions are then obtained by consolidating individual predictions within each region into a single value using a doubly-robust estimator of the ATE. In addition, we introduce a novel data-efficient variant of calibration which we refer to as cross-calibration. In contrast with the standard calibration approach, _causal isotonic cross-calibration_ takes cross-fitted predictors and outputs a single calibrated predictor obtained using all available data. Our methods can be implemented using standard isotonic regression software.
Let \(\tau\) be a given treatment effect predictor that, for now, is assumed to have been built using an external dataset, and suppose that \(\mathcal{D}_{n}\) is the available calibration dataset. For any given
realization \(o:=(w,a,y)\) of the data unit, we define
\[\chi_{0}(o):=\tau_{0}(w)+\frac{a-\pi_{0}(w)}{\pi_{0}(w)[1-\pi_{0}(w)]}\left[y-\mu _{0}(a,w)\right]\]
and refer to \(\chi_{0}(O)\) as a pseudo-outcome. The pseudo-outcome serves as a surrogate for the CATE and has been used in previous methods for estimating \(\tau_{0}\)(Luedtke and van der Laan, 2016; Kennedy, 2020). If the function \(\chi_{0}\) were known, an external predictor \(\tau\) could be calibrated using \(\mathcal{D}_{n}\) by performing the isotonic regression of the pseudo-outcomes \(\chi_{0}(O_{1}),\chi_{0}(O_{2}),\ldots,\chi_{0}(O_{n})\) onto the calibration sample predictions \(\tau(W_{1}),\tau(W_{2}),\ldots,\tau(W_{n})\). However, \(\chi_{0}\) depends on \(\pi_{0}\) and \(\mu_{0}\), which are usually unknown and must be estimated.
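In code, the pseudo-outcome is a one-liner once plug-in nuisance estimates are available. The sketch below (ours) takes `pi` and `mu` as vectorized callables standing in for \(\pi_{0}\) and \(\mu_{0}\) (or their estimates):

```python
def pseudo_outcome(w, a, y, pi, mu):
    """AIPW-style pseudo-outcome chi(o) = tau(w) +
    (a - pi(w)) / (pi(w) * (1 - pi(w))) * (y - mu(a, w))."""
    p = pi(w)
    tau_hat = mu(1, w) - mu(0, w)
    return tau_hat + (a - p) / (p * (1 - p)) * (y - mu(a, w))
```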
A natural approach for causal isotonic calibration is as follows. First, define \(\chi_{n}\) as the estimated pseudo-outcome function based on estimates \(\mu_{n}\) and \(\pi_{n}\) derived from \(\mathcal{D}_{n}\). Then, a calibrated predictor is given by \(\theta_{n}\circ\tau\), where the calibrator \(\theta_{n}\) is found via isotonic regression as a minimizer over \(\mathcal{F}_{iso}:=\{\theta:\mathbb{R}\rightarrow\mathbb{R};\ \theta\text{ is monotone nondecreasing}\}\) of the empirical least-squares risk function
\[\theta\mapsto\frac{1}{n}\sum_{i=1}^{n}\left[\chi_{n}(O_{i})-\theta\circ\tau(W _{i})\right]^{2}\.\]
However, this optimization problem requires a double use of \(\mathcal{D}_{n}\): once, for creating the pseudo-outcomes \(\chi_{n}(O_{i})\), and a second time, in the calibration step. This double usage could lead to over-fitting (Kennedy, 2020), and so we recommend obtaining pseudo-outcomes via sample splitting or cross-fitting. Sample splitting involves randomly partitioning \(\mathcal{D}_{n}\) into \(\mathcal{E}_{m}\cup\mathcal{C}_{\ell}\), with \(\mathcal{E}_{m}\) used to estimate \(\mu_{0}\) and \(\pi_{0}\), and \(\mathcal{C}_{\ell}\) used to carry out the calibration step -- see Algorithm 1 for details. Cross-fitting improves upon sample splitting by using all available data to estimate \(\mu_{0}\) and \(\pi_{0}\) as well as to carry out the calibration step. Algorithm 4, outlined in Appendix B, is the cross-fitted variant of Algorithm 1.
```
Input: predictor \(\tau\), training data \(\mathcal{E}_{m}\), calibration data \(\mathcal{C}_{\ell}\)
1: obtain estimate \(\chi_{m}\) of \(\chi_{0}\) using \(\mathcal{E}_{m}\);
2: perform isotonic regression to find \[\theta_{n}^{*}=\operatorname*{argmin}_{\theta\in\mathcal{F}_{iso}}\sum_{i\in \mathcal{I}_{\ell}}[\chi_{m}(O_{i})-\theta\circ\tau(W_{i})]^{2}\] with \(\mathcal{I}_{\ell}\) the set of indices for observations in \(\mathcal{C}_{\ell}\subset\mathcal{D}_{n}\);
3: set \(\tau_{n}^{*}:=\theta_{n}^{*}\circ\tau\).
Output: \(\tau_{n}^{*}\)
```
**Algorithm 1** Causal isotonic calibration
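A compact Python rendering of Algorithm 1 (our sketch; `tau` and `chi_m` are assumed to be vectorized callables, and scikit-learn's `IsotonicRegression` is used in place of the piecewise-constant convention of Section 2.1, which the step helper sketched there recovers) is:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def causal_isotonic_calibrate(tau, W_cal, A_cal, Y_cal, chi_m):
    """Algorithm 1 sketch: calibrate a fixed predictor tau on C_ell using
    the pseudo-outcome function chi_m fitted on the separate split E_m."""
    chi = chi_m(W_cal, A_cal, Y_cal)            # estimated pseudo-outcomes
    theta = IsotonicRegression(increasing=True,
                               out_of_bounds="clip").fit(tau(W_cal), chi)
    return lambda w: theta.predict(tau(w))      # tau_n* = theta_n* o tau
```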
In practice, the external dataset used to construct \(\tau\) for input into Algorithm 1 is likely to arise from a sample splitting approach wherein a large dataset is split in two, with one half used to estimate \(\tau\) and the other to calibrate it. This naturally leads to the question of whether there is an approach that fully utilizes the entire dataset for both fitting an initial estimate of \(\tau_{0}\) and calibration. Algorithm 2 describes causal isotonic cross-calibration, which provides a means
to accomplish precisely this. In brief, this approach applies Algorithm 1 a total of \(k\) times on different splits of the data, where for each split an initial predictor of \(\tau_{0}\) is fitted based on the first subset of the data and this predictor is calibrated using the second subset. These \(k\) calibrated predictors are then aggregated via a pointwise median. Interestingly, other aggregation strategies, such as pointwise averaging, can lead to uncalibrated predictions (Gneiting and Ranjan, 2013; Rahaman and Thiery, 2020). A computationally simpler variant of Algorithm 2 is given by Algorithm 3. In this implementation, a single isotonic regression is performed using the pooled out-of-fold predictions; this variant may also yield more stable performance in finite-samples than Algorithm 2 -- see Section 2.1.2 of Xu and Yadlowsky (2022) for a related discussion in the context of debiased machine learning.
```
Input: dataset \(\mathcal{D}_{n}\), number of cross-fitting splits \(k\)
1: partition \(\mathcal{D}_{n}\) into datasets \(\mathcal{C}^{(1)},\mathcal{C}^{(2)},\ldots,\mathcal{C}^{(k)}\);
2: for \(s=1,2,\ldots,k\) do
3:   let \(j(i)=s\) for each \(i\in\mathcal{C}^{(s)}\);
4:   set \(\mathcal{E}^{(s)}:=\mathcal{D}_{n}\backslash\mathcal{C}^{(s)}\);
5:   get estimate \(\chi_{n,s}\) of \(\chi_{0}\) from \(\mathcal{E}^{(s)}\);
6:   get initial predictor \(\tau_{n,s}\) of \(\tau_{0}\) from \(\mathcal{E}^{(s)}\);
7: end for
8: perform isotonic regression using pooled out-of-fold predictions to find \[\theta_{n}^{*}=\operatorname*{argmin}_{\theta\in\mathcal{F}_{iso}}\sum_{i=1}^{n}\left[\chi_{n,j(i)}(O_{i})-(\theta\circ\tau_{n,j(i)})(W_{i})\right]^{2};\]
9: set \(\tau_{n,s}^{*}:=\theta_{n}^{*}\circ\tau_{n,s}\) for \(s=1,2,\ldots,k\);
10: set \(\tau_{n}^{*}:w\mapsto\operatorname{median}\{\tau_{n,1}^{*}(w),\tau_{n,2}^{*}(w),\ldots,\tau_{n,k}^{*}(w)\}\).
Output: \(\tau_{n}^{*}\)
```
**Algorithm 2** Causal isotonic cross-calibration (unpooled)
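The following sketch (ours; `fit_chi` and `fit_tau` are hypothetical user-supplied trainers that return vectorized callables) combines the pooled selection step shown in the listing with the pointwise-median aggregation, using the median convention stated in Section 2.1:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import KFold

def paper_median(values):
    # median convention of Section 2.1: the floor(k/2)-th order statistic
    v = np.sort(values, axis=0)
    return v[max(len(v) // 2 - 1, 0)]

def cross_calibrate(W, A, Y, fit_chi, fit_tau, k=5, seed=0):
    n = len(Y)
    x, y = np.empty(n), np.empty(n)  # out-of-fold predictions, pseudo-outcomes
    taus = []
    for train, test in KFold(k, shuffle=True, random_state=seed).split(W):
        chi_s = fit_chi(W[train], A[train], Y[train])
        tau_s = fit_tau(W[train], A[train], Y[train])
        x[test] = tau_s(W[test])
        y[test] = chi_s(W[test], A[test], Y[test])
        taus.append(tau_s)
    # single pooled isotonic regression yields the common calibrator theta_n*
    theta = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(x, y)
    # aggregate the k calibrated predictors pointwise
    return lambda w: paper_median(np.stack([theta.predict(t(w)) for t in taus]))
```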
## 4 Large-sample theoretical properties
We now present theoretical results for causal isotonic calibration. We first obtain results for causal isotonic calibration described by Algorithm 1 applied to a fixed predictor \(\tau\). We also establish mean squared error guarantees for the calibrated predictor and argue that the proposed calibrator satisfies Properties 1 and 2. We then use these results to obtain analogous calibration guarantees for causal isotonic cross-calibration described by Algorithm 2.
For ease of presentation, we only establish theoretical results for the case where the nuisance estimators are obtained using sample splitting. With minor modifications, our results can be readily extended to cross-fitting by arguing along the lines of Newey and Robins (2018). In that spirit, we assume that the available data \(\mathcal{D}_{n}\) is the union of a training dataset \(\mathcal{E}_{m}\) and a calibration dataset \(\mathcal{C}_{\ell}\) of sizes \(m\) and \(\ell\), respectively, with \(n=m+\ell\) and \(\min\{m,\ell\}\to\infty\) as \(n\to\infty\). Let \(\tau_{n}^{*}\) be the calibrated predictor obtained from Algorithm 1 using \(\tau\), \(\mathcal{E}_{m}\) and \(\mathcal{C}_{\ell}\). We assume the evaluation at \(o\) of the estimated pseudo-outcome \(\chi_{m}\) used to calibrate \(\tau\) is given by
\[\chi_{m}(o):=\tau_{m}(w)+\frac{a-\pi_{m}(w)}{\pi_{m}(w)[1-\pi_{m}(w)]}\left[y- \mu_{m}(a,w)\right],\]
where \(\tau_{m}(w):=\mu_{m}(1,w)-\mu_{m}(0,w)\), and \(\mu_{m}\) and \(\pi_{m}\) are estimates of the outcome regression and propensity score obtained using \(\mathcal{E}_{m}\). Below, we make use of the following conditions.
**Condition 1** (bounded outcome support).: _The \(P\)-support \(\mathcal{Y}\) of \(Y\) is a uniformly bounded subset of \(\mathbb{R}\)._
**Condition 2** (positivity).: _There exists \(\epsilon>0\) such that \(P[\epsilon<\pi_{0}(W)<1-\epsilon]=1\)._
**Condition 3** (independence).: _Estimators \(\pi_{m}\) and \(\mu_{m}\) do not use any data in \(\mathcal{C}_{\ell}\)._
**Condition 4** (bounded range of \(\pi_{m}\), \(\mu_{m}\), \(\tau\)).: _There exist \(0<\eta,\alpha<\infty\) such that \(P[\eta<\pi_{m}(W)<1-\eta]=P[|\mu_{m}(A,W)|<\alpha]=P[|\tau(W)|<\alpha]=1\) for \(m=1,2,\ldots\)_
**Condition 5** (bounded variation of best predictor).: _The function \(\theta_{0}:\mathbb{R}\mapsto\mathbb{R}\) such that \(\theta_{0}\circ\tau=\gamma_{0}(\tau,\cdot)\) is of bounded total variation._
It is worth noting that the initial predictor and its best monotone transformation can be arbitrarily poor CATE predictors. Condition 1 holds trivially when outcomes are binary, but even continuous outcomes are often known to satisfy fixed bounds (e.g., physiologic bound, limit of detection of instrument) in applications. Condition 2 is standard in causal inference and requires that all individuals have a positive probability of being assigned to either treatment or control. Condition 3 follows as a direct consequence of the sample splitting approach, because the estimators are obtained from a sample independent of the data used to carry out the calibration step. Condition 4 requires that the estimators of the outcome regression and propensity score be bounded; this can be enforced, for example, by thresholding when estimating these regression functions. Condition 5 excludes cases in which the best possible predictor of the CATE given only the initial predictor \(\tau\) has pathological behavior, in the sense that it has infinite variation norm as a (univariate) mapping of \(\tau\). We stress here that isotonic regression is used only as a tool for calibration, and our theoretical guarantees do not require any monotonicity on components
of the data-generating mechanism -- for example, \(\gamma_{0}(\tau,w)\) need not be monotone as a function of \(\tau(w)\).
The following theorem establishes the calibration rate of the calibrated predictor \(\tau_{n}^{*}\) obtained using causal isotonic calibration.
**Theorem 1** (\(\tau_{n}^{*}\) is well-calibrated).: _Under Conditions 1-5, as \(n\to\infty\), it holds that_
\[\operatorname{CAL}(\tau_{n}^{*})=O_{P}\left(\ell^{-2/3}+\|(\pi_{m}-\pi_{0})( \mu_{m}-\mu_{0})\|^{2}\right).\]
The calibration rate can be expressed as the sum of an oracle calibration rate and the rate of a quadratic bias term involving nuisance estimators. Notably, the causal isotonic calibrator can satisfy Property 1 at the oracle rate \(\ell^{-2/3}\) as long as \(m:=m(\ell)\) grows with \(\ell\) in such a way that \(E\left\|(\pi_{m(\ell)}-\pi_{0})(\mu_{m(\ell)}-\mu_{0})\right\|\) shrinks no slower than \(\ell^{-1/3}\), which requires that one or both of \(\pi_{0}\) and \(\mu_{0}\) is estimated well in an appropriate sense. If \(\pi_{0}\) is known, as in most randomized experiments, the calibration rate \(\ell^{-2/3}\) can be achieved even when \(\mu_{m}\) is inconsistent, thereby providing distribution-free calibration guarantees.
The following theorem states that the predictor obtained by taking pointwise medians of calibrated predictors is also calibrated.
**Theorem 2** (Pointwise median preserves calibration).: _Let \(\tau_{n,1}^{*},\tau_{n,2}^{*},\ldots,\tau_{n,k}^{*}\) be predictors, and define pointwise \(\tau_{n}^{*}(w):=\operatorname{median}\{\tau_{n,1}^{*}(w),\tau_{n,2}^{*}(w), \ldots,\tau_{n,k}^{*}(w)\}\). Then,_
\[\operatorname{CAL}(\tau_{n}^{*})\leq k\sum_{s=1}^{k}\operatorname{CAL}(\tau_ {n,s}^{*})\,\]
_where the median operation is defined as in Section 2.1._
Under similar conditions, Theorem 2 combined with a generalization of Theorem 1 that handles random \(\tau\) (see Theorem 7 in Appendix C.4) establishes that a predictor \(\tau_{n}^{*}\) obtained using causal isotonic cross-calibration (Algorithm 2) has calibration error \(\operatorname{CAL}(\tau_{n}^{*})\) of order
\[O_{P}\left(n^{-2/3}+\max_{1\leq s\leq k}\left\|(\pi_{n,s}-\pi_{0})(\mu_{n,s}- \mu_{0})\right\|^{2}\right)\]
as \(n\to\infty\), where \(\mu_{n,s}\) and \(\pi_{n,s}\) are the outcome regression and propensity score estimators obtained after excluding the \(s^{th}\) fold of the full dataset. In fact, Theorem 2 is valid for any calibrator of the form \(\tau_{n}^{*}:w\mapsto\tau_{n,s_{n}(w)}^{*}(w)\), where \(s_{n}(w)\) is any random selector that may depend on the covariate value \(w\). This suggests that the calibration rate for the median-aggregated calibrator implied by Theorem 2 is conservative as it also holds for the worst-case oracle selector that maximizes calibration error.
We now establish that causal isotonic calibration satisfies Property 2, that is, it maintains the predictive accuracy of the initial predictor \(\tau\). In what follows, predictive accuracy is quantified in terms of mean squared error. At first glance, the calibration-distortion decomposition appears to raise concerns that causal isotonic calibration may distort \(\tau\) so much that the predictive accuracy of \(\tau_{n}^{*}\) may be worse than that of \(\tau\). This possibility seems especially concerning given that the output of isotonic regression is a step function, so that there could be many \(w,w^{\prime}\in\mathcal{W}\) such that
\(\tau(w)\neq\tau(w^{\prime})\) but \(\tau_{n}^{*}(w)=\tau_{n}^{*}(w^{\prime})\). The following theorem alleviates this concern by establishing that the mean squared error is, up to a remainder term that decays with sample size, no larger than the mean squared error of the initial treatment effect predictor \(\tau\). A consequence of this theorem is that causal isotonic calibration does not distort \(\tau\) so much as to destroy its predictive performance.
In the theorem below, we define the best isotonic approximation of the CATE given the initial predictor \(\tau\) as
\[\tau_{0}^{*}:=\operatorname*{argmin}_{\theta\circ\tau:\theta\in\mathcal{F}_{iso }}\left\|\tau_{0}-\theta\circ\tau\right\|.\]
**Theorem 3** (Causal isotonic calibration does not inflate mean squared error much).: _Under Conditions 1-5,_
\[\|\tau_{n}^{*}-\tau_{0}^{*}\|=O_{P}\left(\ell^{-1/3}+\|(\pi_{m}-\pi_{0})(\mu_ {m}-\mu_{0})\|\right)\]
_as \(n\to\infty\). As such, as \(n\to\infty\), the inflation in root mean squared error from causal isotonic calibration satisfies_
\[\sqrt{\operatorname{MSE}(\tau_{n}^{*})}-\sqrt{\operatorname{MSE}(\tau)}\leq O _{P}\left(\ell^{-1/3}+\|(\pi_{m}-\pi_{0})(\mu_{m}-\mu_{0})\|\right).\]
A similar mean squared error bound can be established for causal isotonic cross-calibration as defined in Algorithm 2.
## 5 Simulation studies
### Data-generating mechanisms
We examined the behavior of our proposal under two data-generating mechanisms. The first data-generating mechanism (Scenario 1) includes a binary outcome whose conditional mean is an additive function (on the logit scale) of non-linear transformations of four confounders with treatment interactions. The second mechanism (Scenario 2) includes instead a continuous outcome with conditional mean linear in the covariates and treatment interactions, and with more than 100 covariates of which only 20 are true confounders. In both scenarios, the propensity score follows a logistic regression model. All covariates were independent and uniformly distributed on \((-1,+1)\). The sample sizes considered were \(1,000\), \(2,000\), \(5,000\) and \(10,000\). Additional details on these simulation scenarios are provided in Appendix D.1.
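A data-generating sketch loosely in the spirit of Scenario 2 (ours; the coefficient choices are made up and do not reproduce the paper's exact design in Appendix D.1) is:

```python
import numpy as np

def scenario2_like(n, d=100, d_conf=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n, d))
    beta = np.zeros(d); beta[:d_conf] = rng.normal(size=d_conf)    # confounders
    gamma = np.zeros(d); gamma[:d_conf] = rng.normal(size=d_conf)  # interactions
    pi = 1.0 / (1.0 + np.exp(-W @ beta))         # logistic propensity score
    A = rng.binomial(1, pi)
    tau = W @ gamma                              # linear CATE
    Y = W @ beta + A * tau + rng.normal(size=n)  # linear outcome model
    return W, A, Y, tau
```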
### CATE estimation
In Scenario 1, to estimate the CATE, we implemented gradient-boosted regression trees (GBRT) with maximum depths equal to 2, 5, and 8 (Chen and Guestrin, 2016), random forests (RF) (Breiman, 2001), generalized linear models with lasso regularization (GLMnet) (Friedman et al., 2010), generalized additive models (GAM) (Wood, 2017), and multivariate adaptive regression splines (MARS) (Friedman, 1991). In Scenario 2, we implemented RF, GLMnet, and a combination of variable screening with lasso regularization followed by GBRT with maximum depth determined via cross-validation. We used the implementation of these estimators found in R package sl3(Coyle et al., 2021).
For calibration, we used the variant of causal isotonic cross-calibration outlined as Algorithm 3. Additional information on the implementation of the causal isotonic calibrators is provided in Appendix D.2.
### Performance metrics
We compared the performance of the causal isotonic calibrator to its uncalibrated version in terms of three metrics: the calibration measure defined in (2), mean squared error, and the calibration bias within bins defined by the first and last prediction deciles. The calibration bias within bins is given by the measure in (2) standardized by the probability of falling within each bin. For each simulation iteration, the metric was estimated empirically using an independent sample \(\mathcal{V}\) of size \(n_{\mathcal{V}}=10^{4}\). These metric estimates were then averaged across 500 simulations. Additional details on these metrics are provided in Appendix D.3.
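One simple plug-in scheme for estimating the calibration measure on a validation sample where the true CATE is known, as in simulations (our sketch; the paper's exact estimators are described in its Appendix D.3), is:

```python
import numpy as np

def empirical_cal(tau_pred, tau_true, bins=20):
    """Binned plug-in estimate of CAL(tau): approximate gamma_0(tau, .)
    by the within-bin mean of the true CATE over quantile bins."""
    order = np.argsort(tau_pred)
    total = 0.0
    for chunk in np.array_split(order, bins):
        gamma_hat = tau_true[chunk].mean()
        total += np.sum((gamma_hat - tau_pred[chunk]) ** 2)
    return total / len(tau_pred)
```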
### Simulation results
Results from Scenario 1 are summarized in Figure 1(a). The predictors based on GLMnet, RF, GAM, and MARS happened to be well-calibrated, and so, causal isotonic calibration did not lead to noticeable improvements in calibration error. In contrast, causal isotonic calibration of GBRT substantially decreased its calibration error, regardless of tree depth and sample size. In terms of mean squared error, calibration improved the predictive performance of GBRT and GAM, and preserved the performance of GLMnet and MARS. The calibration bias within bins of prediction was generally smaller after calibration, with a more notable improvement on GBRT -- see Table 2 in Appendix E.
Results from Scenario 2 are summarized in Figure 1(b). The predictors based on RF and GBRT with GLMnet screening were poorly calibrated, and causal isotonic calibration substantially reduced their calibration error. Calibration did not noticeably change the already small calibration error of the GLMnet predictions; however, calibration substantially reduced the calibration error within quantile bins of its predictions -- see Table 3 in Appendix E. Finally, with respect to mean squared error, causal isotonic calibration improved the performance of RF and GBRT with variable screening, and yielded similar performance to GLMnet.
In Figure 2 of Appendix E, we compared the performance of calibration using hold-out sets to cross-calibration. We found substantial improvements in mean squared error and calibration by using cross-calibration over conventional calibration.
## 6 Conclusion
In this work, we proposed causal isotonic calibration as a novel method to calibrate treatment effect predictors. In addition, we established that the pointwise median of calibrated predictors is also calibrated. This allowed us to develop a data-efficient variant of causal isotonic calibration using cross-fitted predictors, thereby avoiding the need for a hold-out calibration dataset. Our proposed methods guarantee that, under minimal assumptions, the calibration error defined in (2) vanishes at a fast rate of \(\ell^{-2/3}\) with little or no loss in predictive power, where \(\ell\) denotes the number of observations used for calibration. Importantly, this property holds regardless of how well the initial predictor \(\tau\) approximates the true CATE function. To our knowledge, our method is the first in the literature to directly calibrate HTE predictors without requiring trial data or parametric assumptions. Potential applications of our method include data-driven decision-making with strong robustness guarantees.
Figure 1: Monte Carlo estimates of calibration error and mean squared error for uncalibrated and causal isotonic cross-calibrated predictors for Scenarios 1 and 2.

Our method has some limitations. The calibration guarantees for our method require that either \(\mu_{0}\) or \(\pi_{0}\) be estimated sufficiently well. Flexible data-adaptive learning methods can be used to satisfy this condition. If the treatment assignment mechanism \(\pi_{0}\) is known, this condition can be trivially satisfied. Hence, our method can be readily applied to calibrate treatment effect predictors and better understand treatment effect heterogeneity in clinical trials. For proper calibration, our results require that all confounders be measured and adjusted for. In future work, it will be important to consider treatment effect calibration in settings with unmeasured confounding, perhaps using instrumental variables (Okui et al., 2012) or sensitivity analysis (Tchetgen Tchetgen, 2014).
In simulations, we found that causal isotonic cross-calibration led to well-calibrated predictors across all settings. The benefits of calibration were especially prominent in high-dimensional settings and for tree-based methods such as gradient-boosted regression trees. This is of particularly high relevance given that regression trees have become popular for CATE estimation, due to both their flexibility (Athey and Imbens, 2016) and interpretability (Lee et al., 2020). Additionally, we found that calibration generally preserves the predictive power of the original predictor; in some cases, it even improves predictive accuracy. We also found that cross-calibration substantially improved the mean squared error of the resulting predictor relative to approaches based on hold-out calibration sets.
Though our focus was on treatment effect estimation, our theoretical arguments can be readily adapted to provide guarantees for isotonic calibration in regression and classification problems. Hence, we have provided an affirmative answer to the open question of whether it is possible to establish distribution-free calibration guarantees for isotonic calibration (Gupta, 2022).
**Acknowledgements.** Research reported in this publication was supported by NIH grants DP2-LM013340 and R01-HL137808. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
|
2303.03644 | Quantifying separability in limit groups via representations | We show that for any finitely generated subgroup $H$ of a limit group $L$
there exists a finite-index subgroup $K$ containing $H$, such that $K$ is a
subgroup of a group obtained from $H$ by a series of extensions of centralizers
and free products with $\mathbb Z$. If $H$ is non-abelian, the $K$ is fully
residually $H$. We also show that for any finitely generated subgroup of a
limit group, there is a finite-dimensional representation of the limit group
which separates the subgroup in the induced Zariski topology. As a corollary,
we establish a polynomial upper bound on the size of the quotients used to
separate a finitely generated subgroup in a limit group. This generalizes the
results of Louder, McReynolds and Patel. Another corollary is that a hyperbolic
limit group satisfies the Geometric Hanna Neumann conjecture. | Keino Brown, Olga Kharlampovich | 2023-03-07T04:21:51Z | http://arxiv.org/abs/2303.03644v2 | # Quantifying separability in limit groups via representations
###### Abstract
We show that for any finitely generated subgroup \(H\) of a limit group \(L\) there exists a finite-index subgroup \(K\) containing \(H\), such that \(K\) is a subgroup of a group obtained from \(H\) by a series of extensions of centralizers and free products with \(\mathbb{Z}\). If \(H\) is non-abelian, then \(K\) is fully residually \(H\). We also show that for any finitely generated subgroup of a limit group, there is a finite-dimensional representation of the limit group which separates the subgroup in the induced Zariski topology. As a corollary, we establish a polynomial upper bound on the size of the quotients used to separate a finitely generated subgroup in a limit group. This generalizes the results in [10]. Another corollary is that a hyperbolic limit group satisfies the Geometric Hanna Neumann conjecture.
## 1 Introduction
A group is said to _retract_ onto a subgroup if the inclusion map of the subgroup into the group admits a left-inverse. In which case, the left-inverse is called a _retraction_ and the subgroup a _retract_. In [22], Wilton proves that if \(H\) is a finitely generated subgroup of a limit group \(L\) and \(g\in L-H\), then \(H\) is a retract of some finite-index subgroup \(K\leq L\) which contains \(H\) but not \(g\). We will refer to the smallest set of groups containing all finitely generated free groups that is closed under extensions of centralizers as ICE. By [6], limit groups are precisely the finitely generated subgroups of groups from ICE. We will modify the construction, from [22], of a finite-index subgroup \(K\leq G\), where \(G\) is an ICE group, in such a way that not only is there a retraction \(K\to H\), but, for a non-abelian \(H\), a discriminating family of retractions (for each finite set \(S\) of non-trivial elements in \(K\), there is a retraction from \(K\) onto \(H\) that is injective on \(S\)). In other words, \(K\) is fully residually \(H\). This finite-index subgroup \(K\) will be a group obtained from \(H\) by a finite chain of groups \(H=K_{0}<\ldots<K_{n}=K\), where \(K_{i+1}\) is either \(K_{i}*F\), where \(F\) is some free group, or \(K_{i+1}\) is an extension of a centralizer in \(K_{i}\). We will call a group obtained by such a chain an \(H-\)GICE group. (If \(H\) is free, then the classes of \(H\)-GICE groups and of ICE groups containing \(H\) coincide.) It is well known that an extension of a centralizer of a
limit group \(G\) is fully residually \(G\). This was first proved in [11] (one can find a detailed proof, for example, in [16, Lemma 3.7].) It is also known that a free product of a non-abelian limit group \(G\) and a free group is fully residually \(G.\) Therefore, each group in the chain used to construct \(K\) is fully residually \(H\).
**Theorem 1**.: _Let \(G\) be an ICE-group, \(H\) be a finitely generated subgroup, and \(g\in L-H\). Then, there exists a finite-index subgroup \(K\) of \(G\) such that \(H\leq K\), \(K\) is an \(H\)-GICE group, and \(g\not\in K\)._
**Corollary 2**.: _Let \(L\) be a limit group, \(H\) be a finitely generated subgroup, and \(g\in L-H\). Then, there exists a finite-index subgroup \(K\) of \(L\) such that \(H\leq K\), \(K\) is a subgroup of an \(H\)-GICE group, and \(g\not\in K\)._
This theorem implies the following result.
**Theorem 3**.: _Let \(L\) be a limit group, \(H\) be a finitely generated non-abelian subgroup, and \(g\in L-H\). Then, there exists a finite-index subgroup \(K\) of \(L\) such that \(H\leq K\), \(K\) is fully residually \(H,\) and \(g\not\in K\)._
Theorem 3 is also true when \(L\) is abelian and, therefore, \(H\) is abelian (see the Remarks at the end of the proof). In the case when \(H\) is abelian and \(L\) is non-abelian, a finite-index subgroup of \(L\) cannot be fully residually \(H\).
**Theorem 4**.: _Let \(L\) be a limit group. If \(H\) is a finitely generated non-abelian subgroup of \(L,\) then there is a faithful representation \(\rho_{H}:L\to GL(V)\) such that \(\overline{\rho_{H}(H)}\cap\rho_{H}(L)=\rho_{H}(H)\), where \(\overline{\rho_{H}(H)}\) is the Zariski closure of \(\rho_{H}(H).\)_
Likewise, this theorem is true when \(L\) is abelian.
**Corollary 5**.: _Let \(L\) be a limit group and \(S\) be a finite generating set for \(L.\) If \(H\leq L\) is a finitely generated subgroup, then there exists a constant \(N>0\) such that for each \(g\in L-H,\) there exist a finite group \(Q\) and a homomorphism \(\varphi:L\longrightarrow Q\) such that \(\varphi\left(g\right)\notin\varphi\left(H\right)\) and \(\left|Q\right|\leq\left|\left|g\right|\right|_{S}^{N}.\) If \(K=H\ker\varphi,\) then \(K\) is a finite-index subgroup of \(L\) whose index is at most \(\left|Q\right|\leq\left|\left|g\right|\right|_{S}^{N}\) with \(H\leq K\) and \(g\notin K.\) Moreover, the index of the normal core of the subgroup \(K\) is bounded above by \(\left|Q\right|\)._
To use Theorem 4 in the proof of this corollary in the case when \(L\) is non-abelian and \(H\) is abelian, we can replace \(H\) by a non-abelian subgroup \(H_{1}=H\ast\left\langle x\right\rangle\) for a suitable element \(x.\)
Our Theorem 4 and Corollary 5 generalize results for free and surface groups from [10]. We use [10] to deduce Corollary 5 from Theorem 4. Corollary 5 establishes polynomial bounds on the size of the normal core of the finite index subgroup used in separating \(g\) from \(H\). The constant \(N\) explicitly depends on the subgroup \(H\) and the dimension of \(V\) in Theorem 4. For a general finite index subgroup, the upper bound for the index of the normal core is factorial in the index of the subgroup. It is for this reason that we include the statement about the normal core of \(K\) at the end of the corollary.
Recently, several effective separability results have been established; see [2]-[10], [12]-[14], [17]-[21]. Most relevant here are papers [10], [9]. The methods used in [9] give linear bounds in terms of the word length of \(|g|\) on the index of the subgroup used in the separation but do not produce polynomial bounds for the normal core of that finite index subgroup. We can also obtain bounds on the index of the separating subgroup on the order of magnitude \(C|g|,\) where \(C\) is a constant depending on \(L\) and \(H\).
In Section 6 we will formulate the Geometric Hanna Neumann conjecture by Antolin and Jaikin-Zapirain for limit groups and give a proof (due to Jaikin-Zapirain) that Theorem 1 implies the conjecture for hyperbolic limit groups (Theorem 29).
## 2 Preliminaries
**Definition 6**.: A family \(\mathcal{F}\) of \(H\)-homomorphisms (identical on \(H\)) from a group \(G\) onto a subgroup \(H\) is called a _discriminating family_ if for any finite set \(S\) of non-trivial elements in \(G\) there exists a homomorphism \(\psi\in\mathcal{F}\) such that for any \(g\in S,\)\(\psi(g)\neq 1.\) We say \(G\) is _fully residually_\(H\) if there exists a discriminating family of \(H\)-homomorphisms from \(G\) to \(H.\)
**Definition 7**.: Let \(G\) be a group and \(C_{G}(u)\) denote the centralizer of an element \(u\in G\). An _extension of a centralizer_ of \(G\) is the group
\[(G,u)=\langle G,t_{1},\ldots,t_{k}\mid[c,t_{i}]\,,c\in C_{G}(u),[t_{i},t_{j}],i,j=1,\ldots,k\rangle.\]
Similarly, if we extend centralizers of several non-conjugated elements \(u_{1},\ldots,u_{m}\) in \(G\) we denote the obtained group by \((G,u_{1},\ldots,u_{m}).\)
An _iterated extension of centralizers_ is obtained by finitely many applications of this construction to a finitely generated free group and is called an ICE-group. In this case we can assume that each centralizer is extended only once. In other words, on each step \(C_{G}(u)\) is cyclic.
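As a concrete illustration (ours, not from the source): starting from the free group \(F=F(a,b)\) and extending the cyclic centralizer \(C_{F}(a)=\langle a\rangle\) by a single letter \(t\) gives

\[(F,a)=\langle a,b,t\mid[a,t]=1\rangle\cong F\ast_{\langle a\rangle}\left(\langle a\rangle\times\langle t\rangle\right),\]

the amalgamated product of \(F\) and a rank-\(2\) free abelian group over \(\langle a\rangle\); iterating such extensions produces a chain of the type displayed in (1) below.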
Let
\[F=G_{0}<G_{1}=(G,u_{1})<\ldots<G_{n}=(G_{n-1},u_{n})=G \tag{1}\]
be a chain of centralizer extensions to obtain an ICE-group \(G\). Then we always assume that in this chain centralizers in \(G_{i}\) are extended before centralizers in \(G_{i+1}.\) We can modify this chain the following way
\[F=G_{0}<G_{i_{1}}<\ldots<G_{i_{k}}=G, \tag{2}\]
where \(G_{i_{1}}=(G_{0},u_{1},\ldots,u_{i_{1}})\), with \(u_{1},\ldots,u_{i_{1}}\in G_{0}\), is obtained from \(G_{0}\) by extending all the centralizers of elements from \(G_{0}\) that appear in the first chain. Similarly, \(G_{i_{j+1}}\) is obtained from \(G_{i_{j}}\) by extending all the centralizers of elements in \(G_{i_{j}}\) that were extended in the first chain.
**Definition 8**.: Let \(G\) be an ICE group. Then associated with \(G\) is a finite \(K(G,1)\) space, called an _ICE space_, which is constructed as follows:
1. If \(G\) is free, then take \(X=K(G,1)\) to be a compact graph of suitable rank.
2. If \(G\) is obtained from a group \(G^{\prime}\) by an extension of a centralizer and \(Y=K(G^{\prime},1)\), then given an essential closed curve \(\partial_{+}:S^{1}\longrightarrow Y\) representing a generator of \(C_{G^{\prime}}(g^{\prime})\) and a coordinate circle \(\partial_{-}:S^{1}\longrightarrow T\), where \(T\) is a torus, take \[X=Y\sqcup\left([-1,1]\times S^{1}\right)\sqcup T,\] identifying \((\pm 1,\theta)\in[-1,1]\times S^{1}\) with \(\partial_{\pm}\left(\theta\right).\)
**Remark 9**.: Associated to each ICE space \(X\) is a graph of spaces decomposition whose vertices are \(Y\) and \(T\) and edges are circles.
**Definition 10**.: A group \(G\) is an _\(H\)-GICE group_ if it is obtained from \(H\) by a series of free products with free groups and extensions of centralizers. Here, GICE stands for generalized iterated centralizer extension.
If \(H\) is a non-abelian limit group, then any \(H\)-GICE group and its subgroups containing \(H\) are fully residually \(H\), see, for example [16]. Therefore, Theorem 1 implies Theorem 3.
**Definition 11**.: Let each of the following spaces have a chosen basepoint, and suppose that the maps are basepoint preserving. Let \(\rho:(B^{\prime},b^{\prime})\to(B,b)\) be a covering map. Let \(\delta:(A,a)\to(B,b)\) be a map, where \(A\) is a connected complex (in our case \(A\) will be a loop). Let \(\kappa:(A^{\prime},a^{\prime})\to(A,a)\) be the smallest cover of \((A,a)\) such that the map \(\delta\circ\kappa\) has a lift \(\delta^{\prime}\). We call \(\delta^{\prime}:(A^{\prime},a^{\prime})\to(B^{\prime},b^{\prime})\) the _elevation_ of \(\delta\).
Two elevations \({\delta_{1}}^{\prime}:{A_{1}}^{\prime}\to B^{\prime}\) and \({\delta_{2}}^{\prime}:{A_{2}}^{\prime}\to B^{\prime}\) are _isomorphic_ if there exists a homeomorphism \(\iota:{A_{1}}^{\prime}\to{A_{2}}^{\prime}\) covering the identity map on \(A\), such that \({\delta_{1}}^{\prime}={\delta_{2}}^{\prime}\circ\iota\).
For more information on elevations we refer to [22, Section 2].
**Definition 12**.: Let \(X\) and \(X^{\prime}\) be graphs of spaces (\(X^{\prime}\) is not assumed to be connected). A _pre-covering_ is a locally injective map \(X^{\prime}\longrightarrow X\) that maps vertex spaces and edge spaces of \(X^{\prime}\) to vertex spaces and edge spaces of \(X\) respectively and restricts to a covering on each vertex space and each edge space. Furthermore, for each edge space \(e^{\prime}\) of \(X^{\prime}\) mapping to an edge space \(e\) of \(X\), the diagram of edge maps
is required to commute. The domain \(X^{\prime}\) is called a _pre-cover_.
**Definition 13**.: ([22, Definition 3.1]) Let \(X\) be a complex, \(X^{\prime}\longrightarrow X\) be a covering, and
\[\mathcal{L}=\{\delta_{i}:C_{i}\longrightarrow X\}\]
be a finite collection of independent, essential loops. The cover \(X^{\prime}\) is said to be _tame over_\(\mathcal{L}\) if the following holds: let \(\Delta\subset X^{\prime}\) be a finite subcomplex and
\[\mathcal{L}^{\prime}=\{\delta^{\prime}_{j}:C^{\prime}_{j}\longrightarrow X^{ \prime}\}\]
be a finite collection of pairwise non-isomorphic infinite degree elevations, each of which is an elevation of some loop in \(\mathcal{L}.\) Then for all sufficiently large positive integers \(d\) there exists an intermediate finite-sheeted covering
\[X^{\prime}\longrightarrow\hat{X}\longrightarrow X\]
such that
1. each \(\delta^{\prime}_{j}\) descends to some degree \(d\) elevation \(\hat{\delta}_{j}\),
2. the \(\hat{\delta_{j}}\) are pairwise non-isomorphic,
3. \(\Delta\) embeds into \(\hat{X}\), and
4. there exists a retraction \(\rho:\pi_{1}(\hat{X})\longrightarrow\pi_{1}(X^{\prime})\) such that \[\rho(\hat{\delta}_{j*}(\pi_{1}(\hat{C_{j}})))\subset\delta^{\prime}_{j*}(\pi_ {1}(C^{\prime}_{j}))\] for each \(j.\)
_Remark_.: We will also say a covering \(X^{\prime}\longrightarrow X\) is tame over a given set of finite independent, essential loops whenever its domain \(X^{\prime}\) is.
Notice that covers of tori are tame over coordinate circles, see [22, Lemma 3.3].
**Definition 14**.: The cover \(X^{\prime}\) is _strongly tame over_\(\mathcal{L}\) if it is tame over \(\mathcal{L}\) and \(\pi_{1}(\hat{X})\) is a \((\pi_{1}(X^{\prime})*F)\)-GICE group, where \(F\) is a free group with basis \(\{\hat{\delta}_{j*}(\pi_{1}(\hat{C}_{j}))\}.\)
**Definition 15**.: A group \(G\) is said to admit a _local GICE structure_ if for each finitely generated subgroup \(H\leq G\) and a finite set of elements \(g_{i}\not\in H\) one can construct a finite-index subgroup \(K\) containing \(H\) and not containing these elements such that \(K\) is an \(H\)-GICE group.
## 3 Proof of Theorems 1, 3
We will follow the construction in [22], changing it in a couple of places to prove a theorem similar to [22, Theorem 3.8]. One difference is that we will use induction on the number of steps in chain (2), while [22, Theorem 3.8] is proved by induction on the number of steps in chain (1).
Let \(X\) be an ICE space constructed by gluing several tori \(T_{1},\ldots,T_{k}\) to a simpler ICE space \(Y\) with edge spaces being loops. Let \(H\subset\pi_{1}(X)\) be a finitely generated subgroup and \(X^{H}\to X\) be the corresponding covering. Then \(X^{H}\) inherits a graph of spaces decomposition, with vertex spaces the connected components of the pre-images of the vertex spaces of \(X\) and edge spaces and maps given by all the (isomorphism classes of) elevations of the edge maps to the vertex spaces of \(X^{H}\). Let \(X^{\prime}\subseteq X^{H}\) be a core of \(X^{H}\). A **core** is a connected subgraph of spaces with finite underlying graph such that the inclusion map is a \(\pi_{1}\)-isomorphism. Since \(H\) is finitely generated, a core exists. Let \(\Delta\subset X^{H}\) be a finite subcomplex. Enlarging \(X^{\prime}\) if necessary we can assume \(\Delta\subset X^{\prime}\).
Replacing the tameness hypothesis in [22, Proposition 3.4] by strong tameness, we have the following.
**Proposition 16**.: _(Passing to finite-sheeted pre-covers) Let \(X\) be an ICE space constructed by gluing several tori \(T_{1},\ldots,T_{k}\) to a simpler ICE space \(Y\) with edge spaces being loops. Let \(X^{\prime}\to X\) be a pre-covering with finite underlying graph. Every vertex space \(V^{\prime}\) of \(X^{\prime}\) covers some vertex space \(V\) of \(X.\) Assume that each \(Y^{\prime}\) is strongly tame over the set of edge maps incident at \(Y\). Let \(\Delta\subset X^{\prime}\) be a finite subcomplex. Then there is a finite-sheeted intermediate pre-covering_
\[X^{\prime}\to\bar{X}\to X\]
_such that_
1. \(\Delta\) _embeds into_ \(\bar{X};\) _and_
2. \(\pi_{1}(\bar{X})\) _is a_ \(\pi_{1}(X^{\prime})\)_-GICE group._
Proof.: Let \(\Delta_{0}\) be a finite complex that contains \(\Delta\) and all the compact edge spaces of \(X^{\prime}.\) Let \(V^{\prime}\) be a vertex space of \(X^{\prime}\) covering the vertex \(V\) of \(X.\) Set \(\Delta_{V^{\prime}}=V^{\prime}\cap\Delta_{0}\) and consider the edge maps \(\partial_{i}^{\prime}:e_{i}\to V^{\prime}\) of edges \(e_{i}\) incident at \(V^{\prime}\) that are infinite-degree elevations of \(\partial_{\pm}:e\to V.\) Since each \(V^{\prime}\) is tame over the set of edge maps incident at \(V\) and each \(Y^{\prime}\) is strongly tame over the set of edge maps incident at \(Y\), for all sufficiently large \(d\) there exists an intermediate finite-sheeted covering
\[V^{\prime}\to\bar{V}\to V\]
such that
1. \(\Delta_{V^{\prime}}\) embeds into \(\bar{V}\),
2. each \(\partial_{i}^{\prime}\) descends to some degree \(d\) elevation \(\bar{\partial}_{i}\) of \(\partial_{\pm}.\)
If \(d\) is large enough, we can take it to be the same \(d\) over all vertex spaces of \(X^{\prime}.\) Let \(\bar{X}\) be the graph of spaces with the same underlying graph as \(X^{\prime}\), but
with the corresponding \(\bar{V}\) in place of \(V^{\prime}.\) If \(e^{\prime}\) is an edge space of \(X^{\prime}\) then the edge map
\[\partial_{\pm}:e^{\prime}\to V^{\prime}\]
descends to a finite-degree map \(\partial_{\pm}:\bar{e}_{\pm}\rightarrow\bar{V}.\) Because \(\bar{e}_{+}\to e\) and \(\bar{e}_{-}\to e\) are coverings of \(e\) with the same degree, we have a finite-sheeted pre-cover \(\bar{X}.\) By construction, \(\Delta\) embeds into \(\bar{X}.\) Since the compact edge spaces are added to \(\Delta,\) non-isomorphic finite degree elevations are mapped into non-isomorphic elevations. This implies that \(\pi_{1}(\bar{X})\) decomposes as a graph of groups, with the same underlying graph as the decomposition of \(\pi_{1}(X^{\prime}).\)
Consider a non-abelian vertex group \(\pi_{1}(\bar{V})\) of \(\pi_{1}(\bar{X})\) (this means \(V=Y\)). To obtain \(\pi_{1}(\bar{V})\) we first take \(\pi_{1}(V^{\prime\prime})=\pi_{1}(V^{\prime})*F\), the free product with cyclic groups corresponding to elevations of degree \(d\) obtained from infinite degree elevations of edge maps, and then apply a series of extensions of centralizers and free products with free groups. A cyclic fundamental group of an elevation of degree \(d\) obtained from an infinite degree elevation of an edge map extends the abelian fundamental group of an infinite cover of some torus \(T_{i}\). On the group level this corresponds to the extension of the centralizer of an abelian free factor of \(\pi_{1}(X^{\prime})\) (and, therefore, to the extension of a centralizer of \(\pi_{1}(X^{\prime})\) itself, because the extending element is in the free factor \(F\) of \(\pi_{1}(V^{\prime})*F\)). So, to obtain \(\pi_{1}(\bar{X})\) we first extend centralizers of \(\pi_{1}(X^{\prime})\) corresponding to abelian free factors. We also extend centralizers of all \(\pi_{1}(T^{\prime}),\) where \(T^{\prime}\) covers some \(T_{i}\), so that all \(\bar{T}\)'s become finite covers. Denote by \(X^{\prime\prime}\) the pre-cover that is obtained from \(X^{\prime}\) by replacing the covers of tori by finite covers as above and replacing \(V^{\prime}\) by \(V^{\prime\prime}\) for each \(V\) that is not a torus. Second, we notice that the free constructions that were applied to each \(\pi_{1}(V^{\prime})*F\) to obtain \(\pi_{1}(\bar{V}),\) for \(V^{\prime}\) covering the vertex \(V=Y\), can be thought of as applied to the whole group \(\pi_{1}(X^{\prime\prime}).\) Replacing each \(V^{\prime\prime}\) by \(\bar{V}\) for covers \(V^{\prime}\) of the non-abelian vertex space \(V=Y\) we obtain \(\bar{X}.\)
**Lemma 17**.: _([22, Lemma 3.5]) Let \(T\) be a torus and \(\delta:S^{1}\to T\) be an essential loop. Then for every positive integer \(d\) there exists a finite-sheeted covering \(\hat{T}_{d}\to T\) so that \(\delta\) has a single elevation \(\hat{\delta}\) to \(\hat{T}_{d}\) and \(\hat{\delta}\) is of degree \(d.\)_
**Lemma 18**.: _(cf [22, Lemma 3.6]) Let \(Y\) be a space such that \(\pi_{1}(Y)\) has local GICE structure and \(\delta:S^{1}\to Y\) be a based essential loop. Then for every positive integer \(d\) there exists a finite-sheeted covering \(\hat{Y}_{d}\to Y\) so that \(\delta\) has an elevation \(\hat{\delta}\) of degree \(d\) to \(\hat{Y}_{d}\) and \(\pi_{1}(\hat{Y}_{d})\) is an \(\langle\hat{\delta}\rangle\)-GICE group._
Proof.: Because \(\pi_{1}(Y)\) has local GICE structure, for every positive integer \(d\) there exists a finite-sheeted covering \(\hat{Y}_{d}\to Y\) such that \(\pi_{1}(\hat{Y}_{d})\) is a \(\langle\delta^{d}\rangle\)-GICE group. Note that \(\delta^{k}\not\in\pi_{1}(\hat{Y}_{d})\) for \(0<k<d,\) therefore \(\hat{\delta}\) is an elevation of degree \(d.\)
**Proposition 19**.: _(cf. [22, Proposition 3.7]) (Completing a finite-sheeted pre-cover to a cover) Let \(X\) be an ICE space constructed by gluing together tori \(T_{1},\ldots,T_{k}\) and a simpler ICE space \(Y,\) as above. Assume that \(\pi_{1}(Y)\) admits a local GICE structure. Let \(\bar{X}\to X\) be a finite-sheeted connected pre-covering._

_Then there exists an inclusion \(\bar{X}\hookrightarrow\hat{X}\) extending \(\bar{X}\to X\) to a covering \(\hat{X}\to X\) such that \(\pi_{1}(\hat{X})\) is a \(\pi_{1}(\bar{X})\)-GICE group._
Proof.: Follows the proof of [22, Proposition 3.7]. The addition of copies of \(T_{i,d}\) corresponds to extensions of centralizers. The addition of the \(Y_{d}\)'s corresponds, by Lemma 18, to taking a free product with an infinite cyclic group and then a GICE over the obtained group. Indeed, \(\pi_{1}(Y)\) has a local GICE structure, therefore \(\pi_{1}(Y_{d})\) is a \(C\)-GICE group, where \(C\) is the cyclic group generated by the boundary element.
A collection of elements \(g_{1},\ldots,g_{n}\) of a group \(G\) is called _independent_ if whenever there exists \(h\in G\) such that \(g_{i}^{h}\) and \(g_{j}\) commute, then, in fact, \(i=j\).
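For instance (a standard observation, added here only for illustration): in the free group \(F(a,b)\) the pair \(a,b\) is independent, since two elements of a free group commute only if they are powers of a common element, so

\[[a^{h},b]=1\ \Longrightarrow\ \langle a^{h},b\rangle\ \text{is cyclic},\]

which is impossible for any \(h\in F(a,b)\); by contrast, \(a\) and \(a^{2}\) fail to be independent already for \(h=1\).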
**Proposition 20**.: _(cf. [22, Theorem 3.8]) Let \(X\) be an ICE space constructed by gluing together tori \(T_{1},\ldots,T_{k}\) and a simpler ICE space \(Y\). Let \(H\leq\pi_{1}(X)\) be a finitely generated subgroup and let \(X^{H}\to X\) be the corresponding covering. Suppose \(\mathcal{L}\) is a (possibly empty) set of hyperbolic loops that generate maximal cyclic subgroups of \(\pi_{1}(X)\). Then \(X^{H}\) is strongly tame over \(\mathcal{L}\)._
Proof.: The proof is an induction on the length of the chain (2). Notice that the induction basis holds by [22, Corollary 1.8]. Indeed, if \(H\) is a finitely generated subgroup of \(\pi_{1}(X)\), where \(X\) is a graph, then the cover \(X^{H}\) is strongly tame over a set of independent elements \(\{\gamma_{i}\}\), each of which generates a maximal cyclic subgroup, because it is tame and for a finite-sheeted intermediate covering
\[X^{H}\to\hat{X}\to X\]
\(\pi_{1}(\hat{X})=H*F\), where \(F\) is a free group.
Fix a finitely generated non-abelian subgroup \(H\subset\pi_{1}(X)\), and let \(X^{H}\to X\) be the corresponding covering. There exists a core \(X^{\prime}\subseteq X^{H}\). Let \(\Delta\subset X^{H}\) be a finite subcomplex. Enlarging \(X^{\prime}\) if necessary, we can assume \(\Delta\subset X^{\prime}\); infinite degree elevations of hyperbolic loops \(\{\delta_{i}\}\) are first restricted to elevations \(\{\delta^{\prime}_{i}\}\) and then made disparate. This is possible by [22, Lemma 2.24] without changing the fundamental group.
As in the proof of [22, Theorem 3.8], \(X^{\prime}\) is extended to a pre-cover \(\bar{X}\) where the elevations \(\{\delta^{\prime}_{i}\}\) are extended to full elevations \(\bar{\delta}_{j}:\bar{D}_{j}\to\bar{X}\) of degree \(d\) by [22, Lemma 2.23]. By [22, Lemma 2.23], \(\pi_{1}(\bar{X})=\pi_{1}(X^{\prime})*F\), where \(F\) is a free group generated by the \(\bar{\delta}_{j*}(\pi_{1}(\bar{D}_{j}))\)'s. Enlarging \(\Delta\) again, we assume that the images of the \(\bar{\delta}_{j}\) are contained in \(\Delta\).
By Proposition 16 there exists an intermediate finite-sheeted pre-covering
\[\bar{X}\to\hat{X}\to X,\]
into which \(\Delta\) injects. Since \(\Delta\) injects into \(\hat{X}\) we have that \(\bar{\delta}_{j}\) descends to an elevation \(\hat{\delta}_{j}=\bar{\delta}_{j}\).
Finally, \(\hat{X}\) can be extended to a finite sheeted covering \(\hat{X}^{+}\) by Proposition 19.
We have that \(\Delta\) injects into \(\hat{X}^{+}\). By Proposition 19, \(\pi_{1}(\hat{X}^{+})\) is a \(\pi_{1}(\hat{X})\)-GICE group. By Proposition 16, \(\pi_{1}(\hat{X})\) is a \(\pi_{1}(\bar{X})\)-GICE group. And \(\pi_{1}(\bar{X})=\pi_{1}(X^{\prime})*F\). Therefore, by transitivity, \(\pi_{1}(\hat{X}^{+})\) is a \(\pi_{1}(X^{\prime})*F\)-GICE group. Since \(H=\pi_{1}(X^{\prime}),\) the proposition is proved.
Theorem 1 follows from the proposition (with the empty set \(\mathcal{L}\)). Since every limit group is a subgroup of an ICE-group by [15], Corollary 2 follows from Theorem 1. If \(H\) is non-abelian, then \(H\)-GICE groups are fully residually \(H\) and subgroups of fully residually \(H\) groups that contain \(H\) are also fully residually \(H\). Therefore Theorem 3 follows from Theorem 1.
**Example 1**. Let us illustrate the proof of Theorem 3 with an example when \(L\) is just an extension of a centralizer of a free group. Consider the group
\[L=F(a,b)*_{\langle a\rangle}\langle a,t\mid[a,t]=1\rangle,\]
where \(F(a,b)\) is a free group, a subgroup
\[H=\langle a^{2},b^{2}\rangle*_{\langle a^{2}\rangle}\langle a^{2}\rangle*_{ \langle a^{2}\rangle}\langle a^{2t},b^{2t}\rangle\]
and \(g=b\not\in H\) (with the finite subcomplex \(\Delta\) taken to be the loop labelled by \(b\)). Let us construct a finite-index subgroup \(K\) such that \(H\leq K,\)\(b\not\in K\) and \(K\) is an \(H\)-GICE group.
In Fig. 1 we show the space \(X\) such that \(L=\pi_{1}(X)\). Here \(X\) is a graph of spaces with one edge and two vertices. The loops labelled by \(a\) and \(t\) are generating loops of the torus \(T\) with fundamental group \(\langle a,t\mid[a,t]=1\rangle\), and the bouquet of loops labelled by \(a\) and \(b\) has fundamental group \(F(a,b)\). A pre-cover \(X^{\prime}\) corresponding to \(H\) is a pre-cover with finite underlying graph. It is a graph of spaces with two edges and three vertices, \(H=\pi_{1}(X^{\prime}).\) The space corresponding to the vertex in the middle is the cylinder that is an infinite cover of the torus \(T\). The other two vertex spaces are infinite covers of the bouquet of loops.
In Fig. 2 we make \(X^{\prime}\) into a finite-sheeted pre-cover \(\bar{X}\) as it is done in Proposition 16. The space \(\bar{X}\) has the same underlying graph as \(X^{\prime}\), but the vertex spaces are now finite covers of the vertex spaces of \(X\). The torus with the fundamental group generated by \(a^{2},t^{2}\) is a cover of \(T\) of degree 4. Two other vertex spaces are graphs that are covers of degree 3 of the bouquet of loops in \(X\). We have
\[\pi_{1}(\bar{X})=\langle a^{2},b^{2},a^{-1}ba,b^{-1}ab\rangle*_{\langle a^{2} \rangle}\langle a^{2},t^{2}\rangle*_{\langle a^{2}\rangle}\langle a^{2t},b^{2 t},t^{-1}a^{-1}bat,t^{-1}b^{-1}abt\rangle.\]
We have that \(\pi_{1}(\bar{X})\) is obtained from \(H\) by taking a free product with \(\langle a^{-1}ba,b^{-1}ab\rangle\) and \(\langle t^{-1}a^{-1}bat,t^{-1}b^{-1}abt\rangle\) and then extending the centralizer of \(a^{2}\) by \(t^{2}\). There are two hanging elevations of the loop labelled by \(a\) in \(\bar{X}\). They both have degree 1.
Figure 1: ICE space \(X\) and a pre-cover \(X^{\prime}\) with a finite graph
Figure 3 shows a finite cover \(\hat{X}\) of \(X\). It is obtained from \(\bar{X}\) by attaching two tori \(T_{1}\) to the hanging elevations of the loop labelled by \(a\) (as in Proposition 19). Then \(K=\pi_{1}(\hat{X})\) is obtained from \(\pi_{1}(\bar{X})\) by extending the centralizers of \(b^{-1}ab\) (by \(b^{-1}tb\)) and of \(t^{-1}b^{-1}abt\) (by \(t^{-1}b^{-1}tbt\)). Therefore \(K\) is an \(H\)-GICE group, \(b\not\in K\). Notice that \([L:K]=6.\)
**Example 2** (Figures 4-6) Now with the same \(L\) we take
\[H=\langle a^{2},b^{2}\rangle*_{\langle a^{2}\rangle}\langle a^{2}\rangle*_{ \langle a^{2}\rangle}\langle a^{2t},b^{2t}\rangle*\langle t^{ba}\rangle\]
and \(g=b\not\in H\) (again with \(\Delta\) the loop labelled by \(b\)). In this example we will have an edge in \(X^{\prime}\) corresponding to an infinite degree elevation of the loop labelled by \(a\). Then \(\pi_{1}(\bar{X})\) is obtained from \(H\) by the following chain: \(H<H_{1},\) where
\[H_{1}=\langle H,a^{ba},t^{2}|[a^{ba},t^{ba}]=1,[a^{2},t^{2}]=1\rangle,\]
\(H_{1}<\pi_{1}(\bar{X}),\) where
\[\pi_{1}(\bar{X})=H_{1}*\langle a^{t},a^{b^{-1}a},b^{3a},b^{at},a^{bt}\rangle,\]
Figure 2: Finite-sheeted pre-cover \(\bar{X}\)
where the last group is freely generated by the five given elements. So, to obtain \(\pi_{1}(\bar{X})\) from \(H\) we used two centralizer extensions and then a free product with a free group. To obtain \(K=\pi_{1}(\hat{X})\) from \(\pi_{1}(\bar{X})\) we make three centralizer extensions, as Figure 6 shows, and obtain a finite cover of \(X\) of degree 8.
Figure 4: Pre-cover \(X^{\prime}\) with finite graph
Figure 5: Finite sheeted pre-cover \(\bar{X}\)
Figure 6: Finite cover \(\hat{X}\)
**Remark 21**.: Theorem 3 is also true when \(L\) is abelian and, therefore, free abelian.
Proof.: We take a basis \(a_{1},\ldots,a_{n}\) of \(L\) such that \(H\) has a basis \(a_{1}^{k_{1}},\ldots,a_{r}^{k_{r}}.\) Let
\[g=a_{1}^{m_{1}}\ldots a_{n}^{m_{n}}.\]
If \(m_{i}\) is not divisible by \(k_{i}\) for some \(i=1,\ldots,r,\) then we take \(K\) generated by \(a_{1}^{k_{1}},\ldots,a_{r}^{k_{r}},a_{r+1},\ldots,a_{n}\). If each \(m_{i}\) is divisible by \(k_{i}\) for \(i=1,\ldots,r,\) then some of \(m_{r+1},\ldots,m_{n}\) is non-zero, because \(g\not\in H.\) Suppose \(m_{n}\neq 0.\) Then take \(K\) generated by \(a_{1}^{k_{1}},\ldots,a_{r}^{k_{r}},a_{r+1},\ldots,a_{n}^{m_{n}+1}.\)
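A tiny worked instance of this argument (our illustration): let \(L=\mathbb{Z}^{2}=\langle a_{1},a_{2}\rangle\) and \(H=\langle a_{1}^{2}\rangle\), so \(r=1\) and \(k_{1}=2\). Then

\[g=a_{1}a_{2}\ (2\nmid m_{1}):\quad K=\langle a_{1}^{2},a_{2}\rangle;\qquad g=a_{1}^{2}a_{2}\ (m_{2}=1\neq 0):\quad K=\langle a_{1}^{2},a_{2}^{2}\rangle,\]

and in both cases \(K\) is a finite-index subgroup of \(L\) containing \(H\) with \(g\notin K\).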
**Remark 22**.: In the case when \(H\) is abelian and \(L\) is non-abelian a finite-index subgroup of \(L\) cannot be fully residually \(H\). In this case there exists \(x\in L\) such that \(g\notin H_{1}=\langle H,x\rangle=H\ast\langle x\rangle.\)
Proof.: Take some \(x\in L\) such that \([h,x]\neq 1\) for all non-trivial \(h\in H\). Then for any non-trivial \(h\in H\), the elements \(h,x\) generate a free subgroup. Therefore \(H_{1}=\langle H,x\rangle=H\ast\langle x\rangle\). If \(g\not\in H_{1}\), then we have found \(x\). If \(g\in H_{1}\), then \(g\) can be uniquely written as
\[g=h_{1}x^{k_{1}}\ldots h_{r}x^{k_{r}},\]
where \(h_{1},\ldots,h_{r}\) are elements in \(H\), all, except maybe \(h_{1}\) non-trivial. Let \(k\) be a positive number that is larger than all \(|k_{1}|,\ldots,|k_{r}|.\) Then \(g\not\in H_{2}=\langle H,x^{k}\rangle\) and we can take \(x^{k}\) instead of \(x.\)
## 4 Proof of Theorem 4
**Definition 23**.: [10] Let \(G\) be a finitely generated group and \(H\) a finitely generated subgroup of \(G.\) For a complex affine algebraic group \(\mathbf{G}\) and any representation \(\rho_{0}\in Hom(G,\mathbf{G}),\) we have the closed affine subvariety
\[R_{\rho_{0},H}(G,\mathbf{G})=\{\rho\in Hom(G,\mathbf{G}):\rho_{0}(h)=\rho(h) \text{ for all }h\in H\}\]
The representation \(\rho_{0}\) is said to _strongly distinguish_\(H\) in \(G\) if there exist representations \(\rho,\rho^{\prime}\in R_{\rho_{0},H}(G,\mathbf{G})\) such that \(\rho(g)\neq\rho^{\prime}(g)\) for all \(g\in G-H.\)
If \(L\) is a closed surface group or a free group, then Theorem 4 follows from [10, Theorem 1.1]. Suppose \(L\) is not a surface group and not an abelian group. Let \(\mathbf{G}\) be a complex affine algebraic group. By the following lemma, it is sufficient to construct a faithful representation \(\rho\in Hom(L,\mathbf{G})\) that strongly distinguishes \(H\) in \(L\).
**Lemma 24**.: _[_10_, Lemma 3.1]_ _Let \(G\) be a finitely generated group, \(\mathbf{G}\) a complex algebraic group, and \(H\) a finitely generated subgroup of \(G.\) If \(H\) is strongly distinguished by a representation \(\rho\in Hom(G,\mathbf{G})\), then there exists a representation \(\varrho:G\longrightarrow\mathbf{G}\times\mathbf{G}\) such that \(\varrho(G)\cap\overline{\varrho(H)}=\varrho(H),\) where \(\overline{\varrho(H)}\) is the Zariski closure of \(\varrho(H)\) in \(\mathbf{G}\times\mathbf{G}\)._
**Proposition 25**.: _Let \(L\) be a limit group and \(H\) a non-abelian finitely generated subgroup. There exist a finite-index subgroup \(K\leq L\) and a faithful representation \(\rho_{\omega}:K\rightarrow\mathbf{G}\) that strongly distinguishes \(H\) in \(K\)._
Proof.: By Theorem 3, there exists a finite-index subgroup \(K\) of \(L\) such that \(K\) is fully residually \(H\). Let \(\rho\) be a faithful representation of \(H\) in \(\mathbf{G}\). We order balls \(B_{t}\) of radius \(t\) in the Cayley graph of \(K\) and finite sets \(S_{t}=B_{t}\cap(K-H)\). Since we have a discriminating family of \(H\)-homomorphisms from \(K\) to \(H,\) we can construct for any \(t\in\mathbb{N}\) representations \(\rho_{t}\) and \(\rho_{t}^{\prime}\) in \(Hom(K,\mathbf{G})\) that coincide on \(H\), distinguish all elements in \(S_{t}\), and map \(B_{t}\) monomorphically. Selecting a non-principal ultrafilter \(\omega\) on \(\mathbb{N}\), we have two associated ultraproduct representations \(\rho_{\omega},{\rho^{\prime}}_{\omega}:K\rightarrow\mathbf{G}\) (see [10, Proof of Lemma 3.2]). These representations are faithful because each \(B_{t}\) is mapped monomorphically for a co-finite set of indices \(j\in\mathbb{N}\); moreover, for any \(g\in K-H\) we have \(\rho_{\omega}(g)\neq{\rho^{\prime}}_{\omega}(g)\), so \(\rho_{\omega}\) strongly distinguishes \(H\) in \(K\).
Let us prove the first statement of Theorem 4. The proof of [10, Theorem 1.1] shows that it is sufficient to have a representation of \(K\) that strongly distinguishes \(H\). Indeed, as in [10, Corollary 3.3], we can construct a representation \(\Phi:K\to GL(2,\mathbb{C})\times GL(2,\mathbb{C})\) such that \(\Phi(g)\in Diag(GL(2,\mathbb{C}))\) if and only if \(g\in H\). Setting \(d_{H}=[G:K]\), we have the induced representation
\[{Ind_{K}}^{G}(\Phi):G\to GL(2d_{H},\mathbb{C})\times GL(2d_{H},\mathbb{C}).\]
Recall that when \(\Phi\) is represented by the action on the vector space \(V\) and \(G=\cup_{i=0}^{t}g_{i}K\), then the induced representation acts on the disjoint union \(\sqcup_{i=0}^{t}g_{i}V\) as follows
\[g\Sigma g_{i}v_{i}=\Sigma g_{j(i)}\Phi(k_{i})v_{i},\]
where \(gg_{i}=g_{j(i)}k_{i},\) for \(k_{i}\in K.\) Taking \(\rho={Ind_{K}}^{G}(\Phi)\), it follows from the construction of \(\rho\) and definition of induction that \(\rho(g)\in\overline{(\rho(H))}\) if and only if \(g\in H\). If we set \(\rho=\rho_{H}\), then Theorem 4 is proved.
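As a sanity check of the induction formula (a standard computation, not spelled out in the source), take the smallest case \([G:K]=2\) with coset representatives \(g_{0}=e\) and \(g_{1}=s\), so that \(s^{2}\in K\). For \(k\in K\) the induced representation is block diagonal, while \(s\) permutes the two blocks:

\[{Ind_{K}}^{G}(\Phi)(k)=\begin{pmatrix}\Phi(k)&0\\ 0&\Phi(s^{-1}ks)\end{pmatrix},\qquad{Ind_{K}}^{G}(\Phi)(s)=\begin{pmatrix}0&\Phi(s^{2})\\ I&0\end{pmatrix},\]

so \({Ind_{K}}^{G}(\Phi)(g)\) is block diagonal exactly when \(g\in K\).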
## 5 Proof of Corollary 5
Given a complex algebraic group \(\mathbf{G}<GL(n,\mathbb{C})\), there exist polynomials \(P_{1},\ldots,P_{r}\in\mathbb{C}\left[X_{i,j}\right]\) such that
\[\mathbf{G}=\mathbf{G}\left(\mathbb{C}\right)=V\left(P_{1},\ldots,P_{r}\right)= \left\{X\in\mathbb{C}^{n^{2}}\mid P_{k}(X)=0,k=1,\ldots,r\right\}\]
We refer to the polynomials \(P_{1},\ldots,P_{r}\) as _defining polynomials_ for \(\mathbf{G}\). We will say that \(\mathbf{G}\) is \(K\)-defined for a subfield \(K\subset\mathbb{C}\) if there exist defining polynomials \(P_{1},\ldots,P_{r}\in K\left[X_{i,j}\right]\) for \(\mathbf{G}\). For a complex affine algebraic subgroup \(\mathbf{H}<\mathbf{G}<GL(n,\mathbb{C})\), we will pick the defining polynomials for \(\mathbf{H}\) to contain a defining set for \(\mathbf{G}\) as a subset. Specifically, we have polynomials \(P_{1},...,P_{r_{\mathbf{G}}},P_{r_{\mathbf{G}}+1},...,P_{r_{\mathbf{H}}}\) such that
\[\mathbf{G}=V\left(P_{1},\ldots,P_{r_{\mathbf{G}}}\right)\text{ and }\mathbf{H}=V \left(P_{1},\ldots,P_{r_{\mathbf{H}}}\right) \tag{3}\]
If \(\mathbf{G}\) is defined over a number field \(K\) with associated ring of integers \(\mathcal{O}_{K},\) we can find polynomials \(P_{1},\ldots,P_{r}\in\mathcal{O}_{K}\left[X_{i,j}\right]\) as a defining set by clearing denominators. For instance, in the case when \(K=\mathbb{Q}\) and \(\mathcal{O}_{K}=\mathbb{Z},\) these are multivariable integer polynomials.
For a fixed finite set \(X=\left\{x_{1},\ldots,x_{t}\right\}\) with associated free group \(F(X)\) and any group \(G,\) the set of homomorphisms from \(F(X)\) to \(G,\) denoted by \(Hom\left(F\left(X\right),G\right),\) can be identified with \(G^{t}=G_{1}\times\ldots\times G_{t}.\) For any point \(\left(g_{1},\ldots,g_{t}\right)\in G^{t},\) we have an associated homomorphism \(\varphi_{\left(g_{1},\ldots,g_{t}\right)}:F\left(X\right)\longrightarrow G\) given by \(\varphi_{\left(g_{1},\ldots,g_{t}\right)}\left(x_{i}\right)=g_{i}.\) For any word \(w\in F(X),\) we have a function \(\mathrm{Eval}_{w}:Hom(F(X),G)\longrightarrow G\) defined by \(\mathrm{Eval}_{w}(\varphi_{\left(g_{1},\ldots,g_{t}\right)})=w(g_{1},\ldots,g_{t}).\) For a finitely presented group \(\Gamma,\) we fix a finite presentation \(\left\langle\gamma_{1},\ldots,\gamma_{t}\mid r_{1},\ldots,r_{t^{\prime}}\right\rangle\), where \(X=\left\{\gamma_{1},\ldots,\gamma_{t}\right\}\) generates \(\Gamma\) as a monoid and \(\left\{r_{1},\ldots,r_{t^{\prime}}\right\}\) is a finite set of relations. If \(\mathbf{G}\) is a complex affine algebraic subgroup of \(GL(n,\mathbb{C}),\) the set \(Hom(\Gamma,\mathbf{G})\) of homomorphisms \(\rho:\Gamma\longrightarrow\mathbf{G}\) can be identified with an affine subvariety of \(\mathbf{G}^{t}.\) Specifically,
\[Hom(\Gamma,\mathbf{G})=\left\{\left(g_{1},\ldots,g_{t}\right)\in\mathbf{G}^{t }\mid r_{j}\left(g_{1},\ldots,g_{t}\right)=I_{n}\text{ for all }j\right\} \tag{4}\]
If \(\Gamma\) is finitely generated, \(Hom(\Gamma,\mathbf{G})\) is an affine algebraic variety by the Hilbert Basis Theorem.
The set \(Hom(\Gamma,\mathbf{G})\) also has a topology induced by the analytic topology on \(\mathbf{G}^{t}.\) There is a Zariski open subset of \(Hom(\Gamma,\mathbf{G})\) that is smooth in this topology, called the smooth locus, and the functions \(\mathrm{Eval}_{w}:Hom(\Gamma,\mathbf{G})\longrightarrow\mathbf{G}\) are analytic on the smooth locus. For any subset \(S\subseteq\Gamma\) and representation \(\rho\in Hom(\Gamma,\mathbf{G}),\)\(\overline{\rho(S)}\) will denote the Zariski closure of \(\rho(S)\) in \(\mathbf{G}.\)
**Lemma 26**.: _([10, Lemma 5.1]) Let \(\mathbf{G}\leq GL\left(n,\mathbb{C}\right)\) be a \(\overline{\mathbb{Q}}\)-algebraic group, \(L\leq\mathbf{G}\) be a finitely generated subgroup, and \(\mathbf{A}\leq\mathbf{G}\) be a \(\overline{\mathbb{Q}}\)-algebraic subgroup. Then, \(H=L\cap\mathbf{A}\) is closed in the profinite topology._
Proof.: Given \(g\in L-H,\) we need a homomorphism \(\varphi:L\longrightarrow Q\) such that \(\left|Q\right|<\infty\) and \(\varphi\left(g\right)\notin\varphi\left(H\right).\) We first select polynomials \(P_{1},...,P_{r_{\mathbf{G}}},...,P_{r_{\mathbf{A}}}\in\mathbb{C}\left[X_{i,j}\right]\) satisfying (3). Since \(\mathbf{G}\) and \(\mathbf{A}\) are \(\overline{\mathbb{Q}}\)-defined, we can select \(P_{j}\in\mathcal{O}_{K_{0}}\left[X_{i,j}\right]\) for some number field \(K_{0}/\mathbb{Q}.\) We fix a finite set \(\left\{l_{1},\ldots,l_{r_{L}}\right\}\) that generates \(L\) as a monoid. In order to distinguish between elements of \(L\) as an abstract group and the explicit matrices in \(\mathbf{G},\) we write \(M_{l}\in\mathbf{G}\) for the matrix representing \(l\in L.\) In particular, we have a representation \(\rho_{0}:L\longrightarrow\mathbf{G}\) given by \(\rho_{0}(l_{t})=M_{l_{t}}\). We set \(K_{L}\) to be the field generated over \(K_{0}\) by the set of matrix entries \(\left\{\left(M_{l_{t}}\right)_{i,j}\right\}_{t,i,j}\).
It is straightforward to see that \(K_{L}\) is independent of the choice of the generating set for \(L.\) Since \(L\) is finitely generated, the field \(K_{L}\) has finite transcendence degree over \(\mathbb{Q}\) and so \(K_{L}\) is isomorphic to a field of the form \(K(T)\), where \(K/\mathbb{Q}\) is a number field and \(T=\left\{T_{1},\ldots,T_{d}\right\}\) is a transcendental basis (see [10]). For each \(M_{l_{t}},\) we have \((M_{l_{t}})_{i,j}=F_{i,j,t}(T)\in K_{L}\). In particular, we can view the \((i,j)\)-entry of the matrix \(M_{l_{t}}\) as a rational function in \(d\) variables with coefficients in some number field \(K\). Taking \(R_{L}\) to be the ring generated over \(\mathcal{O}_{K_{0}}\) by the set \(\left\{\left(M_{l_{t}}\right)_{i,j}\right\}_{t,i,j}\), we see that \(R_{L}\) is obtained from \(\mathcal{O}_{K_{0}}\left[T_{1},\ldots,T_{d}\right]\) by inverting a finite number of integers and polynomials. Any ring homomorphism \(R_{L}\longrightarrow R\) induces a group homomorphism \(GL(n,R_{L})\longrightarrow GL(n,R),\) and since \(L\leq GL(n,R_{L}),\) we obtain \(L\longrightarrow GL(n,R)\). If \(g\in L-H,\) then there exists \(r_{\mathbf{G}}<j_{g}\leq r_{\mathbf{A}}\) such that \(Q_{g}=P_{j_{g}}\left(\left(M_{g}\right)_{1,1},\ldots,\left(M_{g}\right)_{n,n}\right)\neq 0\). Using Lemma 2.1 in [6], we have a ring homomorphism \(\psi_{R}:R_{L}\longrightarrow R\) with \(\left|R\right|<\infty\) such that \(\psi_{R}(Q_{g})\neq 0\). Letting \(\rho_{R}:GL(n,R_{L})\longrightarrow GL(n,R)\) be the induced homomorphism, we assert that \(\rho_{R}(g)\notin\rho_{R}(H)\). To see this, set \(\overline{M}_{\eta}=\rho_{R}(\eta)\) for each \(\eta\in L\), and note that \(\psi_{R}(P_{j}((M_{\eta})_{1,1},\ldots,(M_{\eta})_{n,n}))=P_{j}((\overline{M}_{\eta})_{1,1},\ldots,(\overline{M}_{\eta})_{n,n})\). For each \(h\in H\), we know that \(P_{j_{g}}\left((M_{h})_{1,1},\ldots,(M_{h})_{n,n}\right)=0\) and so \(P_{j_{g}}((\overline{M}_{h})_{1,1},\ldots,(\overline{M}_{h})_{n,n})=0\). However, by the selection of \(\psi_{R}\), we know that \(\psi_{R}(Q_{g})\neq 0\) and so \(\rho_{R}(g)\notin\rho_{R}(H)\).
Theorem 4 and Lemma 26 imply Corollary 5.
Proof.: Since \(H\leq L\) is finitely generated, by Theorem 4, there is a faithful representation
\[\rho_{H}:L\longrightarrow GL\left(n,\mathbb{C}\right)\]
such that \(\overline{\rho_{H}(H)}\cap\rho_{H}(L)=\rho_{H}(H)\). We can construct the representation in Theorem 4 so that \(\mathbf{G}=\overline{\rho_{H}(L)}\) and \(\mathbf{A}=\overline{\rho_{H}(H)}\) are both \(\overline{\mathbb{Q}}\)-defined. So, by Lemma 26, we can separate \(H\) in \(L.\) Next, we quantify the separability of \(H\) in \(L.\) Toward that end, we need to bound the order of the ring \(R\) in the proof of Lemma 26 in terms of the word length of the element \(g.\) Lemma 2.1 from [6] bounds the size of \(R\) in terms of the coefficient size and degree of the polynomial \(Q_{g}.\) It follows from the discussion on pp. 412-413 of [6] that these can be bounded in terms of the word length of \(g\) and of the coefficients and degrees of the polynomials \(P_{j}.\) Because the \(P_{j}\) are independent of the word \(g,\) there exists a constant \(N_{0}\) such that \(\left|R\right|\leq\left|\left|g\right|\right|^{N_{0}}.\) By construction, the group \(Q\) we seek is a subgroup of \(GL(n,R).\) Thus, \(\left|Q\right|\leq\left|R\right|^{n^{2}}\leq\left|\left|g\right|\right|^{N_{0}n^{2}}.\) Taking \(N=N_{0}n^{2}\) completes the proof.
## 6 The Hanna Neumann conjecture for hyperbolic limit groups
Y. Antolin and A. Jaikin-Zapirain proved in [1] the geometric Hanna Neumann conjecture for surface groups and formulated the geometric Hanna Neumann conjecture for limit groups [1, Conjecture 1] as follows. Let \(G\) be a limit group. Then for every two finitely generated subgroups \(U\) and \(W\) of \(G\),
\[\Sigma_{x\in U\backslash G/W}\bar{\chi}(U\cap xWx^{-1})\leq\bar{\chi}(U)\bar{ \chi}(W)\]
Here for a virtually FL-group \(\Gamma\) we define its Euler characteristic as
\[\chi(\Gamma)=\frac{1}{[\Gamma:\Gamma_{0}]}\Sigma_{i=0}^{\infty}(-1)^{i}dim_{ \mathbb{Q}}H_{i}(\Gamma_{0},\mathbb{Q}),\]
where \(\Gamma_{0}\) is an FL-subgroup of \(\Gamma\) of finite index, and \(\bar{\chi}(\Gamma)=\max\{0,-\chi(\Gamma)\}\). Observe that for a non-trivial finitely generated free group \(\Gamma\), \(\bar{\chi}(\Gamma)=d(\Gamma)-1\), where \(d(\Gamma)\) is the number of generators, while for a surface group \(\Gamma\) we have \(\bar{\chi}(\Gamma)=d(\Gamma)-2\). By a surface group we mean the fundamental group of a compact closed surface of negative Euler characteristic. Notice that by [1] limit groups are FL-groups. Notice also that for hyperbolic limit groups \(dim_{\mathbb{Q}}H_{i}(\Gamma_{0},\mathbb{Q})=0\) for \(i>2\).
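To make the normalization concrete (standard computations, included only as a check): for a free group \(F_{n}\) of rank \(n\) one may take \(\Gamma_{0}=F_{n}\) with \(H_{0}=\mathbb{Q}\) and \(H_{1}=\mathbb{Q}^{n}\), while a closed orientable surface group \(\Sigma_{g}\) of genus \(g\geq 2\) has \(H_{0}=H_{2}=\mathbb{Q}\), \(H_{1}=\mathbb{Q}^{2g}\) and \(d(\Sigma_{g})=2g\), giving

\[\chi(F_{n})=1-n,\quad\bar{\chi}(F_{n})=d(F_{n})-1;\qquad\chi(\Sigma_{g})=2-2g,\quad\bar{\chi}(\Sigma_{g})=2g-2=d(\Sigma_{g})-2.\]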
In this section we will prove the conjecture for hyperbolic limit groups.
The notion of \(L^{2}\)-independence was introduced in [1]. The group \(G\) is \(L^{2}\)-Hall if for every finitely generated subgroup \(H\) of \(G\), there exists a subgroup \(K\) of \(G\) of finite index containing \(H\) such that \(H\) is \(L^{2}\)-independent in \(K\). Let \(G\) be a hyperbolic limit group. By [1, Theorem 1.3], if \(G\) satisfies the \(L^{2}\)-Hall property, then the geometric Hanna Neumann conjecture holds for \(G\).
As explained in [1, Lemma 4.1] and the comment after the lemma, since limit groups satisfy the strong Atiyah conjecture, if \(G\) is a limit group and \(H\leq K\) are subgroups of \(G\), then \(H\) is \(L^{2}\)-independent in \(K\) if the corestriction map
\[cor:H_{1}(H;\mathcal{D}_{\mathbb{Q}[G]})\to H_{1}(K;\mathcal{D}_{\mathbb{Q}[G]})\]
is injective. Here \(\mathcal{D}_{\mathbb{Q}[G]}\) denotes the Linnell division ring.
**Lemma 27**.: _Let \(G\) be a limit group and \(H\leq K\) subgroups of \(G\). Assume that there is an abelian subgroup \(B\) of \(G\) such that \(K=\langle H,B\rangle=H\ast_{A}B,\) where \(A=H\cap B\). Then the corestriction map \(cor:H_{1}(H;\mathcal{D}_{\mathbb{Q}[G]})\to H_{1}(K;\mathcal{D}_{\mathbb{Q}[G]})\) is injective._
Proof.: By [8, Theorem 2(2)], we obtain the exact sequence
\[H_{1}(A;\mathcal{D}_{\mathbb{Q}[G]})\rightarrow^{(cor,-cor)}H_{1}(H;\mathcal{ D}_{\mathbb{Q}[G]})\oplus H_{1}(B;\mathcal{D}_{\mathbb{Q}[G]})\rightarrow^{(cor,cor)}H_{1}( K;\mathcal{D}_{\mathbb{Q}[G]}).\]
Since \(A\) is abelian, \(H_{1}(A;\mathcal{D}_{\mathbb{Q}[G]})=0\). Indeed, the division ring generated by \(\mathbb{Q}[A]\) inside \(\mathcal{D}_{\mathbb{Q}[G]}\) is isomorphic to the field of fractions \(R\) of \(\mathbb{Q}[A]\), and so \(\mathcal{D}_{\mathbb{Q}[G]}\) is also an \(R\)-vector space. Thus, \(\mathcal{D}_{\mathbb{Q}[G]}\) is flat as a \(\mathbb{Q}[A]\)-module. In particular, \(H_{1}(A;\mathcal{D}_{\mathbb{Q}[G]})=0\).
So the corestriction map
\[H_{1}(H;\mathcal{D}_{\mathbb{Q}[G]})\to H_{1}(K;\mathcal{D}_{\mathbb{Q}[G]})\]
is injective.
**Corollary 28**.: _A limit group is \(L^{2}\)-Hall._
Proof.: Let \(G\) be an ICE-group and \(H\) a finitely generated subgroup of \(G\). Then by Theorem 1 there exists a finite chain of groups \(H=K_{0}<\ldots<K_{n}=K\) with \(K\) of finite index in \(G\), where \(K_{i+1}\) is either \(K_{i}\ast\mathbb{Z}\) or an extension of a centralizer of \(K_{i}\). By Lemma 27, the corestriction maps
\[H_{1}(K_{i};\mathcal{D}_{\mathbb{Q}[G]})\to H_{1}(K_{i+1};\mathcal{D}_{ \mathbb{Q}[G]})\]
are injective. Hence \(H\) is \(L^{2}\)-independent in \(K\). Now, let \(H<L<G\); then \(H\) will be \(L^{2}\)-independent in \(L\cap K\) because the composition of corestriction maps is a corestriction. Thus \(L\) is \(L^{2}\)-Hall.
Therefore we obtain the following theorem.
**Theorem 29**.: _The geometric Hanna Neumann conjecture is true for hyperbolic limit groups._
Acknowledgements
We thank A. Vdovina and H. Wilton for very useful discussions. We thank A. Jaikin-Zapirain for explaining how the Hanna Neumann conjecture follows from Theorem 1.
|
2305.16879 | Dynamical exchange-correlation potential formalism for
spin-$\frac{1}{2}$ Heisenberg and Hubbard chains: the
antiferromagnetic/half-filled case | The exchange-correlation potential formalism previously introduced and
applied to the one-dimensional Hubbard model has been extended to spin systems
and applied to the case of the one-dimensional antiferromagnetic
spin$-\frac{1}{2}$ Heisenberg model. Within the spin exchange-correlation
potential formulation, a new sum rule for spin-systems is derived. The
exchange-correlation potential for the Heisenberg model is extrapolated from
exact diagonalization results of small antiferromagnetic Heisenberg clusters.
This procedure is also employed to revisit and computationally improve the
previous investigation of the exchange-correlation potential of the half-filled
Hubbard model, which was based on the exchange-correlation potential of the
dimer. Numerical comparisons with exact benchmark calculations for both the
Heisenberg and the Hubbard models indicate that, starting from the
exchange-correlation potential of a finite cluster, the extrapolation procedure
yields a one-particle spectral function with favorable accuracy at a relatively
low computational cost. In addition, a comparison between the ground state
energies for the one-dimensional Hubbard and Heisenberg models displays how the
well known similarity in behavior of the two models at large interactions
manifests within the exchange-correlation potential formalism. | Zhen Zhao, Claudio Verdozzi, Ferdi Aryasetiawan | 2023-05-26T12:32:30Z | http://arxiv.org/abs/2305.16879v1 | # Dynamical exchange-correlation potential formalism for spin-\(\frac{1}{2}\) Heisenberg
###### Abstract
The exchange-correlation potential formalism previously introduced and applied to the one-dimensional Hubbard model has been extended to spin systems and applied to the case of the one-dimensional antiferromagnetic spin-\(\frac{1}{2}\) Heisenberg model. Within the spin exchange-correlation potential formulation, a new sum rule for spin-systems is derived. The exchange-correlation potential for the Heisenberg model is extrapolated from exact diagonalization results of small antiferromagnetic Heisenberg clusters. This procedure is also employed to revisit and computationally improve the previous investigation of the exchange-correlation potential of the half-filled Hubbard model, which was based on the exchange-correlation potential of the dimer. Numerical comparisons with exact benchmark calculations for both the Heisenberg and the Hubbard models indicate that, starting from the exchange-correlation potential of a finite cluster, the extrapolation procedure yields a one-particle spectral function with favorable accuracy at a relatively low computational cost. In addition, a comparison between the ground state energies for the one-dimensional Hubbard and Heisenberg models displays how the well known similarity in behavior of the two models at large interactions manifests within the exchange-correlation potential formalism.
## I Introduction
Lattice models, in spite of their apparent simplicity, can be very valuable in revealing important features of low-dimensional and highly correlated quantum systems. This is certainly the case for two highly paradigmatic models of condensed matter physics, namely the Hubbard [1] and spin-\(\frac{1}{2}\) quantum Heisenberg models [2].
For several decades, these two models have been a testing ground for new theoretical and computational methods [3; 4; 5]. Notably, they have been used to describe phenomena such as the Mott transition [6], high \(T_{\rm c}\) superconductivity [7], quantum spin liquids [8], and quantum entanglement [9; 10]. Furthermore, via suitable parameterization from first-principles ground-state calculations, they have also been used to describe the dynamical behavior of real materials, which is experimentally measurable, e.g., by neutron scattering and angle-resolved photoemission spectroscopy. This model approach is very useful when first-principles descriptions are too complicated to perform (see e.g. [11; 12; 13; 14]).
There are a number of approaches of increasing sophistication being continuously developed to solve the Hubbard and Heisenberg models [15; 16; 17; 18; 19; 20; 21; 22]. Exact analytical solutions remain scarce. In one dimension (1D), both models are integrable and exactly solvable via the Bethe ansatz [23; 24]. Yet, exact analytic treatments for higher dimensional or even extended 1D systems (e.g., with next-nearest-neighbor coupling) are in general not available. As it happens, already in 1D not all quantities of interest can be accessed: the Bethe ansatz provides information about the energy dispersion [25; 26] but not, for example, the spectral weight, one of the more interesting quantities to consider when studying dynamical correlations, which are usually directly connected to experimental results.
On the numerical side, several approaches can be suitably employed for both models, such as Exact diagonalization (ED) [27], Quantum Monte Carlo (QMC) [28; 29; 30], and Density Matrix Renormalization Group (DMRG) [31; 32; 33], to name a few. ED gives exact and complete information about the system, but is restricted to small systems, thus unable to capture the thermodynamic limit features. DMRG and QMC are applicable to fairly large systems and with high accuracy in 1D [34; 35; 36], but for higher dimensions the computational cost increases rapidly [37; 38; 39].
Density Functional Theory (DFT) [40; 41; 42; 43; 44], a standard methodology for the first-principles treatment of materials, has also been used to study the two models [45], via direct adaptation and application to the lattice case [46; 47; 48; 49; 50], to calculate the model parameters from first principles (e.g., the Hubbard \(U\) [51; 52; 53] and the Heisenberg \(J\) [54]), but also to use model results as input to realistic calculations [55]. Although formally exact, DFT in practice requires approximations for the exchange-correlation energy [56].
The local-density approximation (LDA) and its extension to local-spin-density approximation (LSDA) are widely used in DFT [57; 43; 58]. L(S)DA successfully describes many materials, but does not perform well in strongly correlated systems, and much effort has been devoted to improving it. With focus on model lattice systems, one way is to use the exact Bethe ansatz solution of the Hubbard model to approximate the correlation energy of an inhomogeneous lattice system [59]. A similar employment of DFT has also been considered for the Heisenberg model [60]. What is noteworthy about these L(S)DA approaches when applied to the Hubbard
and Heisenberg models is that the exchange-correlation term has information about the lattice structure and dimensionality of the system.
From a different perspective, a formalism based on the dynamical exchange-correlation potential (Vxc) was recently introduced [61]. The formalism is not limited by system size, system dimensionality, or type and range of the interaction, and it is thus useful to describe electronic and magnetic structures in general situations. A main feature of the dynamical Vxc formulation is that the coupling between the dynamical Vxc and the Green function occurs as a direct product in space and time. In contrast, the self-energy, which is traditionally used to calculate the Green function, acts on the Green function as a convolution in space and time.
As a first application of the framework, the lattice one-particle Green function of the infinite 1D Hubbard chain was determined [61; 62] using an extrapolation scheme, starting from the dynamical Vxc of the Hubbard dimer as input. In spite of the simplicity of the approximation used and the low computational load, the scheme provides estimates of the band gap and spectral function in favorable agreement with the results obtained from the Bethe ansatz and the Dynamical Density Matrix Renormalization Group (DDMRG) [63]. One general conclusion from this investigation is that the Vxc formalism provides a simple picture of the one-electron spectrum: for a given momentum, a time-independent term in Vxc together with the kinetic energy term determine the main peak of the spectral function, while a time-dependent term in the form of an exponential couples the Green functions with different momenta and generates incoherent structures or satellite peaks. The energy variable appearing in the exponent can be understood as the main bosonic excitations of the system.
More recently, as a step towards the study of realistic systems, the Vxc of the homogeneous electron gas was calculated within the random-phase approximation [64] with the long-term aim of constructing the Vxc as a universal functional of the ground-state density within the local-density approximation.
## II This work, and plan of the paper
In this work, the Vxc framework is extended to spin systems, more specifically to the 1D Heisenberg model. The Vxc-based equation of motion and the sum rule for the spin exchange-correlation hole are derived. Furthermore, the extrapolation scheme employed in the previous work for the 1D Hubbard chain is adopted. The essential idea of the extrapolation scheme is to start from the Vxc of a finite cluster (kernel), which can be calculated accurately using an exact diagonalization method or other methods such as the density-matrix renormalization group. By a suitable extrapolation, this is then used to determine the Green function of the corresponding lattice model. The spin Vxc framework within the extrapolation scheme is applied to calculate the spectral functions of the 1D spin\(-\frac{1}{2}\) antiferromagnetic (AFM) Heisenberg model in the thermodynamic limit, starting from the spin Vxc of small clusters.
In addition, the 1D Hubbard chain is revisited. In the previous work, the Hubbard dimer was the kernel, which was used to calculate the Green function of the 1D Hubbard chain. In this work, in order to improve the quality of the starting Vxc, the cluster size is enlarged so that additional information arising from interactions beyond nearest-neighbor is captured. The improved Vxc is then used to calculate the Green function of the half-filled 1D Hubbard chain.
To summarize, the main outcomes of the present work are: (i) derivation of the Vxc-based equation of motion and the sum rule of the spin exchange-correlation hole for the 1D Heisenberg model, which can be readily generalized to other spin systems; (ii) calculations of the spinon Green function for the 1D AFM Heisenberg lattice by extrapolating from a finite-cluster spinon Vxc; (iii) improved treatment of the Vxc of the half-filled 1D Hubbard lattice from the previous work by using as kernel a Vxc from a finite cluster; (iv) an illustration of how, in the Vxc formalism, the well known large-\(U\) limit (where results from the Hubbard model match those from the AFM Heisenberg one) is recovered.
The plan of the paper is as follows: in Section III, we review briefly the general Vxc formalism. Then, in Sections III.1 and III.2 we extend and apply the approach to the 1D AFM Heisenberg model. Specifically, in Sections III.3 and III.4, we derive an analytic expression for the spinon Vxc for a four-site chain, and compute the lattice dynamical structure factor by extrapolating the finite cluster Vxc to the infinite case. In Section IV, we revisit the 1D Hubbard model and compute the exact Vxc of a finite cluster larger than the dimer, with which we improve previous results in the infinite chain limit. In Section V we discuss Vxc from a comparative perspective, addressing the ground-state energy for both the 1D AFM Heisenberg model and the half-filled 1D Hubbard model in the large \(U\) limit. Finally, in Section VI we provide some conclusive remarks and an outlook.
## III General formalism and application to the Heisenberg chain
For a system with a one-body term and two-body interactions, the Hamiltonian reads
\[\begin{array}{c}\hat{H}=\int dr\hat{\psi}^{\dagger}(r)h^{0}(r)\hat{\psi}(r) \\ +\frac{1}{2}\int dr_{1}dr_{2}\hat{\psi}^{\dagger}(r_{1})\hat{\psi}^{\dagger}(r _{2})v(r_{1},r_{2})\hat{\psi}(r_{2})\hat{\psi}(r_{1}),\end{array} \tag{1}\]
where \(\hat{\psi}(r)\) is the fermionic field operator and \(r=(\mathbf{r},\sigma)\) is a combined space and spin variable. The time-ordered Green function is defined in the Heisenberg picture as
\[iG(1,2):=\langle\mathcal{T}\hat{\psi}(1)\hat{\psi}^{\dagger}(2)\rangle, \tag{2}\]
where the argument numbers label the space-time \(1:=(r_{1},t_{1})\), \(\langle.\rangle\) denotes the zero-temperature ground-state expectation value, and \(\mathcal{T}\) is the time-ordering symbol. The equation of motion in the Vxc formalism is given by [61]
\[[i\partial_{t_{1}}-h(r_{1})-V^{\rm xc}(1,2)]G(1,2)=\delta(1-2), \tag{3}\]
where the single-particle term
\[h(r)=h^{0}(r)+V^{\rm H}(r) \tag{4}\]
contains the Hartree potential
\[V^{\rm H}(r)=\int dr^{\prime}v(r,r^{\prime})\langle\hat{\psi}^{\dagger}(r^{ \prime})\hat{\psi}(r^{\prime})\rangle \tag{5}\]
The Vxc reproduces the interaction term containing a special case of the two-particle Green function, i.e.,
\[V^{\rm xc}(1,2)iG(1,2)=\int d3\,v(1,3)\langle\mathcal{T}\hat{\psi}^{\dagger}(3)\hat{\psi}(3)\hat{\psi}(1)\hat{\psi}^{\dagger}(2)\rangle-V^{\rm H}(1)iG(1,2). \tag{6}\]

For fermion field operators and in the presence of Coulomb interactions, the bare exchange part of Vxc can be obtained by considering the lowest order of the first term on the RHS of Eq. (6),
\[V^{\rm x}(1,2)iG(1,2)=-\int d3v(1-3)G(1,3)G(3,2). \tag{7}\]
### Spin-spin interactions
For systems with spin-spin interactions, an observable of central interest is the spin dynamical structure factor, whose longitudinal and transverse terms are
\[S^{zz}(k,\omega)=\frac{1}{N}\sum_{pq}\int dt\langle\hat{S}^{z}_{p}(t)\hat{S}^{ z}_{q}(0)\rangle e^{i\omega t}e^{-ik(p-q)} \tag{8}\]
and
\[S^{+-}(k,\omega)=\frac{1}{N}\sum_{pq}\int dt\langle\hat{S}^{+}_{p}(t)\hat{S}^{ -}_{q}(0)\rangle e^{i\omega t}e^{-ik(p-q)}, \tag{9}\]
where \(\hat{S}^{z,+,-}_{p}(t)\) are the spin field operators in the Heisenberg picture.
For the Hubbard model, the spin dynamical structure factor can be obtained by solving a two-particle Green function
\[G^{(2)}_{ppqq}(t):=\langle\mathcal{T}\big{[}\hat{c}^{\dagger}_{p\uparrow}(t) \hat{c}_{p\downarrow}(t)\big{]}\big{[}\hat{c}^{\dagger}_{q\downarrow}\hat{c}_ {q\uparrow}\big{]}\rangle, \tag{10}\]
but the equation of motion of the two-particle Green function contains the three-particle Green function, and thus is generally difficult to solve. Simplification is however recovered for large repulsion, where charge transfer becomes less likely and spin correlations can be obtained by studying the AFM Heisenberg model.
The isotropic 1D Heisenberg Hamiltonian with nearest neighbour (NN) exchange coupling is given by
\[\hat{H}^{\rm Heis}=-J\sum_{p}\Big{[}\frac{1}{2}(\hat{S}^{+}_{p}\hat{S}^{-}_{p +1}+h.c.)+\hat{S}^{z}_{p}\hat{S}^{z}_{p+1}\Big{]}. \tag{11}\]
where for convenience we use an even total number of sites before taking the thermodynamic limit. We define the Green function with spin field operators
\[iG_{pq}(t)=\theta(t)\langle\hat{S}^{+}_{p}(t)\hat{S}^{-}_{q}(0) \rangle+\theta(-t)\langle\hat{S}^{-}_{q}(0)\hat{S}^{+}_{p}(t)\rangle, \tag{12}\]
in which the Heisenberg \(J\) is the analog of the two-particle interaction in Eq. (1). From the Heisenberg equation of motion for the spin field operators, the equation of motion of the Green function reads
\[i\partial_{t}G_{pq}(t)+iF_{pq}(t)=2\delta_{pq}\delta(t)\langle\hat{S}^{z}_{p}\rangle \tag{13}\]
where the interaction term is
\[F_{pq}(t)=-\sum_{l}J_{pl}[\langle p,l;q\rangle-\langle l,p;q\rangle]. \tag{14}\]
Here,
\[\langle l,p;q\rangle:=\langle\mathcal{T}\hat{S}^{z}_{l}(t^{+})\hat{S}^{+}_{p}( t)\hat{S}^{-}_{q}(0)\rangle, \tag{15}\]
and \(J_{pl}=J(\delta_{l,p+1}+\delta_{l,p-1})\) for the 1D NN exchange coupling. One can define the spin exchange-correlation potential analogous to the charge case as follows:
\[V^{\rm xc}_{pp,qq}(t)iG_{pq}(t):=F_{pq}(t)-V^{\rm H}_{p}iG_{pq}(t)-\sum_{l}V^{\rm F}_{pl}iG_{lq}(t), \tag{16}\]
where the last two terms on the right-hand side, \(V^{\rm H}\) and \(V^{\rm F}\), are the analog of the Hartree and exchange potentials, respectively:
\[V^{\rm H}_{p}(t) := -\sum_{l}J_{pl}\langle\hat{S}^{z}_{l}\rangle, \tag{17}\] \[V^{\rm F}_{pl}(t) := J_{pl}\langle\hat{S}^{z}_{p}\rangle. \tag{18}\]
Consequently, a spin correlator \(g_{lpq}(t)\) can be defined such that
\[\langle l,p;q\rangle=iG_{pq}(t)g_{lpq}(t)\langle\hat{S}^{z}_{l}\rangle, \tag{19}\]
while the spin exchange-correlation hole \(\rho^{\rm xc}\) is defined as
\[\rho^{\rm xc}_{lpq}(t)iG_{pq}(t) = -\langle l,p;q\rangle+\langle\hat{S}^{z}_{l}\rangle iG_{pq}(t). \tag{20}\]
Denoting the total \(z\)-component of the spin by \(S^{z}=\sum_{l}\langle\hat{S}^{z}_{l}\rangle\), and observing that
\[\sum_{l}\langle l,p;q\rangle=\big{[}\theta(-t)+S^{z}\big{]}iG_{pq}(t), \tag{21}\]
we can obtain a sum rule for general spin interactions:
\[\sum_{l}\rho^{\rm xc}_{lpq}(t)=-\sum_{l}\big{[}g_{lpq}(t)-1\big{]}\langle\hat{S}^{z}_{l}\rangle=-\theta(-t). \tag{22}\]
The detailed derivation is provided in Appendix A.
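To make these objects concrete, the following minimal exact-diagonalization sketch (our own illustration, not part of the paper's numerical machinery; the four-site size, ring geometry, and AFM normalization \(J=-1\) are assumptions made for the example) builds the Hamiltonian of Eq. (11) and evaluates the \(t>0\) branch of the Green function of Eq. (12) through its Lehmann representation:

```python
# Minimal ED sketch (illustration): four-site AFM Heisenberg ring, Eq. (11),
# with J = -1, and the t > 0 branch of Eq. (12) via the Lehmann sum
#   <0| S^+_p(t) S^-_q(0) |0> = sum_n e^{i(E0-En)t} <0|S^+_p|n><n|S^-_q|0>.
import numpy as np

N, J = 4, -1.0
sp = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # single-site S^+
sz = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)  # single-site S^z
I2 = np.eye(2, dtype=complex)

def site_op(op, p):
    """Embed a one-site operator at site p in the full 2^N-dimensional space."""
    out = np.array([[1.0 + 0.0j]])
    for q in range(N):
        out = np.kron(out, op if q == p else I2)
    return out

Sp = [site_op(sp, p) for p in range(N)]
Sm = [m.conj().T for m in Sp]                 # S^- = (S^+)^dagger
Sz = [site_op(sz, p) for p in range(N)]

H = sum(-J * (0.5 * (Sp[p] @ Sm[(p + 1) % N] + Sm[p] @ Sp[(p + 1) % N])
              + Sz[p] @ Sz[(p + 1) % N]) for p in range(N))
E, V = np.linalg.eigh(H)
gs = V[:, 0]                                  # ground state |0>, energy E[0]

def iG(p, q, t):
    """i G_pq(t) for t > 0, i.e. the first term of Eq. (12)."""
    ap = V.conj().T @ (Sm[p] @ gs)            # <n| S^-_p |0>
    aq = V.conj().T @ (Sm[q] @ gs)            # <n| S^-_q |0>
    return np.sum(np.exp(1j * (E[0] - E) * t) * ap.conj() * aq)

print(E[0])             # ground-state energy of the four-site ring
print(iG(0, 0, 0.0))    # equal-time on-site value <0| S^+_0 S^-_0 |0>
```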
In this paper, we consider only the case of AFM coupling, i.e. \(J<0\) so that \(S^{z}=0\). For a translationally invariant system, the Hartree and Fock terms (Eq. (17), (18)) vanish, and thus the two-spinon Vxc is then
\[V^{\rm xc}_{pp,qq}(t)iG_{pq}(t)=-J\sum_{\delta=\pm 1}\Big{[}\langle p,p+ \delta;q\rangle-\langle p+\delta,p;q\rangle\Big{]},\]
with the corresponding exchange term given by
\[F^{\rm x}_{pq}(t) :=V^{\rm x}_{pp,qq}(t)iG_{pq}(t)= \tag{23}\] \[J\Big{[}G_{p+1,p}(0^{-})G_{pq}(t)+G_{p-1,p}(0^{-})G_{pq}(t)-\] \[G_{p,p+1}(0^{-})G_{p+1,q}(t)-G_{p,p-1}(0^{-})G_{p-1,q}(t)\Big{]}.\]
### The infinite chain
We now specialize the description to the case of the homogeneous infinite Heisenberg chain, where \(\langle S^{z}_{p}\rangle\equiv s\) is site-independent due to translational symmetry. It is convenient to move to the momentum domain, with the Green function and \(V^{\rm xc}\) defined via the Fourier transform as
\[G(k,t) = \frac{1}{N}\sum_{pq}G_{pq}(t)e^{-ik(p-q)} \tag{24}\] \[V^{\rm xc}(k,t) = \frac{1}{N^{2}}\sum_{pq}V^{\rm xc}_{pp,qq}(t)e^{-ik(p-q)}, \tag{25}\]
and where the equation of motion for the Green function becomes
\[i\partial_{t}G(k,t)-\sum_{k^{\prime}}V^{\rm xc}(k-k^{\prime},t)G(k^{\prime},t)=2s\delta(t). \tag{26}\]
In the momentum representation, the exchange term becomes
\[F^{\rm x}(k,t) = \frac{4J}{N}G(k,t)\sin\frac{k}{2}\sum_{k^{\prime}}G(k^{\prime},0 ^{-})\sin(k^{\prime}-\frac{k}{2}) \tag{27}\] \[\approx iJ\lambda|\sin k|G(k,t),\]
where we have neglected the \(k^{\prime}\)-dependence in the weight represented by \(G(k^{\prime},0^{-})\), performed the sum over \(k^{\prime}\) and subsumed all the constants into \(\lambda\). To proceed further, the dynamical part of Vxc is separated from the static exchange term \(V^{\rm s}(k)=F^{\rm x}(k,t)/iG(k,t)\), i.e.
\[\sum_{k^{\prime}}V^{\rm xc}(k-k^{\prime},t)G(k^{\prime},t)=V^{ \rm s}(k)G(k,t)\] \[+Z^{\rm sp}(k,t)G(k,t), \tag{28}\]
to finally arrive at the solution to the equation of motion Eq.(26):
\[G(k,t)=G(k,0^{+})e^{-iV^{\rm s}(k)t}e^{-i\int_{0}^{t}Z^{\rm sp}(k,t^{\prime})dt^{\prime}}. \tag{29}\]
In this expression, the (\(k\)-dependent) static exchange term \(V^{\rm s}\) determines the main peak of the spectral function, and the dynamical correlation term \(Z^{\rm sp}(k,t)\) produces the satellite structure. To attain an explicit solution, it is expedient to solve for a reference Green function by keeping only the static \(V^{\rm s}\) term in the equation of motion. This simplified solution contains the lower boundary of the two-spinon energy dispersion [15]
\[G^{\rm lb}(k,\omega)=\frac{1}{\omega-(-J)\lambda|\sin k|}, \tag{30}\]
and permits one to determine the constant \(\lambda\) in Eq. (27) from the analytic form of the two-spinon spectrum.
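Explicitly, matching Eq. (27) to the known des Cloizeaux-Pearson lower boundary of the two-spinon continuum (a standard result, quoted here for orientation) gives

\[\omega_{\rm lb}(k)=\frac{\pi}{2}|J||\sin k|=-J\lambda|\sin k|\quad\Longrightarrow\quad\lambda=\frac{\pi}{2},\]

consistent with the static potential \(V^{\rm s}(k)=-J\pi|\sin k|/2\) quoted below Eq. (40).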
### A four-site spin chain
It is useful to start our discussion of Vxc in finite spin-clusters by considering a four-site chain. This is the minimal cluster (with even number of sites) in which Vxc is nonzero. Furthermore, it is easy to obtain a compact analytical solution, that illustrates qualitatively several features present also in larger clusters (in which our solution is numerical in character). To illustrate the features of the four-site Vxc, we choose one of its diagonal elements as a representative case, namely
\[V^{\rm xc}_{11,11}(t>0)=-J\frac{(\frac{(xy+x)(xy+x+2y)}{a_{+}^{2 }})f_{1}+(x^{2}+x)f_{2}+(\frac{(xy-3x)(xy-3x+2y-4)}{a_{-}^{2}})f_{3}}{(\frac{xy +x+2y}{a_{+}})^{2}f_{1}+x^{2}f_{2}+(\frac{xy-3x+2y-4}{a_{-}})^{2}f_{3}} \tag{31}\]
where \(x=1+\sqrt{3},y=1+\sqrt{2},a_{\pm}=\sqrt{8\pm 4\sqrt{2}}\), and \(f_{i=1,2,3}\) are time oscillation factors determined by the difference between the spin excitation energies and the
ground state energy. The full details and the explicit forms are given in Appendix B, together with other elements of Vxc. It is useful at this point to move from site orbitals \(\{\varphi_{a}\}\) to bonding-like ones \(\{\phi_{\mu}\}\). In analogy to what is done with a Bloch basis in a Hubbard lattice, we set \(\phi_{\mu}=U_{\mu a}\varphi_{a},\varphi_{a}=U_{a\mu}\phi_{\mu}\), in which \(\mu=A,B,C,D\), \(a=1,2,3,4\) and the \(U\) matrix is
\[U=\frac{1}{2}\left(\begin{array}{rrrr}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{array}\right). \tag{32}\]
For the Green functions, the transformation reads \(G_{\mu\nu}=\sum_{ab}U_{\mu a}G_{ab}U^{*}_{b\nu}\) and \(G_{ab}=\sum_{\mu\nu}U^{*}_{a\mu}G_{\mu\nu}U_{\nu b}\). One can define
\[V^{\rm xc}_{\mu\alpha,\beta\nu}(t):=\sum_{mn}U_{\mu m}U^{*}_{m\alpha}V^{\rm xc }_{mm,nn}(t)U_{\beta n}U^{*}_{n\nu} \tag{33}\]
such that the equation of motion is now
\[i\partial_{t}G_{\mu\nu}(t)-\sum_{\alpha\beta}V^{\rm xc}_{\mu\alpha,\beta\nu}( t)G_{\alpha\beta}(t)=s_{\mu\nu}\delta(t), \tag{34}\]
where \(s_{\mu\nu}=2\sum_{pq}U_{\mu p}\langle S^{z}_{p}\rangle\delta_{pq}U^{*}_{q\nu}\). Comparing the equation of motion for the diagonal terms \(G_{\mu\mu}\),
\[[i\partial_{t}-V^{\rm xc}_{\mu\mu,\mu\mu}]G_{\mu\mu}(t)-\sum_{ \gamma\neq\mu}V^{\rm xc}_{\mu\gamma,\gamma\mu}G_{\gamma\gamma}\] \[\qquad-\sum_{\gamma\neq\delta}V^{\rm xc}_{\mu\gamma,\delta\mu}(t) G_{\gamma\delta}(t)=s_{\mu\mu}\delta(t) \tag{35}\]
to the infinite-chain equation of motion Eq. (26), we note the following: i) \(G_{\mu\mu}\) maps to \(G(k)\); ii) the contribution from fully off-diagonal terms \(V^{\rm xc}_{\mu\gamma,\delta\mu}\) should be negligible; iii) \(V^{\rm xc}_{\mu\gamma,\gamma\mu}\), which maps to \(V(k)\), depends only on the difference of \(\mu,\gamma\); iv) the weights of the higher excitation term \(f_{3}\) are relatively small.

According to i)-iv) and ignoring the high energy-excitation contributions from \(f_{3}\), one thus arrives at an approximate expression for the matrix elements of \(V^{\rm xc}_{\mu\gamma,\gamma\mu}\):
\[V^{\rm xc}_{BB,BB}(t>0) \approx -J\alpha\] \[V^{\rm xc}_{BC,CB}(t>0) \approx -J\beta\exp[\frac{iJt}{\sqrt{2}}], \tag{36}\]
whereas \(V^{\rm xc}_{BD,DB}(t>0)\approx 0,V^{\rm xc}_{BA,AB}(t>0)\approx 0\), in which
\[\alpha:=\frac{xy+x}{xy+x+2y}=\frac{2x+2}{xy+x+2}, \tag{37}\] \[\beta:=\frac{1}{4}(\frac{a_{+}}{xy+x+2y}+\frac{a_{+}}{xy+x+2})^{2} (x^{2}+x-\alpha x^{2}), \tag{38}\]
and as in (31), \(a_{+}=\sqrt{8+4\sqrt{2}}\). The analytic spinon Vxc in the bonding basis and its main excitation approximation are shown in Fig. 1. Ignoring the high-excitation factor \(f_{3}\) reduces the fine-structure details in Vxc. Consequently, \(V^{\rm xc}_{BB,BB}\) simplifies to a constant whereas \(V^{\rm xc}_{BC,CB}\) oscillates with a single frequency and a constant magnitude, and all other components are negligible.
### Infinite chain from cluster extrapolation
In Fig. 2, we show \({\rm Re}Z^{\rm sp}(k,t)\) as obtained from the cluster Vxc discussed in the previous section.
It can be seen that \({\rm Re}Z^{\rm sp}(k,t)\) oscillates in time around a momentum-dependent term, a behavior that can be understood as due to a single quasiparticle-like main excitation. We therefore propose the following ansatz for \(Z^{\rm sp}\) in the infinite-chain case:
\[Z^{\rm sp}(k,t)={\cal A}(k)e^{-i\omega^{\rm sp}(k)t}+{\cal B}(k), \tag{39}\]
where the amplitude \({\cal A}\), the spinon excitation energy \(\omega^{\rm sp}\), and the shift term \({\cal B}\) all increase as \(k\) increases from \(0\) to \(\pi\). The Green function is given by inserting the ansatz into Eq.(29):
\[G^{\rm sp}(k,t>0)=G^{\rm sp}(k,0^{+})e^{-i[V^{\rm s}(k)+{\cal B}(k)]t}e^{\frac{{\cal A}(k)}{\omega^{\rm sp}(k)}\left(e^{-i\omega^{\rm sp}(k)t}-1\right)}, \tag{40}\]
where the static potential is \(V^{\rm s}(k)=-J\pi|\sin k|/2\).
Figure 1: Real part of Vxc of the four-site spin-\(\frac{1}{2}\) AFM Heisenberg chain, in units of \(|J|\). Top panel: exact. Bottom panel: results when the high-excitation contribution is ignored (see Eqs. (36)-(38) and related discussion).

Expanding the last exponential factor in Eq. (40) to first order in its argument, one gets an approximate Green function

\[G^{\rm sp}_{(1)}(k,t>0)=G^{\rm sp}(k,0^{+})e^{-i[V^{\rm s}(k)+{\cal B}(k)]t}\big{[}1+\frac{\mathcal{A}(k)}{\omega^{\rm sp}(k)}\left(e^{-i\omega^{\rm sp}(k)t}-1\right)\big{]}, \tag{41}\]
which in the frequency domain becomes
\[G_{(1)}^{\rm sp}(k,\omega)=G^{\rm sp}(k,0^{+})\Big{[}\frac{1-\frac{\mathcal{A}(k)}{\omega^{\rm sp}(k)}}{\omega-[V^{\rm s}(k)+\mathcal{B}(k)]}+\frac{\frac{\mathcal{A}(k)}{\omega^{\rm sp}(k)}}{\omega-[V^{\rm s}(k)+\mathcal{B}(k)+\omega^{\rm sp}(k)]}\Big{]}. \tag{42}\]
From Eq. (42), it can be seen that the main peak position of the dynamical structure factor is given by \(V^{\rm s}+\mathcal{B}\). The spinon excitation energy \(\omega^{\rm sp}\) transfers weight from the main peak to the higher-energy region, resulting in satellite peaks at \(V^{\rm s}+\mathcal{B}+\omega^{\rm sp}\). The relative weight between the main peak and the satellite is determined by the amplitude term \(\mathcal{A}\) and the spinon energy \(\omega^{\rm sp}\). Specifically at \(k=\pi\), the finite cluster solution gives nonzero \(\mathcal{B}\), which opens a spin gap that does not exist for the spin-\(\frac{1}{2}\) lattice. We attribute this to finite-size effects, and thus we adjust \(\mathcal{B}\) to a smaller value in our extrapolation.
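A minimal numerical sketch of Eq. (42) for a single \(k\)-point is given below; the parameter values are illustrative assumptions, and the check confirms that the main-peak and satellite residues exhaust the total weight \(G^{\rm sp}(k,0^{+})\).

```python
import numpy as np

# One k-point of Eq. (42): a main pole at Vs + B and a satellite pole
# shifted up by the spinon energy w_sp; all numbers below are assumed.
Vs, B, w_sp, A_k, G0 = 1.2, 0.1, 0.8, 0.3, 1.0
eta = 0.05   # broadening for the spectral function

w = np.linspace(-1.0, 4.0, 2001)
main = (1 - A_k / w_sp) / (w - (Vs + B) + 1j * eta)
sat = (A_k / w_sp) / (w - (Vs + B + w_sp) + 1j * eta)
G = G0 * (main + sat)
A_spec = -G.imag / np.pi

# The two residues add up to G0: total spectral weight is conserved.
print(np.isclose(G0 * (1 - A_k / w_sp) + G0 * (A_k / w_sp), G0))  # True

# Relative weight of the satellite with respect to the main peak.
print((A_k / w_sp) / (1 - A_k / w_sp))
```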
Based on our discussion so far, we now present the lattice case obtained by extrapolating the cluster Vxc. With \(Z^{\rm sp}\) obtained via ED from a twelve-site cluster, we estimate the parameters \(\mathcal{A},\mathcal{B}\) and \(G(k,0^{+})\) by linear interpolation. The spinon excitation energy is estimated by fitting the cluster \(\omega^{\rm sp}\) to the two-spinon spectrum boundary,
\[\omega^{\rm sp}\rightarrow(-J)\pi\big{[}\sin\frac{k}{2}-\frac{1}{2}|\sin k| \big{]}. \tag{43}\]
The longitudinal and transverse spin dynamical structure factors are then calculated from the spinon Green function. Since for a spin-isotropic system \(S^{zz}\) and \(S^{+-}\) differ by a constant factor, we only calculate the spectral function of the Green function (Eq. (40)), as shown in Fig. 3 (interestingly, approximating the term \(\exp\Big{\{}\frac{\mathcal{A}}{\omega^{\rm sp}}\big{[}\exp(-i\omega^{\rm sp}t)-1\big{]}\Big{\}}\) by \(1+\frac{\mathcal{A}}{\omega^{\rm sp}}\big{[}\exp(-i\omega^{\rm sp}t)-1\big{]}\) gives no marked changes in the properties of \(G^{\rm sp}(k,\omega)\)). A notable aspect of the behavior of the spin dynamical structure factor is that both the peak locations and the relative weights are close to the inelastic neutron scattering data from the 1D compound KCuF\({}_{3}\)[21]. Coming to more specific features, \(S(k,\omega)\) is very small (i.e., close to zero) at small \(k\), while, for a generic \(k\), most of its spectral weight is concentrated around the main peak and the satellite peak. As \(k\rightarrow\pi\), the relative weight between the main peak and the satellite peak increases and the spectrum with broadening factor 0.1 is gapless.
While providing a good description of the dynamical structure factor for the 1D AFM Heisenberg model, the present implementation of the spin Vxc approach is also subject to some limitations. This can be seen by, e.g., comparing the dynamical structure factor from the Vxc approach with the two-spinon lower and upper boundaries (dashed line in Fig. 3). It is apparent that the main peak frequency \(\omega=V^{\rm s}(k)+\mathcal{B}\) is still slightly overestimated. To reduce the finite size effects due to a parameter \(\mathcal{B}(\pi)\) originating from a twelve-site cluster, we set \(\mathcal{B}(\pi)\) to be the same as the broadening factor, i.e. about 0.2 (see Fig. 2). However, the actual Bethe ansatz value of \(\mathcal{B}(\pi)\) should be zero. The overall point is that, to obtain a more accurate dynamical structure factor, and to avoid the finite size effects inherent in the extrapolation from a small cluster, more powerful external methods need to be employed (e.g., the algebraic Bethe ansatz).

Figure 2: Real part of \(Z^{\rm sp}\) from a spin-\(\frac{1}{2}\) AFM Heisenberg ring. Top (bottom) panel: results for a ring with 8 (12) sites.
These considerations might reveal weaknesses of the extrapolation procedure. However, it must also be clearly stressed that this implementation of the Vxc approach captures most of the qualitative features of the 1D AFM Heisenberg model with a very low computational load, and this central attractive feature of the method is expected to also apply in more challenging situations, e.g. in higher dimensions, where rigorous references like the Bethe ansatz are not available.
## IV Improving the treatment of the 1D-Hubbard lattice
Encouraged by the 1D AFM Heisenberg chain results obtained with a Vxc extrapolated from clusters, we now revisit the case of the 1D Hubbard Hamiltonian,
\[\hat{H}^{\rm Hub}=-\Delta\sum_{p\sigma}[\hat{c}^{\dagger}_{p,\sigma}\hat{c}_{p+ 1,\sigma}+h.c.]+U\sum_{p}\hat{n}_{p\uparrow}\hat{n}_{p\downarrow}, \tag{44}\]
using also in this case a Vxc obtained from small (Hubbard) clusters. In Eq. (44), \(p=1,2,\cdots,N\) are the site labels (with \(N\rightarrow\infty\) eventually), \(\sigma=\uparrow,\downarrow\) is the spin label, \(\Delta\) is the hopping energy and \(U>0\) is the local repulsion. In the site basis, the spin-up channel Green function is
\[G_{pq}(t)=-i\theta(t)\langle\hat{c}_{p\uparrow}(t)\hat{c}^{\dagger}_{q\uparrow }(0)\rangle+i\theta(-t)\langle\hat{c}^{\dagger}_{q\uparrow}(0)\hat{c}_{p \uparrow}(t)\rangle \tag{45}\]
and the Vxc reads
\[V^{\rm{xc}}_{pp,qq}(t)iG_{pq}(t)=U\langle\mathcal{T}\hat{c}^{ \dagger}_{p\downarrow}(t)\hat{c}_{p\downarrow}(t)\hat{c}_{p\uparrow}(t)\hat{ c}^{\dagger}_{q\uparrow}(0)\rangle\] \[-U\rho_{p\downarrow}iG_{pq}(t), \tag{46}\]
where \(\rho_{p\downarrow}\) is the spin-down particle density at site \(p\). The exchange part of Vxc fulfils
\[V^{\rm{x}}_{pp,qq}(t)iG_{pq}(t)=-UG_{pp}(0^{-})G_{pq}(t), \tag{47}\]
where \(G_{pp}(0^{-})=i\langle\hat{c}^{\dagger}_{p\uparrow}\hat{c}_{p\uparrow}\rangle =i\rho_{p\uparrow}\); thus, the exchange part of Vxc of the Hubbard model is static and cancels the Hartree potential at half-filling, in contrast to the Heisenberg model in which the exchange part depends on the momentum. In general, the exchange part is time-dependent [64].
Written in the momentum domain, the equation of motion for the Hubbard lattice takes the form
\[[i\partial_{t}-\varepsilon_{k}]G(k,t)-\sum_{k^{\prime}}V^{\rm{xc}}(k-k^{\prime},t)G(k^{\prime},t)=\delta(t), \tag{48}\]
where \(\varepsilon_{k}=-2\Delta\cos k\) is the kinetic energy. Eq. (48) shows that the interaction term, expressed as the direct product of Vxc and Green function in space-time domain, is a convolution in the momentum domain. It has been shown [62] that the main peak position of the electron (hole) spectral functions can be described with \(V^{\rm{xc}}(k=0)\), together with the kinetic energy, while \(V^{\rm{xc}}(k=\pi)\) plays an important role in determining the satellite peaks. One can also write the interaction term as a direct product in momentum domain,
\[\sum_{k^{\prime}}V^{\rm{xc}}(k-k^{\prime},t)G(k^{\prime},t)=\] \[V^{\rm{xc}}(0,t)G(k,t)+Y(k,t)G(k,t), \tag{49}\]
which gives explicitly the solution for the Green function:
\[G(k,t>0)=G(k,0^{+})e^{-i\varepsilon_{k}t} e^{-i\int_{0}^{t}dt^{\prime}V^{\rm{xc}}(0,t^{\prime})}\] \[\times e^{-i\int_{0}^{t}dt^{\prime}Y(k,t^{\prime})}, \tag{50a}\] \[G(k,t<0)=G(k,0^{-})e^{-i\varepsilon_{k}t} e^{i\int_{t}^{0}dt^{\prime}V^{\rm{xc}}(0,t^{\prime})}\] \[\times e^{i\int_{t}^{0}dt^{\prime}Y(k,t^{\prime})}. \tag{50b}\]
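In practice, Eq. (50a) can be evaluated numerically once \(V^{\rm xc}(0,t^{\prime})\) and \(Y(k,t^{\prime})\) are known on a time grid; the Python sketch below shows the cumulative-integral construction with toy inputs (a constant \(V^{\rm xc}(0,t)\) and a single-frequency \(Y(k,t)\)), which are assumptions standing in for actual cluster data.

```python
import numpy as np

# Numerical evaluation of Eq. (50a): once Vxc(0,t') and Y(k,t') are
# known on a time grid, the exponents are cumulative time integrals.
t = np.linspace(0.0, 20.0, 4001)
dt = t[1] - t[0]

# Toy inputs standing in for cluster data: a constant Vxc(0,t) and a
# single-frequency Y(k,t) (cf. the quasiparticle picture in the text).
eps_k = -2.0 * np.cos(0.5)            # kinetic energy at k = 0.5
Vxc0 = 2.36 * np.ones_like(t)         # alpha*U/2-like constant, assumed
Y_k = 0.5 * np.exp(-2.0j * t)         # one oscillating excitation, assumed

# Cumulative trapezoidal integrals of the two Vxc pieces.
I_V = np.concatenate(([0.0], np.cumsum(0.5 * (Vxc0[1:] + Vxc0[:-1]) * dt)))
I_Y = np.concatenate(([0.0], np.cumsum(0.5 * (Y_k[1:] + Y_k[:-1]) * dt)))

G0 = -1j   # G(k,0^+) = -i(1 - n_k), here with n_k = 0 assumed
G = G0 * np.exp(-1j * eps_k * t) * np.exp(-1j * I_V) * np.exp(-1j * I_Y)
print(G[:3])
```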
One can then use an \(N\)-site cluster with twisted boundary conditions [65] to parameterize \(G(k,0^{\pm})\), and the generalized Vxc in the momentum domain becomes
\[Z^{\rm{el}}(k,t):=V^{\rm{xc}}(0,t)+Y(k,t). \tag{51}\]
Figure 3: Dynamic structure factor of the 1D spin-\(\frac{1}{2}\) AFM Heisenberg lattice calculated with Vxc, with broadening 0.1. Top panel: weight factor \(G(k,t=0)\) taken as unity. Bottom panel: weight renormalised with the cluster \(G(k,0^{+})\). The blue dashed curves are the boundaries for two-spinon processes.
### Extrapolation from finite clusters
The Vxc of clusters with 6 and 8 sites and with periodic boundary conditions was computed using ED. The Hubbard \(U\) was chosen to be \(7.74\) with \(\Delta=1\), to allow for comparisons with previous work and the DDMRG results from the literature. In contrast to the dimer case, the cluster Vxc exhibits multiple sharp peaks as a function of time \(t\). Time snapshots of Vxc as a function of \(k\) are shown in Fig. 4. For \(t\simeq 0\), we have that \(V^{\rm xc}(k,t)\approx V^{\rm xc}(\pi-k,t)\), but this behavior does not persist during the time evolution. The particle-hole symmetry leads to \(V^{\rm xc}(k,-t)=-V^{\rm xc}(k,t)\), and the increase of cluster size from \(N=6\) to \(8\) does not change qualitatively the characteristics of \(V^{\rm xc}\) as a function of \(k\).
The dynamical properties of Vxc can be better illustrated through \(Z^{\rm el}\), the generalisation of Vxc in the momentum basis defined in Eq. (51). Due to degeneracy, \(Z^{\rm el}(-k,t)=Z^{\rm el}(k,t)\) and, because of particle-hole symmetry, \(Z^{\rm el}(k,-t)=-Z^{\rm el}(\pi-k,t)\). To improve the simulation of \(Z^{\rm el}\), we use a cluster with twisted boundary conditions, which provides a denser \(k\)-point sampling. The real part of \(Z^{\rm el}(k,t)\) with twisted boundary conditions is shown in Fig. 5: for small \(k\), it oscillates weakly in time (with small amplitude and long period). However, where the bandgap opens (\(k\to\frac{\pi}{2}\)), the oscillation of \({\rm Re}Z^{\rm el}\) is more evident. For \(k\to\pi\), \({\rm Re}Z^{\rm el}\) exhibits sharp peaks at certain times. The peaks can be both positive and negative: mathematically, this means that some of the zeros of the Green function are located where the interaction term (Eq. (46)) has nonzero finite (positive or negative) values. These spiky structures cannot be fitted by a weighted sum of several (but finite in number) oscillations, indicating that a model beyond the single-energy quasiparticle picture is necessary.
Provided with the numerically exact Vxc for \(N=6,8\) clusters, we reconsider the approximate scheme proposed in the previous work based on the Hubbard dimer (\(N=2\)) [62]. The dimer admits two \(k\)-points (\(k=0,\pi\)), with the corresponding approximate values for Vxc given by
\[V^{\rm xc}(k=0,t) \approx \frac{\alpha U}{2}, \tag{52a}\] \[V^{\rm xc}(k=\pi,t) \approx \frac{\alpha U}{2}(1-\alpha^{2})e^{-i2\Delta t}. \tag{52b}\]
Here, the constant \(\alpha\) depends only on \(\frac{U}{\Delta}\) (the explicit dependence relation is shown in Appendix C together with a summary of the properties of the Vxc obtained from the Hubbard dimer) and \(2\Delta\) in the exponential represents the main excitation energy. In what follows, we use Eqs. (52a) and (52b) to compute the hole part of the spectral function, with the particle part obtainable via the particle-hole symmetry \(A^{\rm e}(k,\omega)=A^{\rm h}(\pi-k,-\omega)\). When \(|k|\leq\frac{\pi}{2}\), the hole part of the Green function given by the dimer model is
\[G^{\rm h}(k,\omega)=\frac{1}{\omega-\omega_{k}^{\rm h}-i\eta}[1-\mathcal{V}^{\rm xc}(\omega)] \tag{53}\] \[\mathcal{V}^{\rm xc}(\omega)=\frac{1}{N}\sum_{k^{\prime}}^{\rm occ}\frac{V^{\rm xc}(\pi,0)}{\omega-[\varepsilon_{k^{\prime}}-V^{\rm xc}(0)-2\Delta]-i\eta}, \tag{54}\]

where \(\eta\) is a broadening factor. The spectral function of \(G^{\rm h}\) has a main peak at \(\omega_{k}^{\rm h}\), determined by \(V^{\rm xc}(k=0)\) and by the kinetic energy: \(\omega_{k}^{\rm h}=\varepsilon_{k}-V^{\rm xc}(0)\). The term \(\mathcal{V}^{\rm xc}(\omega)\) gives rise to a continuous satellite region. Its relative weight to the main peak is \(V^{\rm xc}(\pi,0)\), and its lower/upper boundaries are given by the minimum/maximum occupied-state kinetic energy
\[\omega_{k}^{\rm h,lower} = \varepsilon_{0}-V^{\rm xc}(0)-2\Delta \tag{55a}\] \[\omega_{k}^{\rm h,upper} = \varepsilon_{\frac{\pi}{2}}-V^{\rm xc}(0)-2\Delta. \tag{55b}\]
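The following Python sketch assembles the dimer-model hole spectral function from Eqs. (52)-(55); the chosen \(k\)-point, grid, and broadening are assumptions for illustration, and the main peak of the resulting \(A^{\rm h}\) should sit near \(\omega^{\rm h}_{k}=\varepsilon_{k}-V^{\rm xc}(0)\) (it may shift slightly where the satellite continuum overlaps the main pole).

```python
import numpy as np

# Dimer-model hole spectral function, Eqs. (53)-(55): a main pole at
# eps_k - Vxc(0) plus a satellite continuum built from the occupied k'.
U, Delta, eta, N = 7.74, 1.0, 0.1, 64

kappa = 0.25 * (np.sqrt((U / Delta) ** 2 + 16) - U / Delta)
alpha = (1 - kappa) / (1 + kappa)
Vxc0 = alpha * U / 2                       # Eq. (52a)
Vxc_pi = alpha * U / 2 * (1 - alpha ** 2)  # Eq. (52b) at t = 0

k = 0.3 * np.pi                                # a k-point with |k| <= pi/2
k_occ = np.linspace(-np.pi / 2, np.pi / 2, N)  # occupied states
w = np.linspace(-10.0, 2.0, 3001)

w_main = -2 * Delta * np.cos(k) - Vxc0
V_of_w = np.zeros_like(w, dtype=complex)
for kp in k_occ:                               # satellite continuum, Eq. (54)
    pole = -2 * Delta * np.cos(kp) - Vxc0 - 2 * Delta
    V_of_w += (Vxc_pi / N) / (w - pole - 1j * eta)

G_h = (1 - V_of_w) / (w - w_main - 1j * eta)   # Eq. (53)
A_h = G_h.imag / np.pi   # hole part: +Im/pi with the -i*eta convention
print(w[np.argmax(A_h)], w_main)  # main peak should sit near w_main
```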
Figure 4: \(V^{\rm xc}(k)\) of the finite Hubbard ring, \(U=7.74,\Delta=1\), in units of \(U\). Top panel: real part. Bottom panel: imaginary part. \(V(k,t)=V(-k,t)\) and \(V(k,t)=-V(-k,-t)\).

The dimer model [62] managed to capture the main structure of the hole spectra of the Hubbard lattice, but can be improved in several aspects: the main peak position given by the model is just the kinetic energy \(\varepsilon_{k}=-2\Delta\cos k\) plus a constant determined by \(U\), while the true \(k\)-dependence of \(\omega^{\rm h}\) should be more complicated; the upper and lower boundaries of the satellite part given by the model are independent of \(k\), which is also an oversimplification. Rewriting Eq. (50b) in the spirit of a factorization into a main-peak and a satellite term,
\[G(k,t<0)=G(k,0^{-})e^{-i(\varepsilon_{k}+Z^{\rm h,main}_{k})t}\times\] \[e^{i\int_{t}^{0}dt^{\prime}Z^{\rm h,sat}(k,t^{\prime})}, \tag{56}\]

where \(Z^{\rm h,main}_{k}+Z^{\rm h,sat}(k,t)=Z^{\rm el}(k,t)\) for \(t<0\), one can see that: i) A momentum-dependent static term, \(Z^{\rm h,main}_{k}\), which is not present in the dimer model, together with \(\varepsilon_{k}\), determines the main peak; ii) the dispersion of \(\omega^{\rm h,lower}\) and \(\omega^{\rm h,upper}\) can be explained by the satellite term \(Z^{\rm h,sat}(k,t^{\prime})\). Compared with Fig. 5, \(Z^{\rm h,main}_{k}\) is seen to be the time-independent part around which \(Z^{\rm el}(k,t)\) oscillates; and \(Z^{\rm h,sat}(k,t)\) represents a series of excitation energies. The spike-like \({\rm Re}Z(k,t)\) for \(k\to 0,t<0\) is a consequence of multiple excitation energies and large satellite peaks, while the less oscillatory \({\rm Re}Z(k,t)\) for \(k\rightarrow\pi,t<0\) explains the lack of strong satellites of the hole spectral functions \(A^{\rm h}(k\rightarrow\pi,\omega)\).
Taking advantage of the physical picture given by the dimer model, we include the correction to the occupied \(k\) values by adding a set of momentum-dependent parameters, \(l_{1,2,3}\), such that i) \(\alpha\rightarrow\alpha l_{1}(k)\), ii) the main excitation determining the satellite boundaries (Eq. (55a), (55b)) becomes \(2\Delta\to 2\Delta l_{2}(k)\), and the effective kinetic energy in the summation of Eq. (53) becomes \(\varepsilon_{k^{\prime}}\rightarrow-2\Delta\cos k^{\prime}l_{3}(k)\). The parameterized dispersion relations of the key frequencies are
\[\omega^{\rm h}_{k}=-2\Delta\cos k-\frac{\alpha U}{2}l_{1}(k) \tag{57a}\] \[\omega^{\rm h,lower}_{k}=-2\Delta[l_{3}(k)+l_{2}(k)]-\frac{ \alpha U}{2}l_{1}(k)\] (57b) \[\omega^{\rm h,upper}_{k}=-2\Delta l_{2}(k)-\frac{\alpha U}{2}l_{1}(k). \tag{57c}\]
Thus, the hole part bandwidth for a given momentum, the satellite width, and the band gap respectively are
\[\omega^{\rm h}_{k}-\omega^{\rm h,lower}_{k}=2\Delta\Bigl{[}l_{2} (k)+l_{3}(k)-\cos k\Bigr{]}, \tag{58a}\] \[\omega^{\rm h,upper}_{k}-\omega^{\rm h,lower}_{k}=2\Delta l_{3}( k),\] (58b) \[E_{\rm g}=\alpha Ul_{1}(\frac{\pi}{2}). \tag{58c}\]
This means that the main peak location, the bandwidth, and the satellite region width from cluster calculations can be used to determine the parameters \(l_{1,2,3}\), which are then used to calculate the lattice spectral functions \(A(k,\omega)\) for \(k<\frac{\pi}{2}\). For \(k>\frac{\pi}{2}\), where the dimer model gives zero weight for the hole part spectrum, the cluster results show that the corresponding Vxc can be approximated with a single-energy excitation,
\[Z^{\rm el}(k>\frac{\pi}{2},t<0)\approx\mathcal{A}_{k}e^{-i\omega^{\rm el}_{k}t }+\mathcal{B}_{k}, \tag{59}\]
where the parameters \(\mathcal{A},\mathcal{B}\) and \(\omega^{\rm el}_{k}\) are estimated from the cluster results, similar to the treatment of the spinon Vxc (Eq. (39)). Combining the \(l_{1,2,3}\)-parameterized occupied region and the \(\mathcal{A},\mathcal{B},\omega^{\rm el}\)-parameterized unoccupied region, the hole part spectral function can now be calculated for the whole Brillouin zone. The spectral functions for selected \(k\) values are shown in Fig. 6.
Compared with the dimer model, the cluster Vxc-based parametrization improves the agreement with DDMRG in several aspects. Specifically, i) The missing weights for unoccupied \(k\) points appear when using as input a cluster Vxc. ii) The main peak positions (and thus the bandgap value as well) are more accurate. In fact, the bandgap value from the dimer model, \(\alpha U\), shows a discrepancy with the Bethe ansatz exact value at small \(U\), due to the lack of long-range screening effects. Using a cluster Vxc, however, removes the disagreement.
Figure 5: Real part of \(Z^{\rm el}(k,t)\) of finite Hubbard chain with twisted boundary condition, \(U=7.74,\Delta=1\), in the unit of \(U\). Left and middle panel: with shorter time scale and fewer \(k\)-points. \(N=8,6\), respectively. Right panel: \(N=6\), longer time and more \(k\)-points, peaks out of the color scale not shown. \(Z^{\rm el}(-k,t)=Z^{\rm el}(k,t)\) and \(Z^{\rm el}(k,-t)=-Z^{\rm el}(\pi-k,t)\). For a discussion of the negative peak in the middle panel, see the main text.
iii) Both boundaries and relative weight of the satellite structure are better described by the cluster Vxc and its momentum-basis generalization \(Z^{\rm el}\); iv) The total weight of the hole/electron part cannot be renormalized within the dimer model, because the non-interacting Green function used in the dimer model can only fix the total spectral weight: \(\int d\omega A^{\rm h}(k,\omega)=\theta(k_{F}-k)\). With a cluster Vxc, using \(\langle c_{k}^{\dagger}c_{k}\rangle\), we can rescale the total spectral weight for each \(k\) value.
Yet, the main peak positions \(\omega^{\rm h}_{k}\) in Fig. 6 are in general lower than those from DDMRG. This can be understood as due to the band gap narrowing upon increasing the number of sites (the eight-site cluster we used leads to an overestimation of the gap and thus of the main peak position).
We conclude our discussion of the Hubbard chain by considering its spectral functions in real space, which we obtain from those in the momentum domain:
\[A(r,\omega)=\frac{1}{2\pi}\int dkA(k,\omega)e^{ikr} \tag{60}\]
where \(r=0,1,2,\cdots\) is in units of the lattice parameter. \(A(r,\omega)\) describes the correlation strength between two points in space separated by \(r\), at a given energy \(\omega\). The local case \(A(r=0,\omega)\) corresponds to the density of states.
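A discretized version of Eq. (60) is straightforward; the sketch below (with a toy single-band \(A(k,\omega)\) as a stand-in for the Vxc result) approximates the \(k\)-integral on a uniform grid and recovers the local density of states at \(r=0\).

```python
import numpy as np

# Discretized version of Eq. (60): spatial spectral function from the
# momentum-resolved one on a uniform k-grid over (-pi, pi].
def A_r(A_k, k_grid, r):
    # Uniform-grid approximation of (1/2pi) * Int dk A(k,w) e^{ikr}.
    dk = k_grid[1] - k_grid[0]
    phase = np.exp(1j * k_grid * r)            # shape (Nk,)
    return (A_k * phase[:, None]).sum(axis=0) * dk / (2 * np.pi)

# Toy A(k, w): one Lorentzian per k following a cosine band.
Nk, eta = 64, 0.1
k_grid = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
w = np.linspace(-4.0, 4.0, 801)
band = -2 * np.cos(k_grid)
A_k = (eta / np.pi) / ((w[None, :] - band[:, None]) ** 2 + eta ** 2)

A0 = A_r(A_k, k_grid, r=0)   # local density of states
A1 = A_r(A_k, k_grid, r=1)   # nearest-neighbor correlation
print(A0.shape, np.trapz(A0.real, w))  # total local weight ~ 1
```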
Results for \(A(r,\omega)\) with an eight-site kernel are shown in Fig. 7, whilst those from a six-site kernel with different \(U\) and \(r\) are reported in the appendix. The cluster Vxc result for \(A(r=0,\omega)\) shows better agreement with DDMRG than the dimer model. Also, the nearest-neighbor (NN) spectral weight at positive energy is predominantly negative, and for \(r\geq 2\), \(A(r,\omega)\) exhibits nodal structures. Concerning the role of electronic correlations, spatial spectral functions with different \(U\) values become qualitatively alike at large repulsion (\(U>4\)), but the band gap value keeps increasing with \(U\). Finally, spectral functions calculated with eight-site and six-site kernels are qualitatively similar (see the appendix for the six-site case), with similarities in the overall shape and in the number of nodes. However, the estimated value of the band gap improves on increasing the cluster size.
Figure 7: Spatial spectral functions, calculated with the eight-site Vxc as kernel (for the six-site case, see the Appendix) for \(U=7.74,\Delta=1\) and broadening \(\eta=0.1\). 64 \(k\)-points are used to approximate the \(k\)-integral, according to the Chadi-Cohen method [66].
Figure 6: Momentum-resolved hole part spectral function \(A^{h}(k,\omega)\) for \(U=7.74,\Delta=1\). For \(k<\frac{\pi}{2}\), the parameters \(l_{1,2,3}\) are determined using the peak locations of the eight-site twisted boundary condition cluster spectrum. For \(k>\frac{\pi}{2}\), \(Z^{\rm el}\) of the eight-site twisted boundary condition cluster is used via Eq. (59) to calculate \(A^{h}\). Top (middle) panel: \(k\)-points chosen to compare with DDMRG results, without (with) renormalized weight. Bottom panel: the satellite structure is approximated with two peaks at the satellite region boundaries, in order to get clearer dispersion branches. The \(k\) values are \(\frac{\pi}{24}\times 0,1,2,\cdots,64\). The locations of the spinon branch (\(0<k<\pi/2,-3.5<\omega<-2\)), the holon branches, and the lower boundary of the holon-spinon continuum (\(\pi/2<k<\pi,\omega<-6\)) are close to the DDMRG result [63]. For the spinon branch, we have \(\omega(k=0)=-3.25\), which differs from the DDMRG result (\(\approx-3\)) because the finite cluster gives in general larger band gap. In all calculations, we set the broadening parameter \(\eta=0.1\).
## V Vxc from Hubbard and Heisenberg models: a comparative discussion
It is well known that the 1D spin-\(\frac{1}{2}\) AFM Heisenberg model becomes equivalent to the 1D half-filled Hubbard model in the large \(U\) regime [67; 68]. After having discussed Vxc in the two models separately, it can be useful to look at both models together using as perspective the behavior of Vxc in such limit. However, \(Z^{\rm el}\) and \(Z^{\rm sp}\) do not show a direct asymptotic behavior \(Z^{\rm el}\big{|}_{U\to\infty}=Z^{\rm sp}\), because they are coupled to the single-particle Green function (Eq. (45)) and the single spin-flip Green function (Eq. (12)), respectively. For the Hubbard model, the term corresponding to \(Z^{\rm sp}\) is coupled to the two-particle Green function \(\langle\mathcal{T}[\hat{c}^{\dagger}_{p\uparrow}(t)\hat{c}_{p\downarrow}(t)]\left[\hat{c}^{\dagger}_{q\downarrow}\hat{c}_{q\uparrow}\right]\rangle\). The equation of motion of a higher-order Green function would need to be solved for the Hubbard model to calculate the 'higher-order Vxc' that is comparable with the spinon Vxc under large repulsion. This means that the Vxc formalism for the Heisenberg model, having a similar sum rule (Eq. (22)), reduces the difficulty in deriving the equation of motion and improves the interpretability via the quasiparticle picture.
Instead of solving the higher-order Green function, we consider the more modest task of comparing the lattice ground state energies for the two models. In the large \(U\) limit [68],
\[\lim_{U\to\infty}\frac{E^{\rm Hub}_{0}}{N}=\frac{1}{U}(4\frac{E^{\rm Heis}_{0} }{N}-1) \tag{61}\]
where \(E^{\rm Hub}_{0}\) is the ground state energy of an \(N\)-site Hubbard ring with \(\Delta=1\), and \(E^{\rm Heis}_{0}\) is the ground state energy of an \(N\)-site AFM Heisenberg ring with \(J=-1\). Both energies can be calculated from the Green function via
\[\frac{E^{\rm Heis}_{0}}{N}=\frac{3}{2}\langle S^{+}_{1}(t=0^{+})S^{-}_{2}\rangle, \tag{62}\]
and
\[\frac{E^{\rm Hub}_{0}}{N}=-\big{[}2\langle\hat{c}^{\dagger}_{1\uparrow}\hat{c }_{2\uparrow}(t=0^{-})\rangle-i\partial_{t}\langle\hat{c}^{\dagger}_{1 \uparrow}\hat{c}_{1\uparrow}(0^{-})\rangle\big{]}. \tag{63}\]
In the frequency domain,
\[\frac{E^{\rm Heis}_{0}}{N} = \frac{3i}{4\pi}\int G^{\rm sp}(r=1,\omega)d\omega, \tag{64}\] \[\frac{E^{\rm Hub}_{0}}{N} = \frac{i}{2\pi}\int\Big{[}2G^{\rm el}(r=1,\omega)-\omega G^{\rm el }(r=0,\omega)\Big{]}d\omega.\]
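As a sanity check of Eq. (61), note that inserting the exact Bethe-ansatz Heisenberg energy per site, \(E^{\rm Heis}_{0}/N=1/4-\ln 2\) in the conventions used here (\(J=-1\)), reproduces the known large-\(U\) behavior \(E^{\rm Hub}_{0}/N\to-4\ln 2/U\); the short Python sketch below makes this explicit.

```python
import numpy as np

# Consistency check of the large-U mapping, Eq. (61), using the exact
# Bethe-ansatz ground-state energy per site of the Heisenberg chain.
e_heis = 0.25 - np.log(2.0)   # E0_Heis/N with J = -1, as in the text

for U in [20.0, 50.0, 200.0]:
    e_hub_mapped = (4.0 * e_heis - 1.0) / U   # Eq. (61)
    # Known large-U expansion of the half-filled Hubbard chain
    # (Delta = 1): E0/N -> -4 ln2 / U. Eq. (61) reproduces it since
    # 4*(1/4 - ln2) - 1 = -4 ln2.
    print(U, e_hub_mapped, -4.0 * np.log(2.0) / U)
```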
To perform a comparison, we compute the ground state energy of the Hubbard lattice in two ways: i) by directly using the electron Vxc at different \(U\) values, and ii) by calculating \(E^{\rm Heis}_{0}\) for a \(J=-1\) Heisenberg lattice with the spinon Vxc, to be then used in the effective \(E^{\rm Hub}_{0}\) of Eq. (61). The differences between the results from these two prescriptions and the exact Bethe ansatz solution are shown in Fig. 8. The \(E_{0}\) results from ED for a six-site ring are also shown as a reference.
For \(U<10\), the repulsion strength is not large enough for Eq. (61) to be valid, leading to a discrepancy between the total energies for the two lattice models. However, in this region, \(E^{\rm Hub,Vxc}_{0}\) (red dots) is already close to the exact Bethe ansatz value, and the difference gets smaller on increasing \(U\). For \(U>30\), the ED results for the two models converge, meaning that the large repulsion limit is reached. The Vxc-based energies \(E_{0}\) for the two models also converge to the exact Bethe ansatz value.
However, the effective Vxc-based Heisenberg result is rather accurate, with an absolute error of less than \(10^{-4}\): this can be understood as a result of i) using the two-spinon upper and lower boundaries in the extrapolation, and ii) adjusting the \(\mathcal{B}\) parameter from the cluster within the zero-spin-gap picture. In contrast, the Vxc-based Hubbard result is extrapolated without a good reference and is more affected by finite-size effects. Thus, the difference with the Bethe ansatz result is larger.
As an overall remark, the comparative analysis of this Section shows the versatility of the Vxc approach across different lattice models, with results that are consistent with trends and benchmarks from other methods.
## VI Conclusion and outlook
We have presented a novel exchange correlation potential (Vxc) formalism for the one-dimensional antiferromagnetic Heisenberg model, and derived a general new sum rule for spin systems. Our spin formulation is a tailored extension of a previously introduced general framework for many-body systems that include both charge and spin degrees of freedom. Together with the new formulation, we have also devised a procedure to obtain, from a Vxc extracted from small finite clusters, an extrapolation to the thermodynamic limit. This procedure to access Vxc, originally devised for spin systems, has also permitted us to revisit and improve the treatment of the half-filled one-dimensional Hubbard model, a system already considered in earlier work within the Vxc approach. For both the 1D AFM Heisenberg model and the 1D Hubbard model, the static exchange term of Vxc was derived and shown to exhibit model-specific properties. For the 1D AFM Heisenberg model, the static exchange term corresponds to the lower boundary of the two-spinon spectrum. For the Hubbard model, the local \(U\) leads to a constant \(V^{\rm x}\), which cancels the Hartree potential.

Figure 8: Difference between the exact Bethe ansatz \(\frac{E^{\rm Hub}_{0}}{N}\) and the Vxc-based results for i) the 1D Hubbard model and ii) the 1D AFM Heisenberg model, and the ED results for iii) a six-site Hubbard cluster and iv) a six-site Heisenberg cluster. For both models, Vxc is extrapolated from a six-site kernel.
For both models, the spectral functions calculated within the Vxc approach show favourable agreement with DDMRG and with experimental results. Furthermore, a single-energy quasiparticle picture can be used to explain the dynamics of the spinon Vxc for the 1D AFM Heisenberg model and the unoccupied/occupied part of the hole/electron Vxc for the 1D Hubbard model. Finally, we showed how the Vxc formalism captures the equivalence of the two models in the large \(U\) limit, by a comparative analysis via the lattice ground state energies.
In conclusion, our results indicate that the Vxc formalism provides an alternative way of calculating the single-particle Green function which is computationally cost-beneficial but also physically well defined. Looking forward, we plan to apply such a dimensionality- and interaction-insensitive scheme to models of increasing complexity and higher dimensionality. At the same time, we intend to explore ways to devise Vxc approximations with the goal of improving over the local-(spin)-density approximation of density functional theory, in a progression toward a first-principles implementation for real materials.
## VII Acknowledgments
F.A. gratefully acknowledges financial support from the Knut and Alice Wallenberg Foundation (KAW 2017.0061) and the Swedish Research Council (Vetenskapsrådet, VR 2021-04498_3). C.V. gratefully acknowledges financial support from the Swedish Research Council (VR 2017-03945 and 2022-04486).
## Appendix A Sum rule and exchange term of Heisenberg chain
The equation of motion of the Heisenberg model is
\[i\partial_{t}G_{pq}(t)+iF_{pq}(t)=2\delta_{pq}\delta(t)\langle S^{z}_{p}\rangle \tag{101}\]
where the interaction term is
\[F_{pq}(t)=-J\sum_{\delta}[\langle p,p+\delta;q\rangle-\langle p+\delta,p;q \rangle], \tag{102}\]
and
\[\langle l,p;q\rangle := \langle{\cal T}\hat{S}^{z}_{l}(t^{+})\hat{S}^{+}_{p}(t)\hat{S}^{ -}_{q}(0)\rangle. \tag{103}\]
The correlator \(g_{lpq}(t)\) and the exchange-correlation hole are defined to fulfill:
\[\langle l,p;q\rangle = iG_{pq}(t)g_{lpq}(t)\langle\hat{S}^{z}_{l}\rangle \tag{104}\] \[\rho^{\rm xc}_{lpq}(t)iG_{pq}(t) = -\langle l,p;q\rangle+\langle\hat{S}^{z}_{l}\rangle iG_{pq}(t)\] (105) \[\rho^{\rm xc}_{lpq}(t) = -\big{[}g_{lpq}(t)-1\big{]}\langle\hat{S}^{z}_{l}\rangle. \tag{106}\]
For \(t>0\),
\[\sum_{l}\langle l,p;q\rangle=S^{z}iG_{pq}(t), \tag{107}\]
and for \(t<0\),
\[\sum_{l}\langle l,p;q\rangle = \sum_{l}\Big{[}\langle\hat{S}^{-}_{q}(0)\hat{S}^{+}_{p}(t)\hat{S}^{z}_{l}(t)\rangle+\langle\hat{S}^{-}_{q}(0)\hat{S}^{+}_{p}(t)\rangle\delta_{lp}\Big{]} \tag{108}\] \[= (1+S^{z})iG_{pq}(t).\]
Eq. (107) and (108) can be written in a compact form as
\[\sum_{l}\langle l,p;q\rangle=\big{[}\theta(-t)+S^{z}\big{]}iG_{pq}(t). \tag{109}\]
Therefore the correlator fulfills
\[\sum_{l}iG_{pq}(t)\big{[}g_{lpq}(t)-1\big{]}\langle\hat{S}^{z}_{l}\rangle = \sum_{l}\langle l,p;q\rangle-\sum_{l}\langle\hat{S}^{z}_{l}\rangle iG_{pq}(t) \tag{110}\] \[= \theta(-t)iG_{pq}(t),\]
from which the sum rule can be retrieved:
\[\sum_{l}\rho^{\rm xc}_{lpq}(t)=-\theta(-t). \tag{111}\]
The exchange term of spinon Vxc can be derived from the variational method
\[\frac{\delta G_{pq}(t)}{\delta\varphi_{l}(t^{+})}=-\langle l,p;q\rangle+\langle\hat{S}^{z}_{l}\rangle iG_{pq}(t), \tag{112}\]

so that the interaction term can be written as
\[F_{pq}(t)=-J\sum_{\delta}\Big{[}\frac{\delta G_{pq}(t)}{\delta \varphi_{p+\delta}(t^{+})}-\frac{\delta G_{p+\delta,q}(t)}{\delta\varphi_{p}(t ^{+})}\] \[+\langle S^{z}_{p}\rangle iG_{p+\delta,q}(t)-\langle S^{z}_{p+ \delta}\rangle iG_{pq}(t)\Big{]} \tag{113}\]
According to the definition of Vxc,
\[V^{\rm xc}_{pp,qq}(t)iG_{pq}(t)=-J\sum_{\delta}\Big{[}\frac{\delta G_{pq}(t)}{\delta\varphi_{p+\delta}(t^{+})}-\frac{\delta G_{p+\delta,q}(t)}{\delta\varphi_{p}(t^{+})}\Big{]} \tag{114}\]
and
\[V_{p}^{\rm H} = J\sum_{\delta}\langle S_{p+\delta}^{z}\rangle, \tag{15}\] \[V_{p}^{\rm F} = -J\langle S_{p}^{z}\rangle, \tag{16}\]
the equation of motion can be rewritten as
\[[i\partial_{t}-V_{p}^{\rm H}]G_{pq}(t)-\sum_{\delta}V_{p}^{\rm F}G_{p+\delta,q}(t)\] \[-V_{pp,qq}^{\rm x}(t)G_{pq}(t)=2\delta_{pq}\delta(t)\langle S_{p}^{z}\rangle. \tag{17}\]
Figure 9: Spatial spectral function of the Hubbard chain, calculated with a six-site kernel.
Considering
\[\frac{\delta G(1,2)}{\delta\varphi(3)}=-\int d4d5G(1,4)\frac{\delta G^{-1}(4,5)}{ \delta\varphi(3)}G(5,2) \tag{101}\]
and the lowest order of the vertex function is
\[\frac{\delta G^{-1}(4,5)}{\delta\varphi(3)}=-\delta(4-5)\delta(4-3), \tag{102}\]
one gets the exchange part
\[V^{\rm x}_{pp,qq}(t)iG_{pq}(t)=-J\times\] \[\sum_{\delta=\pm 1}\Big{[}G_{p,p+\delta}(0^{-})G_{p+\delta,q}(t)-G_{p +\delta,p}(0^{-})G_{pq}(t)\Big{]}. \tag{103}\]
## Appendix B Analytic Vxc of four-site Heisenberg chain
To compute the Green function for positive time,
\[G_{pq}(t) = \langle\Psi|e^{iHt}\hat{S}^{+}_{p}e^{-iHt}\hat{S}^{-}_{q}|\Psi\rangle \tag{104}\]
where \(|\Psi\rangle\) is the ground state, one needs to use a complete set of eigenstates \(|n\rangle\) which give nonzero weight elements \(\langle n|\hat{S}^{-}_{q}|\Psi\rangle\). For an even number of sites and AFM coupling, the total z-spin of \(|\Psi\rangle\) is zero, which means that the states \(\{|n\rangle\}\) are in the \(S^{z}=-1\) sector. Labeling the eigenenergy corresponding to state \(|n\rangle\) with \(E^{-}_{n}\), the Green function can be written as
\[G_{pq}(t>0) =\sum_{n}e^{-i(E^{-}_{n}-E^{0})t}\langle\Psi|\hat{S}^{+}_{p}|n \rangle\langle n|\hat{S}^{-}_{q}|\Psi\rangle, \tag{105}\]
and the high order term for positive time is
\[\langle l,p;q\rangle_{t>0}=\sum_{n}e^{-i(E^{-}_{n}-E^{0})t}\langle\Psi|\hat{S} ^{z}_{l}\hat{S}^{+}_{p}|n\rangle\langle n|\hat{S}^{-}_{q}|\Psi\rangle. \tag{106}\]
By diagonalizing the Hamiltonian in the \(S^{z}=0\) and \(S^{z}=-1\) sectors, one respectively gets \(\{|\Psi\rangle;E^{0}\}\) and \(\{|n\rangle;E^{-}_{n}\}\), and thus the weight elements \(\langle n|\hat{S}^{-}_{q}|\Psi\rangle\) and \(\langle n|\hat{S}^{-}_{p}\hat{S}^{z}_{l}|\Psi\rangle\). Out of the four states \(|n\rangle\), only three give nonzero \(\langle n|\hat{S}^{-}_{q}|\Psi\rangle\). Explicitly, the time factors are
\[f_{1} = e^{-i(E^{-}_{0}-E^{0})t}=e^{iJ(\frac{\sqrt{3}-\sqrt{2}+1}{2})t}, \tag{107}\] \[f_{2} = e^{-i(E^{-}_{1}-E^{0})t}=e^{iJ(\frac{\sqrt{3}+1}{2})t},\] (108) \[f_{3} = e^{-i(E^{-}_{2}-E^{0})t}=e^{iJ(\frac{\sqrt{3}+\sqrt{2}+1}{2})t}. \tag{109}\]
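These excitation frequencies can be reproduced by a few lines of ED; the Python sketch below (assuming an open four-site chain, consistent with the irrational frequencies above) diagonalizes the \(S^{z}=0\) and \(S^{z}=-1\) sectors and prints the differences \(E^{-}_{n}-E^{0}\).

```python
import numpy as np

# ED check of the four-site open Heisenberg chain: the excitation
# energies E_n^- - E^0 entering the time factors f_{1,2,3}.
# Conventions as in the text: H = -J * sum_p S_p.S_{p+1} with J = -1.
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = sp.T                                  # S^-
sz = 0.5 * np.diag([1.0, -1.0])            # S^z
I2 = np.eye(2)

def op(single, site, n=4):
    # Embed a single-site operator at `site` in the 2^n-dim chain space.
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

J = -1.0
H = np.zeros((16, 16))
for p in range(3):      # open chain: bonds (1,2), (2,3), (3,4)
    H += -J * (op(sz, p) @ op(sz, p + 1)
               + 0.5 * (op(sp, p) @ op(sm, p + 1)
                        + op(sm, p) @ op(sp, p + 1)))

sz_diag = np.diag(sum(op(sz, p) for p in range(4)))  # diagonal basis

def sector(m):
    # Restrict H to the total-S^z = m block and diagonalize it.
    idx = np.where(np.isclose(sz_diag, m))[0]
    return np.linalg.eigvalsh(H[np.ix_(idx, idx)])

E0 = sector(0).min()                 # singlet ground state
print(np.sort(sector(-1)) - E0)      # S^z = -1 excitation energies
print((np.sqrt(3) - np.sqrt(2) + 1) / 2,   # f1 frequency
      (np.sqrt(3) + 1) / 2,                # f2 frequency
      (np.sqrt(3) + np.sqrt(2) + 1) / 2)   # f3 frequency
```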
The independent elements of \(V^{\rm xc}\) in the orbital basis can be calculated with \(V^{\rm xc}_{pq}(t)=\frac{F_{pq}(t)}{iG_{pq}(t)}\):
\[V^{\rm xc}_{11} = -J\frac{(\frac{(xy+x)(xy+x+2y)}{a^{2}_{+}})f_{1}+(x^{2}+x)f_{2}+(\frac{(xy-3x)(xy-3x+2y-4)}{a^{2}_{-}})f_{3}}{(\frac{xy+x+2y}{a_{+}})^{2}f_{1}+x^{2}f_{2}+(\frac{xy-3x+2y-4}{a_{-}})^{2}f_{3}} \tag{110}\] \[V^{\rm xc}_{22} = -J\frac{(\frac{2(x+1)(xy+x+2)}{a^{2}_{+}})f_{1}+(x^{2}+x)f_{2}+(\frac{(2x+1)(xy-3x-2)}{a^{2}_{-}})f_{3}}{(\frac{xy+x+2y}{a_{+}})^{2}f_{1}+x^{2}f_{2}+(\frac{xy-3x-2}{a_{-}})^{2}f_{3}}\] (111) \[V^{\rm xc}_{12} = -J\frac{(-\frac{(xy+x)(xy+x+2)}{a^{2}_{+}})f_{1}-(x^{2}+x)f_{2}-(\frac{(xy-3x)(xy-3x-2)}{a^{2}_{-}})f_{3}}{(-\frac{(xy+x+2y)(xy+x+2)}{a^{2}_{+}})f_{1}-x^{2}f_{2}-(\frac{(xy-3x+2y-4)(xy-3x-2)}{a^{2}_{-}})f_{3}}\] (112) \[V^{\rm xc}_{13} = -J\frac{(\frac{(xy+x)(xy+x+2)}{a^{2}_{+}})f_{1}-(x^{2}+x)f_{2}+(\frac{(xy-3x)(xy-3x-2)}{a^{2}_{-}})f_{3}}{(\frac{(xy+x+2y)(xy+x+2)}{a^{2}_{+}})f_{1}-x^{2}f_{2}+(\frac{(xy-3x+2y-4)(xy-3x-2)}{a^{2}_{-}})f_{3}}\] (113) \[V^{\rm xc}_{14} = -J\frac{-(\frac{(xy+x)(xy+x+2y)}{a^{2}_{+}})f_{1}+(x^{2}+x)f_{2}-(\frac{(xy-3x)(xy-3x+2y-4)}{a^{2}_{-}})f_{3}}{-(\frac{xy+x+2y}{a_{+}})^{2}f_{1}+x^{2}f_{2}-(\frac{xy-3x+2y-4}{a_{-}})^{2}f_{3}}\] (114) \[V^{\rm xc}_{23} = -J\frac{-(\frac{2(x+1)(xy+x+2)}{a^{2}_{+}})f_{1}+(x^{2}+x)f_{2}-(\frac{(x+1)(xy-3x-2)}{a^{2}_{-}})f_{3}}{-(\frac{xy+x+2}{a_{+}})^{2}f_{1}+x^{2}f_{2}-(\frac{xy-3x-2}{a_{-}})^{2}f_{3}}. \tag{115}\]
where the constant factors \(x,y\) and \(a_{\pm}\) are defined in the main text. The terms in the 'bonding-antibonding' basis are then
\[V^{\rm xc}_{BB,BB}=\frac{1}{16}\Big{[}2(V^{\rm xc}_{11}+V^{\rm xc}_{14}+V^{\rm xc}_{22}+V^{\rm xc}_{23})+4(V^{\rm xc}_{12}+V^{\rm xc}_{13})\Big{]} \tag{116}\]

\[V_{BC,CB}^{\rm xc} = \frac{1}{16}\Big{[}2(V_{11}^{\rm xc}-V_{14}^{\rm xc}+V_{22}^{\rm xc}-V_{23}^{\rm xc})+4(V_{12}^{\rm xc}-V_{13}^{\rm xc})\Big{]} \tag{14}\] \[V_{BA,AB}^{\rm xc} = \frac{1}{16}\Big{[}2(V_{11}^{\rm xc}-V_{14}^{\rm xc}+V_{22}^{\rm xc}-V_{23}^{\rm xc})-4(V_{12}^{\rm xc}-V_{13}^{\rm xc})\Big{]}\] (15) \[V_{BD,DB}^{\rm xc} = \frac{1}{16}\Big{[}2(V_{11}^{\rm xc}+V_{14}^{\rm xc}+V_{22}^{\rm xc}+V_{23}^{\rm xc})-4(V_{12}^{\rm xc}+V_{13}^{\rm xc})\Big{]} \tag{16}\]
Appendix C The dependence of the \(\alpha\) parameter on \(\frac{U}{\Delta}\) in the Hubbard dimer model
The equations in this subsection are rewritten from the Hubbard dimer work [62]. With a two-site open chain, the half-filled Hubbard Hamiltonian Eq. (44) can be solved analytically, giving the analytic bonding (\(k=0\)) and anti-bonding (\(k=\pi\)) Vxc:
\[V^{\rm xc}(k=0,t>0)=\frac{\alpha U}{2}\frac{1-\alpha^{2}e^{-i4\Delta t}}{1- \alpha^{4}e^{-i4\Delta t}} \tag{17a}\] \[V^{\rm xc}(k=\pi,t>0)=\frac{\alpha U}{2}\frac{(1-\alpha^{2})e^{-i2\Delta t}}{1- \alpha^{4}e^{-i4\Delta t}}, \tag{17b}\]
where \(\alpha=\frac{1-\kappa}{1+\kappa}\), and \(\kappa=\frac{1}{4}\Big{(}\sqrt{(\frac{U}{\Delta})^{2}+16}-\frac{U}{\Delta}\Big{)}\). After neglecting the higher excitation term \(e^{-i4\Delta t}\) in Eq. (17), the approximate dimer Vxc in the main text (Eq. (52)) is obtained.
|
2310.14200 | Dynamic Resource Management in CDRT Systems through Adaptive NOMA | This paper introduces a novel adaptive transmission scheme to amplify the
prowess of coordinated direct and relay transmission (CDRT) systems rooted in
non-orthogonal multiple access principles. Leveraging the maximum ratio
transmission scheme, we seamlessly meet the prerequisites of CDRT while
harnessing the potential of dynamic power allocation and directional antennas
to elevate the system's operational efficiency. Through meticulous derivations,
we unveil closed-form expressions depicting the exact effective sum throughput.
Our simulation results adeptly validate the theoretical analysis and vividly
showcase the effectiveness of the proposed scheme. | Hongjiang Lei, Mingxu Yang, Ki-Hong Park, Nasir Saeed, Xusheng She, Jianling Cao | 2023-10-22T06:29:18Z | http://arxiv.org/abs/2310.14200v1 | # Dynamic Resource Management in CDRT Systems through Adaptive NOMA
###### Abstract
This paper introduces a novel adaptive transmission scheme to amplify the prowess of coordinated direct and relay transmission (CDRT) systems rooted in non-orthogonal multiple access principles. Leveraging the maximum ratio transmission scheme, we seamlessly meet the prerequisites of CDRT while harnessing the potential of dynamic power allocation and directional antennas to elevate the system's operational efficiency. Through meticulous derivations, we unveil closed-form expressions depicting the exact effective sum throughput. Our simulation results adeptly validate the theoretical analysis and vividly showcase the effectiveness of the proposed scheme.
Coordinated direct and relay transmission, non-orthogonal multiple access, dynamic power allocation, outage probability.
## I Introduction
### _Background and Related Work_
In the era of the Internet of Things (IoT), addressing the pressing challenge of managing scarce spectrum resources while facilitating seamless connectivity for a multitude of devices has emerged as a critical concern. Among the various techniques, non-orthogonal multiple access (NOMA) has surfaced as a promising solution. This approach leverages superimposed coding at the transmitter and employs successive interference cancellation (SIC) at the receiver, effectively catering to distinct rate and delay requirements [1]. Cooperative NOMA (CNOMA) technology, proven and prolific, stands out for its capacity to significantly extend network coverage and elevate overall system performance [2, 3, 4]. Building on this foundation, Kim _et al._ explored CNOMA systems, yielding incisive closed-form expressions for the ergodic sum rate (ESR) [5]. Their analytical revelations underscored the superior spectral efficiency (SE) intrinsic to CNOMA systems equipped with a dedicated relay, positioning them ahead of conventional cooperative systems. Furthermore, Luo _et al._ delved into the realm of CNOMA systems augmented by a buffer-assisted relay [6]. Their work introduced an adaptive transmission scheme designed to maximize the cumulative throughput. In this pursuit, they derived expressions encapsulating the essence of the effective sum throughput (EST), unraveling new vistas for performance optimization. Moreover, the work in [7] introduced an underlay CNOMA paradigm. At its core lies the capability of a full-duplex near-user to serve as a conduit for relaying signals to a distant user. Herein, adaptive strategies, spanning cooperative and non-cooperative modes along with transmit antenna selection, were proposed to get better performance. Deftly, the authors derived expressions illuminating the outage probability (OP) and the contours of the EST, further underlining the potency of this innovative approach.
Additionally, multi-antenna technology can significantly increase the CNOMA system's performance due to beamforming array gain [8]. In this context, [9] navigated through the intricacies of a CNOMA configuration embellished with a multi-antenna relay. Notably, the study encompassed the derivation of closed-form expressions for both the exact and lower bound of the OP, tailored explicitly for scenarios where the relay's transmit antenna was thoughtfully selected. Meanwhile, Han _et al._ delved into multi-antenna satellite cooperative systems, investigating fixed and variable gain relay schemes, all within the backdrop of imperfect SIC [10]. They derived analytical expressions encapsulating the exact and asymptotic OP. Moreover, Lv _et al._ investigated the potential of cooperative NOMA domain infused with multi-antenna two-way relays. Notably, their work yielded two novel cooperative schemes: multiple-access NOMA and time-division NOMA [11]. Their work involved ingenious transmission paradigms, wherein antenna and relay were inextricably interwoven through joint selection. Importantly, this holistic study culminated in formulating expressions that elegantly encapsulated both the exact and asymptotic OP, adding further depth to the understanding of these configurations.
NOMA-based coordinated direct and relay transmission (CDRT) has garnered substantial attention as a promising strategy to bolster system capacity [12]-[19]. Notably distinct from CNOMA, the CDRT scheme capitalizes on the full range of SIC results at the near-user. This attribute effectively curtails interference from the relay to the near-user during the second time slot, thus augmenting the efficiency of parallel link transmission. This configuration efficiently enhances system SE by transmitting multiple signals to NOMA users on the same resource block. An early stride into this work was taken with the introduction of a downlink CDRT system
[12], where analytical expressions for the OP and ESR were elucidated. Notably, this work showcased that CDRT systems' ESR surpasses conventional CNOMA systems. A subsequent investigation by Liu _et al._ explored the outage performance of a satellite-based CDRT system, culminating in closed-form expressions capturing the exact and asymptotic OP [13]. Meanwhile, Zou _et al._ ventured into device-to-device CDRT systems, deriving the closed-form ESR expression and revealing its dependence on the relay-near user distance and power allocation [14]. The exploration further intensified with a dynamic transmission scheme proposed by Xu _et al._, where near-users alternated between forward and receive modes based on the first slot decoding outcomes [15]. Their insights led to closed-form expressions encompassing exact and asymptotic OP and ESR, while power allocation coefficients were meticulously optimized. Towards a holistic design, Xu _et al._ introduced a physical layer network coding-infused CDRT scheme, fostering joint uplink and downlink transmission advancements [16]. The outcome was the derivation of closed-form expressions capturing the OP, EST, and ESR for scenarios embracing perfect and imperfect SIC. Shifting focus towards finite block lengths, Yuan _et al._ delved into CDRT system performance, yielding an approximate expression for the EST alongside an optimized power allocation coefficient [17]. With direct links, amplify-and-forward (AF), and decode-and-forward (DF) relays, Anand _et al._ embarked on an intricate investigation [18]. Their findings culminated in closed-form expressions for exact and asymptotic OP and EST, orchestrating an optimization interplay involving power allocation coefficients and rate thresholds to achieve maximal EST while preserving far-user quality of service. In incremental AF relay-assisted CDRT systems, Anand _et al._ traversed the domain of imperfect SIC [19]. Their findings underscored the potency of incremental signaling and dual combining at the far-user and near-user, harmoniously enriching throughput and energy efficiency.
### _Motivation and Contributions_
Upon delving into the array of existing studies, it becomes apparent that the performance of CDRT systems has been thoroughly examined across various scenarios. However, a conspicuous gap remains in the realm of investigating beamforming techniques and dynamic power allocation (DPA) strategies within the CDRT framework, serving as the impetus for the current paper. It's important to underscore the foundational premise of the CDRT system, which hinges upon the successful decoding of signals by the near-user for the edge-user [20]. If this pivotal condition is not met, the relay's interference can hinder the parallel transmission's effectiveness. Therefore, ensuring that the near-user successfully decodes the edge-user's signals is a critical challenge within the CDRT framework. In this study, we propose an innovative adaptive NOMA-based CDRT scheme that dynamically adjusts the transmit power at the transmitter to guarantee the relay's successful decoding of signals intended for the edge-user. This strategy is underpinned by the rationale that the near-user is closer to the transmitter than the relay; hence, whenever the transmit power is tuned so that the relay accurately decodes the edge-user's information, the near-user exhibits an elevated likelihood of successful signal decoding. Notably, this scheme can improve edge-user performance, given that the edge-user's signal-to-noise ratio (SNR) is contingent on the SNRs of both transmission hops. Moreover, to further enrich the near-user experience quality, we harness a beamforming scheme that optimally directs signal propagation. Simultaneously, we employ directional antenna transmission to mitigate interference at the near user's end. The main contributions of this work are summarized as follows.
1. An adaptive CDRT scheme is proposed to enhance performance through DPA and beamforming schemes. More specifically, the maximum ratio transmission (MRT) scheme is utilized, and the transmitter sends superimposed signals with a DPA scheme to ensure that the relay can successfully decode the signals for the edge-user. Meanwhile, a beamforming strategy is adopted to increase the probability that the center-user successfully decodes the target signal.
2. The closed-form expressions of the exact EST for the proposed scheme are derived. To attain more insights, we adopt the single/multiple-antenna-based CDRT schemes with fixed power allocation (FPA) and the beamforming-CDRT scheme between the transmitter and relay, which enhances the quality of the edge-user's information, as benchmarks to show that the proposed scheme achieves superior EST performance. Then, the effect of parameters such as the distance between the transmitter and the near-user/relay and the rate threshold on the EST is analyzed. Monte Carlo simulation results are provided to prove the accuracy of the derived analytical expressions.
3. Relative to [17], wherein different scenarios in which the relay can or cannot decode the signals for the edge-user were considered, the condition for successful decoding was not examined and no scheme was proposed to deal with these events. In this work, the power allocation coefficient is designed to ensure that the relay successfully decodes the information for the edge-user, and beamforming technology is utilized to improve system performance. Moreover, the closed-form expressions of the exact EST are derived.
4. Relative to [21], wherein the transmitter utilizes beamforming technology to improve the performance of the downlink NOMA system and the closed-form expressions for the approximate and asymptotic block error rate and ESR are derived, we study the downlink CDRT system with a DPA strategy, and closed-form expressions of the exact OP and EST are derived. Technically speaking, it is much more challenging to analyze the performance of the CDRT system than that of the NOMA system.
### _Organization_
The rest of this work is organized as follows. Section II describes the system model. The EST of the considered systems is analyzed in Section III. Section IV presents the
numerical and simulation results, and this work is concluded in Section V.
## II System Model
Fig. 1 illustrates a NOMA-based CDRT system consisting of a transmitter \((S)\), a center-user \((U_{1})\), and an edge-user \((U_{2})\), where all the nodes are equipped with a single antenna unless otherwise stated. There is no direct link between \(S\) and \(U_{2}\) due to deep fading and shadowing, and the communication link between \(S\) and \(U_{2}\) must be deployed via a DF relay \((R)\). Moreover, all the wireless links are assumed to experience quasi-static independent Rayleigh fading, and all nodes operate in the half-duplex mode. To simplify the analysis, subscripts \(s\), \(r\), 1, and 2 denote \(S\), \(R\), \(U_{1}\), and \(U_{2}\), respectively. The channel coefficient and average channel gain between \(p\) and \(q\) are denoted by \(h_{p,q}\) and \(\lambda_{p,q}=d_{p,q}^{-\alpha}\) for \(p,q\in\{{\rm s},{\rm r},1,2\}\) (\(p\neq q\)), where \(d_{p,q}\) denotes the distance between \(p\) and \(q\) and \(\alpha\) signifies the path loss exponent, respectively. Furthermore, the distances are assumed to be \(d_{\rm s,r}<d_{\rm s,2}\)[12].
The transmission block is divided into two equal time slots. In the first time slot \((t_{1})\), \(S\) broadcasts a superimposed signal, \(x_{\rm s}=\sqrt{a_{1}P_{\rm s}}x_{1}+\sqrt{a_{2}P_{\rm s}}x_{2}\), where \(x_{i}\) and \(a_{i}\) denote the signal and power allocation coefficient for \(U_{i}\), respectively, \(i=1,2\), \(a_{1}+a_{2}=1\), and \(P_{\rm s}\) denotes the transmit power at \(S\). In the second time slot \((t_{2})\), \(R\) decodes and forwards \(x_{2}\) with power \(P_{\rm r}\). Simultaneously, \(S\) transmits a new signal \(x_{3}\) to \(U_{1}\).
### _Dynamic Power when \(S\) is equipped with a Single Antenna_
The received signal at \(U_{1}\) is expressed as
\[y_{1}^{t_{1}}=h_{\rm s1}x_{\rm s}+n_{1}^{t_{1}}, \tag{1}\]
where \(n_{1}^{t_{1}}\) signifies the additive white Gaussian noise (AWGN) with zero mean and variance \(\sigma^{2}\). Subsequently, \(U_{1}\) utilizes SIC detection following the decoding order of \(x_{2}\to x_{1}\) to obtain better performance on \(x_{1}\) and the corresponding achievable rate of \(x_{2}\) is expressed as
\[R_{1}^{x_{2}}=\frac{1}{2}\ln\left(1+\gamma_{1}^{x_{2}}\right), \tag{2}\]
where \(\gamma_{1}^{x_{2}}=\frac{\rho_{\rm s}a_{2}Y_{1}}{1+\rho_{\rm s}a_{1}Y_{1}}\), \(Y_{1}=\left|h_{\rm s,1}\right|^{2}\), and \(\rho_{\rm s}=\frac{P_{\rm s}}{\sigma^{2}}\) denotes the normalized power at \(S\). Thus, the corresponding achievable rate of \(x_{1}\) is expressed as
\[R_{1}^{x_{1}}=\frac{1}{2}\ln\left(1+\gamma_{1}^{x_{1}}\right),\;{\rm when}\; \;R_{1}^{x_{2}}\geq R_{\rm th}^{x_{2}}, \tag{3}\]
where \(\gamma_{1}^{x_{1}}=\rho_{\rm s}a_{1}Y_{1}\) and \(R_{\rm th}^{x_{2}}\) signifies the target rate threshold for \(x_{2}\).
Meanwhile, \(x_{2}\) is directly decoded at \(R\) and the achievable rate is expressed as
\[R_{\rm r}^{x_{2}}=\frac{1}{2}\ln\left(1+\gamma_{\rm r}^{x_{2}}\right), \tag{4}\]
where \(\gamma_{\rm r}^{x_{2}}=\frac{\rho_{\rm s}a_{2}X_{1}}{1+\rho_{\rm s}a_{1}X_{1}}\) and \(X_{1}=\left|h_{\rm s,r}\right|^{2}\).
In the second time slot, \(R\) decodes and forwards \(x_{2}\) to \(U_{2}\) and \(S\) transmits \(x_{3}\) to \(U_{1}\). The received signals at \(U_{1}\) and \(U_{2}\) are expressed as
\[y_{1}^{t_{2}}=h_{\rm s,1}\sqrt{P_{\rm s}}x_{3}+h_{\rm r,1}\sqrt{P_{\rm r}}x_{ 2}+n_{1}^{t_{2}}, \tag{5}\]
\[y_{2}^{t_{2}}=h_{\rm r,2}\sqrt{P_{\rm r}}x_{2}+n_{2}^{t_{2}}, \tag{6}\]
respectively, where \(n_{1}^{t_{2}}\) and \(n_{2}^{t_{2}}\) denote the AWGN at \(U_{1}\) and \(U_{2}\) in \(t_{2}\). Since \(x_{2}\) has been decoded at \(U_{1}\) in the first slot, it can be cancelled from \(y_{1}^{t_{2}}\), and the achievable rate of \(x_{3}\) at \(U_{1}\) is expressed as
\[R_{1}^{x_{3}}=\frac{1}{2}\ln\left(1+\gamma_{1}^{x_{3}}\right),\;{\rm when}\;R_ {1}^{x_{2}}\geq R_{\rm th}^{x_{2}}, \tag{7}\]
where \(\gamma_{1}^{x_{3}}=\rho_{\rm s}Y_{1}\). The achievable rate of \(x_{2}\) at \(U_{2}\) is expressed as
\[R_{2}^{x_{2}}=\frac{1}{2}\ln\left(1+\gamma_{2}^{x_{2}}\right),\;{\rm when}\;R_ {\rm r}^{x_{2}}\geq R_{\rm th}^{x_{2}}, \tag{8}\]
where \(\gamma_{2}^{x_{2}}=\rho_{\rm r}\left|h_{\rm r,2}\right|^{2}\) and \(\rho_{\rm r}=\frac{P_{\rm r}}{\sigma^{2}}\) denotes the normalized power at \(R\).
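The first- and second-slot SINRs above are easy to probe by Monte Carlo simulation; the Python sketch below draws Rayleigh-faded channel gains and evaluates Eqs. (2)-(8), with all parameter values (SNRs, power allocation, channel gains, and rate threshold) being illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch of the first/second-slot SINRs, Eqs. (2)-(8),
# under Rayleigh fading; all parameter values are illustrative only.
rng = np.random.default_rng(1)
n = 100_000
rho_s, rho_r = 100.0, 100.0          # normalized transmit SNRs (20 dB)
a1, a2 = 0.2, 0.8                    # a fixed power allocation example
lam_s1, lam_sr, lam_r2 = 1.0, 0.5, 0.5   # average channel gains d^-alpha

Y1 = lam_s1 * rng.exponential(size=n)    # |h_s1|^2
X1 = lam_sr * rng.exponential(size=n)    # |h_sr|^2
Z2 = lam_r2 * rng.exponential(size=n)    # |h_r2|^2

g1_x2 = rho_s * a2 * Y1 / (1 + rho_s * a1 * Y1)   # Eq. (2)
g1_x1 = rho_s * a1 * Y1                            # Eq. (3)
gr_x2 = rho_s * a2 * X1 / (1 + rho_s * a1 * X1)   # Eq. (4)
g1_x3 = rho_s * Y1                                 # Eq. (7)
g2_x2 = rho_r * Z2                                 # Eq. (8)

R = lambda g: 0.5 * np.log(1 + g)    # achievable rate (nats), factor 1/2
Rth = 0.5                            # target threshold for x2, assumed
print("P[R1_x2 >= Rth]:", np.mean(R(g1_x2) >= Rth))
print("P[Rr_x2 >= Rth]:", np.mean(R(gr_x2) >= Rth))
```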
#### II-A1 Dynamic power based on the \(S\)-\(U_{1}\) link (DPU) in the first time slot
As stated in [20], there is a premise for the CDRT system that \(U_{1}\) should successfully decode \(x_{2}\) to eliminate the interference in the second slot. Then the power allocation at \(S\) is utilized to meet \(R_{1}^{x_{2}}\geq R_{\rm th}^{x_{2}}\). Based on (2), the condition is obtained as
\[a_{1}\leq\theta\left(1-\frac{\tau_{1}}{Y_{1}}\right),Y_{1}>\tau_{1}, \tag{9}\]
Fig. 1: A NOMA-based CDRT system consisting of a transmitter (\(S\)), two users (\(U_{1}\) and \(U_{2}\)), and a relay (\(R\)).
where \(\theta=\frac{1}{1+\theta_{2}}\), \(\theta_{2}=\exp\left(2R_{\rm th}^{x_{2}}\right)-1\), and \(\tau_{1}=\frac{\theta_{2}}{\rho_{\rm s}}\).
**Remark 1**.: _Based on (9), one can observe that \(Y_{1}>\tau_{1}\) must be satisfied in this scenario to utilize the CDRT scheme. This signifies that \(x_{2}\) can be successfully decoded when \(x_{2}\) occupies the channel alone and all the power at \(S\) is allocated to \(x_{2}\). This is easy to follow since the NOMA scheme can improve the spectrum efficiency when the channel quality is relatively high._
When \(Y_{1}<\tau_{1}\), the system cannot work normally.
#### II-A2 Dynamic power based on \(S\)-\(R\) link (DPR) in the first slot
The relay \(R\) can forward the information in the second time slot only if it successfully decodes \(x_{2}\). Hence, another DPA scheme is proposed in which the power allocation at \(S\) is dynamically adjusted to meet \(R_{\rm r}^{x_{2}}\geq R_{\rm th}^{x_{2}}\). The following condition is obtained as
\[a_{1}\leq\theta\left(1-\frac{\tau_{1}}{X_{1}}\right),X_{1}>\tau_{1}. \tag{10}\]
For the scenarios with \(X_{1}<\tau_{1}\), the system also cannot work normally.
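For illustration (not part of the original analysis), the dynamic power-allocation rules in (9) and (10) can be written as a short routine. The following is a minimal Python sketch under the assumption that the largest admissible \(a_{1}\) is always chosen; all function and variable names are ours.

```python
import math

def dpa_coefficient(channel_gain, rho, r_th):
    """Largest a1 admissible under (9)/(10) for a given channel gain.

    channel_gain: Y1 for the DPU scheme, X1 for the DPR scheme.
    Returns None when channel_gain <= tau1, i.e., when x2 cannot be
    decoded even with all the power, and CDRT cannot operate.
    """
    theta2 = math.exp(2.0 * r_th) - 1.0   # SINR threshold for x2
    theta = 1.0 / (1.0 + theta2)
    tau1 = theta2 / rho
    if channel_gain <= tau1:
        return None
    return theta * (1.0 - tau1 / channel_gain)

# DPU uses the S-U1 gain Y1; DPR uses the S-R gain X1 (illustrative values):
a1_dpu = dpa_coefficient(channel_gain=0.8, rho=100.0, r_th=0.2)
a1_dpr = dpa_coefficient(channel_gain=0.5, rho=100.0, r_th=0.2)
```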
### _Dynamic Power when \(S\) equipped with Multiple Antennas_
Based on (9) and (10), one can observe that when \(Y_{1}>\tau_{1}\) or \(X_{1}>\tau_{1}\) is not satisfied, it is difficult to design the power allocation coefficient such that \(U_{1}\) or \(R\) successfully decodes \(x_{2}\). Thus, a new adaptive transmission scheme, MDPR, is proposed in this subsection. We equip \(S\) with \(N\) antennas to address this problem. The MRT scheme is utilized on the \(S\)-\(U_{1}\) link to enhance the channel quality, the power at \(S\) is dynamically adjusted based on the \(S\)-\(R\) link to meet \(R_{\rm r}^{x_{2}}\geq R_{\rm th}^{x_{2}}\)1, and the corresponding received signal is expressed as
Footnote 1: Compared with \(R\), \(U_{1}\) is assumed to be closer to \(S\). The channel quality of the \(S\)-\(U_{1}\) link is more robust than that of the \(S\)-\(R\) link with high probability. When \(R\) is guaranteed to decode \(x_{2}\), \(U_{1}\) can decode \(x_{2}\) with a higher probability.
\[y_{1}^{t_{1}}={\bf h}_{{\rm s},1}{\bf w}x_{{\rm s}}+n_{1}^{t_{1}}, \tag{11}\]
where \({\bf h}_{{\rm s},1}\) denotes the channel coefficient vector between \(S\) and \(U_{1}\) and \({\bf w}=\frac{{\bf h}_{{\rm s},1}^{\rm H}}{\left\|{\bf h}_{{\rm s},1}\right\|}\) is the beamforming vector. The corresponding achievable rates of \(x_{i}\)\((i=1,2)\) are expressed as
\[R_{1,{\rm noma}}^{x_{2},{\rm m}}=\frac{1}{2}\ln\left(1+\gamma_{1,{\rm noma}}^ {x_{2}}\right), \tag{12}\]
\[R_{1,{\rm noma}}^{x_{1},{\rm m}}=\frac{1}{2}\ln\left(1+\gamma_{1,{\rm noma}}^ {x_{1}}\right),\;{\rm when}\;\;R_{1,{\rm noma}}^{x_{2},{\rm m}}\geq R_{\rm th}^ {x_{2}}, \tag{13}\]
respectively, where \(\gamma_{1,{\rm noma}}^{x_{2}}=\frac{a_{2}\rho_{\rm s}\left\|{\bf h}_{{\rm s},1}\right\|^{2}}{a_{1}\rho_{\rm s}\left\|{\bf h}_{{\rm s},1}\right\|^{2}+1}\), \(\gamma_{1,{\rm noma}}^{x_{1}}=a_{1}\rho_{\rm s}\left\|{\bf h}_{{\rm s},1}\right\|^{2}\), and the superscript 'm' denotes the multiple-antenna scenario.
Meanwhile, \(x_{2}\) is directly decoded at \(R\) and the achievable rate is expressed as
\[R_{{\rm r},{\rm noma}}^{x_{2},{\rm m}}=\frac{1}{2}\ln\left(1+\gamma_{{\rm r}, {\rm noma}}^{x_{2}}\right), \tag{14}\]
where \(\gamma_{{\rm r},{\rm noma}}^{x_{2}}=\frac{a_{2}\rho_{\rm s}Y_{{\rm s},{\rm r}}}{a_{1}\rho_{\rm s}Y_{{\rm s},{\rm r}}+1}\) and \(Y_{{\rm s},{\rm r}}=\frac{\left|{\bf h}_{{\rm s},{\rm r}}{\bf h}_{{\rm s},1}^{\rm H}\right|^{2}}{\left\|{\bf h}_{{\rm s},1}\right\|^{2}}\).
With the same method as (10), we obtain
\[a_{1}\leq\theta\left(1-\frac{\tau_{1}}{Y_{{\rm s},{\rm r}}}\right),Y_{{\rm s}, {\rm r}}>\tau_{1}. \tag{15}\]
To enhance the performance of \(U_{2}\) and minimize the interference from \(R\) to \(U_{1}\) in the second time slot, \(R\) utilizes directional antenna transmission, and the antenna gain \(G\) is approximately denoted as [22]
\[G=\left\{\begin{array}{cc}G_{0},&{\rm inside\ the\ mainlobe},\\ \eta G_{0},&{\rm outside\ the\ mainlobe},\end{array}\right. \tag{16}\]
where \(G_{0}\) is antenna gain for the mainlobe and \(\eta<1\) is the attenuating factor for the sidelobe gain. The received signals at \(U_{1}\) and \(U_{2}\) in the second time slot are expressed as
\[y_{1}^{t_{2}}={\bf h}_{{\rm s},1}{\bf w}\sqrt{P_{\rm s}}x_{3}+h_{{\rm r},1} \sqrt{\eta G_{0}P_{\rm r}}x_{2}+n_{1}^{t_{2}}, \tag{17}\]
\[y_{2}^{t_{2}}=h_{{\rm r},2}\sqrt{G_{0}P_{\rm r}}x_{2}+n_{2}^{t_{2}}, \tag{18}\]
respectively.
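The sectored antenna abstraction in (16) reduces to a two-valued gain; a minimal sketch (illustrative \(G_{0}\) and \(\eta\), names ours):

```python
def directional_gain(toward_mainlobe, g0=10.0, eta=0.7):
    """Two-valued gain model of (16): full gain G0 inside the mainlobe
    (toward U2), attenuated sidelobe gain eta*G0 outside (toward U1)."""
    return g0 if toward_mainlobe else eta * g0
```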
Since \(x_{2}\) is decoded at \(U_{1}\) in the first slot, it can be removed from \(y_{1}^{t_{2}}\), and the achievable rate of \(x_{3}\) at \(U_{1}\) is expressed as
\[R_{1,{\rm noma}}^{x_{3},{\rm m}}=\frac{1}{2}\ln\left(1+\gamma_{1,{\rm noma}}^{x_ {3}}\right),\;{\rm when}\;\;R_{1,{\rm noma}}^{x_{2},{\rm m}}\geq R_{\rm th}^{x _{2}}, \tag{19}\]
where \(\gamma_{1,{\rm noma}}^{x_{3}}=G_{0}\rho_{\rm s}\|{\bf h}_{{\rm s},1}\|^{2}\).
The achievable rate of \(x_{2}\) at \(U_{2}\) is expressed as
\[R_{2,{\rm noma}}^{x_{2},{\rm m}}=\frac{1}{2}\ln\left(1+\gamma_{2,{\rm noma}}^{x_ {2}}\right),\;{\rm when}\;\;R_{{\rm r},{\rm noma}}^{x_{2},{\rm m}}\geq R_{\rm th }^{x_{2}}, \tag{20}\]
where \(\gamma_{2,{\rm noma}}^{x_{2}}=G_{0}\rho_{\rm r}\left|{h}_{{\rm r},2}\right|^{2}\) and \(\rho_{\rm r}=\frac{P_{\rm r}}{\sigma^{2}}\).
For the scenarios with \(Y_{{\rm s},{\rm r}}<\tau_{1}\), the OMA scheme is utilized to transmit signals. Specifically, the first time slot (\(t_{1}\)) is divided into two equal parts, in which \(x_{1}\) and \(x_{2}\) are transmitted with MRT, respectively. The corresponding achievable rates at \(U_{1}\) and \(R\) are expressed as
\[R_{1,{\rm oma}}^{x_{1},{\rm m}}=\frac{1}{4}\ln\left(1+\gamma_{1,{\rm oma}}^{x_{1}}\right), \tag{21}\]
\[R_{{\rm r},{\rm oma}}^{x_{2},{\rm m}}=\frac{1}{4}\ln\left(1+\gamma_{{\rm r},{\rm oma}}^{x_{2}}\right), \tag{22}\]
respectively, where \(\gamma_{1,{\rm oma}}^{x_{1}}=\rho_{\rm s}\|{\bf h}_{{\rm s},1}\|^{2}\), \(\gamma_{{\rm r},{\rm oma}}^{x_{2}}=\rho_{\rm s}\|{\bf h}_{{\rm s},{\rm r}}\|^{2}\), and the pre-log factor of \(\frac{1}{4}\) appears because the first time slot is divided into two equal parts.
It must be noted that \(x_{2}\) can also be received at \(U_{1}\) in the OMA scheme, and the corresponding achievable rate at \(U_{1}\) is expressed as
\[R_{1,{\rm oma}}^{x_{2},{\rm m}}=\frac{1}{4}\ln\left(1+\gamma_{1,{\rm oma}}^{x_{2}}\right), \tag{23}\]
where \(\gamma_{1,{\rm oma}}^{x_{2}}=\rho_{\rm s}Y_{{\rm s},1}\) and \(Y_{{\rm s},1}=\frac{\left|{\bf h}_{{\rm s},1}{\bf h}_{{\rm s},{\rm r}}^{\rm H}\right|^{2}}{\left\|{\bf h}_{{\rm s},{\rm r}}\right\|^{2}}\). Hence, \(U_{1}\) successfully decodes \(x_{2}\) in the first time slot when \(Y_{{\rm s},1}>\tau_{2}\), where \(\tau_{2}=\frac{\exp\left(4R_{\rm th}^{x_{2}}\right)-1}{\rho_{\rm s}}\), and the interference from \(R\) can then be removed in the second slot. The corresponding achievable rate at \(U_{1}\) is expressed as
\[R_{1,\rm{oma}}^{x_{3},\rm{m}}=\left\{\begin{array}{ll}\frac{1}{2}\ln\left(1+ \gamma_{1,\rm{oma}}^{x_{3},\rm{ms}}\right),&Y_{\rm{s},1}>\tau_{2},\\ \frac{1}{2}\ln\left(1+\gamma_{1,\rm{oma}}^{x_{3},\rm{mf}}\right),&Y_{\rm{s},1 }<\tau_{2},\end{array}\right. \tag{25}\]
where \(\gamma_{1,\rm{oma}}^{x_{3},\rm{ms}}=G_{0}\rho_{\rm s}\|\mathbf{h}_{\rm{s},1}\|^{2}\) and \(\gamma_{1,\rm{oma}}^{x_{3},\rm{mf}}=\frac{G_{0}\rho_{\rm s}\|\mathbf{h}_{\rm{s},1}\|^{2}}{\eta G_{0}\rho_{\rm r}|h_{\rm{r},1}|^{2}+1}\), where the superscripts 's' and 'f' denote success and failure, respectively. The corresponding achievable rate at \(U_{2}\) is expressed as
\[R_{2,\rm{oma}}^{x_{2},\rm{m}}=\frac{1}{2}\ln\left(1+\gamma_{2,\rm{oma}}^{x_{2} }\right), \tag{26}\]
where \(\gamma_{2,\rm{oma}}^{x_{2}}=G_{0}\rho_{\rm r}|h_{\rm{r},2}|^{2}\).
Based on [21] and [23], the CDF and PDF of \(\|\mathbf{h}_{\rm{s},\rm{d}}\|^{2}\) are expressed as
\[F_{\|\mathbf{h}_{\rm{s},\rm{d}}\|^{2}}\left(y\right)=1-\exp\left(-\psi_{\rm{s },\rm{d}}y\right)\sum_{m=0}^{N-1}\xi_{\rm{s},\rm{d}}y^{m}, \tag{27}\]
\[f_{\|\mathbf{h}_{\rm{s},\rm{d}}\|^{2}}\left(y\right)=\frac{\psi_{\rm{s},\rm{d }}^{N}}{\Gamma\left(N\right)}y^{N-1}\exp\left(-\psi_{\rm{s},\rm{d}}y\right), \tag{28}\]
where \(\rm{d}\in\{1,\rm{r}\}\), \(\psi_{\rm{s},\rm{d}}=\frac{1}{\lambda_{\rm{s},\rm{d}}}\), \(\xi_{\rm{s},\rm{d}}=\frac{\psi_{\rm{s},\rm{d}}^{m}}{m!}\), and \(\Gamma\left(x\right)\) is the gamma function. The PDF of \(Y_{\rm{s},\rm{r}}\) is given as [21]
\[f_{Y_{\rm{s},\rm{r}}}\left(y\right)=\psi_{\rm{s},\rm{r}}\exp\left(-\psi_{\rm{ s},\rm{r}}y\right). \tag{29}\]
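The distributions in (27)-(29) can be cross-checked by direct sampling; the sketch below (our code, not from the paper) verifies that \(\|\mathbf{h}_{\rm s,d}\|^{2}\) follows a gamma distribution with shape \(N\) and scale \(\lambda_{\rm s,d}\), as implied by (28):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, lam = 10, 2.0                       # antennas and mean channel power
samples = 200_000

# Each entry of h_{s,d} is complex Gaussian with per-antenna power lam,
# so ||h_{s,d}||^2 is a sum of N i.i.d. Exp(1/lam) terms -> Gamma(N, lam).
h = rng.normal(size=(samples, N)) + 1j * rng.normal(size=(samples, N))
h *= np.sqrt(lam / 2.0)
g = np.sum(np.abs(h) ** 2, axis=1)

# Kolmogorov-Smirnov test against the gamma CDF; the p-value should be O(1).
print(stats.kstest(g, stats.gamma(a=N, scale=lam).cdf))

# Y_{s,r} = |h_{s,r} w|^2 with w independent of h_{s,r} stays exponential,
# consistent with the PDF in (29).
```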
## III Effective Sum Throughput Analysis
In this section, we consider the scenario wherein the traffic is delay-sensitive and the signals are transmitted at a constant rate. The effective sum throughput (EST) is utilized as the performance metric, which is expressed as [24]
\[\Psi=\sum_{i=1}^{3}R_{\rm{th}}^{x_{i}}\left(1-P_{\rm{out}}^{x_{i}}\right) \tag{30}\]
where \(R_{\rm{th}}^{x_{i}}\) signifies the target threshold for \(x_{i}\) and \(P_{\rm{out}}^{x_{i}}\) is the OP of \(x_{i}\), which is derived in the following subsections.
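Given the OPs, the metric in (30) is a one-liner; a minimal sketch (illustrative numbers, names ours):

```python
def effective_sum_throughput(r_th, p_out):
    """EST per (30): target rates weighted by their success probabilities."""
    return sum(r * (1.0 - p) for r, p in zip(r_th, p_out))

# e.g., equal targets of 0.2 nat/s/Hz for x1, x2, x3:
est = effective_sum_throughput([0.2, 0.2, 0.2], [0.05, 0.10, 0.02])
```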
### _Exact Outage Probability Analysis with the DPU scheme_
Substituting \(a_{1}=\theta\left(1-\frac{\tau_{1}}{Y_{1}}\right)\) into \(\gamma_{1}^{x_{1}}\), the OP of \(x_{1}\) based on \(R_{1}^{x_{2}}\geq R_{\rm th}^{x_{2}}\) in the DPU scheme is obtained as
\[\begin{split} P_{\rm{out}}^{x_{1},\rm{DPU}}&=1- \Pr\left\{Y_{1}>\tau_{1},R_{1}^{x_{2}}\geq R_{\rm{th}}^{x_{2}},R_{1}^{x_{1}} \geq R_{\rm{th}}^{x_{1}}\right\}\\ &=1-\Pr\left\{Y_{1}>\tau_{1},R_{1}^{x_{1}}\geq R_{\rm{th}}^{x_{1} }\right\}\\ &=1-\Pr\left\{Y_{1}\geq A_{0}+\tau_{1}\right\}\\ &=1-\exp\left(-\psi_{\rm{s},1}\left(A_{0}+\tau_{1}\right)\right), \end{split} \tag{31}\]
where \(A_{0}=\frac{\theta_{1}\left(\theta_{2}+1\right)}{\rho_{\rm s}}\) and \(\theta_{1}=\exp\left(2R_{\rm{th}}^{x_{1}}\right)-1\). It must be noted that \(x_{2}\) can be successfully decoded at \(U_{2}\) only when neither of the two hops is in outage. Thus, the OP of \(x_{2}\) is expressed as
\[\begin{split} P_{\rm{out}}^{x_{2},\rm{DPU}}&=1-\Pr\left\{Y_{1}>\tau_{1},R_{\rm r}^{x_{2}}\geq R_{\rm{th}}^{x_{2}},R_{2}^{x_{2}}\geq R_{\rm{th}}^{x_{2}}\right\}\\ &=1-\Pr\left\{Y_{1}>\tau_{1},\frac{\rho_{\rm{s}}a_{2}X_{1}}{1+\rho_{\rm{s}}a_{1}X_{1}}\geq\theta_{2},\rho_{\rm{r}}|h_{\rm{r},2}|^{2}\geq\theta_{2}\right\}\\ &=1-\Pr\left\{Y_{1}>\tau_{1},X_{1}\geq Y_{1},|h_{\rm{r},2}|^{2}\geq\frac{\theta_{2}}{\rho_{\rm{r}}}\right\}\\ &=1-\exp\left(-\frac{\psi_{\rm{r},2}\theta_{2}}{\rho_{\rm{r}}}\right)\int_{\tau_{1}}^{\infty}\left(1-F_{X_{1}}\left(y\right)\right)f_{Y_{1}}\left(y\right)dy\\ &=1-\frac{\psi_{\rm{s},1}}{\psi_{\rm{s},\rm{r}}+\psi_{\rm{s},1}}\exp\left(-\left(\psi_{\rm{s},\rm{r}}+\psi_{\rm{s},1}\right)\tau_{1}-\frac{\psi_{\rm{r},2}\theta_{2}}{\rho_{\rm{r}}}\right).\end{split} \tag{32}\]
Similarly, it is worth noting that when \(R_{1}^{x_{2}}\geq R_{\rm th}^{x_{2}}\) is satisfied, \(x_{3}\) can be decoded without interference. Thus, the OP of \(x_{3}\) is obtained as
\[\begin{split} P_{\rm{out}}^{x_{3},\rm{DPU}}&=1-\Pr\left\{Y_{1}>\tau_{1},R_{1}^{x_{2}}\geq R_{\rm{th}}^{x_{2}},R_{1}^{x_{3}}\geq R_{\rm{th}}^{x_{3}}\right\}\\ &=1-\Pr\left\{Y_{1}>\tau_{1},R_{1}^{x_{3}}\geq R_{\rm{th}}^{x_{3}}\right\}\\ &=1-\Pr\left\{Y_{1}>\max\left(\tau_{1},\frac{\theta_{3}}{\rho_{\rm{s}}}\right)\right\}\\ &=\left\{\begin{array}{ll}1-\exp\left(-\psi_{\rm{s},1}\tau_{1}\right),&\theta_{2}>\theta_{3},\\ 1-\exp\left(-\frac{\psi_{\rm{s},1}\theta_{3}}{\rho_{\rm{s}}}\right),&\theta_{2}<\theta_{3},\end{array}\right.\end{split} \tag{33}\]
where \(\theta_{3}=\exp\left(2R_{\rm{th}}^{x_{3}}\right)-1\).
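Closed forms such as (31) are straightforward to validate by Monte Carlo simulation; the following sketch (our code, illustrative parameters) reproduces (31) by sampling the exponential channel gain \(Y_{1}\):

```python
import numpy as np

rng = np.random.default_rng(7)
rho_s, r_th1, r_th2, lam = 100.0, 0.2, 0.2, 1.0   # illustrative values
psi = 1.0 / lam
th1, th2 = np.expm1(2 * r_th1), np.expm1(2 * r_th2)
theta, tau1 = 1.0 / (1.0 + th2), th2 / rho_s
A0 = th1 * (th2 + 1.0) / rho_s

y1 = rng.exponential(scale=lam, size=1_000_000)
a1 = np.where(y1 > tau1, theta * (1.0 - tau1 / y1), 0.0)   # DPU rule (9)
ok = (y1 > tau1) & (rho_s * a1 * y1 >= th1)                # x1 decoded
p_mc = 1.0 - ok.mean()
p_ana = 1.0 - np.exp(-psi * (A0 + tau1))                   # closed form (31)
print(p_mc, p_ana)   # the two values should agree closely
```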
### _Exact Outage Probability Analysis with the DPR scheme_
With the same method as in (31) and by utilizing

\[\int_{x_{0}}^{x_{1}}e^{-ax-\frac{b}{x}}dx\approx\left(e^{-ax_{0}}-e^{-ax_{1}}\right)\sqrt{\frac{4b}{a}}K_{1}\left(\sqrt{4ab}\right), \tag{34}\]
which is verified in [25], the OP of \(x_{1}\) in the DPR scheme is obtained as
\[\begin{split} P_{\rm{out}}^{x_{1},\rm{DPR}}&=1-\Pr\left\{X_{1}>\tau_{1},R_{1}^{x_{2}}\geq R_{\rm{th}}^{x_{2}},R_{1}^{x_{1}}\geq R_{\rm{th}}^{x_{1}}\right\}\\ &=1-\Pr\left\{X_{1}>\tau_{1},\frac{\rho_{\rm{s}}a_{2}Y_{1}}{1+\rho_{\rm{s}}a_{1}Y_{1}}\geq\theta_{2},\rho_{\rm{s}}a_{1}Y_{1}\geq\theta_{1}\right\}\\ &=1-\Pr\left\{X_{1}>\tau_{1},Y_{1}\geq\max\left(X_{1},\frac{A_{0}X_{1}}{X_{1}-\tau_{1}}\right)\right\}\\ &=1-\Pr\left\{X_{1}>\tau_{1}+A_{0},Y_{1}\geq X_{1}\right\}-\Pr\left\{\tau_{1}<X_{1}\leq\tau_{1}+A_{0},Y_{1}\geq\frac{A_{0}X_{1}}{X_{1}-\tau_{1}}\right\}, \tag{35}\end{split}\]
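The accuracy of the approximation in (34) is easy to probe numerically; a minimal check (our code, illustrative parameters) compares both sides with scipy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

a, b, x0, x1 = 1.0, 0.2, 0.01, 10.0   # illustrative parameters
exact, _ = quad(lambda x: np.exp(-a * x - b / x), x0, x1)
approx = (np.exp(-a * x0) - np.exp(-a * x1)) \
    * np.sqrt(4 * b / a) * k1(np.sqrt(4 * a * b))
# Agreement is close when a*x0 is small and x1 lies deep in the tail.
print(exact, approx)
```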
\[P_{\text{out}}^{x_{3},\text{DPR}} =1-\Pr\left\{X_{1}>\tau_{1},R_{1}^{x_{2}}\geq R_{\text{th}}^{x_{2}},R_{1}^{x_{3}}\geq R_{\text{th}}^{x_{3}}\right\} \tag{37}\] \[=1-\Pr\left\{X_{1}>\tau_{1},Y_{1}\geq\max\left(X_{1},\frac{\theta_{3}}{\rho_{\text{s}}}\right)\right\}\] \[=1-\Pr\left\{X_{1}>\tau_{1},Y_{1}\geq X_{1},X_{1}>\frac{\theta_{3}}{\rho_{\text{s}}}\right\}-\Pr\left\{X_{1}>\tau_{1},Y_{1}\geq\frac{\theta_{3}}{\rho_{\text{s}}},X_{1}<\frac{\theta_{3}}{\rho_{\text{s}}}\right\}\] \[=\left\{\begin{array}{cc}1-\frac{\psi_{\text{s},\text{r}}}{\psi_{\text{s},1}+\psi_{\text{s},\text{r}}}\exp\left(-\left(\psi_{\text{s},1}+\psi_{\text{s},\text{r}}\right)\tau_{1}\right),&\theta_{2}>\theta_{3},\\ 1-\exp\left(-\tau_{1}\psi_{\text{s},\text{r}}-\frac{\theta_{3}\psi_{\text{s},1}}{\rho_{\text{s}}}\right)+\frac{\psi_{\text{s},1}}{\psi_{\text{s},1}+\psi_{\text{s},\text{r}}}\exp\left(-\frac{\theta_{3}\left(\psi_{\text{s},1}+\psi_{\text{s},\text{r}}\right)}{\rho_{\text{s}}}\right),&\theta_{2}<\theta_{3},\end{array}\right.\]
\[P_{\text{out}}^{x_{2},\text{DPR}} =1-\Pr\left\{X_{1}>\tau_{1},R_{\text{r}}^{x_{2}}\geq R_{\text{th}}^{x_{2}},R_{2}^{x_{2}}\geq R_{\text{th}}^{x_{2}}\right\} \tag{36}\] \[=1-\Pr\left\{X_{1}>\tau_{1},\left|h_{\text{r},2}\right|^{2}\geq\frac{\theta_{2}}{\rho_{\text{r}}}\right\}\] \[=1-\exp\left(-\psi_{\text{s},\text{r}}\tau_{1}-\frac{\psi_{\text{r},2}\theta_{2}}{\rho_{\text{r}}}\right).\]
The OP of \(x_{3}\) in the DPR scheme is expressed as (37), shown at the top of the next page.
### _Exact Outage Probability Analysis with the MDPR scheme_
Substituting \(a_{1}=\theta\left(1-\frac{\tau_{1}}{Y_{{\rm s},{\rm r}}}\right)\) into \(\gamma_{1,{\rm noma}}^{x_{1}}\) and utilizing [26, (3.471.9)], the OP of \(x_{1}\) in the MDPR scheme is obtained as (38), shown at the top of this page, where \(A_{1}=\left(1-\exp\left(-\psi_{{\rm s},{\rm r}}\tau_{1}\right)\right)\xi_{{\rm s},1}\rho_{\rm s}^{-m}\), \(A_{2}=\frac{\xi_{{\rm s},1}}{(\theta\rho_{\rm s})^{m}}\left(\frac{\psi_{{\rm s},{\rm r}}\theta_{2}}{\psi_{{\rm s},1}}\right)^{\frac{1}{2}}\exp\left(-\psi_{{\rm s},{\rm r}}\tau_{1}\right)\), and \(A_{3}=\frac{\psi_{{\rm s},1}}{\theta\rho_{\rm s}}\).
It must be noted that \(x_{2}\) can be successfully decoded at \(U_{2}\) only when neither of the two hops is in outage. Thus, the OP of \(x_{2}\) in the MDPR scheme is obtained as (39), shown at the top of this page, where \(A_{4}=\left(1-\exp\left(-\psi_{{\rm s},{\rm r}}\tau_{1}\right)\right)\xi_{{\rm s},{\rm r}}\rho_{\rm s}^{-m}\), \(A_{5}=\frac{\psi_{{\rm s},1}}{\rho_{\rm s}}\), and \(A_{6}=\frac{\psi_{{\rm r},2}\theta_{2}}{G_{0}\rho_{\rm r}}\).
Similarly, utilizing [26, (3.351.3)], the OP of \(x_{3}\) in MDPR scheme is obtained as (40), shown at the top of the next page, where \(A_{7}=\xi_{\text{s},1}\exp\left(-\psi_{\text{s},r}\tau_{1}\right)\rho_{\text{s} }^{-m}\), \(A_{8}=\left(1-\exp\left(-\psi_{\text{s},r}\tau_{1}\right)\right)\xi_{\text{s},1} {\rho_{\text{s}}}^{-m}\psi_{r,1}n!(\eta G_{0}\rho_{\text{r}})^{n}\left(m\right) \left(\frac{A_{9}}{\psi_{r,1}}\right)^{n+1}\), \(A_{9}=\frac{\psi_{\text{s},1}\rho_{\text{s}}}{\psi_{\text{s},1}\eta\rho_{\text{s}}}\), and \(A_{10}=1-\exp\left(-\psi_{\text{s},1}\tau_{2}\right)\).
## IV Simulation Results
Simulation results are presented in this section to validate the proposed scheme's effectiveness. The effects of system parameters on the performance of the considered scheme, such as the normalized power, the distance between the transmitter and receiver, and the power allocation coefficients, are investigated. The main parameters are set as follows: \(d_{\text{s},1}=10\) m, \(d_{\text{s},\text{r}}=d_{\text{r},1}=15\) m, \(d_{\text{r},2}=10\) m, \(\alpha=2\), \(R_{\text{th}}^{x_{1}}=R_{\text{th}}^{x_{2}}=R_{\text{th}}^{x_{3}}=R_{\text{th}}=0.2\) nat/s/Hz, \(\eta=0.7\), and \(N=10\), respectively. In all the figures, 'Sim' and 'Ana' denote the simulation and analytical results, respectively. One can observe that the simulation and analytical results match perfectly, verifying the correctness of the analysis.
The following three schemes are utilized as benchmarks to prove the superiority of the proposed schemes:
1. A NOMA-based CDRT system with FPA scheme ('Ben1') [5]: In this scheme, the condition of whether \(R\) can decode the information of \(U_{2}\) was not considered and all the nodes were equipped with a single antenna.
2. A NOMA-based CDRT system with FPA and beamforming schemes ('Ben2'): In this scheme, the FPA scheme is utilized at \(S\) and the beamforming scheme is designed based on \(S\)-\(R\) link.
3. A NOMA-based CDRT system with DPA and beamforming schemes ('Ben3'): In this scheme, the beamforming scheme is designed based on the \(S\)-\(R\) link and the DPA scheme is based on the \(S\)-\(U_{1}\) link.
Table I summarizes the differences among all the schemes.
Fig. 2 presents the impact of \(\rho\) and varying \(d_{\text{s,1}}\) on the OPs and EST. In Fig. 2(a), the OP of \(x_{1}\) in the DPR scheme first decreases and then remains constant as \(\rho\) increases. This is because the first step of the SIC at \(U_{1}\) is not guaranteed; therefore, the OP of \(x_{1}\) depends on whether \(R_{1}^{x_{2}}\) is in outage or not. Unlike the DPR scheme, the OP of \(x_{1}\) in the DPU scheme keeps decreasing as \(\rho\) increases because the first step of the SIC at \(U_{1}\) is invariably satisfied. The OPs of DPU and Ben3 are almost equal because, in Ben3, the beamforming scheme is based on the \(S\)-\(R\) link while the power allocation is based on the \(S\)-\(U_{1}\) link. In the MDPR scheme, the OP of \(x_{1}\) initially decreases, subsequently increases, and then decreases again as \(\rho\) increases. The reason is that the OMA scheme dominates in the lower-\(\rho\) region; as \(\rho\) increases, NOMA is utilized. One can find that the MDPR scheme has the best outage performance for \(x_{1}\), since the beamforming scheme based on \(S\)-\(U_{1}\) and the power allocation based on \(S\)-\(R\) ensure that \(x_{2}\) can be decoded at \(R\). In Fig. 2(b), we can observe that \(P_{\mathrm{out}}^{x_{2}}\) with the DPU scheme decreases with \(\rho\) until it becomes a constant. Unlike Fig. 2(a), the outage performance of the DPR scheme is better than that of the DPU scheme because \(R\) can always successfully decode \(x_{2}\) in the DPR scheme. Similar to Fig. 2(a), the MDPR scheme performs best because the decoding at both \(R\) and \(U_{1}\) is considered. The outage performance of \(x_{2}\) for Ben3 also decreases until it reaches a constant, which outperforms that of the DPU scheme but underperforms that of the DPR scheme. This is because the DPA in Ben3 is based on \(S\)-\(U_{1}\) and the beamforming scheme is utilized on \(S\)-\(R\), respectively. Comparing the MDPR scheme with Ben3, one can observe that DPA is more effective in enhancing the outage performance. Similar to Fig. 2(a), the OP of \(x_{3}\) for the DPU and MDPR schemes decreases as \(\rho\) increases in Fig. 2(c). However, \(P_{\mathrm{out}}^{x_{3}}\) of the MDPR scheme decreases rapidly with increasing \(\rho\) because \(x_{3}\) is not affected by \(x_{2}\). Fig. 2(d) demonstrates the superior performance of the MDPR scheme over the others due to the drastically improved outage performance of \(x_{1}\) and \(x_{3}\). In Fig. 2, we can also observe that, as \(d_{\text{s,1}}\) increases, the path loss between \(S\) and \(U_{1}\) increases and the outage performance of \(x_{1}\) and \(x_{3}\) becomes worse.
TABLE I: Comparisons of schemes.

| Scheme | Antenna at \(S\) | Power allocation scheme | Beamforming scheme |
| --- | --- | --- | --- |
| DPU | Single | DPA based on \(S\)-\(U_{1}\) | — |
| DPR | Single | DPA based on \(S\)-\(R\) | — |
| Ben1 | Single | FPA | — |
| Ben2 | Multiple | FPA | based on \(S\)-\(U_{1}\) |
| Ben3 | Multiple | DPA based on \(S\)-\(U_{1}\) | based on \(S\)-\(R\) |
| The proposed (MDPR) | Multiple | DPA based on \(S\)-\(R\) | based on \(S\)-\(U_{1}\) |
Fig. 3 demonstrates the OPs and EST vs \(d_{\text{s,r}}\) for varying \(\rho\). From Fig. 3(a), it can be observed that, as \(d_{\text{s,r}}\) increases, the OP of \(x_{1}\) for the DPR scheme decreases to a constant. The reason is that, as \(d_{\text{s,r}}\) increases, \(a_{1}\) decreases (\(a_{2}\) increases), so the achievable rate at \(U_{1}\) for decoding \(x_{2}\) increases, and \(P_{\text{out}}^{x_{1}}\) eventually converges to a constant determined by the ratio of \(a_{2}\) to \(a_{1}\). Moreover, with increasing \(d_{\text{s,r}}\), \(P_{\text{out}}^{x_{1}}\) of the MDPR scheme decreases and then increases, due to the fact that the OMA and NOMA schemes dominate in the lower-\(\rho\) and larger-\(\rho\) regions, respectively, and \(P_{\text{out}}^{x_{1}}\) of the OMA/NOMA scheme decreases/increases with increasing \(d_{\text{s,r}}\). \(P_{\text{out}}^{x_{1}}\) of the DPU scheme is independent of \(d_{\text{s,r}}\) because its power allocation coefficient does not depend on \(d_{\text{s,r}}\). In Fig. 3(b), \(P_{\text{out}}^{x_{2}}\) for the DPU and DPR schemes with larger \(d_{\text{s,r}}\) underperforms that with lower \(d_{\text{s,r}}\) because of the increased path loss. The effect of \(d_{\text{s,r}}\) on \(P_{\text{out}}^{x_{2}}\) with
Fig. 2: OPs and EST for varying \(\rho\) and \(d_{\text{s,1}}\).
the MDPR scheme is almost negligible since decoding \(x_{2}\) is guaranteed at \(R\) in the MDPR scheme, and \(P_{\rm out}^{x_{2}}\) then mainly depends on the quality of the second hop. It is worth noting in Fig. 3(c) that \(d_{\rm s,r}\) has a different effect on \(P_{\rm out}^{x_{3}}\) with the DPR scheme in the lower-\(\rho\) and large-\(\rho\) regions. Further, \(d_{\rm s,r}\) does not affect \(P_{\rm out}^{x_{3}}\) with the MDPR and DPU schemes. This is because \(x_{2}\) is successfully decoded in the first time slot, so the \(x_{2}\) forwarded by \(R\) does not interfere with \(x_{3}\). In Fig. 3(d), one can observe that \(d_{\rm s,r}\) has no significant effect on the EST of the MDPR scheme. This is because \(d_{\rm s,r}\) only affects \(P_{\rm out}^{x_{2}}\) with the MDPR scheme. In the lower-\(\rho\) region, the EST with the DPU scheme outperforms that with the DPR scheme, which verifies the necessity of ensuring that \(U_{1}\) can successfully decode \(x_{2}\).
Fig. 4 plots the effect of \(\rho\) and \(R_{\rm th}\) on the OPs and EST. From Figs. 4(a)-4(c), it is easy to observe that the outage performance of \(x_{i}\) worsens due to the higher rate requirement caused by increasing \(R_{\rm th}\). In Fig. 4(d), one can observe that the EST with lower \(R_{\rm th}\) outperforms that with larger \(R_{\rm th}\) in the lower-power region because the OPs of \(x_{1}\) and \(x_{3}\) dominate the EST. Moreover, the EST for the DPU scheme outperforms that for the DPR scheme in the lower-power region because the outage performance of \(x_{1}\) with the DPU scheme is superior to that with the DPR scheme.
Fig. 5 demonstrates the OPs and EST vs \(\rho\) for varying \(a_{1}\). One can observe from Fig. 5(a) that \(P_{\rm out}^{x_{1}}\) of Ben1 and Ben2 with \(a_{1}=0.2\) is worse than that with \(a_{1}=0.5\) but better than that with \(a_{1}=0.7\), indicating the existence of an optimal power allocation factor and reflecting the advantage of the DPA scheme. Moreover, \(P_{\rm out}^{x_{1}}\) with the DPR scheme is worse than that with the DPU scheme because \(U_{1}\) can decode \(x_{1}\) only under the condition that \(x_{2}\) is decoded successfully. Fig. 5(b) shows that \(P_{\rm out}^{x_{2}}\) with the DPR scheme is superior to that with the DPU scheme because the DPR scheme improves the performance of \(x_{2}\), whereas the DPU scheme is designed to enhance the performance of \(x_{1}\). Fig. 5(c) demonstrates that \(P_{\rm out}^{x_{3}}\) with the DPU scheme is preferable to that with the DPR scheme because parallel transmission can be realized only when \(U_{1}\) can decode \(x_{2}\). Based on Figs. 2(d), 3(d), 4(d), and 5(d), one can find that the proposed MDPR scheme can improve the performance of NOMA-based CDRT systems in the lower-\(\rho\) region.
Fig. 6 provides a comparison of the EST versus \(R_{\rm th}\) with \(\rho=23\) dB. It is easy to observe that there is an optimal \(R_{\rm th}\) that maximizes the EST for both the DPR and DPU schemes: at first, \(R_{\rm th}\) increases faster than the OPs of the signals degrade, which results in an increasing EST; as \(R_{\rm th}\) increases further, the OPs deteriorate and thus the EST decreases. For the MDPR scheme, there are two values of \(R_{\rm th}\) that optimize the EST of the considered system, corresponding to \(x_{1}\) and \(x_{3}\), respectively.
Fig. 7 presents the impact of \(a_{1}\) on the EST with \(\rho=35\) dB. One can observe that the EST of Ben1 initially increases and subsequently decreases, while that of Ben2 decreases with increasing \(a_{1}\). The OPs of the two benchmarks approach 1 when \(a_{1}\) exceeds 0.55, and the EST is then approximately equal to 0. This is because, as \(a_{1}\) increases, \(x_{2}\) cannot be decoded at \(U_{1}\) in the first time slot and parallel transmission cannot be realized in the second time slot. This verifies that the DPA scheme outperforms the FPA scheme.
## V Conclusion
In this paper, we proposed an adaptive scheme to provide reliability for the CDRT system through DPA and beamforming strategies. To ensure that the CDRT system operates correctly, we designed the power allocation scheme with DPA to guarantee that the relay can successfully decode the message intended for the edge user. The beamforming scheme
Fig. 4: OPs and EST for varying \(\rho\) and \(R_{\rm th}\).
was utilized to ensure that the center user can remove the user interference and achieve parallel transmission. To characterize the reliability of the proposed schemes, we derived exact expressions for the OP and EST and analyzed the effects of the system parameters on the EST in detail. The correctness of the analytical results was verified through Monte Carlo simulations. Simulation results demonstrated that the proposed adaptive CDRT scheme eliminates the error floor for the edge user and achieves better reliability than the benchmark schemes. In future work, we will investigate the secrecy performance of the adaptive CDRT scheme.
|
2309.01008 | Luminosity determination using Z boson production at the CMS experiment | The measurement of Z boson production is presented as a method to determine
the integrated luminosity of CMS data sets. The analysis uses proton-proton
collision data, recorded by the CMS experiment at the CERN LHC in 2017 at a
center-of-mass energy of 13 TeV. Events with Z bosons decaying into a pair of
muons are selected. The total number of Z bosons produced in a fiducial volume
is determined, together with the identification efficiencies and correlations
from the same data set, in small intervals of 20 pb$^{-1}$ of integrated
luminosity, thus facilitating the efficiency and rate measurement as a function
of time and instantaneous luminosity. Using the ratio of the
efficiency-corrected numbers of Z bosons, the precisely measured integrated
luminosity of one data set is used to determine the luminosity of another. For
the first time, a full quantitative uncertainty analysis of the use of Z bosons
for the integrated luminosity measurement is performed. The uncertainty in the
extrapolation between two data sets, recorded in 2017 at low and high
instantaneous luminosity, is less than 0.5%. We show that the Z boson rate
measurement constitutes a precise method, complementary to traditional methods,
with the potential to improve the measurement of the integrated luminosity. | CMS Collaboration | 2023-09-02T19:12:59Z | http://arxiv.org/abs/2309.01008v2 | # Luminosity determination using Z boson production at the CMS experiment
###### Abstract
The measurement of \(Z\) boson production is presented as a method to determine the integrated luminosity of CMS data sets. The analysis uses proton-proton collision data, recorded by the CMS experiment at the CERN LHC in 2017 at a center-of-mass energy of \(13\,\mathrm{TeV}\). Events with \(Z\) bosons decaying into a pair of muons are selected. The total number of \(Z\) bosons produced in a fiducial volume is determined, together with the identification efficiencies and correlations from the same dataset, in small intervals of \(20\,\mathrm{pb}^{-1}\) of integrated luminosity, thus facilitating the efficiency and rate measurement as a function of time and instantaneous luminosity. Using the ratio of the efficiency-corrected numbers of \(Z\) bosons, the precisely measured integrated luminosity of one data set is used to determine the luminosity of another. For the first time, a full quantitative uncertainty analysis of the use of \(Z\) bosons for the integrated luminosity measurement is performed. The uncertainty in the extrapolation between two data sets, recorded in 2017 at low and high instantaneous luminosity, is less than 0.5%. We show that the \(Z\) boson rate measurement constitutes a precise method, complementary to traditional methods, with the potential to improve the measurement of the integrated luminosity.
The CMS Collaboration
_Submitted to the European Physical Journal C_
## 0.1 Introduction
In the CERN LHC, during the Run 2 data-taking period in 2015-2018, about 300 million events with \(Z\) bosons decaying into pairs of muons were recorded by the CMS experiment. Precision cross section measurements were performed [1, 2, 3, 4, 5] that provide (i) important tests of theoretical calculations [6, 7, 8]; (ii) input to fits of the parton distribution functions (PDFs) of the proton [9, 10, 11, 12]; and (iii) constraints on backgrounds to searches for new physics [13].
Events with a \(Z\) boson decaying into a pair of muons have a remarkably clean experimental signature and a large cross section that facilitates high-precision measurements. Samples of \(Z\) bosons are also used as standard tools for detector calibrations and efficiency studies. The precisely known \(Z\) boson mass and width [14] are used to calibrate energy scales and momenta and to determine the detector resolution [15, 16]. Efficiencies for lepton triggering, reconstruction, and identification are determined using the "tag-and-probe" method [1, 15, 16, 17].
The large Drell-Yan (DY) cross section for the production of \(Z\) bosons, and the possibility of simultaneously determining both the yield and the detection efficiency in situ, i.e., from the same event sample, make the process useful for precision measurements of the integrated luminosity. This was discussed before the start of the LHC [18]. During LHC operation, measurements of the \(Z\) boson rate already proved to be a useful and independent method for the LHC machine operators and experiments to monitor the relative instantaneous luminosity delivered to the ATLAS and CMS experiments [19]. The use of \(Z\) boson production as a measure of relative luminosities was also explored by the ATLAS experiment [20].
Both muons from the \(Z\) boson decay are detectable within the fiducial volume of the CMS detector in about one third of the \(Z\) boson events. The fiducial \(Z\) boson cross section in proton-proton (pp) collisions at 13 TeV has been measured to be \(\sigma^{\mathrm{Z}}\mathcal{B}(\mathrm{Z}\rightarrow\mu^{+}\mu^{-})=694\pm 6 \,(\mathrm{syst})\pm 17\,(\mathrm{lumi})\,\mathrm{pb}\)[3]. Theoretical predictions are available up to next-to-next-to-leading order (N\({}^{3}\)LO) [8] in quantum chromodynamics (QCD). Electroweak corrections, including mixed QCD-electroweak corrections, are also available [6, 21, 22]. The current uncertainty in the prediction of the fiducial cross section is about 3%, and mainly originates from limited knowledge of proton PDFs and higher-order corrections [7]. Within this uncertainty, the integrated luminosity can be directly determined from the measured number of \(Z\) bosons corrected for efficiencies.
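Schematically, the integrated luminosity then follows from the efficiency-corrected \(Z\) boson count divided by the fiducial cross section. A minimal sketch (our code, illustrative numbers; efficiency systematics are omitted):

```python
import math

def lumi_from_z_counting(n_z, eff, sigma_fid_pb, rel_unc_sigma=0.03):
    """Integrated luminosity (pb^-1) from a Z->mumu count n_z, the overall
    selection efficiency, and the fiducial cross section (about 694 pb)."""
    lumi = n_z / (eff * sigma_fid_pb)
    # Poisson statistics plus the ~3% cross section uncertainty:
    rel_unc = math.sqrt(1.0 / n_z + rel_unc_sigma ** 2)
    return lumi, lumi * rel_unc

lumi, unc = lumi_from_z_counting(n_z=500_000, eff=0.85, sigma_fid_pb=694.0)
```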
In practice, precision luminosity calibrations at the LHC are obtained from van der Meer (vdM) scan data [23, 24, 25, 26, 27, 28, 20], which are more precise. In vdM scans, which are performed at low instantaneous luminosity with zero crossing angle between the two beams, the two beams are separated in two orthogonal directions transverse to the parallel beam axes. In each scan step, for a given beam separation, the event rate measured in the luminosity detectors is recorded to determine the beam overlap area. Together with the beam currents and the measured head-on collision rate, a luminosity calibration constant, referred to as the visible cross section, is determined. A full vdM scan campaign takes about six hours per experiment and is usually performed once per year, with specifically configured beams to maximize the accuracy and precision of the measurement. A detailed description of vdM scans is reported in Ref. [28].
The most precise integrated luminosity measurement in CMS to date, achieved for the 2016 data-taking period, has a total uncertainty of 1.2% [28]. Roughly half of the total uncertainty is due to the luminosity integration over the full year of data taking. This uncertainty, in turn, is composed of the uncertainty in the extrapolation of the visible cross section obtained in the vdM scan to standard data-taking conditions at high instantaneous luminosity, and the uncertainty in the integration of the instantaneous luminosity over time, obtained from comparisons
among different luminometers.

## 0.2 The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip
tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (\(\eta\)) coverage provided by the barrel and endcap detectors. The muon system consists of gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, is reported in Ref. [33].
The silicon tracker measures charged particles in the pseudorapidity range \(|\eta|<3.0\)[34, 35]. An iterative approach is used to build tracker tracks, executing a sequence of tracking algorithms, each with slightly distinct logic [16]. Muons are measured in the range \(|\eta|<2.4\), with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum (\(p_{\mathrm{T}}\)) resolution of 1% in the barrel and 3% in the endcaps, for muons with \(p_{\mathrm{T}}\) about 100 GeV [16]. The particle-flow (PF) algorithm [36] reconstructs and identifies each individual particle in an event, combining information from the various CMS detector components. Jets are clustered using the anti-\(k_{\mathrm{T}}\) jet finding algorithm [37, 38] with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum \(p_{\mathrm{T}}^{\mathrm{miss}}\), taken as the negative vector \(p_{\mathrm{T}}\) sum of those jets [39]. The primary vertex (PV) is taken to be the vertex with the largest \(\sum p_{\mathrm{T}}^{2}\) of its associated tracks, as described in Section 9.4 of Ref. [40].
Events of interest are selected using a two-tiered trigger system. The first level (L1), comprised of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a fixed latency of 4 \(\mu\)s [41]. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage [42].
During LHC Run 2, the main CMS luminosity subdetectors (luminometers) were the silicon pixel detector, the hadron forward calorimeter (HF), the pixel luminosity telescope (PLT) [43], and the fast beam conditions monitor (BCM1F) [44]. A separate data acquisition system is used to collect and store HF, PLT, and BCM1F data, as well as LHC beam-related data. A more detailed description of the CMS luminosity system is reported in Ref. [28]. For all comparisons in this paper, the reference integrated luminosity is obtained with the CMS luminometers, calibrated as described in Ref. [32] and using updated corrections for the afterglow effects in the HF luminosity measurement.
The analysis described in this paper is largely independent of Monte Carlo (MC) simulations. However, MC simulations are used for two purposes: to determine the expected DY invariant mass distribution of the signal measured in the CMS detector; and to study possible biases in the pileup-dependent measurement of the muon track-finding efficiencies. Simulated event samples of the DY process, \(\mathrm{Z}/\gamma^{*}\to\ell\ell\), are produced at leading order using the MadGraph5_aMC@nlo (v2.6.5) [45] generator, interfaced with pythia (v8.240) [46] for the parton shower simulation. The parameters describing the modeling of the parton shower and underlying event are based on the CP5 tune [47]. The generated MC events are passed through a full simulation of the detector using Geant4[48].
## 0.3 The Z boson candidate selection and efficiency determination
The events were recorded using a single-muon trigger (HLT muon) that requires at least one muon candidate with \(p_{\mathrm{T}}>24\) GeV and loose isolation criteria [49]. The lowPU data were
recorded using different, looser trigger configurations than those used for the highPU data. To obtain identical trigger configurations for the two data sets, the trigger decision in the lowPU data was recalculated from the raw data using the trigger configuration of the highPU data.
Selected muon candidates consist of a reconstructed "outer" standalone track in the muon system, matched to an "inner" track reconstructed in the silicon tracker [34]. The outer track is required to have signals in at least two muon detector planes. The inner track must have at least one valid hit in the silicon pixel detector and hits in more than five strip tracker layers. The matching is done by comparing parameters of the two tracks propagated onto a common surface. A combined Kalman filter fit [50] is performed in which the information from the inner and outer tracks is used to obtain a "global" muon track. For global muons, the inner and outer tracks are required to have \(p_{\mathrm{T}}>20\) GeV, lie within \(|\eta|<2.4\), and to be matched within \(\Delta R=\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}<0.3\). Quality criteria on the global muon track fit are imposed, and it is required that the muon candidate is also reconstructed with the PF algorithm [36]. No requirements are imposed on the impact parameters of the muon track. Isolation criteria are omitted to maintain efficiency also at high pileup. For muons with \(p_{\mathrm{T}}<200\) GeV, i.e., about 99% of identified muon candidates, the track parameters are taken from the inner track. In other cases, the track parameters are determined by combining information from the inner and outer tracks. For all muon tracks, \(p_{\mathrm{T}}>25\) GeV is required to ensure that the trigger efficiency reaches a plateau.
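As an illustration of the \(\Delta R\) matching used above, a minimal Python sketch follows; it is a generic helper, not part of the CMS software, and the example values are invented.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt((Delta eta)^2 + (Delta phi)^2).

    The azimuthal difference is wrapped into (-pi, pi] before being
    combined with the pseudorapidity difference.
    """
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

# Inner and outer tracks must match within Delta R < 0.3 (global muons);
# the HLT-muon match below uses Delta R < 0.1.
print(delta_r(1.20, 3.10, 1.25, -3.05) < 0.3)  # True: phi wraps around
```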
A \(Z\) boson candidate is identified as a pair of opposite-charge muons with an invariant mass of \(60<m_{\mu\mu}<120\GeV\). At least one of the two muon candidates is required to be matched with an HLT muon within \(\Delta R<0.1\). To obtain the actual number of produced \(Z\) bosons, the number of reconstructed and selected \(Z\) boson candidates, the trigger efficiency, the muon-identification efficiency, as well as the background arising from nonresonant production, are determined from dedicated fits to the data, as explained in the following.
### Trigger efficiency and signal extraction
The trigger efficiency and the number of \(Z\) boson candidates are determined from fits to the invariant dimuon mass distributions of events with exactly one (\(N_{1}\)) or exactly two (\(N_{2}\)) selected muons matched to an HLT muon. The observables \(N_{1}\) and \(N_{2}\) follow the relations
\[\begin{split} N_{1}&=2\epsilon_{\mathrm{HLT}}^{\mu }\big{(}1-C_{\mathrm{HLT}}\epsilon_{\mathrm{HLT}}^{\mu}\big{)}\epsilon_{\mathrm{ ID}}^{Z}N^{Z}+N_{1}^{\mathrm{bkg}},\\ N_{2}&=C_{\mathrm{HLT}}\big{(}\epsilon_{\mathrm{ HLT}}^{\mu}\big{)}^{2}\epsilon_{\mathrm{ID}}^{Z}N^{Z}+N_{2}^{\mathrm{bkg}}. \end{split} \tag{2}\]
Here, the quantity \(\epsilon_{\mathrm{HLT}}^{\mu}\) refers to the HLT muon trigger efficiency. The correction factor \(C_{\mathrm{HLT}}\) accounts for the correlation between the HLT efficiencies of the two muons. A value of \(C_{\mathrm{HLT}}>1\) indicates a positive correlation between the two muons, i.e., an increased probability for the second muon to pass the HLT if the first muon passes it. The determination of \(C_{\mathrm{HLT}}\) is presented in Section 3.2. The terms \(N_{1}^{\mathrm{bkg}}\) and \(N_{2}^{\mathrm{bkg}}\) describe the contributions from nonresonant backgrounds. The reconstruction efficiency \(\epsilon_{\mathrm{ID}}^{Z}\) is separately determined from the data, as described in Section 3.3.
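To make the counting explicit, Eq. (2) can be inverted in closed form for background-subtracted yields. The sketch below does this in Python; the correlation factor, efficiency, and yields are illustrative inputs, not measured values.

```python
def invert_yields(n1, n2, c_hlt=1.0, eps_id_z=1.0):
    """Solve Eq. (2), after background subtraction, for eps_HLT and N_Z.

    Uses n1 + 2*n2 = 2 * eps_HLT * eps_ID^Z * N_Z, which follows
    directly from adding the two relations.
    """
    eps_hlt = 2.0 * n2 / (c_hlt * (n1 + 2.0 * n2))
    n_z = c_hlt * (n1 + 2.0 * n2) ** 2 / (4.0 * n2 * eps_id_z)
    return eps_hlt, n_z

# Round trip with assumed eps_HLT = 0.9, eps_ID^Z = 0.92, N_Z = 12000:
eps, e_id, n = 0.9, 0.92, 12000.0
n1 = 2 * eps * (1 - eps) * e_id * n      # C_HLT = 1 for simplicity
n2 = eps**2 * e_id * n
print(invert_yields(n1, n2, eps_id_z=e_id))   # ~(0.9, 12000.0)
```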
To determine \(\epsilon_{\mathrm{HLT}}^{\mu}\) and \(N^{\mathrm{Z}}\), two fits are performed to two histograms binned in \(m_{\mu\mu}\) for \(Z\) candidates contributing to \(N_{1}\) and \(N_{2}\). In the fit, the signal is modeled by a histogram template generated from simulated \(Z\to\mu\mu\) events, convolved with a Gaussian function to take into account muon momentum scale and resolution differences between data and simulation. A falling exponential function is used to describe the nonresonant background. In Fig. 1, examples of two distributions and the results of the fits are presented. The sample shown here
corresponds to an integrated luminosity of \(20\,\mathrm{pb}^{-1}\), yielding about \(12\,000\)\(Z\) boson candidates.
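The fit strategy can be sketched schematically as follows: a toy mass template convolved with a Gaussian resolution term, plus a falling exponential, fit to Poisson-fluctuated pseudodata. The Breit-Wigner-like template, binning, yields, and scipy least-squares fit are stand-ins for the simulation-derived template and the fitting machinery actually used in the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the simulated Z -> mumu mass template (0.5 GeV bins).
edges = np.linspace(60.0, 120.0, 121)
centers = 0.5 * (edges[:-1] + edges[1:])
template = 1.0 / ((centers - 91.2) ** 2 + 1.25**2)   # BW-like shape
template /= template.sum()

def model(x, n_sig, n_bkg, slope, sigma):
    """Signal template convolved with a Gaussian, plus an exponential.

    Assumes x == centers, since the template lives on that grid.
    """
    kx = np.arange(-10.0, 10.5, 0.5)
    kern = np.exp(-0.5 * (kx / sigma) ** 2)
    sig = np.convolve(template, kern / kern.sum(), mode="same")
    bkg = np.exp(-slope * x)
    return n_sig * sig + n_bkg * bkg / bkg.sum()

# Pseudodata: ~12000 signal and 600 background events per interval.
rng = np.random.default_rng(7)
data = rng.poisson(model(centers, 12000.0, 600.0, 0.03, 1.2))

popt, _ = curve_fit(model, centers, data, p0=(10000.0, 500.0, 0.02, 1.0),
                    sigma=np.sqrt(np.maximum(data, 1.0)))
print(f"fitted N_sig = {popt[0]:.0f}, N_bkg = {popt[1]:.0f}")
```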
### Muon trigger correlation
The correlation between the trigger efficiencies of the two HLT muons is described by the correction factor \(C_{\mathrm{HLT}}\), as introduced in Eq. (2). The dependence of \(C_{\mathrm{HLT}}\) on the pileup is of particular interest in this analysis because it does not cancel in the ratio in Eq. (1), and thus constitutes an important source of systematic uncertainty. The correlation was investigated in simulation, and it is largely understood to originate from isolation requirements in the trigger selection.
We determine \(C_{\mathrm{HLT}}\) from an MC simulation sample of \(Z\to\mu\mu\) events. As a proxy to the amount of pileup in a given event, we use the number of reconstructed PVs, \(N_{\mathrm{PV}}\), an observable that is directly accessible event-by-event in both data and simulation. At fixed pileup, the distribution of \(N_{\mathrm{PV}}\) approximately follows a Poisson distribution with a mean at about 80% of the true pileup, as determined from DY simulation.
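As a toy numerical check of this proxy relation (not derived from the actual simulation), \(N_{\mathrm{PV}}\) can be sampled as a Poisson variable whose mean is 80% of the true pileup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_npv(true_pileup, size):
    """Toy proxy: N_PV is Poisson-distributed with mean at about 80%
    of the true pileup, as stated for the DY simulation."""
    return rng.poisson(0.8 * true_pileup, size=size)

npv = sample_npv(true_pileup=38.0, size=100_000)
print(npv.mean())   # ~30.4, i.e., N_PV ~ 30 for a true pileup near 38
```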
In the simulation, \(C_{\mathrm{HLT}}\) is obtained directly, by rearranging Eq. (2), as
\[C_{\mathrm{HLT}}=\frac{4N^{Z}\epsilon_{\mathrm{ID}}^{Z}N_{2}^{\mathrm{sig}}}{ \left(N_{1}^{\mathrm{sig}}+2N_{2}^{\mathrm{sig}}\right)^{2}}, \tag{3}\]
where \(N_{1}^{\mathrm{sig}}\) and \(N_{2}^{\mathrm{sig}}\) are the number of signal events, corresponding to \(N_{1}-N_{1}^{\mathrm{bkg}}\) and \(N_{2}-N_{2}^{\mathrm{bkg}}\) in the data.
We use data to validate the result for \(C_{\mathrm{HLT}}\) obtained in the simulation. To this end, events are analyzed that are triggered independently of the muon trigger, namely by using the trigger condition \(p_{\mathrm{T}}^{\mathrm{miss}}>120\,\mathrm{GeV}\) in which the contribution from muons is not included. This
Figure 1: The upper panels show the reconstructed invariant mass distributions of \(Z\) boson candidates in a \(20\,\mathrm{pb}^{-1}\) sample of data for events where one (left) or two (right) muons pass the single-muon trigger selection. The blue curve shows the fitted background contribution and the red curve illustrates the modeled signal-plus-background contribution. The error bars indicate the statistical uncertainties. The numbers of signal and background candidates are given by \(N_{i}^{\mathrm{sig}}=N_{i}-N_{i}^{\mathrm{bkg}}\) and \(N_{i}^{\mathrm{bkg}}\), respectively. Also indicated are the \(\chi^{2}\) values per degree of freedom (dof). The lower panels contain the pulls of the distributions, defined as the difference between the data and the fit model in each bin, divided by the statistical uncertainty estimated from the expected number of entries given by the model.
trigger also records \(Z\) boson candidates for which the number of HLT muons is zero, and, thus, an additional relation for the number of reconstructed \(Z\) boson candidates with no HLT muons, denoted as \(N_{0}\), is obtained,
\[N_{0}=\left(1-2\epsilon_{\mathrm{HLT}}^{\mu}+C_{\mathrm{HLT}}(\epsilon_{ \mathrm{HLT}}^{\mu})^{2}\right)\epsilon_{\mathrm{ID}}^{Z}N^{Z}+N_{0}^{\mathrm{ bkg}}. \tag{4}\]
Together with Eq. (2), we obtain three equations for \(N_{0}\), \(N_{1}\), and \(N_{2}\) with three unknowns, \(\epsilon_{\mathrm{HLT}}^{\mu}\), \(C_{\mathrm{HLT}}\), and \(\epsilon_{\mathrm{ID}}^{Z}N^{Z}\). The correction factor \(C_{\mathrm{HLT}}\) can thus be determined from the number of signal events in the three categories, each obtained from a fit. The fits are performed separately in six bins of \(N_{\mathrm{PV}}\), where the number of bins and their boundaries are chosen such that the number of events per bin is similar.
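Since the three relations can be solved in closed form, a short sketch suffices; the inputs are assumed to be background-subtracted yields in a single \(N_{\mathrm{PV}}\) bin, and the check values are invented.

```python
def solve_three_categories(n0, n1, n2):
    """Closed-form solution of Eqs. (2) and (4) after background
    subtraction: n0 + n1 + n2 = eps_ID^Z * N_Z, and the rest follows."""
    total = n0 + n1 + n2
    eps_hlt = (n1 + 2.0 * n2) / (2.0 * total)
    c_hlt = 4.0 * n2 * total / (n1 + 2.0 * n2) ** 2
    return eps_hlt, c_hlt, total

# Round trip with assumed eps_HLT = 0.9, C_HLT = 1.002,
# and eps_ID^Z * N_Z = 1000:
eps, c, en = 0.9, 1.002, 1000.0
n0 = (1 - 2 * eps + c * eps**2) * en
n1 = 2 * eps * (1 - c * eps) * en
n2 = c * eps**2 * en
print(solve_three_categories(n0, n1, n2))   # ~(0.9, 1.002, 1000.0)
```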
The result is presented in Fig. 2. The red lines indicate the expectation from the simulation in which \(C_{\mathrm{HLT}}\) is at the level of 0.1-0.2% above unity for \(N_{\mathrm{PV}}\sim 30\). Within the limited statistical precision of the data, good agreement of the simulation with the data is observed. We assign a systematic uncertainty of 100% of the correction, which is represented by the gray band in the figure.
### Muon identification and reconstruction efficiency
The efficiency to reconstruct a \(Z\) boson, \(\epsilon_{\mathrm{ID}}^{Z}\), depends on the muon identification and reconstruction efficiency \(\epsilon_{\mathrm{ID}}^{\mu}\) for each of the two muons. In the simulation, the pileup-dependent correlation between the two identified muons is of the order of 0.01%, and thus \(\epsilon_{\mathrm{ID}}^{Z}=C_{\mathrm{ID}}\left(\epsilon_{\mathrm{ID}}^{\mu} \right)^{2}\). The value for \(C_{\mathrm{ID}}\approx 1.0001\) is taken from simulation and applied as a function of \(N_{\mathrm{PV}}\). The muon efficiency \(\epsilon_{\mathrm{ID}}^{\mu}\) is defined independently of the HLT muon efficiency, such that the total number of produced \(Z\) bosons is obtained from Eq. (2).
To determine \(\epsilon_{\mathrm{ID}}^{\mu}\), the following factorization ansatz is used:
\[\epsilon_{\mathrm{ID}}^{\mu}=\epsilon_{\mathrm{ID}|\mathrm{Glo}}^{\mu}\,\epsilon_{\mathrm{Glo}|\mathrm{Sta}}^{\mu}\,\epsilon_{\mathrm{Sta}|\mathrm{Trk}}^{\mu}\,\frac{1}{c_{\mathrm{T\&P}}}, \tag{5}\]
Figure 2: Correction factor \(C_{\mathrm{HLT}}\) for the correlation between the measured muon trigger efficiencies of the two muons as a function of the number of reconstructed primary vertices, \(N_{\mathrm{PV}}\), in the simulation (lines) and the data (points). The data points are drawn at the mean value of \(N_{\mathrm{PV}}\) in each bin of the measurement. The horizontal error bars on the points show the bin width, and the vertical error bars show the statistical uncertainty. The gray band indicates the \(\pm 100\%\) uncertainty in the correction factor.
where the efficiency \(\epsilon^{\mu}_{\text{ID}|\text{Glo}}\) is the fraction of global muons that fulfill the full set of muon identification requirements; the efficiency \(\epsilon^{\mu}_{\text{Glo}|\text{Sta}}\) is the global muon efficiency, given by the fraction of standalone muons that also qualify as global muon; and the efficiency \(\epsilon^{\mu}_{\text{Sta}|\text{Trk}}\) is the standalone muon efficiency, defined as the fraction of muons with good inner tracks that are matched within \(\Delta R<0.3\) to outer standalone muon tracks with \(p_{\text{T}}>20\,\text{GeV}\) and \(|\eta|<2.4\). To obtain an unbiased set of inner tracks for the measurement of the efficiency \(\epsilon^{\mu}_{\text{Sta}|\text{Trk}}\), inner tracks that are seeded from the extrapolation of outer standalone muon tracks are excluded. The term \(c_{\text{T\&P}}\) accounts for the correlations between the efficiency terms in Eq. (5). The pileup dependence of the correction from \(c_{\text{T\&P}}\) between the lowPU and the highPU data sets is estimated from simulation to be about 0.01%.
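A minimal numeric sketch of the factorization in Eq. (5), combined with \(\epsilon_{\mathrm{ID}}^{Z}=C_{\mathrm{ID}}(\epsilon_{\mathrm{ID}}^{\mu})^{2}\) from above, follows; all efficiency values are illustrative placeholders rather than measured numbers.

```python
def muon_id_efficiency(eff_id_glo, eff_glo_sta, eff_sta_trk, c_tnp=1.0):
    """Factorized single-muon efficiency, Eq. (5)."""
    return eff_id_glo * eff_glo_sta * eff_sta_trk / c_tnp

def z_reco_efficiency(eff_mu, c_id=1.0001):
    """Z reconstruction efficiency, eps_ID^Z = C_ID * (eps_ID^mu)^2."""
    return c_id * eff_mu**2

eff_mu = muon_id_efficiency(0.992, 0.998, 0.995)   # placeholder values
print(f"eps_ID^mu = {eff_mu:.4f}, eps_ID^Z = {z_reco_efficiency(eff_mu):.4f}")
```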
The efficiencies are determined from the data using a "tag-and-probe" methodology [1]. Identified muon candidates that are matched to the HLT muon are selected as "tag". For each tag, a probe muon candidate of opposite charge is selected under the condition that the muon candidate pair has an invariant mass between 60 and 120 GeV. The efficiency \(\epsilon^{\mu}_{x|y}\) is then measured as
\[\epsilon^{\mu}_{x|y}=\frac{n^{\text{P}}}{n^{\text{P}}+n^{\text{f}}}, \tag{6}\]
where \(y\) denotes the reference sample of muon candidates and \(x\) is the probe criterion. The numbers \(n^{\text{P}}\) and \(n^{\text{f}}\) correspond to the number of events that pass and fail the test criterion, respectively.
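For a quick estimate, the pass/fail counting of Eq. (6) can be dressed with a binomial interval; the sketch below uses a Clopper-Pearson interval as one common choice, which is an assumption here, since the paper extracts yields and uncertainties from fits instead.

```python
from scipy.stats import beta

def tnp_efficiency(n_pass, n_fail, cl=0.68):
    """Efficiency per Eq. (6) with a Clopper-Pearson interval."""
    eff = n_pass / (n_pass + n_fail)
    lo = beta.ppf((1 - cl) / 2, n_pass, n_fail + 1) if n_pass else 0.0
    hi = beta.ppf(1 - (1 - cl) / 2, n_pass + 1, n_fail) if n_fail else 1.0
    return eff, lo, hi

print(tnp_efficiency(9800, 200))   # ~(0.98, 0.979, 0.981)
```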
For each of the efficiencies, and in intervals of \(20\,\mathrm{pb}^{-1}\), fits to the \(m_{\mu\mu}\) distributions of the passing and failing probes are performed. In the fits, the same shapes as described in Section 0.3.1 are used to describe the signal. In the histograms with passing probes, the background contribution is low and a falling exponential is used. In the case of failing probes, the nonresonant background is much larger and a more complex analytic function, comprising an exponential at high mass above the \(Z\) boson resonance and an error function at low mass, is fit. To ensure a bias-free measurement of \(\epsilon^{\mu}_{\text{Glo}|\text{Sta}}\), the outer standalone muon track parameters are used to determine \(m_{\mu\mu}\) for the passing and failing probes. Since the resolution of these tracks is much worse, the invariant mass requirement is widened to 50-130 GeV. In the case that, in a given event, the probe muon also fulfills the tag muon requirements, the tag-and-probe muons are indistinguishable and both muons are used as probes. Quantitative results for the measurement of the efficiencies are presented in Section 0.4.
### Acceptance correction
To determine the true number of \(Z\) bosons in the visible phase space, an acceptance correction for losses, or gains, due to the finite resolution of the reconstructed muon tracks is required. The correction affects the number of reconstructed \(Z\) bosons itself. The efficiencies are also affected, primarily in the matching of inner and outer tracks, and, to a lesser extent, if muon tracks for passing and for failing probes have different resolutions. The size of the correction is determined from the simulation by comparing the efficiency-corrected number of \(Z\) bosons as obtained from the measurement with the generated number of \(Z\) bosons in the visible phase space, as defined for bare leptons after final-state radiation (FSR), but before detector simulation.
For outer muon tracks, resolution effects lead to an acceptance correction of about 1.35%, which is independent of pileup and constant over the full year of data taking. For inner tracks, the acceptance correction is 0.15% at low pileup, and it is negligibly small for the highPU data set. This pileup dependency of 0.15% is applied as an additional correction, and an uncertainty of 100% of this correction is assigned.
## 0.4 Results
The results are shown in Fig. 3, where the \(Z\) boson rate is compared to the reference luminosity measurement in the LHC fill 6255, recorded on September 29, 2017.

Figure 3: Left: the integrated \(Z\) boson rate compared to the reference luminosity in LHC fill 6255. Each bin corresponds to about \(20\,\mathrm{pb}^{-1}\), as determined by the reference measurement; for shape comparison, the integrated \(Z\) boson rate is normalized to the reference integrated luminosity. The panel at the bottom shows the ratio of the two measurements, and the vertical error bars show the statistical uncertainty in the \(Z\) boson rate. Right: the measured single-muon efficiencies as functions of time for the same LHC fill; the vertical error bars show the statistical uncertainty in the efficiency.
Many of the systematic uncertainties in the \(Z\) boson counting cancel, as detailed in the following section.
In Fig. 5, the distribution of the ratios between the \(Z\) luminosity and the reference luminosity as obtained from the CMS luminosity systems is shown. Each entry in the histogram corresponds to an interval of \(20\,\mathrm{pb}^{-1}\) in the highPU data recorded in 2017. The central values of both measurements are in good agreement, with a difference of 0.3%. The standard deviation of about 1.2% is predominantly statistical in nature, and is close to the expectation for the pure statistical uncertainty from the roughly \(12\,000\)\(Z\) boson candidates reconstructed in each interval of \(20\,\mathrm{pb}^{-1}\). The ratio of the \(Z\) luminosity and the reference luminosity as a function of the integrated luminosity is shown in Fig. 6. This figure shows a good stability of the \(Z\) luminosity measurement over the full year. No significant patterns in time are observed.
Figure 5: Distribution of the ratio of integrated luminosities between \(Z\) boson counting and the reference luminometer. The entries, each corresponding to one interval of \(20\,\mathrm{pb}^{-1}\) of highPU data, are weighted with the respective measured luminosity.
Figure 6: The luminosity as measured from \(Z\) bosons divided by the reference luminosity as a function of the integrated luminosity for the 2017 highPU data. Each green point represents the ratio from one measurement of the number of \(Z\) bosons. The blue lines show the averages of 50 consecutive measurements, each containing about \(1\,\mathrm{fb}^{-1}\) of data. The gray band has a width of 1.5%, corresponding to the uncertainty in the ratio of the integrated reference luminosity of the lowPU data set to that of the highPU data set [32].
### Statistical and systematic uncertainties, and additional cross checks
The uncertainties in the analysis were studied with the focus on the ratio \(r=N_{\text{highPU}}^{\text{Z}}/N_{\text{lowPU}}^{\text{Z}}\) of the \(Z\) boson counts between two data samples in 2017 as presented in Eq. (1). The full list of considered sources of uncertainty in the cross sections and their ratio is given in Table 1, and described in the following.
Statistical uncertainties are driven by the number of available \(Z\) bosons and also include the statistical uncertainty in the efficiencies. As mentioned above, in one interval of \(20\,\text{pb}^{-1}\), about \(12\,000\)\(Z\) boson candidates with two muons in the final state are available, leading to an average statistical uncertainty of \(1.17\%\). For all intervals combined, the statistical uncertainty for the full 2017 highPU data is negligibly small. The lowPU data set corresponds to an integrated luminosity of about \(200\,\text{pb}^{-1}\), and this contributes a statistical uncertainty of about \(0.35\%\).
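A back-of-envelope decomposition of the quoted 1.17%, assuming the Poisson term and the efficiency-fit term add in quadrature, is consistent with this picture:

```python
import math

n_z = 12000                           # Z candidates per 20/pb interval
poisson = 100.0 / math.sqrt(n_z)      # ~0.91% from the candidate count
eff_fits = math.sqrt(1.17**2 - poisson**2)   # remainder, in quadrature
print(f"Poisson: {poisson:.2f}%, efficiency fits: ~{eff_fits:.2f}%")
```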
As discussed in Section 3.2, the correction factor for correlations in the trigger efficiencies of the two muons \(C_{\text{HLT}}\) is determined from data and simulation; it is about \(0.1\%\) above unity for the highPU sample, consistently for data and MC simulation. The uncertainty in \(C_{\text{HLT}}\) is assigned to be \(100\%\) of the correction.
Possible correlations between the two identified muons and imperfect factorization of muon identification and reconstruction efficiencies were discussed in Section 3.3. The simulation shows negligible effects, and corrections at the level of \(0.01\%\) are applied. The corresponding uncertainties are estimated to be \(100\%\) of the correction.
The limited resolution of the reconstructed muon tracks leads to a bias in the measurement, as described in Section 3.4. The bias from the inner track resolution is smaller, but pileup dependent, and thus does not fully cancel in the ratio.
\begin{table}
\begin{tabular}{l c c c} & \(\delta N_{\text{highPU}}^{\text{Z}}\) [\%] & \(\delta N_{\text{lowPU}}^{\text{Z}}\) [\%] & \(\delta\Big{(}N_{\text{highPU}}^{\text{Z}}/N_{\text{lowPU}}^{\text{Z}}\Big{)}\) [\%] \\ \hline HLT correlation \(C_{\text{HLT}}\) & \(\pm 0.1\) & \(\pm 0.06\) & \(\pm 0.04\) \\ Dimuon correlation \(C_{\text{ID}}\) & \(\pm 0.00\) & \(\mp 0.01\) & \(\pm 0.01\) \\ Inner-outer track correlation \(c_{\text{T\&P}}\) & \(\pm 0.01\) & \(\mp 0.01\) & \(\pm 0.01\) \\ Inner track resolution & \(\pm 0.01\) & \(\pm 0.16\) & \(\mp 0.15\) \\ Outer track resolution & \(\pm 1.35\) & \(\pm 1.36\) & \(\mp 0.01\) \\ L1 muon prefiring & \(\pm 0.15\) & \(\pm 0.15\) & \(0\) \\ ECAL prefiring & \(\pm 0.04\) & \(\pm 0.14\) & \(\mp 0.10\) \\ Signal modeling up & \(-0.63\) & \(-0.75\) & \(+0.19\) \\ Signal modeling down & \(+0.51\) & \(+0.71\) & \(-0.21\) \\ Background modeling up & \(-0.15\) & \(-0.31\) & \(+0.16\) \\ Background modeling down & \(-0.09\) & \(-0.05\) & \(-0.04\) \\ Systematic up & \(+1.45\) & \(+1.56\) & \(+0.31\) \\ Systematic down & \(-1.50\) & \(-1.60\) & \(-0.28\) \\ Statistical & \(\pm 0.03\) & \(\pm 0.35\) & \(\pm 0.35\) \\ Total up & \(+1.45\) & \(+1.60\) & \(+0.47\) \\ Total down & \(-1.50\) & \(-1.64\) & \(-0.45\) \\ \end{tabular}
\end{table}
Table 1: Summary of the uncertainties in the number of delivered \(Z\) bosons in the 2017 highPU and lowPU data, and their ratio. The symbol \(\delta\) denotes the relative uncertainty, i.e., \(\delta x=\Delta x/x\). The systematic and statistical uncertainties are added in quadrature to obtain the total uncertainty.
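As a cross-check of the quadrature sums in Table 1, the sketch below recombines the upward systematic components of the ratio column and adds the statistical term; it reproduces the quoted +0.31% systematic and +0.47% total.

```python
import math

# Upward systematic components of the ratio column in Table 1 (in %).
ratio_syst_up = [
    0.04,   # HLT correlation C_HLT
    0.01,   # dimuon correlation C_ID
    0.01,   # inner-outer track correlation c_T&P
    0.15,   # inner track resolution
    0.01,   # outer track resolution
    0.00,   # L1 muon prefiring
    0.10,   # ECAL prefiring
    0.19,   # signal modeling (up)
    0.16,   # background modeling (up)
]
stat = 0.35

syst = math.sqrt(sum(u * u for u in ratio_syst_up))
total = math.sqrt(syst**2 + stat**2)
print(f"syst = +{syst:.2f}%, total = +{total:.2f}%")   # +0.31%, +0.47%
```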
## 0.5 Summary
The results from \(Z\) boson counting are independent of the conventional luminosity measurements. They can be treated as uncorrelated in combinations, which can lead to significant improvements in the combined uncertainty.
Taking the current precision of 1.7% for the integrated luminosity in the lowPU data [32], the integrated luminosity in the highPU 2017 data could potentially be determined to a precision of better than 1.8%, in contrast to the preliminary uncertainty of the reference luminosity measurement of 2.3% [32].
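This number follows from propagating the total ratio uncertainty of Table 1 in quadrature onto the lowPU calibration, assuming the two contributions are uncorrelated:

```python
import math

lowpu = 1.7     # % uncertainty of the lowPU integrated luminosity [32]
ratio = 0.47    # % total uncertainty in N^Z_highPU / N^Z_lowPU (Table 1)

print(f"{math.sqrt(lowpu**2 + ratio**2):.2f}%")   # ~1.76%, below 1.8%
```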
A unique aspect of \(Z\) boson counting is that the relevant efficiency corrections as a function of time can be calibrated from the same event sample. This feature makes the method robust not only against small changes in detector response, but also across different detector configurations. In general, once a precision measurement of the integrated luminosity is available, such as that for the lowPU data in 2017, the integrated luminosity for all data recorded at the same center-of-mass energy can be determined using the \(Z\) boson counting. However, each transfer between data sets requires detailed studies of the correlations of the muon trigger and the reconstruction efficiencies.
In this paper, the full analysis was presented for the data from 2017, when a dedicated and sufficiently large sample of lowPU data was recorded. Under such conditions, a large fraction of the systematic uncertainties cancels in the ratio. For the most precise CMS measurement of the luminosity to date [28], published for 2016, an extrapolation and integration uncertainty of 0.7% was reported. For 2016, no lowPU data set was recorded. Further studies on the impact of different detector conditions would be required to extrapolate from the 2016 data set. If, hypothetically, an extrapolation uncertainty of 0.5% for \(Z\) boson counting were achievable also in the 2016 data, the uncertainty of 1.2% in the total integrated luminosity for 2016 could be improved to 1.1%.
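The hypothetical improvement quoted here amounts to a simple quadrature replacement, assuming all other components of the 2016 uncertainty stay unchanged:

```python
import math

total_2016 = 1.2    # % total 2016 luminosity uncertainty [28]
extrap_ref = 0.7    # % extrapolation and integration term [28]
extrap_z = 0.5      # % hypothetical Z-counting extrapolation term

others = math.sqrt(total_2016**2 - extrap_ref**2)
print(f"{math.sqrt(others**2 + extrap_z**2):.2f}%")   # ~1.10%
```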
The dominant contribution to the uncertainty comes from the statistical uncertainty, which is driven by the size of the lowPU data sample. The lowPU data recorded in 2017 correspond to an integrated luminosity of about \(200\,\mathrm{pb}^{-1}\). A significant increase of the sample size, e.g., by a factor 3 or 4, would make the statistical uncertainty negligible.
In the coming years, during the ongoing LHC Run 3 and beyond, additional measurements and studies of the main systematic uncertainties will be performed, which is expected to improve the precision of the method further. Furthermore, the method is expected to contribute substantially to the combination of integrated luminosity measurements for different data sets.
\begin{table}
\begin{tabular}{l c c c} & \(\delta N_{\text{highPU}}^{\text{Z}}\) [\%] & \(\delta N_{\text{lowPU}}^{\text{Z}}\) [\%] & \(\delta\left(N_{\text{highPU}}^{\text{Z}}/N_{\text{lowPU}}^{\text{Z}}\right)\) [\%] \\ \hline Lum. bin size \(30\,\mathrm{pb}^{-1}\) & \(-0.05\) & \(-0.01\) & \(-0.04\) \\ Lum. bin size \(15\,\mathrm{pb}^{-1}\) & \(+0.04\) & \(+0.07\) & \(-0.03\) \\ Mass bin width \(1\,\mathrm{GeV}\) & \(-0.02\) & \(-0.06\) & \(+0.03\) \\ Mass bin width \(0.25\,\mathrm{GeV}\) & \(-0.01\) & \(+0.01\) & \(-0.02\) \\ Mass range \([50,130]\) GeV & \(+1.25\) & \(+1.24\) & \(+0.00\) \\ Mass range \([70,110]\) GeV & \(-2.32\) & \(-2.26\) & \(-0.05\) \\ \end{tabular}
\end{table}
Table 2: Summary of cross checks performed by varying the length of the luminosity interval, the bin width of the \(m_{\mu\mu}\) histograms, and the range of the fit. As in Table 1, the resulting variations of the number of \(Z\) bosons in the 2017 highPU and lowPU data, and their ratio, are shown. The \(\delta\) denotes the relative variations, i.e., \(\delta x=\Delta x/x\).
## Acknowledgements
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract Nos. 675440 (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the "Excellence of Science - EOS" - be.h project n. 30820817; the Beijing Municipal Science & Technology Commission, No. Z191100007219010 and Fundamental Research Funds for the Central Universities (China); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Shota Rustaveli National Science Foundation, grant FR-22-985 (Georgia); the Deutsche Forschungsgemeinschaft (DFG), under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306, and under project number 400140256 - GRK2497; the Hellenic Foundation for Research and Innovation (HFRI), Project Number 2288 (Greece); the Hungarian Academy of Sciences, the New National Excellence Program - UNKP, the NKFIA research grants K 124845, K 124850, K 128713, K 128786, K 129058, K 131991, K 133046, K 138136, K 143460, K 143477, 2020-2.2.1-ED-2021-00181, and TKP2021-NKTA-64 (Hungary); the Council of Science and Industrial Research, India; the Latvian Council of Science; the Ministry of Education and Science, project no. 
2022/WK/14, and the National Science Center, contracts Opus 2021/41/B/ST2/01369 and 2021/43/B/ST2/01552 (Poland); the Fundacao para a Ciencia e a Tecnologia, grant CEECIND/01334/2018 (Portugal); the National Priorities Research Program by Qatar National Research Fund; MCIN/AEI/10.13039/501100011033, ERDF "a way of making Europe", and the Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia Maria de Maeztu, grant MDM-2017-0765 and Programa Severo Ochoa del Principado de Asturias (Spain); the Chulalongkorn Academic into Its 2nd Century Project Advancement Project, and the National Science, Research and Innovation Fund via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation, grant B05F650021 (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
|
2301.02323 | Ab initio calculation of carrier mobility in semiconductors including
ionized-impurity scattering | The past decade has seen the emergence of ab initio computational methods for
calculating phonon-limited carrier mobilities in semiconductors with predictive
accuracy. More realistic calculations ought to take into account additional
scattering mechanisms such as, for example, impurity and grain-boundary
scattering. In this work, we investigate the effect of ionized-impurity
scattering on the carrier mobility. We model the impurity potential by a
collection of randomly distributed Coulomb scattering centers, and we include
this relaxation channel into the ab initio Boltzmann transport equation, as
implemented in the EPW code. We demonstrate this methodology by considering
silicon, silicon carbide, and gallium phosphide, for which detailed
experimental data are available. Our calculations agree reasonably well with
experiments over a broad range of temperatures and impurity concentrations. For
each compound investigated here, we compare the relative importance of
electron-phonon scattering and ionized-impurity scattering, and we critically
assess the reliability of Matthiessen's rule. We also show that an accurate
description of dielectric screening and carrier effective masses can improve
quantitative agreement with experiments. | Joshua Leveillee, Xiao Zhang, Emmanouil Kioupakis, Feliciano Giustino | 2023-01-05T22:42:57Z | http://arxiv.org/abs/2301.02323v1 | # _Ab initio_ calculation of carrier mobility in semiconductors including ionized-impurity scattering
###### Abstract
The past decade has seen the emergence of _ab initio_ computational methods for calculating phonon-limited carrier mobilities in semiconductors with predictive accuracy. More realistic calculations ought to take into account additional scattering mechanisms such as, for example, impurity and grain-boundary scattering. In this work, we investigate the effect of ionized-impurity scattering on the carrier mobility. We model the impurity potential by a collection of randomly distributed Coulomb scattering centers, and we include this relaxation channel into the _ab initio_ Boltzmann transport equation, as implemented in the EPW code. We demonstrate this methodology by considering silicon, silicon carbide, and gallium phosphide, for which detailed experimental data are available. Our calculations agree reasonably well with experiments over a broad range of temperatures and impurity concentrations. For each compound investigated here, we compare the relative importance of electron-phonon scattering and ionized-impurity scattering, and we critically assess the reliability of Matthiessen's rule. We also show that an accurate description of dielectric screening and carrier effective masses can improve quantitative agreement with experiments.
## I Introduction
The ability to predict the charge transport properties of semiconductors using non-empirical _ab initio_ methods is of paramount importance for the design of next-generation electronics, neuromorphic computing, energy-efficient lighting, and energy conversion and storage. For example, as beyond-silicon materials for next-generation field-effect transistors are being explored, such as wide-gap semiconductors like GaN [1], SiC [2], and Ga\({}_{2}\)O\({}_{3}\)[3], or high-mobility materials such as GaAs [4], _ab initio_ methods for calculating transport properties with predictive accuracy are acquiring an increasingly important role.
The past decade has seen numerous developments in first-principles calculations of phonon-limited charge transport coefficients such as the electrical conductivity in metals, and the drift and Hall mobilities in semiconductors [5; 6; 7; 8; 9; 10; 11]. More recently, several groups turned their attention to _ab initio_ calculations of additional scattering mechanisms [5; 12; 13; 14; 15; 16]. Among the various mechanisms, impurity scattering is of particular interest since ionized donors and acceptors are ubiquitous in high-purity doped semiconductors, and intrinsic point defects are unavoidable in all other materials [17; 18; 19]. In this work we focus on ionized-impurity scattering, which is expected to provide the most significant contribution to the carrier relaxation rates beyond phonons, given the long-ranged nature of the Coulomb potential.
Ionized-impurity scattering in semiconductors was first studied via the Conwell-Weisskopf model. In this model, the scattering potential of the impurity is described using a Coulomb monopole immersed in the dielectric background of the semiconductor [20]. The long-range nature of this potential makes it ill-behaved at long wavelengths, and the resulting singularity is removed using an _ad hoc_ infrared cutoff. A better handling of this singularity is achieved in the Brooks-Herring model by considering free-carrier screening [21]. This latter model proved very successful [22], and is still widely used owing to its simplicity, as it only requires the electronic density of states, the carrier effective mass, the high-frequency dielectric constant, and the impurity concentration. Further improvements upon these models were subsequently introduced, e.g., carrier statistics, dispersive electronic screening, two-impurity scattering, and atomic form factors [23]. While this class of models enjoyed considerable success with calculations of the carrier mobility of silicon, they do not perform as well with other semiconductors [24; 25]. These and other similar empirical adjustments make it harder to quantify the role of each scattering channel, and most importantly decrease the transferability of the models and ultimately their usefulness in materials design.
During the past decade, considerable progress has been achieved in _ab initio_ calculations of charge carrier mobilities [5; 6; 7; 8; 14; 26]. These approaches are based on the use of electronic band structures from density functional theory (DFT) [27; 28], as well as phonon dispersion relations and electron-phonon matrix elements from supercell calculations or from density-functional perturbation theory (DFPT) [29; 30; 31; 32]. To achieve a numerically converged sampling of the Brillouin zone, most calculations by now employ Wannier-Fourier interpolation [33; 34; 35]. Mobilities are then obtained by solving the _ab initio_
Boltzmann transport equation (\(ai\)BTE) [26]. The first study of ionized-impurity scattering from first principles was reported by Restrepo and Pantelides [5], and more recent, state-of-the-art calculations have been reported by Lu and coworkers [14]. In this latter work, the authors find good agreement between calculated mobilities and experimental data for silicon. Additional work using a semi-empirical approach combining DFT calculations and models was also reported recently [36; 37].
In this work, we investigate from first principles the effect of ionized-impurity scattering on the carrier mobility of semiconductors. To this aim, we take into account both carrier-phonon and carrier-impurity scattering on the same footing, within the \(ai\)BTE formalism as implemented in the EPW code [38]. Given that the shape of the impurity potential depends on the details of the crystal structure and its evaluation would require thermodynamic calculations of defects and defect levels [39], we limit ourselves to consider the monopole term of the scattering potential and a random distribution of impurities. This simplification allows us to achieve an elegant and compact formalism, and to compute carrier mobilities by using solely the concentration of ionized impurities as input. To validate our methodology, we perform calculations for three test systems: Si, 3C-SiC, and GaP. For Si there is an abundance of experimental data and previous calculations to compare with. 3C-SiC, which is also referred to as cubic SiC or \(\beta\)-SiC in the literature, is considered a promising candidate for next-generation power electronics [40; 41; 42]. Several experimental data sets are available for carrier mobility in 3C-SiC, especially for \(n\)-type (N) doping and less so for \(p\)-type doping (Al). GaP is a standard optoelectronic semiconductor which is of interest in non-linear optical switching [43; 44; 45]; experimental mobility data for GaP are available both for \(n\)-type doping (Sn) and \(p\)-type doping (Zn). For each of these compounds we calculate the temperature-dependent carrier mobility at variable impurity concentration. We investigate the relative importance of carrier-phonon and carrier-impurity scattering, and we examine the validity of the classic Matthiessen's rule [46].
The manuscript is organized as follows. In Sec. II we briefly summarize the \(ai\)BTE formalism, we provide a detailed derivation of the matrix elements for carrier-impurity scattering, and we discuss the key approximations involved. In this section we also discuss free-carrier screening, and we examine under which conditions the Matthiessen rule can reliably be used in transport calculations. Section III is devoted to the implementation details and the calculation parameters used in this work. In Sec. IV we discuss our results for Si, SiC, and GaP. In particular, in Sec. IV.2 we present our calculated temperature- and concentration-dependent mobilities and compare our data with experiments. In Sec. IV.3 we analyze the relative importance of phonon- and impurity-mediated scattering processes in the carrier relaxation rates. In Sec. IV.4 we test Matthiessen's rule by comparing full \(ai\)BTE calculations with the results of separate calculations including only phonon-limited or impurity-limited mobilities. In Sec. IV.5 we investigate how the DFT dielectric screening and carrier effective masses influence calculated mobilities, and we test simple correction schemes along the lines of Ref. [10]. In Sec. V we summarize our findings and offer our conclusions. Additional details on the calculation procedure are discussed in the Appendices.
## II Theoretical approach
### Carrier mobility from the _ab initio_ Boltzmann transport equation
A detailed derivation of the \(ai\)BTE formalism is given in Ref. [38]. Here we limit ourselves to summarize the key equations in order to keep this manuscript self-contained. Within the linearized Boltzmann transport equation, the carrier mobility tensor is obtained as:
\[\mu_{\alpha\beta}=-\frac{2}{\Omega_{\rm{uc}}n_{\rm{c}}}\frac{1}{N_{\rm{uc}}} \sum_{n\bf{k}}v^{\alpha}_{n\bf{k}}\partial_{E_{\beta}}f_{n\bf{k}}, \tag{1}\]
where the factor of 2 is for the spin degeneracy, Greek indices indicate Cartesian directions, \(E_{\beta}\) indicate the Cartesian components of the electric field, and \(\partial_{E_{\beta}}f_{n\bf{k}}\) is the linear variation of the electronic occupation of the state with band index \(n\) and wavevector \(\bf{k}\) in response to the applied field. \(v^{\alpha}_{n\bf{k}}\) represents the expectation value of the velocity operator along the direction \(\alpha\), for the Kohn-Sham state \(n\bf{k}\). \(e\), \(n_{\rm{c}}\), \(\Omega_{\rm{uc}}\), and \(N_{\rm{uc}}\) indicate the electron charge, the carrier density, the volume of the unit cell, and the number of unit cells in the Born-von Karman (BvK) supercell, respectively. The \(n\)-summation extends over all Kohn-Sham states, although in practice only those states near the chemical potential contribute to the mobility. The \(\bf{k}\)-summation is over a uniform Brillouin zone grid.
The variation \(\partial_{E_{\beta}}f_{n\bf{k}}\) is obtained from the self-consistent solution of the equation:
\[-ev^{\beta}_{n\bf{k}}\frac{\partial f^{0}_{n\bf{k}}}{\partial \epsilon_{n\bf{k}}}=\sum_{m\bf{q}}\left[\tau^{-1}_{m\bf{k}+q\to n\bf{k}}\, \partial_{E_{\beta}}f_{m\bf{k}+q}\right.\] \[\left.-\tau^{-1}_{n\bf{k}\to m\bf{k}+q}\,\partial_{E_{\beta}}f_{ n\bf{k}}\right], \tag{2}\]
where \(f^{0}_{n\bf{k}}\) denotes the Fermi-Dirac occupation of the state \(n\bf{k}\) in the absence of electric field. The quantity \(\tau^{-1}_{n\bf{k}\to m\bf{k}+q}\) is the partial scattering rate from the Kohn-Sham state \(n\bf{k}\) to the state \(m\bf{k}+q\). In many-body perturbation theory, this rate is derived from the imaginary parts of the electron self-energy, therefore different scattering mechanisms simply add up to the lowest order in perturbation theory. In this work, we write the scattering rate as the sum of the rates of carrier-phonon scattering (ph) and carrier-impurity (imp) scattering:
\[\frac{1}{\tau_{n\bf{k}\to m\bf{k}+q}}=\frac{1}{\tau^{\rm{ph}}_{n\bf{k}\to m \bf{k}+q}}+\frac{1}{\tau^{\rm{imp}}_{n\bf{k}\to m\bf{k}+q}}. \tag{3}\]
The partial carrier-phonon scattering rate is given by [26]:
\[\frac{1}{\tau^{\rm ph}_{n{\bf k}\to m{\bf k}+{\bf q}}}=\frac{1}{N_{\rm uc}}\sum_{\nu}\frac{2\pi}{\hbar}\left|g_{mn\nu}({\bf k},{\bf q})\right|^{2}\big{[}(n_{{\bf q}\nu}+1-f^{0}_{m{\bf k}+{\bf q}})\,\delta(\epsilon_{n{\bf k}}-\epsilon_{m{\bf k}+{\bf q}}-\hbar\omega_{{\bf q}\nu})+(n_{{\bf q}\nu}+f^{0}_{m{\bf k}+{\bf q}})\,\delta(\epsilon_{n{\bf k}}-\epsilon_{m{\bf k}+{\bf q}}+\hbar\omega_{{\bf q}\nu})\big{]}, \tag{4}\]
where \(\epsilon_{n{\bf k}}\) denotes the Kohn-Sham eigenvalues, and \(\omega_{{\bf q}\nu}\) stands for the frequency of a phonon with branch index \(\nu\), wavevector \({\bf q}\), and Bose-Einstein occupation \(n_{{\bf q}\nu}\). The matrix elements \(g_{mn\nu}({\bf k},{\bf q})\) indicate the probability amplitude for the scattering of an electron from state \(n{\bf k}\) to state \(m{\bf k}+{\bf q}\) via a phonon \({\bf q}\nu\) [35]. The partial rate in Eq. (4) can be obtained either from Fermi's golden rule or from many-body perturbation theory [35]. The carrier-impurity scattering rate required in Eq. (3) is derived in the next section and is given by Eq. (17).
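To make the structure of Eq. (4) concrete, the following minimal Python sketch evaluates the partial carrier-phonon rate for a single \((n{\bf k})\to(m{\bf k}+{\bf q})\) pair, with the Dirac deltas replaced by normalized Gaussians of width \(\sigma\) (a common numerical choice, see Sec. III). All inputs are placeholder arrays rather than actual EPW data structures, and \(\hbar=1\) is assumed:

```python
import numpy as np

def phonon_rate(g2, e_nk, e_mkq, w_qnu, n_qnu, f_mkq, N_uc, sigma=0.01):
    """Partial carrier-phonon scattering rate of Eq. (4), with hbar = 1.
    g2:     |g_mn,nu(k,q)|^2 for each phonon branch nu (1D array)
    e_nk:   initial electron energy (scalar)
    e_mkq:  final electron energy (scalar)
    w_qnu:  phonon energies per branch (1D array)
    n_qnu:  Bose-Einstein occupations per branch (1D array)
    f_mkq:  Fermi-Dirac occupation of the final state (scalar)
    sigma:  Gaussian width standing in for the Dirac delta"""
    delta = lambda x: np.exp(-(x / sigma)**2) / (np.sqrt(np.pi) * sigma)
    emission   = (n_qnu + 1.0 - f_mkq) * delta(e_nk - e_mkq - w_qnu)
    absorption = (n_qnu + f_mkq) * delta(e_nk - e_mkq + w_qnu)
    return (2.0 * np.pi / N_uc) * np.sum(g2 * (emission + absorption))
```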
Together, Eqs. (1)-(4) and (17) define the _ai_BTE framework employed in this work. This approach consistently captures back-scattering and Umklapp processes, with a computational cost that is similar to more approximate approaches based on various relaxation-time approximations. We refer the reader to Ref. [26] for a comprehensive review of common approximations to the Boltzmann transport equation.
### Scattering of carriers by ionized impurities in the monopole approximation
To obtain the carrier-impurity scattering rate \(1/\tau^{\rm imp}_{n{\bf k}\to m{\bf k}+{\bf q}}\) we proceed as follows: (i) We derive the matrix element of the scattering potential for a single impurity in a periodic BvK supercell of the crystal unit cell; (ii) We generalize the matrix element to consider a number \(N_{\rm imp}\) of impurities in the BvK supercell; (iii) From this matrix element, we obtain the scattering rate corresponding to the \(N_{\rm imp}\) impurities by using the first Born approximation; (iv) We average the resulting rate over a random uniform distribution of impurity positions using a method due to Kohn and Luttinger.
#### ii.2.1 Scattering potential and matrix element for single impurity
We employ the monopole approximation to describe the potential of an impurity of charge \(Ze\) located at the position \({\bf r}_{0}\) in the BvK supercell. A more refined choice would entail explicitly calculating the impurity potential in DFT and its matrix elements. This approach was pursued in Refs. [5] and [14], but it carries the disadvantage that one needs to compute defect energetics prior to mobility calculations, and then perform rotational averages to account for the randomness of the impurity orientation. Our simpler approach is useful for systematic transport calculations when detailed knowledge of the atomic-scale structure of impurities is lacking, and can be made more accurate by incorporating dipole and quadrupole terms along the lines of Refs. [47; 48; 49].
By solving the Poisson equation in the BvK supercell and considering a background anisotropic static dielectric constant tensor \(\mathbf{\varepsilon}^{0}=\varepsilon^{0}_{\alpha\beta}\), the potential of this point charge is found to be [see Eq. (S3) of Ref. [47]]:
\[\phi({\bf r};{\bf r}_{0})=\frac{4\pi}{\Omega_{\rm sc}}\frac{Ze}{4\pi\varepsilon _{0}}\sum_{{\bf q}}\sum_{{\bf G}\neq-{\bf q}}\frac{e^{i({\bf q}+{\bf G})\cdot ({\bf r}-{\bf r}_{0})}}{({\bf q}+{\bf G})\!\cdot\!\mathbf{\varepsilon}^{0}\!\cdot ({\bf q}+{\bf G})}, \tag{5}\]
modulo an inessential constant that reflects the compensating background charge. In this expression, \(\varepsilon_{0}\) is the vacuum permittivity, \({\bf G}\) is a reciprocal lattice vector, and the wavevector \({\bf q}\) belongs to a uniform Brillouin-zone grid. Here and in the following, we consider that the BvK cell consists of \(N_{\rm uc}\) unit cells, so that its volume is \(\Omega_{\rm sc}=N_{\rm uc}\Omega_{\rm uc}\), and that the Brillouin zone is discretized in a uniform grid of \(N_{\rm uc}\) points. The potential \(\phi({\bf r};{\bf r}_{0})\) is periodic over the BvK supercell.
The perturbation potential resulting from this impurity is \(V=\mp e\phi\) for electrons and holes, respectively. For definiteness, we consider electrons in the following. The matrix element of the perturbation \(V\) between the Kohn-Sham states \(\psi_{n{\bf k}}\) and \(\psi_{m{\bf k}+{\bf q}}\) is given by:
\[g^{\rm imp}_{mn}({\bf k},{\bf q};{\bf r}_{0})=\langle\psi_{m{\bf k}+{\bf q}}|V ({\bf r};{\bf r}_{0})|\psi_{n{\bf k}}\rangle_{\rm sc}, \tag{6}\]
where the integral is over the supercell. The states can be written as \(\psi_{n{\bf k}}=N_{\rm uc}^{-1/2}e^{i{\bf k}\cdot{\bf r}}u_{n{\bf k}}\), where \(u_{n{\bf k}}\) is the Bloch-periodic part and is normalized in the unit cell. The combination of Eqs. (5) and (6) yields:
\[g^{\rm imp}_{mn}({\bf k},{\bf q};{\bf r}_{0})=\frac{-e^{2}}{4\pi\varepsilon_{0} }\frac{4\pi Z}{\Omega_{\rm sc}}\sum_{{\bf G}\neq-{\bf q}}\frac{e^{-i({\bf q}+{ \bf G})\cdot{\bf r}_{0}}B_{mn,{\bf G}}({\bf k},{\bf q})}{({\bf q}+{\bf G})\! \cdot\!\mathbf{\varepsilon}^{0}\!\cdot\!({\bf q}+{\bf G})}, \tag{7}\]
having defined the overlap integral:
\[B_{mn,{\bf G}}({\bf k},{\bf q})=\langle u_{m{\bf k}+{\bf q}}|e^{i{\bf G}\cdot{ \bf r}}|u_{n{\bf k}}\rangle_{\rm nc}, \tag{8}\]
which is evaluated over the unit cell.
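As an illustration of Eqs. (5)-(8), the sketch below evaluates the single-impurity matrix element of Eq. (7) in Hartree atomic units (where \(e^{2}/4\pi\varepsilon_{0}=1\)). The wavevectors, dielectric tensor, and overlap integrals are placeholder inputs, not quantities taken from an actual first-principles calculation:

```python
import numpy as np

def g_impurity(q, G, B, eps0, r0, Z, Omega_sc):
    """Single-impurity matrix element of Eq. (7), Hartree atomic units.
    q:        wavevector of the scattering process, shape (3,)
    G:        reciprocal lattice vectors with G = -q excluded, shape (nG, 3)
    B:        overlap integrals B_mn,G(k, q), shape (nG,)
    eps0:     static dielectric tensor, shape (3, 3)
    r0:       impurity position in the supercell, shape (3,)
    Z:        impurity charge in units of e
    Omega_sc: volume of the Born-von Karman supercell"""
    qG = q + G                                      # (q + G) for each G
    denom = np.einsum('ga,ab,gb->g', qG, eps0, qG)  # (q+G).eps0.(q+G)
    phase = np.exp(-1j * (qG @ r0))
    return -(4.0 * np.pi * Z / Omega_sc) * np.sum(phase * B / denom)
```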
#### ii.2.2 Scattering rate from multiple impurities within the first Born approximation
We now consider \(N_{\rm imp}^{\rm sc}\) impurities located at the positions \({\bf r}_{1},{\bf r}_{2},\cdots,{\bf r}_{N_{\rm imp}^{\rm sc}}\) in the BvK supercell. The corresponding perturbation potential is the sum of the single-impurity potentials obtained in the previous section, \(V=\sum_{I=1}^{N_{\rm imp}^{\rm sc}}V({\bf r};{\bf r}_{I})\), therefore the generalization of Eq. (7) to the case of multiple identical impurities reads:
\[g^{\rm imp}_{mn}({\bf k},{\bf q};\{{\bf r}_{I}\}) =\frac{-e^{2}}{4\pi\varepsilon_{0}}\frac{4\pi Z}{\Omega_{\rm sc}} \sum_{{\bf G}\neq-{\bf q}}\frac{B_{mn,{\bf G}}({\bf k},{\bf q})}{({\bf q}+{\bf G })\!\cdot\!\mathbf{\varepsilon}^{0}\!\cdot({\bf q}+{\bf G})}\] \[\times\sum_{I=1}^{N_{\rm imp}^{\rm sc}}e^{-i({\bf q}+{\bf G}) \cdot{\bf r}_{I}}. \tag{9}\]
The total scattering rate out of state \(n{\bf k}\) associated with this matrix element can be written using the first Born approximation for the scattering matrix [50] [Eqs. (6.1.16) and (6.1.32)]:
\[\frac{1}{\tau_{n{\bf k}}^{\rm imp}}=\sum_{m{\bf q}}\frac{2\pi}{\hbar}|g_{mn}^{ \rm imp}({\bf k},{\bf q};\{{\bf r}_{I}\})|^{2}\delta(\epsilon_{n{\bf k}}- \epsilon_{m{\bf k}+{\bf q}}). \tag{10}\]
We note that this expression is an intensive quantity, as expected, i.e. it does not scale with the size of the BvK supercell [see discussion after Eq. (17)]. The partial scattering rate needed in Eq. (3) is then defined as:
\[\frac{1}{\tau_{n{\bf k}\to m{\bf k}+{\bf q}}^{\rm imp}}=\frac{2\pi}{\hbar}|g_{ mn}^{\rm imp}({\bf k},{\bf q};\{{\bf r}_{I}\})|^{2}\delta(\epsilon_{n{\bf k}}- \epsilon_{m{\bf k}+{\bf q}}). \tag{11}\]
Unlike Eq. (4), in this expression we do not have the Fermi-Dirac occupations. These occupations drop out in the linearized Boltzmann transport equation, as can be verified, for example, by setting \(n_{{\bf q}\nu}=0\) and \(\omega_{{\bf q}\nu}=0\) in Eq. (4). In Eq. (11) the Dirac delta function ensures energy conservation, consistent with the fact that we are considering the scattering by a fixed potential, i.e. we are neglecting the recoil of the impurity upon collision.
By combining Eqs. (9) and (11) we find:
\[\frac{1}{\tau_{n{\bf k}\to m{\bf k}+{\bf q}}^{\rm imp}}(\{{\bf r}_{I}\})=\frac{2\pi}{\hbar}\left[\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{4\pi Z}{\Omega_{\rm sc}}\right]^{2}\delta(\epsilon_{n{\bf k}}-\epsilon_{m{\bf k}+{\bf q}})\times\left|\sum_{{\bf G}\neq-{\bf q}}\frac{B_{mn,{\bf G}}({\bf k},{\bf q})}{({\bf q}+{\bf G})\cdot\mathbf{\varepsilon}^{0}\cdot({\bf q}+{\bf G})}\sum_{I=1}^{N_{\rm imp}^{\rm sc}}e^{-i({\bf q}+{\bf G})\cdot{\bf r}_{I}}\right|^{2}. \tag{12}\]
### Free-carrier screening

The carrier-impurity matrix element diverges in the long-wavelength limit \({\bf q}+{\bf G}\to 0\). This difficulty was recognized early on by Conwell and Weisskopf [20], who introduced an infrared cutoff to suppress the Coulomb singularity.
The formal way to overcome this difficulty is to observe that ionized impurities are accompanied by free carriers, which introduce metallic-like screening of the impurity potentials. In the Thomas-Fermi model, free carriers introduce the additional screening
\[\varepsilon_{\rm TF}(q)=1+\frac{q_{\rm TF}^{2}}{q^{2}}, \tag{18}\]
where \(q_{\rm TF}\) is the Thomas-Fermi wavevector. When used in combination with the impurity potential appearing in Eq. (17), this additional screening lifts the Coulomb singularity. In fact, by temporarily ignoring the \(\mathbf{G}\) vectors and the anisotropy of the dielectric tensor, free-carrier screening modifies the denominator of Eq. (17) as follows:
\[\frac{1}{(\varepsilon^{0}q^{2})^{2}}\quad\longrightarrow\quad\frac{1}{[ \varepsilon_{\rm TF}(q)\varepsilon^{0}q^{2}]^{2}}=\frac{1}{[\varepsilon^{0}(q ^{2}+q_{\rm TF}^{2})]^{2}}, \tag{19}\]
which tends to the finite value \(1/(\varepsilon^{0}q_{\rm TF}^{2})^{2}\) at long wavelength.
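A one-line numerical check makes this regularization explicit. In the sketch below (with illustrative, roughly Si-like numbers rather than computed values), the bare denominator diverges as \(q\to 0\), while its screened counterpart of Eq. (19) saturates at \(1/(\varepsilon^{0}q_{\rm TF}^{2})^{2}\):

```python
import numpy as np

eps0, q_tf = 13.0, 0.1                 # illustrative, roughly Si-like values
q = np.logspace(-4, 0, 5)              # wavevectors approaching q -> 0

bare     = 1.0 / (eps0 * q**2)**2                # diverges as q -> 0
screened = 1.0 / (eps0 * (q**2 + q_tf**2))**2    # finite at q = 0

print(bare[0], screened[0], 1.0 / (eps0 * q_tf**2)**2)
```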
To incorporate free-carrier screening in our calculations, while taking into account all details of band structures and effective masses, we employ the Lindhard dielectric function instead of the Thomas-Fermi model, following Ref. [52]. The same approach was employed in Ref. [14]. The Lindhard dielectric function is given by:
\[\varepsilon_{\rm L}(q)=1-\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{4\pi}{q^{2}}\frac{2}{N_{\rm uc}\Omega_{\rm uc}}\sum_{n\mathbf{k}}\frac{f_{n\mathbf{k}+\mathbf{q}}^{0}-f_{n\mathbf{k}}^{0}}{\epsilon_{n\mathbf{k}+\mathbf{q}}-\epsilon_{n\mathbf{k}}}. \tag{20}\]
Since the density of free-carriers is typically low in doped semiconductors, we only need the long wavelength limit of this expression. In this limit, \((f_{n\mathbf{k}+\mathbf{q}}^{0}-f_{n\mathbf{k}}^{0})/(\epsilon_{n\mathbf{k}+ \mathbf{q}}-\epsilon_{n\mathbf{k}})=\partial f_{n\mathbf{k}}^{0}/\partial \epsilon_{n\mathbf{k}}\), therefore we can write:
\[\varepsilon_{\rm L}(q)=1+\frac{q_{\rm TF}^{2}}{q^{2}}, \tag{21}\]
having introduced the effective Thomas-Fermi vector:
\[q_{\rm TF}^{2}=\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{2\cdot 4\pi}{N_{\rm uc}\Omega_{\rm uc}}\sum_{n\mathbf{k}}\left|\frac{\partial f_{n\mathbf{k}}^{0}}{\partial\epsilon_{n\mathbf{k}}}\right|. \tag{22}\]
For parabolic bands, Eq. (21) reduces to the Thomas-Fermi or Debye model in the respective temperature limits. The free-carrier screening provides an additional screening mechanism to the dielectric screening of the insulating semiconductors, and is included in our calculations by replacing \(\mathbf{\varepsilon}^{0}\) in Eq. (17) by the total dielectric function:
\[\mathbf{\varepsilon}^{0}\quad\longrightarrow\quad\mathbf{\varepsilon}^{0}+\mathbf{1} \frac{q_{\rm TF}^{2}}{q^{2}}, \tag{23}\]
where \(\mathbf{1}\) denotes the \(3\times 3\) identity matrix. We note that this improved description of the screening includes temperature effects via the Fermi-Dirac occupations entering the definition of the effective Thomas-Fermi wavevector, Eq. (22).
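For illustration, the effective Thomas-Fermi wavevector of Eq. (22) can be evaluated directly from eigenvalues on a uniform grid. The sketch below assumes Hartree atomic units and Fermi-Dirac statistics, for which \(\partial f^{0}/\partial\epsilon=-f^{0}(1-f^{0})/k_{\rm B}T\); the eigenvalue array and chemical potential are placeholders:

```python
import numpy as np

def q_tf(energies, mu, T, volume):
    """Effective Thomas-Fermi wavevector of Eq. (22), Hartree atomic units.
    energies: Kohn-Sham eigenvalues eps_nk on a uniform k-grid (1D array,
              spin degeneracy carried by the prefactor 2*4*pi)
    mu:       chemical potential (Hartree)
    T:        temperature (K)
    volume:   N_uc * Omega_uc, total BvK supercell volume (bohr^3)"""
    kT = 8.617333e-5 / 27.211386 * T          # k_B T in Hartree
    x = np.clip((energies - mu) / kT, -50, 50)
    f = 1.0 / (np.exp(x) + 1.0)
    dfde = -f * (1.0 - f) / kT                # d f0 / d eps (Fermi-Dirac)
    return np.sqrt(8.0 * np.pi / volume * np.sum(np.abs(dfde)))
```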
### Matthiessen's Rule
Matthiessen's rule [46] is widely employed to interpret transport measurements. In the context of carrier transport in semiconductors, this rule can be stated as follows: the contributions of different scattering channels to the mobility can be obtained by adding the reciprocals of the individual mobilities. In the case of carrier-phonon and carrier-impurity scattering, we would have:
\[\frac{1}{\mu}=\frac{1}{\mu_{\rm ph}}+\frac{1}{\mu_{\rm imp}}. \tag{24}\]
In Sec. IV we proceed to quantify the reliability of this approximation by comparing mobility data calculated using the complete \(ai\)BTE including both phonons and impurities with the prediction of Eq. (24) obtained by calculating the mobility with these two scattering channels taken individually. We will show that this rule does not carry predictive power for the examples considered in this work.
From a formal standpoint, the rule expressed by Eq. (24) is obviously related to the choice of expressing the total scattering rates as the sum of the individual rates, see Eq. (3). That choice was motivated by the observation that, to first order in perturbation theory, different scattering channels do not mix. However, it is easy to see that, even when Eq. (3) is a good approximation, the additivity of the rates does not imply the Matthiessen rule as expressed by Eq. (24). To appreciate this point, we observe that the \(ai\)BTE in Eq. (2) can be recast as a linear system of the type:
\[A\times\{\partial_{E_{\beta}}f_{n\mathbf{k}}\}=b, \tag{25}\]
where the matrix \(A\) contains the partial scattering rates \(\tau^{-1}_{n\mathbf{k}\to m\mathbf{k}+\mathbf{q}}\), the vector \(b\) contains the drift term on the left hand side of Eq. (2), and \(\{\partial_{E_{\beta}}f_{n\mathbf{k}}\}\) denotes the vector of solutions. If we break down the matrix \(A\) into its contributions from carrier-phonon and carrier-impurity scattering, \(A_{\rm ph}\) and \(A_{\rm imp}\) respectively, we see immediately that
\[\{\partial_{E_{\beta}}f_{n\mathbf{k}}\}=(A_{\rm ph}+A_{\rm imp})^{-1}b\neq A_{ \rm ph}^{-1}b+A_{\rm imp}^{-1}b, \tag{26}\]
therefore the additivity of the scattering rates does not imply the Matthiessen rule. This point can be made even more explicit by considering the self-energy relaxation time approximation to the \(ai\)BTE. The approximation consists of neglecting the first term on the r.h.s. of Eq. (2), and yields the following expression for the mobility:
\[\mu_{\alpha\beta}=-\frac{e}{\Omega_{\rm uc}n_{\rm c}}\frac{2}{N_{\rm uc}}\sum_{n\mathbf{k}}\frac{\partial f_{n\mathbf{k}}^{0}}{\partial\epsilon_{n\mathbf{k}}}\,v_{n\mathbf{k}}^{\alpha}v_{n\mathbf{k}}^{\beta}\,\frac{1}{\frac{1}{\tau_{n\mathbf{k}}^{\rm ph}}+\frac{1}{\tau_{n\mathbf{k}}^{\rm imp}}}. \tag{27}\]
For this expression to be amenable to Matthiessen's rule, the scattering rates would need to be independent of the electronic state, say \(\tau^{\rm ph}_{n{\bf k}}=\tau^{\rm ph}\) and \(\tau^{\rm imp}_{n{\bf k}}=\tau^{\rm imp}\). This is typically not the case in most semiconductors. Another special case where Matthiessen's formula is meaningful occurs when one scattering mechanism dominates over the others. For example, in Eq. (27), when \(\tau^{\rm ph}_{n{\bf k}}\ll\tau^{\rm imp}_{n{\bf k}}\), the expression reduces to the phonon-limited mobility. In this sense, Matthiessen's rule constitutes a simple interpolation formula between the limiting cases of phonon-limited and impurity-limited mobilities. We will analyze these aspects quantitatively in Sec. IV.
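The inequality in Eq. (26) is easy to verify numerically. The toy sketch below builds two random diagonally dominant matrices that stand in for \(A_{\rm ph}\) and \(A_{\rm imp}\) (they are not actual scattering operators) and compares a mobility-like scalar obtained from the full solution with the Matthiessen combination of the single-channel solutions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

def rand_A(scale):
    """Random diagonally dominant matrix standing in for the scattering
    operator of one channel (purely illustrative)."""
    M = scale * rng.random((n, n))
    return np.diag(M.sum(axis=1) + n * scale) - M

A_ph, A_imp = rand_A(1.0), rand_A(0.3)
b, v = rng.random(n), rng.random(n)     # drift term and velocities

mu_full = v @ np.linalg.solve(A_ph + A_imp, b)   # both channels together
mu_ph   = v @ np.linalg.solve(A_ph, b)           # phonon-limited analogue
mu_imp  = v @ np.linalg.solve(A_imp, b)          # impurity-limited analogue
mu_matthiessen = 1.0 / (1.0 / mu_ph + 1.0 / mu_imp)

print(mu_full, mu_matthiessen)   # the two values differ in general
```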
## III Computational methods
All calculations are performed using the Quantum ESPRESSO materials simulation suite [53], the EPW code [38], and the Wannier90 code [54]. We employ the PBE exchange and correlation functional [55] and optimized norm-conserving Vanderbilt (ONCV) pseudopotentials from the PseudoDojo repository [56; 57]. For consistency with previous work, we use the experimental lattice constant of Si, SiC, and GaP at room temperature, and the plane-wave kinetic energy cutoff and quadrupole tensors reported in Ref. [58]. We include spin-orbit coupling for the valence bands only, to capture the splitting of the valence band top. Key calculation parameters are summarized in Tab. 1.
We calculate effective mass tensors by finite differences, using a wavevector increment of \(0.01\times 2\pi/a\), where \(a\) is the lattice constant reported in Tab. 1. The dynamical matrix, the variations of the self-consistent potential, and the vibrational eigenfrequencies and eigenmodes are calculated using a square convergence threshold of \(10^{-16}\) Ry\({}^{2}\). This threshold refers to the change of the potential variation between two successive iterations, averaged over the unit cell. Electron energies, phonon frequencies, and electron-phonon matrix elements are initially computed on a coarse wavevector mesh using the EPW code. The electron Hamiltonian, the dynamical matrix, and the electron-phonon matrix elements are then interpolated onto fine Brillouin zone grids using Wannier-Fourier interpolation [33; 34]. Long-range dipole and quadrupole corrections are employed for improved interpolation of the electron-phonon matrix elements [47; 48; 49; 58; 59].
To compute carrier mobilities, only states within a narrow energy window of the band extrema are necessary. We find that, for the range of temperatures considered in this work (up to 500 K), a window of 400 meV is sufficient to obtain converged electron mobilities, and a window of 300 meV is sufficient for hole mobilities. At 300 K, converged results can be obtained by using a 200 meV window for both electrons and holes.
To evaluate the overlap matrices \(B_{mn,{\bf G}}({\bf k},{\bf q})\) required in Eq. (17) in the fine Brillouin zone grid, we follow the procedure of Ref. [47] and approximate them as:
\[B_{mn,{\bf G}}({\bf k},{\bf q})\approx\left[U({\bf k}+{\bf q})U^{\dagger}({ \bf k})\right]_{mn}, \tag{28}\]
where the unitary matrix \(U_{mn}({\bf k})\) is the diagonalizer of the interpolated Hamiltonian at the wavevector \({\bf k}\) of the fine grid. This approximation is motivated by the fact that the carrier-impurity matrix element in Eq. (17) is strongly peaked at \({\bf q}+{\bf G}=0\).
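A minimal sketch of Eq. (28), assuming that the columns of \(U({\bf k})\) are the eigenvectors of the interpolated Hamiltonian (gauge conventions differ between codes, so this is only one possible choice):

```python
import numpy as np

def overlap_B(H_k, H_kq):
    """Approximate overlaps B_mn(k,q) of Eq. (28) from two interpolated
    Hamiltonians (Hermitian matrices in the Wannier gauge) at k and k+q.
    Eigenvector columns are used as the unitary diagonalizers."""
    _, U_k  = np.linalg.eigh(H_k)    # columns: eigenvectors at k
    _, U_kq = np.linalg.eigh(H_kq)   # columns: eigenvectors at k+q
    return U_kq.conj().T @ U_k       # unitary matrix of band overlaps
```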
The Dirac delta functions appearing in Eqs. (4) and (17) are computed using Gaussian functions with a small broadening parameter. The results are sensitive to the choice of this parameter, therefore we accelerate the convergence by employing adaptive smearing. The procedure for the adaptive smearing of the carrier-phonon scattering rate, which involves a so-called type-III integral, is discussed in Refs. [6; 58; 60]. The calculation of the carrier-impurity scattering rates involves instead a type-II integral of the form:
\[I^{\rm II}_{n{\bf k}}=\sum_{m}\int\frac{d{\bf q}}{\Omega_{\rm BZ}}f_{mn}({\bf k },{\bf q})\,\delta(\epsilon_{m{\bf k}+{\bf q}}-\epsilon_{n{\bf k}}), \tag{29}\]
where \(\Omega_{\rm BZ}\) is the volume of the Brillouin zone. In this case, adaptive broadening can be achieved by using a state-dependent width \(\sigma_{m{\bf k}+{\bf q}}\). We follow the procedure of Ref. [60], which gives:
\[\sigma_{m{\bf k}+{\bf q}}=\frac{\alpha}{3}\sum_{i=1}^{3}{\bf v}_{m{\bf k}+{\bf q }}\cdot\frac{{\bf b}_{i}}{N_{i}}, \tag{30}\]
where \({\bf v}_{m{\bf k}+{\bf q}}\) is the band velocity, \({\bf b}_{i}\) is a primitive vector of the reciprocal lattice, and \(N_{i}\) denotes the number of \({\bf k}\)-points along the direction of \({\bf b}_{i}\). The coefficient \(\alpha\) is a tunable parameter. Previous work has used \(\alpha=0.29\) for electron-phonon scattering rates [6; 58]. We have performed a detailed convergence test by comparing fixed-smearing and variable-smearing calculations, and found that values \(\alpha=0.1\)-\(0.3\) provide similar results. For simplicity, in this work we use \(\alpha=0.29\) as in previous work.
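For reference, a direct transcription of Eq. (30) in Python (with an absolute value added so that the width is always positive, which is an implementation choice rather than part of the equation):

```python
import numpy as np

def adaptive_sigma(v, b_vectors, N, alpha=0.29):
    """State-dependent Gaussian width of Eq. (30).
    v:         band velocity v_{m k+q}, shape (3,)
    b_vectors: primitive reciprocal lattice vectors as rows, shape (3, 3)
    N:         number of k-points along each b_i, length-3 sequence
    alpha:     tunable prefactor (0.29 used in this work)"""
    return (alpha / 3.0) * sum(abs(v @ b_vectors[i]) / N[i]
                               for i in range(3))
```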
In principle we could perform calculations of carrier mobilities by setting the impurity concentration and the carrier concentration separately. This would be required, for example, for the investigation of compensation doping of semiconductors. To keep our results as general
\begin{table}
\begin{tabular}{l r r r}
 & Si & 3C-SiC & GaP \\
\hline
Lattice constant (Å) & 5.43 & 4.36 & 5.45 \\
Plane wave kinetic energy cutoff (eV) & 544 & 1088 & 1088 \\
\(Q_{\kappa_{1}}\) & 11.83 & 7.41 & 13.72 \\
\(Q_{\kappa_{2}}\) & \(-11.83\) & \(-2.63\) & \(-6.92\) \\
Coarse \({\bf k}\) and \({\bf q}\) grids & \(12^{3}\) & \(12^{3}\) & \(12^{3}\) \\
Fine \({\bf k}\) and \({\bf q}\) electron grid & \(100^{3}\) & \(180^{3}\) & \(100^{3}\) \\
Fine \({\bf k}\) and \({\bf q}\) hole grid & \(100^{3}\) & \(100^{3}\) & \(100^{3}\) \\
\hline
\end{tabular}
\end{table}
Table 1: Calculation parameters used in this work: Experimental lattice constant, plane wave kinetic energy cutoff, and non-vanishing elements of the quadrupole tensor are chosen to be consistent with Ref. [58].
as possible, in this work we choose to focus on the simpler scenario where each impurity creates one free carrier, therefore we set the carrier density to be equal to the impurity concentration. We do not consider carrier freeze-out at low temperature, since this would require the knowledge of defect energy levels. In our calculations, the role of the carrier concentration is mainly to modulate the effective Thomas-Fermi screening wavevector in Eq. (22).
## IV Results and discussion
### Electronic structure
Given the importance of effective masses in mobility calculations, in this section we review briefly the band structures and effective masses of Si, SiC, and GaP. Table 2 shows our calculated directional effective masses. Hole masses are given for the heavy-hole (hh) band, light hole (lh) band, and the spin-orbit split-off (so) band. The longitudinal (\(\parallel\)) and transverse (\(\perp\)) electron masses correspond to the principal axes of the ellipsoidal conduction band extrema.
In Tab. 2 we see that the light hole and split-off hole masses are fairly isotropic for all compounds considered in this work. For the heavy hole masses, the \(\Gamma\)-X direction ([100] crystallographic direction) exhibits the lightest masses, whereas considerably heavier masses are found along the \(\Gamma\)-K ([110]) and \(\Gamma\)-L ([111]) directions. Similarly, in all compounds considered here the longitudinal electron masses are considerably heavier than the corresponding transverse masses, as expected. SiC exhibits the heaviest hole masses among SiC, GaP, and Si; while GaP exhibits the heaviest electron masses.
Our calculated effective masses are in good agreement with previous calculations at the DFT level [10] as well as previous calculations at the GW level [10]. When comparing to experimental data, we see from Tab. 2 that our electron effective masses are within 10% of the corresponding experimental values, which is remarkable considering that we are using DFT/PBE.
In the case of the hole masses, our calculations are also in good agreement with experiments. Here we emphasize that the experimental values usually quoted are not the effective masses, but the cyclotron masses, which depend on the direction of the magnetic field and are reported in Tab. 2. These cyclotron masses correspond to averages of the directional masses and cannot be compared directly to DFT calculations. To extract the correct directional effective masses, in the case of silicon we used the Dresselhaus \(\mathbf{k}\cdot\mathbf{p}\) model which was fitted to experimental cyclotron data. In this model the heavy hole and light hole masses are parameterized as:
\[\epsilon_{\text{hh}}(\mathbf{k})= Ak^{2}+[B^{2}k^{4}+C^{2}(k_{x}^{2}k_{y}^{2}+k_{y}^{2}k_{z}^{2}+k_{z}^{2}k_{x} ^{2})]^{1/2}, \tag{31}\] \[\epsilon_{\text{lh}}(\mathbf{k})= Ak^{2}-[B^{2}k^{4}+C^{2}(k_{x}^{2}k_{y}^{2}+k_{y}^{2}k_{z}^{2}+k_{z}^{2}k_{x} ^{2})]^{1/2}, \tag{32}\]
where \(k=|\mathbf{k}|\) and the coefficients \(A\), \(B\), and \(C\) are \(-4.1\,\hbar^{2}/2m_{\text{e}}\), \(-1.6\,\hbar^{2}/2m_{\text{e}}\), and \(3.3\,\hbar^{2}/2m_{\text{e}}\), respectively [61]. From this parameterization we obtained the effective masses reported in Tab. 2 under the keyword "Dresselhaus". From this table we can see that, in the
\begin{table}
\begin{tabular}{l l c c c}
\hline \hline
This work & & Si & SiC & GaP \\
\hline
\(m_{\text{hh}}^{*}\) & \(\Gamma\)-X & 0.260 & 0.592 & 0.374 \\
 & \(\Gamma\)-K & 0.550 & 1.412 & 0.837 \\
 & \(\Gamma\)-L & 0.655 & 1.646 & 1.091 \\
\(m_{\text{lh}}^{*}\) & \(\Gamma\)-X & 0.189 & 0.423 & 0.143 \\
 & \(\Gamma\)-K & 0.143 & 0.328 & 0.125 \\
 & \(\Gamma\)-L & 0.134 & 0.309 & 0.117 \\
\(m_{\text{so}}^{*}\) & \(\Gamma\)-X & 0.225 & 0.490 & 0.213 \\
 & \(\Gamma\)-K & 0.223 & 0.472 & 0.217 \\
 & \(\Gamma\)-L & 0.214 & 0.436 & 0.206 \\
\(m_{\text{e},\parallel}^{*}\) & & 0.959 & 0.672 & 1.069 \\
\(m_{\text{e},\perp}^{*}\) & & 0.196 & 0.230 & 0.232 \\
\(E_{\text{g}}\) & & 0.554 & 1.359 & 1.566 \\
\(\varepsilon^{\infty}\) & & 13.00 & 6.93 & 10.53 \\
\(\varepsilon^{0}\) & & 13.00 & 10.23 & 12.57 \\
\hline
Experiment & & Si & SiC & GaP \\
\hline
\(m_{\text{hh}}^{*}\) & \(\mathbf{B}\) along [001] & 0.46\({}^{a}\) & & \\
 & \(\mathbf{B}\) along [110] & 0.53\({}^{a}\) & & \\
 & \(\mathbf{B}\) along [111] & 0.56\({}^{a}\) & & 0.54\({}^{c}\) \\
 & Dresselhaus \(\Gamma\)-X & 0.40 & & \\
 & Dresselhaus \(\Gamma\)-K & 0.56 & & \\
 & Dresselhaus \(\Gamma\)-L & 0.62 & & \\
\(m_{\text{lh}}^{*}\) & \(\mathbf{B}\) along [001] & 0.171\({}^{a}\) & 0.45\({}^{b}\) & \\
 & \(\mathbf{B}\) along [110] & 0.163\({}^{a}\) & & \\
 & \(\mathbf{B}\) along [111] & 0.160\({}^{a}\) & & 0.16\({}^{c}\) \\
 & Dresselhaus \(\Gamma\)-X & 0.18 & & \\
 & Dresselhaus \(\Gamma\)-K & 0.16 & & \\
 & Dresselhaus \(\Gamma\)-L & 0.15 & & \\
\(m_{\text{e},\parallel}^{*}\) & & 0.97\({}^{a}\) & 0.68\({}^{d}\) & 1.15\({}^{c}\), 2.0\({}^{k}\) \\
\(m_{\text{e},\perp}^{*}\) & & 0.19\({}^{a}\) & 0.25\({}^{d}\) & 0.21\({}^{c}\), 0.25\({}^{k}\) \\
\(E_{\text{g}}\) & & 1.13\({}^{f}\) & 2.42\({}^{g}\) & 2.26\({}^{h}\) \\
\(\varepsilon^{\infty}\) & & 11.7\({}^{i}\) & 6.52\({}^{i}\) & 9.11\({}^{j}\) \\
\(\varepsilon^{0}\) & & 11.7\({}^{i}\) & 9.72\({}^{j}\) & 11.1\({}^{j}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Calculated band effective masses, band gaps, high-frequency and static dielectric constants of Si, 3C-SiC, and GaP. All calculations performed within DFT/PBE. Experimental data are from (a) [61] and [62], (b) [63], (c) [64], (d) [65], (e) [66], (f) [67], (g) [68], (h) [69], (i) [70], (j) [71], (k) [72], (l) [73]. All masses are given in units of the electron mass. The band gaps are in eV. The lines tagged “Dresselhaus” refer to the effective masses obtained from the Dresselhaus model fitted to experimental cyclotron data, from Ref. [61].
case of silicon, the light hole and heavy hole masses are close to our calculated results, with the exception of the \(\Gamma\)-X heavy-hole effective mass, which is 65% of the experimental value [10].
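The directional masses quoted under “Dresselhaus” follow directly from Eqs. (31)-(32). A short sketch reproducing them, with the \(A\), \(B\), \(C\) values of Ref. [61] hard-coded; energies are in units of \(\hbar^{2}k^{2}/2m_{\rm e}\), so the masses come out in units of \(m_{\rm e}\):

```python
import numpy as np

# Dresselhaus parameters for Si in units of hbar^2/2m_e (Ref. [61])
A, B, C = -4.1, -1.6, 3.3

def hole_masses(direction):
    """Heavy- and light-hole masses along a unit vector n, from
    eps(k) = A k^2 +/- sqrt(B^2 k^4 + C^2 (kx^2 ky^2 + ...)):
    m* = -1/(A +/- root) in units of m_e (A < 0 for hole bands)."""
    n = np.asarray(direction, dtype=float)
    n /= np.linalg.norm(n)
    g = (n[0]*n[1])**2 + (n[1]*n[2])**2 + (n[2]*n[0])**2
    root = np.sqrt(B**2 + C**2 * g)
    return -1.0 / (A + root), -1.0 / (A - root)   # (heavy, light)

for label, d in [("G-X", (1, 0, 0)), ("G-K", (1, 1, 0)), ("G-L", (1, 1, 1))]:
    mhh, mlh = hole_masses(d)
    print(f"{label}: m_hh = {mhh:.2f}, m_lh = {mlh:.2f}")
# Prints 0.40/0.18, 0.56/0.16, and 0.62/0.15, matching Tab. 2
```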
Our calculated dielectric constants overestimate the experimental values by 15% at most, as expected from the underestimation of the band gaps [70; 71]. In Sec. IV.5 we discuss how one can improve the calculated mobilities by introducing _a posteriori_ corrections to the theoretical effective masses and dielectric constants.
### Carrier mobilities
#### iv.2.1 Silicon
Figure 1 shows a comparison between our calculated mobilities of silicon and available experimental data, as a function of temperature and impurity concentration. The mobilities without carrier-impurity scattering [black lines in panels (a) and (b)] decrease rapidly with temperature, as expected. We find temperature slopes (the \(\beta\) in \(\mu\sim T^{\beta}\)) of \(-2.1\) for electrons and \(-2.4\) for holes, in agreement with previous work [10; 58]. As we include carrier-impurity scattering, the room-temperature electron mobility of silicon reduces from 1381 cm\({}^{2}\)/Vs to 1153 cm\({}^{2}\)/Vs at 1.75\(\times\)10\({}^{16}\) cm\({}^{-3}\) [blue line in panel (a)] and to 812 cm\({}^{2}\)/Vs at 1.3\(\times\)10\({}^{17}\) cm\({}^{-3}\) [red line in panel (a)]. Similarly, the room-temperature hole mobility of silicon decreases from 600 cm\({}^{2}\)/Vs in the absence of impurities to 517 cm\({}^{2}\)/Vs for an impurity concentration of 2.4\(\times\)10\({}^{16}\) cm\({}^{-3}\) [blue line in panel (b)], and to 359 cm\({}^{2}\)/Vs at the impurity concentration of 2.0\(\times\)10\({}^{17}\) cm\({}^{-3}\) [red line in panel (b)].
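One way to extract such an exponent is a linear fit on a log-log scale; a minimal sketch on synthetic data (the arrays below are placeholders, not our computed mobilities):

```python
import numpy as np

T  = np.array([200.0, 250.0, 300.0, 350.0, 400.0])   # K
mu = 1381.0 * (T / 300.0)**(-2.1)                    # cm^2/Vs, synthetic

beta = np.polyfit(np.log(T), np.log(mu), 1)[0]       # slope of log-log fit
print(f"beta = {beta:.2f}")                          # -2.10 by construction
```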
Our calculations for the temperature-dependent electron and hole mobilities show that a single power law becomes inadequate in the presence of impurity scattering. This is also seen in the experimental data from Refs. [74; 75; 76; 77], which are shown as open circles in Fig. 1. We note that our calculations are in good agreement with the experiments over a broad temperature range. The agreement worsens slightly at low temperature, where carrier-impurity scattering dominates. This effect likely relates to the fact that in our calculations all donors and acceptors are assumed to be fully ionized at all temperatures; as a result of this approximation, we are neglecting carrier freeze-out and hence we are likely overestimating the impurity concentration at low temperature. In Appendix A we show that, by taking into account the effects of partial impurity ionization, the agreement with experiments improves at low temperature and high impurity concentration.
Panel (c) of Fig. 1 shows the room temperature electron mobility of silicon, as a function of impurity concentration. The electron mobility is relatively insensitive to the impurity concentration up to 10\({}^{16}\) cm\({}^{-3}\). A steep decrease in the electron mobility is seen as we approach a doping density of 10\({}^{17}\) cm\({}^{-3}\). Up to this concentration, our calculations (blue line) are in excellent agreement with experimental data (open black circles). Above 10\({}^{18}\) cm\({}^{-3}\), while the agreement with experiment is still good, we tend to slightly overestimate the measured electron mobility. This is likely due to two effects: (i) our formalism does not take into account multiple scattering events that become important at high impurity concentration, and (ii) our calculations do not include scattering by free-carrier plasmons, which dominate the mobility at high carrier density, as shown in Refs. [12; 23]. A similar overestimation was observed in Ref. [14].
Panel (d) of Fig. 1 shows the room temperature hole mobility of silicon as a function of impurity concentration. As for the electrons, we find generally good agreement between calculations (blue line) and experiments (open black circles) throughout the doping range. We emphasize that the vertical scales in panels (c) and (d) are different, and that the theory/experiment deviation at high impurity concentration is similar in both panels in absolute terms. At low impurity concentration, our calculations slightly overestimate the experimental data. This effect can be ascribed to the fact that our light hole effective masses are smaller than in experiments.
#### iv.2.2 Silicon carbide
In Fig. 2 we show our calculated mobilities of 3C-SiC as a function of temperature and impurity concentration, and we compare to experimental data from Refs. [78; 79; 80; 24; 81; 82; 83; 84]. In the case of 3C-SiC, the comparison with experiments is complicated by the high concentration of line defects that nucleate at lattice-mismatched growth substrates such as Si or 6H-SiC [85; 83], which makes it difficult to obtain data for defect-free samples. Furthermore, most experimental data are for co-doped samples, for which the impurity and carrier concentrations are more difficult to estimate.
In the absence of impurity scattering [black line in panel (a)], the low electron effective mass of SiC leads to very high theoretical mobilities, up to 33000 cm\({}^{2}\)/Vs at 100 K and up to 2000 cm\({}^{2}\)/Vs at room temperature. These high mobilities are in agreement with previous theoretical results [58]. In this case, we calculate an electron temperature exponent \(\beta=-2.9\).
In panel (a) of Fig. 2 we compare our calculations (blue line) with the data reported in Ref. [78] (red open circles). In that work, 3C-SiC was synthesized with an \(n\)-type impurity density of 5.0\(\times\)10\({}^{16}\) cm\({}^{-3}\), and electron mobilities of 2040 cm\({}^{2}\)/Vs and 584 cm\({}^{2}\)/Vs were obtained at 100 K and 300 K, respectively. In our calculations, when we consider the same impurity concentration, we find 2773 cm\({}^{2}\)/Vs and 1369 cm\({}^{2}\)/Vs at 100 K and 300 K, respectively; therefore we overestimate the experimental data by 30%-230%.
In panel (b) of Fig. 2 we show our calculated hole mobility of 3C-SiC as a function of temperature. In the absence of impurities (black line), the mobility decreases
with a temperature exponent \(\beta=-2.1\). In this case we could not find experimental data for uncompensated samples to compare with. Upon including impurity scattering with an impurity concentration of 10\({}^{18}\) cm\({}^{-3}\), we find a significant reduction of the mobility at low temperature (blue line), from 1373 cm\({}^{2}\)/Vs to 148 cm\({}^{2}\)/Vs. At 300 K, the mobility is reduced from 165 cm\({}^{2}\)/Vs without impurities to 81 cm\({}^{2}\)/Vs, in good agreement with the measured value of 50 cm\({}^{2}\)/Vs reported in Ref. [84].
Panels (c) and (d) of Fig. 2 show the room temperature electron and hole mobilities as a function of impurity concentration, respectively. The electron mobility calculated (blue line) at low ionized donor concentration (10\({}^{14}\) cm\({}^{-3}\)) is 2048 cm\({}^{2}\)/Vs, and significantly overestimates the value of 1000 cm\({}^{2}\)/Vs measured in Ref. [79] (open black symbols). However, our calculations get closer to experimental data in the range of concentrations above 10\({}^{18}\) cm\({}^{-3}\) [80; 86; 24].
The hole mobility of 3C-SiC is significantly lower than the electron mobility, as expected from the much heavier hole masses shown in Tab. 2. Our calculations (blue line) at low doping yield a mobility of 164 cm\({}^{2}\)/Vs, to be compared to 220 cm\({}^{2}\)/Vs measured in \(p\)-channel 3C-SiC devices [82] (open black symbols). We note that the vertical scales in panels (c) and (d) differ, and that our calculated hole mobilities are in better agreement with experiment in relative terms. In particular, our data for the hole mobility fall right in the middle of the experimental trend shown in panel (d).
#### iv.2.3 Gallium phosphide
Figure 3 shows our mobility calculations for GaP and a comparison with experimental data. In panel (a) we have the calculated electron mobilities as a function of temperature. In the absence of impurities, the calculated electron mobility (black line) decreases with a temperature exponent \(\beta=-2.2\); the calculated mobilities at 100 K and 300 K are 4293 cm\({}^{2}\)/Vs and 328 cm\({}^{2}\)/Vs, respectively. Upon including the effect of impurity scattering (blue line), the mobility decreases significantly, reaching 157 cm\({}^{2}\)/Vs at room temperature for an impurity concentration of 2.5\(\times\)10\({}^{18}\) cm\({}^{-3}\). This value is in good agreement with the measured mobility of 100 cm\({}^{2}\)/Vs by Ref. [87] (blue open circles). We note that the electron mobility of GaP is significantly lower than in silicon, despite the electron effective masses being comparable. In Sec. IV.3 we show that this effect arises from the additional polar phonon scattering that electrons experience in GaP, which is absent in silicon.
Panel (b) of Fig. 3 shows the calculated phonon-limited hole mobility (black line), the mobility calculated by including impurity scattering (blue line), and experimental data (open red circles). The phonon-limited hole mobility decreases with temperature with an exponent \(\beta=-2.5\). The calculated mobilities in the absence of impurities are 5096 cm\({}^{2}\)/Vs and 252 cm\({}^{2}\)/Vs at 100 K and 300 K, respectively. Upon including impurity scattering with a concentration of 2\(\times\)10\({}^{18}\) cm\({}^{-3}\), the mobility at room temperature decreases to 124 cm\({}^{2}\)/Vs, in good agreement with the measured value of 90 cm\({}^{2}\)/Vs by Ref. [87].
Panel (c) of Fig. 3 shows the room temperature electron mobility of GaP as a function of impurity concentration. In the absence of impurity scattering, we calculate a mobility of 328 cm\({}^{2}\)/Vs (blue line), which compares well with the maximum value of 258 cm\({}^{2}\)/Vs measured in ultrapure samples in Ref. [88] (open black symbols). In the intermediate doping regime, our calculated electron mobilities overestimate the experimental data by a factor of two [87; 88; 89; 90], but the agreement improves at high doping levels.
Figure 3(d) shows the room temperature hole mobility of GaP as a function of impurity concentration. The calculated hole mobility is 269 cm\({}^{2}\)/Vs at low impurity concentration, and decreases to 94 cm\({}^{2}\)/Vs at a concentration of 10\({}^{19}\) cm\({}^{-3}\) (blue line). Our calculations are within a factor of two from the highest measured hole mobilities across the same doping range [87; 91; 92] (open black symbols). We note that electron and hole mobilities in GaP are very similar across a wide range of temperatures and impurity concentrations (both in experiments and in our calculations), therefore GaP is an ambipolar semiconductor with well-balanced electron and hole transport.
### Carrier scattering rates
In this section we analyze and compare the scattering rates resulting from carrier-phonon and carrier-impurity processes in Si, SiC, and GaP. The Brooks-Herring model for carrier-impurity scattering [21], which is based on the parabolic band approximation, predicts a scattering rate that scales as \(\epsilon^{-3/2}\), where \(\epsilon\) is the electron eigenvalue referred to the band extremum. This trend is a result of two competing effects: as the energy of the initial state increases above the band bottom, the scattering phase space increases as \(\epsilon^{1/2}\), while at the same time the square modulus of the carrier-impurity matrix element given in Eq. (17) decreases as \(1/q^{4}\), which is of the order of \(\epsilon^{-2}\). This simple trend is opposite to what is expected from non-polar optical scattering and acoustic phonon scattering, which tend to increase with energy.
Figure 4 shows the scattering rates \(\tau_{n\mathbf{k}}^{-1}\) of holes and electrons in Si [panels (a) and (b)], SiC [panels (c) and (d)], and GaP [panels (e) and (f)]. For consistency, we set the impurity concentration to 10\({}^{17}\) cm\({}^{-3}\) in all cases, which is in the middle of the range considered in Figs. 1-3, and the temperature to 300 K. In line with the above discussion, the carrier-impurity scattering rates decrease as we move away from the band extrema, while the carrier-phonon scattering rates increase. In the two polar semiconductors that we are considering, SiC and GaP, we also see a sudden jump in the carrier-phonon scattering rates. This effect happens when the carrier energy reaches the threshold for the emission of a longitudinal optical phonon, thereby activating polar phonon scattering [47].
Panels (a) and (b) of Fig. 4 show that, in the case of silicon, the carrier-impurity scattering rates near the band edges are an order of magnitude higher than the carrier-phonon rates (for an impurity concentration of \(10^{17}\) cm\({}^{-3}\)). The additional scattering by impurities causes a reduction of the mobility by \(\sim 30\%\) for both electrons and holes, indicating that impurity scattering is a significant effect at this impurity concentration. The rise of the electron-impurity scattering rates at energies around 150 meV that can be seen in panel (b) corresponds to interband scattering between the two lowest conduction bands.
Panels (c) and (d) of Fig. 4 show the scattering rates in SiC. Unlike in silicon, here the electron and hole scattering rates differ considerably. In the case of holes, the carrier-phonon and carrier-impurity scattering rates are comparable in magnitude near the band edge, while in the case of electrons the carrier-impurity scattering dominates. This difference is reflected in the calculated mobilities, where carrier-impurity scattering reduces the phonon-limited mobility of holes by \(\sim 20\%\) and of electrons by \(\sim 50\%\) (for the impurity concentration \(10^{17}\) cm\({}^{-3}\)).
Data for GaP are shown in panels (e) and (f) of Fig. 4. In this case the carrier-phonon scattering rates are comparable to the carrier-impurity scattering rates. Accordingly, the mobilities are reduced by \(\sim\)10% from their values without impurity scattering.
### Deviations from Matthiessen's Rule
In Sec. IV.4 we discussed how Matthiessen's rule is formally justified only when the scattering rates are state-independent constants, or when one scattering mechanism dominates over all other mechanisms. To place that reasoning on a quantitative footing, in Fig. 5 we explicitly assess the predictive accuracy of the Matthiessen rule.
For this test, we compute the mobilities of Si, SiC, and GaP by considering the following four scenarios: (i) phonon-limited mobility \(\mu_{\rm ph}\) (i.e., without including carrier-impurity scattering); (ii) impurity-limited mobility \(\mu_{\rm imp}\) (i.e., without including carrier-phonon scattering); (iii) the mobility according to Matthiessen's rule, as obtained by combining (i) and (ii) using \(1/\mu_{\rm M}=1/\mu_{\rm ph}+1/\mu_{\rm imp}\); (iv) the mobility \(\mu\) calculated by including both carrier-phonon scattering and carrier-impurity scattering using the \(ai\)BTE.
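The origin of the discrepancy quantified below can be illustrated with a toy calculation: when the scattering rates are energy-dependent, combining the state-resolved rates and then averaging (as the \(ai\)BTE effectively does) is not equivalent to combining the averaged mobilities. The Python sketch below uses schematic rates and a schematic thermal weight; all numbers are illustrative and are not taken from our calculations.

```python
import numpy as np

# Energy grid and a thermal transport weight peaked near the band edge
# (energies in eV, kT at 300 K).
eps = np.linspace(1e-3, 0.3, 2000)
kT = 0.025
w = np.sqrt(eps) * np.exp(-eps / kT)          # schematic transport weight
w /= np.trapz(w, eps)

rate_ph = 1.0 + 20.0 * eps                    # phonon rate, rises with energy
rate_imp = 0.05 * eps**-1.5                   # impurity rate ~ eps^(-3/2)

def mob(tau):
    # mobility ~ thermally weighted average lifetime (arb. units)
    return np.trapz(w * tau, eps)

mu_ph = mob(1.0 / rate_ph)
mu_imp = mob(1.0 / rate_imp)
mu_full = mob(1.0 / (rate_ph + rate_imp))     # rates combined state-by-state
mu_matt = 1.0 / (1.0 / mu_ph + 1.0 / mu_imp)  # Matthiessen's rule on averages

print(f"Matthiessen overestimates the mobility by {100*(mu_matt/mu_full - 1):.0f}%")
```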
In panels (a), (c), and (e) we see this comparison for Si, SiC, and GaP, respectively, as a function of temperature. As expected, in all cases the phonon-limited mobilities (black lines) decrease with temperature while the impurity-limited mobilities (red lines) increase. Their combination results in the characteristic smooth peak, which is best seen in the cases of Si and SiC. In these panels, the dashed blue lines are from Matthiessen's rule, and the solid blue lines are the complete \(ai\)BTE solutions. We see that the Matthiessen rule tends to overestimate the \(ai\)BTE mobility, and the deviation is particularly pronounced when the phonon and impurity contributions to the mobility reduction are comparable. To quantify the deviation between \(ai\)BTE calculations and the Matthiessen results, in panels (b), (d), and (f) of Fig. 5 we show the ratio between the two values, as a function of temperature. In all cases we see that the use of Matthiessen's rule leads to an overestimation of the mobilities by up to 50%, which is significant in the context of predictive calculations of transport properties. More importantly, for the compounds considered in this work (Si, SiC, and GaP), the use of Matthiessen's rule would worsen the agreement between calculated mobilities and experimental data.
Based on these findings, we caution against the use of Matthiessen's rule in future _ab initio_ calculations of carrier mobilities.
### Improving the predictive power of the \(ai\)BTE
In this section we investigate simple approaches to improve the predictive accuracy of the \(ai\)BTE by overcoming two standard limitations of DFT.
The first limitation is that the DFT band gap problem typically leads to an overestimation of the dielectric screening. As a result, both carrier-phonon and carrier-impurity matrix elements tend to be underestimated in DFT [93; 35], and mobilities tend to be overestimated. In Ref. [58] it was shown that, for a set of ten semiconductors, this effect leads to mobilities which can overestimate experimental data by as much as a factor of two. To mitigate this effect, we investigate a simple scaling correction to the matrix elements as follows:
\[g_{mn\nu}^{\rm corr}({\bf k},{\bf q})=\frac{\varepsilon_{\rm DFT}}{\varepsilon_ {\rm exp}}g_{mn\nu}^{\rm DFT}({\bf k},{\bf q}), \tag{33}\]
where \(\varepsilon_{\rm DFT}\) is our calculated value, and \(\varepsilon_{\rm exp}\) is the experimental value. We use the high-frequency dielectric constant for the carrier-phonon matrix elements, as was done in Ref. [10], and the static dielectric constants for the carrier-impurity matrix elements [see Eq. (9)]. This approach is meaningful for the systems considered in this work, because the majority of scattering processes occur near the band extrema, and therefore involve small scattering wavevectors \({\bf q}\), thus justifying the re-scaling of screening at long wavelength only.
The second limitation of DFT calculations lies in the inaccurate curvature of the bands, which is also linked to the band gap problem, leading to slightly inaccurate carrier effective masses. This limitation could be overcome by performing GW calculations, but in this work we investigate a simpler mass scaling.
According to Drude's formula, carrier mobilities are inversely proportional to the effective masses. Based on this observation, we consider the following scaling correction, which is directly applied to the calculated mobility:
\[\mu_{\rm corr}=\frac{m_{\rm DFT}^{*}}{m_{\rm exp}^{*}}\mu_{\rm DFT}, \tag{34}\]
where all masses are isotropic averages.
The three compounds considered in this work all have ellipsoidal conduction band extrema, therefore we can evaluate the average isotropic mass as follows:
\[m^{*}=3(1/m_{\parallel}^{*}+2/m_{\perp}^{*})^{-1}. \tag{35}\]
Evaluating the average hole mass is more complicated owing to the band degeneracy at \(\Gamma\) and the fact that experimental data usually are reported for a given magnetic field direction as opposed to a crystallographic direction (see Sec. IV.1). In the case of silicon, we evaluate the average mass using the values extracted from Dresselhaus' model (see Sec. IV.1). After this averaging procedure, the hole mass is calculated following Ref. [58]:
\[m^{*}=\frac{m_{\rm hh}^{*,5/2}+m_{\rm lh}^{*,5/2}}{m_{\rm hh}^{*,3/2}+m_{\rm lh }^{*,3/2}}, \tag{36}\]
where all quantities on the r.h.s. are spherical averages in k-space. In the case of SiC and GaP we are not aware of a parametrization similar to Dresselhaus', therefore we do not investigate mass corrections in these cases.
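As an illustration, the sketch below evaluates Eqs. (34)-(36) using textbook Si masses (0.92 and 0.19 \(m_{\rm e}\) for the longitudinal and transverse electron masses, 0.49 and 0.16 \(m_{\rm e}\) for heavy and light holes) and a hypothetical DFT mobility of 500 cm\({}^{2}\)/Vs; these inputs are placeholders rather than the values computed in this work.

```python
import numpy as np

def conduction_mass(m_par, m_perp):
    # Isotropic average for ellipsoidal conduction-band valleys, Eq. (35)
    return 3.0 / (1.0 / m_par + 2.0 / m_perp)

def hole_mass(m_hh, m_lh):
    # Spherically averaged heavy/light hole combination, Eq. (36)
    return (m_hh**2.5 + m_lh**2.5) / (m_hh**1.5 + m_lh**1.5)

# Textbook Si masses in units of m_e (illustrative, not the tabulated values)
m_c = conduction_mass(0.92, 0.19)   # ~0.26 m_e, the Si conductivity mass
m_v = hole_mass(0.49, 0.16)         # ~0.44 m_e, close to the 0.43 m_e quoted above

# Mass scaling of Eq. (34) applied to a hypothetical DFT mobility
mu_corr = (0.43 / 0.48) * 500.0
print(f"m_c* = {m_c:.2f} m_e, m_v* = {m_v:.2f} m_e, mu_corr = {mu_corr:.0f} cm^2/Vs")
```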
The carrier mobilities obtained by applying the above corrections are shown in Fig. 6. In all cases we use the experimental dielectric constants reported in Tab. 2.
Panels (a) and (b) show our results for silicon. The screening correction to the electron mobilities of Si reduces the calculated value at low impurity concentration from 1381 cm\({}^{2}\)/Vs to 1133 cm\({}^{2}\)/Vs. This reduction causes an underestimation of the experimental value by approximately 20%. At higher impurity concentration, the corrected mobility agrees again well with experimental results. The corrections to the electron effective mass of Si are minor and do not affect the mobility. In the case of holes, the screening and mass corrections improve considerably the agreement between theory and experiment (our calculated average hole mass is 0.43 \(m_{\rm e}\) while the experimental value is 0.48 \(m_{\rm e}\)). In fact, we obtain a hole mobility of 463 cm\({}^{2}\)/Vs at low impurity concentration, which is within the measured range of 450 to 500 cm\({}^{2}\)/Vs [76; 77]. The improvement is also noticeable at higher impurity concentration.
Results for SiC are shown in panels (c) and (d) of Fig. 6. In this case, we find that screening and mass corrections do not significantly improve the agreement with experiments at low impurity concentration. In particular, the screening correction reduces the electron mobility from 2047 cm\({}^{2}\)/Vs to 1815 cm\({}^{2}\)/Vs, and the mass correction further reduces this value to 1688 cm\({}^{2}\)/Vs. Despite these corrections, the calculated electron mobility remains too high by about a factor of two. It is possible that additional scattering mechanisms, such as scattering by dislocations, could help explain this difference. In the case of the hole mobility, the screening correction reduces the calculated value at low impurity concentration from 164 cm\({}^{2}\)/Vs to 148 cm\({}^{2}\)/Vs, which is not significant when compared to the large spread of experimental values [81; 82; 83; 84].
The screening correction appears to be successful in the case of GaP, as seen in panels (e) and (f) of Fig. 6. The electron mobility at low impurity concentration reduces from 326 cm\({}^{2}\)/Vs to 243 cm\({}^{2}\)/Vs upon applying the screening correction. This value is in better agreement with the experimental data. Improved agreement with experiments is also found at higher impurity concentration. The correction to the electron effective mass of GaP is small, and as a result the change in mobility is not significant. The screening correction for holes brings the calculated data closer to the experiments. In particular, at low impurity concentration the hole mobility is reduced from 269 cm\({}^{2}\)/Vs to 226 cm\({}^{2}\)/Vs.
The key takeaway from this analysis is that the screening correction to the scattering matrix elements improves the agreement between theory and experiment for the compounds considered in this work. Based on the above observations, we suggest that screening and mass corrections could be used for the purpose of uncertainty quantification in future _ab initio_ calculations of transport properties.
## V Conclusions
In this work we have demonstrated non-empirical calculations of carrier mobilities in semiconductors using the _ab initio_ Boltzmann transport equation, including carrier scattering by phonons and by ionized impurities. To this end, we developed an _ab initio_ formalism to incorporate ionized-impurity scattering within the transport workflow based on Wannier-Fourier interpolation and implemented in the EPW code.
We described ionized impurities as randomly distributed Coulomb scatterers, and we obtained the carrier relaxation time by using the Kohn-Luttinger ensemble averaging procedure. We also incorporated the screening of the impurity potential by free carriers, within a parameter-free effective Thomas-Fermi model.
We validated our approach by performing an extensive set of calculations of the electron and hole mobilities of three common semiconductors, namely Si, 3C-SiC, and GaP. In all cases we find a reasonably good agreement with experimental data, except possibly for the electron mobility in SiC which is probably reduced by additional scattering at line defects in real samples. Our calculations follow closely the experimental data both as a function of temperature (at fixed impurity concentration) and as a function of impurity concentration (at fixed temperature).
Impurity scattering is found to dominate over phonon scattering at high impurity concentration and at low temperature. In the former case, the thermal distribution function of the carriers is peaked near the band edges, therefore small-\(\mathbf{q}\) elastic scattering by impurities dominates. In the latter case, the phonon population becomes negligible at low temperature, therefore impurities remain the only active scattering channel. These trends are fully consistent with the general understanding of carrier transport in semiconductors [94]. We also found that the energy-dependent carrier scattering rates are strongly dependent on the detailed mechanisms at play in each compound, and vary significantly over the energy range of relevance for transport phenomena. This finding underlines the importance of detailed _ab initio_ calculations to achieve predictive accuracy in the description of transport phenomena of real materials.
In the presence of multiple scattering channels, it is common to analyze mobility data using the classic Matthiessen rule. However, by directly comparing \(ai\)BTE calculations including both phonon and impurity scattering with estimates based on Matthiessen's rule, we found that the latter lead to inaccurate results, with deviations of up to 50% with respect to \(ai\)BTE calculations. This finding indicates that Matthiessen's rule should not be employed in predictive calculations of transport properties.
Lastly, we investigated simple corrections to DFT calculations of carrier mobilities, by scaling the calculated dielectric screening and the effective masses via their corresponding experimental values. We found that the screening correction generally improves agreement with experiments.
Overall, our present approach offers a powerful tool for calculating transport properties in a variety of semiconducting materials of immediate interest, as well as for screening new putative semiconductors in the context of materials discovery.
Several improvements upon this work are possible. For one, we do not account for neutral impurity scattering. This additional channel could be added by generalizing our monopole model to account for dipoles and quadrupoles, following similar work performed in the context of electron-phonon interactions [47; 48; 49; 59]. Generalizations to the case of two-dimensional materials should also be possible, for example by following the related generalization of the Frohlich matrix element to two-dimensional systems [95; 96]. At high impurity concentration one should also account for carrier-plasmon scattering, for example as discussed in Ref. [12]. And of course, any improvement in the DFT band structures and electron-phonon matrix elements would be highly beneficial to further enhance the predictive power of these calculations [93]. We hope that this study will stimulate further work along these and other promising directions.
###### Acknowledgements.
This research is primarily supported by the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0020129. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper: [https://www.tacc.utexas.edu](https://www.tacc.utexas.edu).
## Appendix A Incomplete ionization of dopant
In all calculations presented in this work, we have considered that the carrier density coincides with the impurity concentration. The implicit assumption underlying this choice is that all impurities are ionized at all temperatures. This is obviously a simplification, since the fraction of ionized impurities depends on the defect energy, the quasi Fermi level of the system, and the temperature. These aspects have already been discussed in the case of silicon in Ref. [14].
In this Appendix we analyze the effect of incomplete ionization for the case of silicon. To estimate the fraction \(f\) of ionized impurities at a given temperature, we use the Fermi-Dirac distribution evaluated at the defect level of the impurity atom, \(\epsilon_{\mathrm{d}}\)[52]:
\[f=\frac{1}{N_{\mathrm{imp}}^{\mathrm{uc}}}\sqrt{N_{\mathrm{imp}}^{\mathrm{uc}}\sum_{n,\mathbf{k}}\frac{1}{e^{(\epsilon_{n\mathbf{k}}-\epsilon_{\mathrm{d}})/k_{\mathrm{B}}T}+1}}. \tag{12}\]
Here, \(N_{\mathrm{imp}}^{\mathrm{uc}}\) is the number of impurities per unit cell. This fraction vanishes when the temperature goes to zero, and approaches unity at high temperature.
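For orientation, the freeze-out behavior described here can also be reproduced with textbook donor statistics, used below as a stand-in for the ensemble expression in Eq. (12); the effective density of states of Si and the degeneracy factor \(g=2\) are standard assumed values, not outputs of our calculations.

```python
import numpy as np

# Textbook donor-ionization statistics: solve the charge-neutrality condition
# n^2 / (N_D - n) = (N_C/g) exp(-eps_d/kT) for a donor level 45 meV below
# the conduction band minimum (illustrative values for Si).
kB = 8.617e-5                                  # Boltzmann constant (eV/K)
eps_d = 0.045                                  # donor level depth (eV)
g = 2                                          # assumed donor degeneracy factor

def ionized_fraction(T, N_D):
    N_C = 2.8e19 * (T / 300.0) ** 1.5          # effective DOS of Si (cm^-3)
    A = (N_C / g) * np.exp(-eps_d / (kB * T))
    n = 0.5 * A * (np.sqrt(1.0 + 4.0 * N_D / A) - 1.0)  # positive root
    return n / N_D

for T in (50, 100, 200, 300):                  # freeze-out at low T, f -> 1 at high T
    print(f"T = {T:3d} K   f = {ionized_fraction(T, 1.3e17):.3f}")
```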
In Fig. 7 we show the influence of incomplete impurity ionization on the electron mobility of Si. In these calculations, we used \(\epsilon_{\mathrm{d}}=\)45 meV as measured from the conduction band bottom [97]. By comparing these curves with Fig. 1(a), we see that the effect of incomplete ionization improves the agreement with experiments at low temperature and high doping (red curves in both figures). This is precisely the range where carrier-impurity scattering tends to dominate over phonon scattering, therefore it is important to have a precise determination of the impurity concentration in this range. A more systematic assessment of these effects will require a broader database of experimental mobilities to compare with.
Figure 1: Comparison between our calculated carrier mobilities in Si with experimental data. (a) Electron mobility of Si as a function of temperature. The black line and symbols are for low impurity concentration (no impurities in the calculations; \(<10^{12}\) cm\({}^{-3}\) impurities in the experiment); the blue line and symbols are for an impurity concentration of 1.75\(\times 10^{16}\) cm\({}^{-3}\); the red line and symbols are for a concentration of 1.3\(\times 10^{17}\) cm\({}^{-3}\). Filled disks are calculated values, open circles are experimental data from Ref. [75] (black) and [74] (blue and red). (b) Hole mobility of Si as a function of temperature. The black line and symbols are for low impurity concentration (no impurities in the calculations; \(10^{12}\) cm\({}^{-3}\) impurities in the experiment); the blue line and symbols are for an impurity concentration of 2.4\(\times 10^{16}\) cm\({}^{-3}\); the red line and symbols are for a concentration of 2.0\(\times 10^{17}\) cm\({}^{-3}\). Filled disks are calculated values, open circles are experimental data from Ref. [98] (black) and [74] (blue and red). (c) Room temperature electron mobility of Si as a function of impurity concentration. Blue line and filled disks are calculated data, open black circles are experimental data from Ref. [76].
Figure 2: Comparison between our calculated carrier mobilities in 3C-SiC with experimental data. (a) Electron mobility of 3C-SiC as a function of temperature. The black line and symbols are phonon-limited mobilities (no impurities in the calculations); the blue line and symbols are for an impurity concentration of 5\(\times\)10\({}^{16}\) cm\({}^{-3}\). Filled disks are calculated values, open circles are experimental data from Ref. [78]. (b) Hole mobility of 3C-SiC as a function of temperature. The black line and symbols are phonon-limited mobilities (no impurities); the blue line and symbols are for an impurity concentration of 10\({}^{18}\) cm\({}^{-3}\). All data are calculated values. (c) Room temperature electron mobility of 3C-SiC as a function of impurity concentration. Blue line and filled disks are calculated data, open symbols are experimental data from Ref. [24], Ref. [79], Ref. [86], and Ref. [80]. (d) Room temperature hole mobility of 3C-SiC as a function of impurity concentration. Blue line and filled disks are calculated data, open symbols are experimental data from Ref. [81], Ref. [82], Ref. [83], and Ref. [84].
Figure 3: Comparison between our calculated carrier mobilities in GaP with experimental data. (a) Electron mobility of GaP as a function of temperature. The black line and symbols are phonon-limited mobilities (no impurities in the calculations); the blue line and symbols are for an impurity concentration of 2.5\(\times 10^{18}\) cm\({}^{-3}\). Filled disks are calculated values, open circles are experimental data from Ref. [87]. (b) Hole mobility of GaP as a function of temperature. The black line and symbols are phonon-limited mobilities (no impurities); the blue line and symbols are for an impurity concentration of \(2\times 10^{18}\) cm\({}^{-3}\). Filled disks are calculated values, open circles are experimental data from Ref. [87]. (c) Room temperature electron mobility of GaP as a function of impurity concentration. Blue line and filled disks are calculated data, open symbols are experimental data from Ref. [87], Ref. [88], Ref. [90], and Ref. [89]. (d) Room temperature hole mobility of GaP as a function of impurity concentration. Blue line and filled disks are calculated data, open symbols are experimental data from Ref. [87], Ref. [91], and Ref. [92].
Figure 4: Calculated carrier scattering rates at 300 K, for an impurity concentration of \(10^{17}\) cm\({}^{-3}\). (a) Hole scattering rates in Si: carrier-phonon scattering rates (black disks) and carrier-impurity scattering rates (blue disks), as a function of energy referred to the valence band maximum (VBM). The dashed line and the shaded area represent the thermal distribution of carriers. (b) Electron scattering rates in Si: carrier-phonon scattering rates (black disks) and carrier-impurity scattering rates (blue disks), as a function of energy referred to the conduction band minimum (CBM). (c) and (d): same as (a) and (b), but for 3C-SiC. (e) and (f): same as (a) and (b), but for GaP.
Figure 5: Comparison between mobility calculations performed using the _ai_BTE by including both carrier-phonon and carrier-impurity scattering, and mobilities obtained by using Matthiessen’s rule. (a) Temperature-dependent electron mobility of Si. The black line and symbols indicate the phonon-limited mobility; the red line is the impurity-limited mobility, for an impurity concentration of \(1.3\times 10^{17}\) cm\({}^{-3}\); the dashed blue line is the mobility obtained from Matthiessen’s rule; the solid blue line is the _ai_BTE calculation including both phonons and impurities. (b) Ratio between the electron mobility of Si calculated using Matthiessen’s rule and the result of the _ai_BTE calculation with phonons and impurities, as a function of temperature. (c) and (d): Same as in (a) and (b), but for 3C-SiC with an impurity concentration of \(5\times 10^{16}\) cm\({}^{-3}\). (e) and (f): Same as in (a) and (b), but for GaP with an impurity concentration of \(2.5\times 10^{18}\) cm\({}^{-3}\).
Figure 6: Comparison of correction schemes for improving the predictive accuracy of _ai_BTE calculations of mobilities. (a) Room-temperature electron mobility of Si, as a function of impurity concentration. Blue lines and disks indicate the uncorrected _ai_BTE results; green lines and disks indicate calculations with matrix elements corrected for screening; purple lines and disks are calculations corrected for the effective masses; yellow lines and disks include corrections for both the screening and the effective masses. Open black circles are experimental data. (b) Room temperature hole mobility of Si as a function of impurity concentration: uncorrected (blue); with screening correction (green); with effective mass correction (purple); and with both screening and mass correction (yellow). (c) and (d): Same as in (a) and (b) but for 3C-SiC. (e) and (f): Same as in (a) and (b) but for GaP. The experimental data are the same as those reported in Figs. 1, 2, and 3.
Figure 7: Electron mobility in Si as a function of temperature, including the effect of incomplete ionization of the dopants. The black line and disks are the calculated phonon-limited mobilities. These data are compared to measurements for pristine silicon (impurity concentration \(<10^{12}\)cm\({}^{-3}\)), from Ref. [75]. The blue disks and line are calculations for an impurity concentration of \(1.75\times 10^{16}\) cm\({}^{-3}\), taking into account incomplete dopant ionization as described in Appendix A. Experimental data are from Ref. [74]. The red disks and line are for an impurity concentration of \(1.3\times 10^{17}\) cm\({}^{-3}\), taking into account incomplete dopant ionization. Experimental data are from Ref. [74]. |
2310.13209 | Foundational Techniques for Wireless Communications: Channel Coding,
Modulation, and Equalization | This paper analyses foundational techniques for improving wireless
communication systems, including coding methods, modulation schemes, and
channel equalization. Using industry-standard simulation tools, the paper
evaluates the performance of these techniques under different channel
conditions. Convolutional codes, punctured and unpunctured, are assessed for
reliable data transfer. The suitability of various modulation schemes, such as
Phase Shift Keying (PSK) and Quadrature Amplitude Modulation (QAM), are
examined. Linear and decision-feedback equalization techniques are evaluated
for mitigating the effects of channel impairments. The paper provides practical
insights into the implementation of these techniques, emphasizing their
importance in modern wireless communication systems. | Solomon McKiernan | 2023-10-20T00:53:52Z | http://arxiv.org/abs/2310.13209v1 | # Foundational Techniques for Wireless Communications: Channel Coding, Modulation, and Equalization
###### Abstract
This paper analyses foundational techniques for improving wireless communication systems, including coding methods, modulation schemes, and channel equalization. Using industry-standard simulation tools, the paper evaluates the performance of these techniques under different channel conditions. Convolutional codes, punctured and unpunctured, are assessed for reliable data transfer. The suitability of various modulation schemes, such as Phase Shift Keying (PSK) and Quadrature Amplitude Modulation (QAM), are examined. Linear and decision-feedback equalization techniques are evaluated for mitigating the effects of channel impairments. The paper provides practical insights into the implementation of these techniques, emphasizing their importance in modern wireless communication systems.
wireless, LAN, convolutional, punctured, modulation, QAM, PSK, equalization, DFE
## I Introduction
Reliable data transfer is a crucial requirement for modern wireless communication systems. In this paper, an analysis of foundational techniques used in wireless communication systems to achieve reliable data transfer is presented. The focus of this analysis is on the use of coding methods and industry-standard simulation to evaluate the performance of modulation schemes and channel equalization techniques. These techniques have been assessed against standard key performance indicators for wireless communication systems, namely the signal-to-noise ratio (SNR) and bit error rate (BER).
This paper is grounded in the theoretical framework of information theory and signal processing, offering practical implications into utilizing these techniques in modern wireless communication systems. By analysing and evaluating these techniques, the significance of deliberate selection and assessment of coding, modulation, and channel equalization techniques in designing wireless communication systems is highlighted.
## II Convolutional Coding Overview
Convolutional encoding is widely used in wireless communication systems to ensure reliable data transfer. It employs a shift register to perform bitwise operations on the input data stream, generating an encoded output to be transmitted based on the contents of the registers and feedback connections. The received signal is then typically decoded using a Viterbi decoder to recover the original data.
By operating on a sliding window of input data bits, convolutional encoding outputs a set of encoded bits that are wider than the original input due to the added error correction. This wider bit stream adds redundancy to the data and improves the robustness of the transmitted signal, in turn this reduces the bit error rate (BER).
This paper presents a detailed analysis of convolutional encoding, including its mathematical foundations, practical implementation, and application in modern wireless communication systems. It demonstrates how convolutional encoding can improve the reliability of data transmission by reducing the effect of noise and interference in the communication channel. To achieve this, one of the primary methods used is simulation in MATLAB, an industry-standard software package, in conjunction with their Communications Toolbox.
This paper also explores the use of the punctured coding system, which allows for the encoding and decoding of higher rate codes using standard rate 1/2 encoders and decoders. Punctured convolutional encoding involves selectively removing certain bits from the resulting code after performing convolutional encoding on the original code. Further explanation and demonstration of this process is shown in the following sections.
## III Channel Coding Models
Puncturing is used in convolutional coding to selectively remove some of the parity bits from the code to adjust the code rate to better suit the channel. The use of the transpose matrix is important for punctured convolutional coding because it allows for the systematic encoding of the code and helps reduce the complexity of the decoder. The transpose matrix is used to puncture the code by removing specific rows, and to recover the original data by reversing the puncturing process during decoding.
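As a concrete illustration of this process, the short Python sketch below applies the puncture pattern used later in this paper, [1 1 1 0 0 1], to a rate-1/2 coded stream and reinserts neutral placeholders on the receive side; the bit values are arbitrary examples, not output from the Simulink model.

```python
# Puncturing with the vector [1 1 1 0 0 1]': out of every 6 rate-1/2 coded
# bits (3 information bits), the two 0-marked positions are dropped, so 3
# information bits travel in 4 transmitted bits -> overall rate 3/4.
pattern = [1, 1, 1, 0, 0, 1]

def puncture(coded):
    return [b for i, b in enumerate(coded) if pattern[i % len(pattern)]]

def depuncture(received, n_coded):
    # Reinsert placeholders where bits were punctured, so a standard
    # rate-1/2 Viterbi decoder can process the stream (erasures in practice)
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else 0 for i in range(n_coded)]

coded = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1]   # 12 coded bits = 6 info bits
sent = puncture(coded)
print(f"rate = {len(coded)//2}/{len(sent)}")    # -> rate = 6/8, i.e. 3/4
restored = depuncture(sent, len(coded))         # ready for Viterbi decoding
```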
The Punctured Convolutional Coding model in Fig. 1 demonstrates a punctured coding system that uses rate 1/2 convolutional encoding and Viterbi decoding. The model simulates the transmission of a convolutionally encoded binary phase shift keying (BPSK) signal through an additive white Gaussian noise (AWGN) channel, and then recovers the original uncoded signal using Viterbi decoding.
Fig. 1: Punctured Convolutional Coding Model in Simulink
The model computes the error rate by comparing the original signal and the decoded signal. This demonstrates how the puncturing technique can enable encoding and decoding of higher rate codes using standard lower rate coders and provides a practical example of how this technique can be implemented in modern wireless communication systems using the Simulink software package.
A custom script was written to automate a sweep of E\({}_{b}\)/N\({}_{0}\) (dB) values, generating the dataset plotted on Fig. 2. When the E\({}_{b}\)/N\({}_{0}\) is at a value of 6 dB or greater, practically no bit errors are observed. Hence, this shows that with a suitably high SNR, and therefore transmit power, one can effectively eliminate AWGN-induced errors. This trend of increasing E\({}_{b}\)/N\({}_{0}\) with decreasing BER is demonstrated throughout this paper. While this general trend is well established, the relative power required of different techniques is of practical significance. Under the majority of conditions shown, forward error correction (FEC) reduces the power required for a given BER.
To confirm the validity of these results, they were compared with an established performance bound as defined in [1]. The BER performance of a rate \(r=(n-1)/n\) punctured code is bounded by (1).
\[P_{b}\leq\frac{1}{2(n-1)}\sum_{d=d_{free}}^{\infty}\omega_{d}\,erfc\!\left(\sqrt{rd(E_{b}/N_{0})}\right) \tag{1}\]
The expression involves several parameters and is used to determine the theoretical maximum performance of a punctured code with the aforementioned rate. It consists of a sum of terms weighted by the coefficients \(\omega_{d}\), where \(d\) is the Hamming distance of the corresponding codewords and \(d_{free}\) is the free distance of the code. Additionally, the expression includes the complementary error function (\(erfc\)) of the square root of a parameter that takes into account the code rate (\(r\)), the received signal energy per bit (\(E_{b}\)) and the noise spectral density (\(N_{0}\)). For more information, see [1]. Using a larger E\({}_{b}\)/N\({}_{0}\) range to gain a better overview, the BER vs E\({}_{b}\)/N\({}_{0}\) plot shown in Fig. 2 was generated, with the simulated values remaining within the bound, supporting their validity. Fig. 2 displays the relationship between the BER and E\({}_{b}\)/N\({}_{0}\) of a punctured code. As E\({}_{b}\)/N\({}_{0}\) increases, BER decreases, demonstrating an inverse relationship between the two variables. The steepness of the roll-off is important in that it reduces the E\({}_{b}\)/N\({}_{0}\) at which an arbitrarily low BER can be achieved.
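For illustration, the bound in (1) can be evaluated numerically as in the Python sketch below; the weight spectrum \(\omega_{d}\) used here is a placeholder, since the true values depend on the mother code and puncturing pattern and are tabulated in [1].

```python
import numpy as np
from scipy.special import erfc

def ber_bound(EbN0_dB, r, d_free, omega, n):
    # Union bound of Eq. (1) for a rate r = (n-1)/n punctured code,
    # truncated to the weights supplied in `omega`.
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    terms = [w * erfc(np.sqrt(r * d * EbN0))
             for d, w in enumerate(omega, start=d_free)]
    return sum(terms) / (2.0 * (n - 1))

omega = [1, 4, 12, 32, 80]       # hypothetical distance spectrum, d = 5..9
for snr in (2, 4, 6, 8):
    print(f"Eb/N0 = {snr} dB   P_b <= {ber_bound(snr, 3/4, 5, omega, 4):.2e}")
```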
There are two main types of channel coding: convolutional coding, as aforementioned, and block coding. Block coding involves dividing the data into blocks of fixed size and adding parity check bits to each block. These parity check bits are calculated from the data in the block and are used to detect and correct errors. The main advantage of block coding is its simplicity and ease of implementation. Reed-Solomon (RS) codes are a commonly used type of block code. RS codes typically require larger word sizes compared to convolutional codes so are more suitable for applications with higher data rates.
While both types of codes can provide error-correction capabilities, they have different properties that make them more suitable for certain applications. Block codes are typically used in applications that require high levels of error correction, such as satellite and deep space communication. This is due to block codes working by dividing the message into fixed-length blocks and adding redundancy to each block, with their most important feature being that the size of the blocks can be readily increased to obtain better performance. Convolutional codes, on the other hand, are better suited for applications where minimal implementation costs are a priority, such as WLANs with lower error-correction capabilities. This is due to convolutional codes working by encoding the message as a continuous stream of bits, using a shift register and a feedback function. Simulations were run over the same E\({}_{b}\)/N\({}_{0}\) range, keeping the code rate of 5/7, QPSK modulation scheme, and data input seed constant; see Fig. 3.
Fig. 3 shows a simulation of RS outperforming convolutional codes at low to medium E\({}_{b}\)/N\({}_{0}\), while convolutional codes outperform RS codes at higher E\({}_{b}\)/N\({}_{0}\). This trend occurs because RS codes can correct a fixed number of errors per block, while convolutional codes can correct errors continuously. As the SNR decreases, the probability of having a larger number of errors in the block increases, and the RS code becomes less effective. At high SNR, the error rate is low, and convolutional codes, with their continuous error correction capability, can take advantage of this to increasingly achieve better performance. It is important to note that generally only BERs under \(10^{-3}\) are considered acceptable for reliable transmission, in which case convolutional coding is shown to outperform RS.
Fig. 3: Convolutional vs Reed-Solomon Coding BER plot
Fig. 2: BER vs E\({}_{b}\)/N\({}_{0}\): Punctured Convolutional Coding
The plot also shows both coded systems performing worse than uncoded systems at very low SNR. This is most likely due to the coding overhead reducing the signal power, making it more difficult to detect the signal at the receiver. As the SNR increases, the benefits of coding progressively outweigh the overhead, and the coded system begins to outperform the uncoded system as intended.
While an RS block size was selected to provide a fair baseline comparison, with higher block sizes, RS code would be expected to increasingly outperform convolutional code.
## IV Signal to Noise Ratio of Radio LAN
To analyse a communication link with increased complexity, and therefore more akin to real-world modern applications, a Simulink model was created based on the High-Performance Radio Local Area Network (HIPERLAN/2) which is described in the European Telecommunications Standards Institute's (ETSI) specification for high-rate wireless Local Area Networks (WLAN). Despite HIPERLAN/2 itself being outdated, it shares many key fundamental blocks with modern wireless communication systems, while still allowing for clear analysis of modulation, coding, and channel equalization techniques. These techniques, such as orthogonal frequency division multiplexing (OFDM), are still utilised in contemporary radio technologies such as IEEE 802.11ax [2], marketed as WI-FI 6E.
The model employs OFDM in the 5 GHz band, offering raw data rates up to 54 Mbps. Using this model, the transmitter-side channel coding and modulation for the 16-QAM, 3/4 code rate mode, along with an ideal receiver chain and AWGN channel, were demonstrated. The model was based on the HIPERLAN/2 simulation setup shown in [3], updating various blocks and replacing fixed values with workspace variables allowing automated sweeps of the model to generate comparison plots. The simulation model discussed is shown in Fig. 4. The link with an SNR set to 25 dB produced the spectrum plot shown in Fig. 5 along with the Constellation Diagram in Fig. 6.
Through simulations of this model, one can gain a better understanding of the performance of wireless LANs under different coding and modulation schemes, which is useful for future wireless communication system designs.
The Spectrum Scope showed a signal with a real (i.e., above 0 Hz) bandwidth of 8 MHz. In this range the signal power spectral density magnitude fluctuates around -90 dBW/Hz, approximately 25 dB above the noise floor as specified. The frequency spectrum plot for the OFDM signal is not flat due to the channel-induced noise.
The Constellation Diagram, Fig. 6, shows that the channel noise has affected the signal quality, causing errors in the received signal with less sharply defined points, i.e., larger clusters.
The Error Vector Magnitude (EVM) is a measure of how much a received constellation point deviates from the ideal reference value for a given modulation index and can be determined from the Constellation Diagram using (2). The maximum permissible EVM is set by the IEEE and for this system, 16-QAM modulation with 3/4 code rate, is -19 dB [4].
\[EVM=\sqrt{\frac{\sum_{i=1}^{L_{p}}\sum_{j=1}^{N_{c}}\left|R_{i,j}-S_{i,j}\right|^{2}}{\sum_{i=1}^{L_{p}}\sum_{j=1}^{N_{c}}\left|S_{i,j}\right|^{2}}} \tag{2}\]
This calculation considers the number of frames (\(L_{p}\)), number of carriers (\(N_{c}\)), received signal (\(R_{i,j}\)) and ideal symbol location (\(S_{i,j}\)). For evaluating a single received point, the number of frames and carriers are set to one, resulting in (3) which has been expressed in logarithmic units to allow for direct evaluation against the upper permissible limit.
\[EVM_{pt,(dB)}=20\log_{10}\!\left(\frac{\left|R_{1,1}-S_{1,1}\right|}{\left|S_{1,1}\right|}\right) \tag{3}\]
Using the point furthest from its respective reference shown in the top-left decision region of Fig. 6, a value of \(-29\) dB can be calculated, far below the \(-19\) dB limit. This demonstrates that the system in Fig. 4 has high signal quality due to the use of channel coding and modulation.
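For reference, the Python sketch below applies (2) and (3) to a synthetic 16-QAM burst (ideal constellation points plus Gaussian noise); it illustrates the computation only and does not reproduce the simulated HIPERLAN/2 data of Fig. 6.

```python
import numpy as np

# Synthetic 16-QAM burst: ideal constellation plus Gaussian channel noise.
rng = np.random.default_rng(1)
levels = np.array([-3, -1, 1, 3])
ideal = np.array([x + 1j * y for x in levels for y in levels])
S = rng.choice(ideal, size=1000)                       # transmitted symbols
R = S + rng.normal(0, 0.1, 1000) + 1j * rng.normal(0, 0.1, 1000)

# RMS EVM over the whole burst, Eq. (2)
evm_rms = np.sqrt(np.sum(np.abs(R - S) ** 2) / np.sum(np.abs(S) ** 2))
print(f"EVM_rms = {20 * np.log10(evm_rms):.1f} dB")

# Per-point EVM of Eq. (3), evaluated for the worst received symbol
evm_pt = np.max(np.abs(R - S) / np.abs(S))
print(f"worst-point EVM = {20 * np.log10(evm_pt):.1f} dB")
```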
Running a sweep of AWGN SNR values from -5 to 25 dB with a constant channel input signal power of 0.01 W generated the values shown in Table 1.
Fig. 4: HIPERLAN/2 based Simulink model
Fig. 5: HIPERLAN/2 Spectrum Plot
Fig. 6: HIPERLAN/2 Constellation Plot
Plotting this data, including intermediate datapoints, gave Fig. 7. Note the distinction between SNR and E\({}_{\text{b}}\)/N\({}_{0}\) becomes more relevant when comparing various modulation indexes, i.e., differing spectral efficiencies, \(n\), due to (4).
\[SNR=\left(\frac{E_{\text{b}}}{N_{0}}\right)\cdot n \tag{4}\]
Spectral efficiency, also referred to as bandwidth efficiency, is the information rate that can be transmitted over a given bandwidth in a system. In general, the higher the modulation order, M, the higher the spectral efficiency. Higher-order modulations allow for more bits to be transmitted per symbol, and therefore more data to be transmitted in a given bandwidth. However, as the modulation order increases, the required SNR or E\({}_{\text{b}}\)/N\({}_{0}\) for reliable transmission also increases, which can limit the achievable spectral efficiency.
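To illustrate (4) together with this trade-off, the sketch below converts a fixed E\({}_{b}\)/N\({}_{0}\) into SNR for several modulation orders and evaluates the standard Gray-coded square M-QAM BER approximation; this closed-form estimate is a textbook approximation, not an output of the Simulink model.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_mqam(EbN0_dB, M):
    # Standard Gray-coded square M-QAM BER approximation
    k = np.log2(M)
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    return (4.0 / k) * (1.0 - 1.0 / np.sqrt(M)) * Q(np.sqrt(3.0 * k * EbN0 / (M - 1.0)))

for M in (4, 16, 64, 256):
    n = np.log2(M)                          # spectral efficiency, Eq. (4)
    snr_dB = 10.0 + 10.0 * np.log10(n)      # SNR for Eb/N0 = 10 dB
    print(f"M = {M:3d}: SNR = {snr_dB:4.1f} dB, BER ~ {ber_mqam(10.0, M):.1e}")
```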
As anticipated, the relationship between Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER) in the 16-QAM graph is an inverse one. Put simply, as the SNR increases, the BER decreases and vice versa, the same trend as shown in Fig. 2. This is because a higher SNR indicates a more robust signal relative to noise, leading to fewer transmission errors. Conversely, a lower SNR denotes a weaker signal compared to noise, making it harder to differentiate between the signal and noise and elevating the probability of errors.
Consequently, the SNR is a critical factor influencing communication quality in a 16-QAM system, and a suitably high SNR is essential to minimize BER and guarantee dependable data transmission.
Simulations were also run to explore the use of different convolutional coding schemes; these used two standard code rates, \(R=1/2\) and \(R=3/4\), defined in [4].
This specific pair was chosen because one could be achieved with puncturing and the other without. The resulting plot is shown in Fig. 8. This plot demonstrates that while puncturing a code, thereby increasing the code rate, allows more data to be transmitted, it requires more signal power to achieve the same BER.
The unpunctured code still has convolutional encoding resulting in a code rate of 1/2, i.e., 50% of the transmitted bits are used for error correction, making it still more resilient to noise than default. The punctured code allows more data to be transmitted, using a puncture vector of [1 1 1 0 0 1]\({}^{\prime}\) which has a puncturing rate of 4/6, resulting in an overall code rate of 3/4, i.e., only 25% redundancy bits.
The Convolutional Encoder block is a key component in the model, which enhances the error correction capabilities of the system. This block processes a binary input sequence and generates a binary output. The block applies a convolutional code to the input sequence, which adds redundancy to the data to make it more resilient to errors.
To configure the Convolutional Encoder block, the puncture vector and the Trellis structure parameters must be set. The puncture vector is a pattern of 1s and 0s that indicates the kept and punctured bits. In the model, the puncture vector is set to [1 1 1 0 0 1]\({}^{\prime}\), where the 0s are the punctured bits. The Trellis structure parameter specifies the encoder using its constraint length, generator polynomials, and possibly feedback connection polynomials.
The Trellis structure is set to \(poly2trellis(7,[133\ 171])\), which means that it uses an encoder with a constraint length of 7 and code generator polynomials of 171 and 133 in octal notation. Constraint length determines the number of previous input bits that affect the current output bit, while the generator polynomials define the mapping between the input and output of the convolutional encoder.
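A bitwise Python equivalent of this encoder configuration is sketched below for illustration; note that tap-ordering conventions differ between implementations, so this is a schematic rather than a drop-in replacement for the MATLAB block.

```python
# Sketch of poly2trellis(7, [133 171]): a constraint-length-7 shift register
# tapped by the octal generator polynomials 133 and 171
# (tail/flush bits omitted for brevity).
G = [0o133, 0o171]                      # generator polynomials (octal)
K = 7                                   # constraint length

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & (2**K - 1)    # shift the new bit in
        for g in G:                                # one output bit per generator
            out.append(bin(state & g).count("1") % 2)
    return out

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
coded = conv_encode(msg)
print(len(msg), "info bits ->", len(coded), "coded bits (rate 1/2)")
# Applying the puncture vector [1 1 1 0 0 1] would then thin this to rate 3/4.
```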
## V Modulation Schemes
Modulation schemes are used to encode data onto a carrier wave before transmission. The choice of modulation scheme affects the bandwidth efficiency, robustness to noise, and data rate of the system; this is elaborated on in [5]. As mentioned prior, a key measure of the performance of a modulation scheme is its BER, which is the probability of an error in a single bit transmission. Different modulation schemes have different BER characteristics, and it is important to compare them to determine their suitability for a given application.
Fig. 8: Convolutional punctured vs unpunctured
Fig. 7: HIPERLAN/2 BER vs SNR simulated plot
Therefore, several modulation schemes commonly used in wireless communication systems were simulated with the results presented in a BER comparison. To achieve this, the modulation and demodulation blocks were replaced within the model shown in Fig. 4, along with iterative adjustments to the modulation index defined in the code. All blocks requiring parameter values derived from the modulation index, such as the OFDM modulator, were updated with the change in the global \(M\) variable correspondingly. The resultant BER plot is shown in Fig. 9.
Fig. 9 shows that at lower values of E\({}_{b}\)/N\({}_{0}\), the response is relatively flat due to the noise floor dominating the received signal, resulting in errors occurring for even low levels of modulation. As the E\({}_{b}\)/N\({}_{0}\) increases, the SNR also increases, and the probability of bit errors decreases, resulting in a lower BER. This occurs until a certain threshold, known as the "knee point," is reached, which typically occurs around +5 dB to +10 dB for many communication systems, corresponding with what was simulated. Beyond this point, the slope of the BER vs E\({}_{b}\)/N\({}_{0}\) curve becomes steeper, indicating a faster decrease in the probability of bit errors as the SNR increases.
Fig. 9 provides important insights into the Bit Error Rate (BER) performance of different Modulation Indexes (M) of 2, 4, 8, 16, 64, and 256, at varying E\({}_{b}\)/N\({}_{0}\) (dB) values. As M increases, BER performance worsens due to higher sensitivity to noise and interference.
The lowest modulation schemes such as BPSK achieve lower BER with the least E\({}_{b}\)/N\({}_{0}\), whereas the highest modulation scheme, 256-QAM, requires the largest E\({}_{b}\)/N\({}_{0}\) to achieve a given BER. However, typically the higher M the more information can be transmitted given a suitable E\({}_{b}\)/N\({}_{0}\), therefore resulting in increased spectral efficiency. By comparing these schemes, one can determine the most suitable option for a given application. System requirements for BER and spectral efficiency, along with available transmit power, relating to E\({}_{b}\)/N\({}_{0}\), are key deciding factors when selecting a modulation scheme. In summary, Fig. 9 highlights the trade-off between performance and spectral efficiency in M-PSK and M-QAM modulation. Designers should carefully select the modulation scheme that best meets their needs while balancing these factors. For example, if large amounts of data are required and there is substantial transmit power available, to achieve a desired BER, then 256-QAM may be a suitable choice.
IEEE 802.11ax (Wi-Fi 6E) only includes BPSK, QPSK, 16-QAM, 64-QAM, and 256-QAM [4], but additional schemes have been included for comparison and to highlight why their respective trade-offs between data capacity and signal power have resulted in them being excluded from the standard. Wi-Fi 6E introduced 256-QAM, which offers higher data rates at the expense of increased susceptibility to noise and interference, a trend clear in Fig. 9.
To generate the data plotted in Fig. 9 the OFDM Fast Fourier Transform (FFT) length was changed from 128 to 256, both standard values from [4] chosen to simplify the FFT, in order to accommodate the BPSK scheme. This was required as the total number of OFDM carriers, \(N_{SC}\), is in fact equal to the FFT length. However, not all of these subcarriers can be used for data transmission, \(N_{D}\), as some are reserved for other purposes, such as subcarrier pilots, \(N_{P}\), and DC null subcarrier, \(N_{DC}\); the DC null in particular can be seen in Fig. 5. For the subsequent simulations, only two guards were used, with the first guard calculated as shown in (5) and the second guard simply set as one less than this.
\[G=\frac{N_{SC}-N_{D}-N_{P}}{2} \tag{5}\]
To complete a fair comparison, the normalization factor for the PSK schemes was set to a constant value; this is due to M-PSK having a constant amplitude, and therefore constant average power per symbol, just varying phases. Conversely, the QAM schemes had a varying normalization factor dependent on M, due to M-QAM having multiple amplitude levels and therefore average powers. These normalization factors ensured that the transmit power was constant regardless of the modulation scheme.
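For square M-QAM with odd-integer levels (\(\pm 1,\pm 3,\dots\)), the mean symbol energy is \(2(M-1)/3\), which gives the normalization factors computed in the sketch below; this is a standard result, stated here for illustration rather than extracted from the model.

```python
import numpy as np

# Average-power normalization for square M-QAM: scaling the odd-integer
# constellation by 1/sqrt(2(M-1)/3) makes the average transmit power unity.
# M-PSK symbols already have unit modulus, hence a constant factor.
for M in (4, 16, 64, 256):
    print(f"M = {M:3d}: 1/sqrt(E_avg) = {1/np.sqrt(2*(M-1)/3):.4f}")
```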
## VI Equalization
Equalizers are signal processing blocks that are used to compensate for channel distortions in wireless communication systems. The distortions arise due to multipath propagation, frequency-selective fading, and other impairments in the wireless channel. Essentially, they estimate the channel's frequency response in order to apply the inverse, cancelling out the distortions. In WLAN systems, equalizers can help improve the performance of the receiver by mitigating the effects of these distortions, leading to improved BER. However, they are not always necessary, and their use depends on the specific communication standards and system requirements. For example, HIPERLAN/2 and IEEE 802.11ax [4] use techniques like OFDM and MIMO to overcome channel impairments, which can obviate the need for equalizers.
However, WLANs may still require an equalizer in scenarios where there is a significant amount of channel distortion or interference. For example, in indoor environments with obstacles such as walls or in outdoor environments with multiple reflective surfaces, the transmitted signal can experience multipath fading, where the signal arrives at the receiver via multiple paths with different delays and amplitudes. This can cause inter-symbol interference (ISI), where the symbols transmitted in one time interval interfere with the symbols transmitted in the adjacent time intervals.
In such scenarios, an equalizer can be used to mitigate the effect of ISI and improve the bit error rate (BER) performance. However, in modern systems, OFDM is commonly selected as the primary technique used to combat ISI. While OFDM can practically eliminate ISI when the maximum multipath delay is less than the guard interval, equalizers can still be useful to mitigate frequency-selective fading.

Fig. 9: HIPERLAN/2 based BER plot for various modulation schemes
It is important to compare equalizer types for WLAN as different equalizers have different complexity, performance, and adaptability trade-offs. For example, decision-feedback equalizers (DFE) can provide excellent performance but at the cost of higher complexity compared to linear equalizers. Similarly, adaptive equalizers can better handle time-varying channels but require additional training overhead. Understanding these trade-offs and choosing the right equalizer type for a particular WLAN scenario can lead to better performance and spectral efficiency. Modifying the code provided in [6], a maximum likelihood sequence estimation (MLSE) equalizer is demonstrated estimating a channel frequency response, see Fig. 10. Extending the E\({}_{\text{b}}\)/N\({}_{\text{0}}\) range of [6] resulted in Fig. 11.
Fig. 10 demonstrates that these equalizers can dynamically provide a suitably accurate estimation of the channel, which in turn allows them to attempt to compensate for any distortions. This was run at an E\({}_{\text{b}}\)/N\({}_{\text{0}}\) of 14 dB and resulted in a BER well below 10\({}^{\text{-3}}\), as shown in Fig. 11. As the E\({}_{\text{b}}\)/N\({}_{\text{0}}\) increases, the channel estimation of MLSE generally becomes more accurate. This is because at higher E\({}_{\text{b}}\)/N\({}_{\text{0}}\), there is less noise and the received signal is more reliable. As a result, the channel can be estimated with greater accuracy, leading to improved performance of the MLSE equalizer. Fig. 11 presents a comparison of three different equalizer types based on their BER performance under a range of E\({}_{\text{b}}\)/N\({}_{\text{0}}\) conditions.
It is observed that the MLSE equalizer performs the best across all SNR values, while the linear equalizer has the worst performance. The DFE falls in between these two but is slightly closer to the MLSE equalizer. The performance of the linear equalizer tends to diverge with increasing E\({}_{\text{b}}\)/N\({}_{\text{0}}\), which suggests that it may be better suited for low-power systems particularly where complexity is a concern.
For high-power systems or scenarios with more severe channel distortions, more advanced equalizers such as the DFE or MLSE equalizer may be necessary to achieve acceptable performance levels. On the other hand, the DFE strikes a good balance between performance and complexity. While it requires more computational resources than the linear equalizer, it is still more practical than the MLSE, known for its high complexity and computational requirements.
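To make the linear-equalizer baseline concrete, the following self-contained Python sketch trains a least-mean-squares (LMS) linear equalizer on a BPSK link with an assumed static three-tap channel; the channel taps, step size, and filter length are illustrative choices only, far simpler than the MLSE setup of [6].

```python
import numpy as np

# LMS-trained linear equalizer on a BPSK link with a static 3-tap channel.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])                 # assumed channel (causes ISI)
s = 2 * rng.integers(0, 2, 5000) - 1          # BPSK training symbols
x = np.convolve(s, h)[: len(s)] + 0.05 * rng.normal(size=len(s))

n_taps, mu = 11, 0.01                          # equalizer length, LMS step size
w = np.zeros(n_taps)
delay = n_taps // 2                            # decision delay
for k in range(n_taps, len(s)):
    u = x[k - n_taps + 1 : k + 1][::-1]        # regressor, most recent first
    e = s[k - delay] - w @ u                   # error vs delayed reference
    w += mu * e * u                            # LMS tap update

y = np.convolve(x, w)[delay : delay + len(s)]  # equalized output, realigned
ber = np.mean(np.sign(y[n_taps:]) != s[n_taps : len(y)])
print(f"post-equalization BER on training data: {ber:.4f}")
```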
Selecting the right equalizer for a WLAN system requires careful consideration of multiple factors, including the specific channel characteristics, the desired data rates, power consumption, and computational resources. Fig. 11 provides an indication of how each type of equalizer will perform comparatively and highlights the trade-off between performance and complexity.
## VII Summary
This paper demonstrates that the selection and design of coding methods, modulation schemes, and equalizers are critical for achieving the desired data rates and quality of service in modern wireless communication systems, and highlights some of the most important trade-offs between popular techniques. These techniques must be chosen based on the specific requirements of the wireless communication system, such as power consumption, computational resources, and channel characteristics. Through simulations using industry-standard system modelling software, this paper has demonstrated the effectiveness of various coding and modulation techniques, as well as the suitability of different equalization techniques for wireless communication systems.
|
2303.13331 | A [3]-catenane non-autonomous molecular motor model: geometric phase,
no-pumping theorem, and energy transduction | We study a model of synthetic molecular motor - a [3]-catenane consisting of
two small macrocycles mechanically interlocked with a bigger one - subjected to
a time-dependent driving using stochastic thermodynamics. The model presents
nontrivial features due to the two interacting small macrocycles, but is simple
enough to be treated analytically in limiting regimes. Among the results
obtained, we find a mapping into an equivalent [2]-catenane that reveals the
implications of the no-pumping theorem stating that to generate net motion of
the small macrocycles, both energies and barriers need to change. In the
adiabatic limit (slow driving), we fully characterize the motor's dynamics and
show that the net motion of the small macrocycles is expressed as a surface
integral in parameter space which corrects previous erroneous results. We also
analyze the performance of the motor subjected to a step-wise driving protocols
in absence and in presence of an applied load. Optimization strategies for
generating large currents and maximizing free-energy transduction are proposed.
This simple model provides interesting clues into the working principles of
non-autonomous molecular motors and their optimization. | Massimo Bilancioni, Massimiliano Esposito, Emanuele Penocchio | 2023-03-23T15:13:02Z | http://arxiv.org/abs/2303.13331v1 | # A [3]-catenane non-autonomous molecular motor model:
###### Abstract
We study a model of synthetic molecular motor - a [3]-catenane consisting of two small macrocycles mechanically interlocked with a bigger one - subjected to a time-dependent driving using stochastic thermodynamics. The model presents nontrivial features due to the two interacting small macrocycles, but is simple enough to be treated analytically in limiting regimes. Among the results obtained, we find a mapping into an equivalent [2]-catenane that reveals the implications of the no-pumping theorem stating that to generate net motion of the small macrocycles, both energies and barriers need to change. In the adiabatic limit (slow driving), we fully characterize the motor's dynamics and show that the net motion of the small macrocycles is expressed as a surface integral in parameter space which corrects previous erroneous results. We also analyze the performance of the motor subjected to a step-wise driving protocols in absence and in presence of an applied load. Optimization strategies for generating large currents and maximizing free-energy transduction are proposed. This simple model provides interesting clues into the working principles of non-autonomous molecular motors and their optimization.
## I Introduction
Over the last decades, stochastic thermodynamics has developed as a theory describing the energetics of mesoscopic systems driven far from equilibrium [1; 2; 3; 4; 5; 6]. It has been used to study systems such as colloidal particles [7; 8; 9], chemical reaction networks [10; 11; 12; 13], electronic circuits [14; 15; 16], and biological molecular motors. In this latter case, the quest for a detailed assessment of their thermodynamic performance is particularly important and being actively pursued [17; 18; 19; 20; 21]. Surprisingly however, despite the concurrent bloom of artificial molecular motors [22; 23; 24; 25], few studies analyzed these systems through the lens of stochastic thermodynamics [26; 27; 28]. Yet, because the chemistry of these motors is relatively simple [29; 30; 31; 32], elementary models often grasp many key aspects of their kinetics [33; 34; 35; 36; 37; 38]. This makes them ideal case studies for probing the extent to which stochastic thermodynamics can be helpful to deepen our understanding of their working and suggest ways to design and operate them optimally. So far, the vast majority of these artificial systems operate non-autonomously, meaning that a directional flow emerges due to a periodic external time-variation of parameters such as electric potential [39; 40; 25], light irradiation intensity [41; 42], or the concentrations of chemicals [43; 44; 45; 46; 47; 48; 49; 50]. In the theoretical literature, these non-autonomous systems, often called stochastic pumps, are well understood in the adiabatic limit [51; 52; 53; 54; 55; 56] (i.e., when the parameters are slowly driven) and in the linear regime [57; 58; 59] (i.e., for weak perturbations). Outside these two regimes, a universal no-pumping theorem has been derived [60], and general comparisons with autonomous molecular motors have been drawn [61; 62]. However, a comprehensive theory accounting for their behavior in arbitrary regimes is still lacking. As a result, system-specific studies [63; 64] are very valuable for better characterizing the different modes of operations of these non-autonomous molecular motors. This paper goes precisely in this direction by focusing on non-autonomous catenane-based molecular motors [41; 25; 47], i.e., systems composed of two or more mechanically interlocked macrocycles (ring-like molecules).
We based our study on a model of a three-macrocycle catenane motor made of two small macrocycles mechanically interlocked with a bigger one. This model was previously introduced in Ref. [58; 34]. It is simple enough to be treated analytically and, at the same time, presents non-trivial features arising from the presence of the two small interacting macrocycles. It has been previously studied in the limit of adiabatic operation, where the molecular motor behaves as a reversible pump, and geometric effects reminiscent of the Berry phase in quantum mechanics arise [34]. However, an incorrect formula has been derived to quantify these geometric effects [58; 34]. Here, we correct and further elaborate on it. We also find a mapping of the motor dynamics into that of a two-macrocycle catenane that elucidates its relation with the no-pumping theorem. In addition,
we characterize the dynamic and thermodynamic behaviour of the model beyond the adiabatic regime by studying a step-wise driving protocol that mimics how non-autonomous molecular motors are experimentally operated. We do so both in the absence and presence of a load, finding optimal protocols to maximize specific quantities. In the first case where there is no output work, we introduce a non-thermodynamic coefficient that measures the motor's performance. In the second case, we study the output power and the transduction efficiency, and we develop a method for estimating the stopping force.
This paper is organized as follows. We introduce the three-macrocycle catenane motor model in Sec. II, explaining how its non-autonomous operation works (Sec. II.1) and discussing its relationship with the no-pumping theorem by leveraging the aforementioned mapping into a two-macrocycle catenane. In Sec. III, we investigate the motor's free dynamics, i.e., its behavior in absence of an applied load. This includes the adiabatic limit (Sec. III.1) and the detailed study of a step-wise driving protocol (Sec. III.2). In Sec. IV, we introduce a load and analyze the ability of the motor to perform free energy transduction [65] under the adiabatic (Sec. IV.1) and the step-wise (Sec. IV.2) driving protocols, proposing a method to estimate the stopping force in the latter regime (Sec. IV.3).
In this paper, energy-related quantities will always be expressed in units of \(k_{B}T\) unless otherwise specified. Furthermore, the subscript "\(cyc\)" will represent the average of the corresponding quantity over a cycle of the driving protocol.
## II The model
Our case study is a [3]-catenane consisting of two small macrocycles mechanically interlocked with a bigger one (Fig. 1a). The three macrocycles, hereafter denoted as the two _rings_ (yellow in Fig. 1a) and the _track_ (gray in Fig. 1a), can move relative to each other. In the following, we will always refer to the movement of the rings with respect to the track. The latter hosts three binding sites labeled \(a\), \(b\), and \(c\), namely stations where the rings sit preferentially due to favorable interactions. The two rings, which we treat as identical, cannot pass one another nor occupy the same station due to steric (i.e., repulsive) interactions between them.
We construct a coarse-grained model of the system in terms of discrete (meso)states: each of these states is the collection of all the possible microscopic configurations in which the two rings occupy a given pair of stations. This coarse-graining is legitimate when the microscopic dynamics is much faster than the mesoscopic one, as explained in Appendix A. Overall, due to the identical nature of the rings, the system has three possible states, each labeled by the uppercase letter \((A,B,C)\) corresponding to the unoccupied station (see Fig. 1a).
Figure 1: The [3]-catenane motor. **a)** Chemical reaction network of the molecular motor [58; 34]. The [3]-catenane motor comprises two identical small rings (colored in yellow) mechanically interlocked with a larger ring acting as a track for the small rings’ shuttling. The track presents three distinguishable stations, denoted \(a\), \(b\), and \(c\), where the small rings sit preferentially due to attractive interactions. Each station can host up to one ring. We therefore consider three possible (meso)states (see Appendix A), denoted \(A\), \(B\), or \(C\) based on which station is unoccupied, connected by three reversible transitions with rate constants \(k_{IJ}\). The subscript \(IJ\) denotes a transition from state \(J\) to state \(I\), corresponding to the yellow ring in station \(i\) jumping into station \(j\). **b)** Pictorial representation of the potential free energy surface seen by a ring while shuttling along the track (without taking into account the interaction with the other ring). Each station corresponds to a free energy minimum and the absolute height \(\mathcal{B}\) of the barrier between each couple of station is assumed to be the same. Specifically, we depicted a configuration where the \(c\) station is less stable compared to the other, favouring state \(C\). The exclusion effect preventing the two rings to occupy the same station simultaneously is not represented (see the discussion in the main text).
As depicted in Fig. 1a, the three states are connected by reversible transitions with rate constants \(k_{IJ}\). Transitions in which one ring jumps in a station that is occupied by the other ring are prevented by repulsive interactions. Apart from this exclusion effect, the two rings jump independently from each other. Assuming that the free energy barrier between any two stations has the same absolute height \(\mathcal{B}\) for each transition, the rate constants in the network can be expressed in the Arrhenius form:
\[\begin{array}{l}k_{AB}=k_{AC}=\mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{a}) }\;\;\text{(ring in $a$ jumps)}\\ k_{BC}=k_{BA}=\mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{b})}\;\;\text{(ring in $b$ jumps)}\\ k_{CA}=k_{CB}=\mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{c})}\;\;\text{(ring in $c$ jumps)}\end{array} \tag{2}\]
Note that the activation energy appearing in each rate is the initial energy of the ring that performs the jump. As it should be for thermodynamic consistency, they obey local detailed balance (which in this context corresponds to microscopic reversibility [26]):
\[\frac{k_{IJ}}{k_{JI}}=\exp(E_{J}-E_{I}) \tag{3}\]
### Non-autonomous operation
The non-autonomous operation of this molecular motor consists of a periodic driving protocol of period \(\tau\) that only changes the free energies of the stations without any modification of the barriers' absolute height \(\mathcal{B}\). We assume that any periodic driving protocol defined by a control parameter \(\pi(t)\), \(\varepsilon_{i}(t)\equiv\varepsilon_{i}(\pi(t))\), is in principle realizable. This kind of periodic protocol can induce directional flow of the rings around the track. The reason can be understood with the help of Fig. 1a. Suppose we start with \(\varepsilon_{c}\gg\varepsilon_{a},\varepsilon_{b}\), so that the system will be with high probability in state \(C\). Then, the free energies of the stations are switched to a new configuration where \(\varepsilon_{b}\gg\varepsilon_{c},\varepsilon_{a}\), so that the ring in \(b\) is now in a highly energetic station favoring its jump into either \(a\) or \(c\), but since \(a\) is occupied by the other ring, the forward jump into \(c\) will be preferred, resulting in state \(B\). Then, the free energies of the stations are switched to \(\varepsilon_{a}\gg\varepsilon_{b},\varepsilon_{c}\), so that the ring in \(a\) will most likely jump into \(b\), as the ring that previously jumped into \(c\) now blocks the backward movement, yielding the state \(A\). After this, the cycle repeats. We specify that, in the following, we focus on the behavior of the system in the periodic regime, that is, the behavior of the system after many cycles of driving have occurred.
The driving protocol leads to time-dependent rates
\[k_{IJ}(t)=\mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{i}(t))} \tag{4}\]
which at any instant satisfy the local detailed balance condition (Eq. (3)). As a consequence, the probability distribution evolves according to a master equation with a time-dependent transition matrix:
\[\dot{\mathbf{p}}=\mathbb{W}(t)\,\mathbf{p}(t)\,, \tag{5}\]
where
\[\mathbb{W}(t)=\mathcal{A}\,e^{-\mathcal{B}}\,\mathbb{M}(t) \tag{6}\]
and
\[\mathbb{M}(t)=\begin{pmatrix}-(e^{\varepsilon_{b}(t)}+e^{\varepsilon_{c}(t)} )&e^{\varepsilon_{a}(t)}&e^{\varepsilon_{a}(t)}\\ e^{\varepsilon_{b}(t)}&-(e^{\varepsilon_{a}(t)}+e^{\varepsilon_{c}(t)})&e^{ \varepsilon_{b}(t)}\\ e^{\varepsilon_{c}(t)}&e^{\varepsilon_{c}(t)}&-(e^{\varepsilon_{a}(t)}+e^{ \varepsilon_{b}(t)})\end{pmatrix}\,. \tag{7}\]
The eigenvalues of \(\mathbb{W}(t)\) are:
* \(\lambda_{0}(t)=0\). The corresponding eigenvector is the equilibrium distribution \(\mathbf{p}^{eq}(t)\) at time \(t\) given by: \[p_{I}^{eq}(t)=\frac{e^{\varepsilon_{i}(t)}}{e^{\varepsilon_{a}(t)}+e^{ \varepsilon_{b}(t)}+e^{\varepsilon_{c}(t)}}\,.\] (8)
* \(\lambda_{1,2}(t)=-\lambda(t)\) with \[\lambda(t)=\mathcal{A}\,e^{-\mathcal{B}}\left(e^{\varepsilon_{a}(t)}+e^{ \varepsilon_{b}(t)}+e^{\varepsilon_{c}(t)}\right)\,.\] (9) It can be verified by direct matrix multiplication that the eigenspace corresponding to \(\lambda_{1}\) and \(\lambda_{2}\) is the 2-dimensional subspace of vectors whose components add up to zero.
The fact that \(\mathbb{W}(t)\) has two equal eigenvalues follows from the symmetry of the model and considerably simplifies the master equation that becomes:
\[\dot{\mathbf{p}}= \mathbb{W}(t)\,\left(\mathbf{p}(t)-\mathbf{p}^{eq}(t)\right)+\underbrace{ \mathbb{W}(t)\,\mathbf{p}^{eq}(t)}_{=0}\] \[= -\lambda(t)\,\left(\mathbf{p}(t)-\mathbf{p}^{eq}(t)\right)\,. \tag{10}\]
This evolution equation tells us that the time variation of \(p_{I}\) depends only on \(p_{I}\) itself and it relaxes towards the current equilibrium value with rate \(\lambda(t)\).
For simplicity, from now on the time dependence of the free energies \(\varepsilon_{i}\) and of the quantities that depend on them, such as \(k_{IJ}\), \(\lambda\) and \(p_{I}^{eq}\), will be left implicit in the equations.
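To make the relaxation structure of Eq. (10) concrete, the following minimal Python sketch integrates the decoupled master equation for a smooth periodic protocol; the sinusoidal protocol and every parameter value are our own illustrative assumptions, not quantities taken from the model above.

```python
import numpy as np

# Minimal sketch: integrate the decoupled master equation of Eq. (10),
# dp_I/dt = -lambda(t) (p_I - p_I^eq(t)), for a smooth periodic driving.
# The sinusoidal protocol and all parameter values are assumptions.
k = 1.0  # overall rate scale, k = A * exp(-B)

def eps(t, tau=10.0, amp=3.0):
    """Station free energies (eps_a, eps_b, eps_c), phase-shifted by 2*pi/3."""
    phases = 2.0 * np.pi * (t / tau + np.array([0.0, 1.0, 2.0]) / 3.0)
    return amp * (1.0 + np.cos(phases)) / 2.0

def p_eq(e):
    w = np.exp(e)
    return w / w.sum()          # instantaneous equilibrium, Eq. (8)

def evolve(tau=10.0, cycles=50, steps_per_cycle=4000):
    dt = tau / steps_per_cycle
    p, t = p_eq(eps(0.0, tau)), 0.0
    for _ in range(cycles * steps_per_cycle):
        e = eps(t, tau)
        lam = k * np.exp(e).sum()        # relaxation rate, Eq. (9)
        p += -lam * (p - p_eq(e)) * dt   # explicit Euler step of Eq. (10)
        t += dt
    return p

print(evolve())  # distribution at the start of a cycle, periodic regime
```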
### Relationship with the no-pumping theorem
The non-autonomous operation described in the previous section seemingly contradicts the so-called no-pumping theorem [60], a no-go result stating that in order to produce a directional flow with a cyclic driving protocol, one needs to vary both the energies of the stations and the barriers' heights. However, the no-pumping theorem holds strictly for systems with single particles (as experimentally observed in the version of this model with only one ring, a [2]-catenane [41]) or at most multiple independent particles [66], whereas the system under study comprises two interacting rings. In a sense, the presence of the two rings can be seen as causing a change in the barriers during the non-autonomous operation of
the motor: one of the rings acts in turn as an additional barrier preventing the other ring from moving backward. We stress that this mechanism only works when the interactions between the two rings are long-ranged, as we implicitly assumed by imposing that the two rings cannot occupy the same station. If the two rings only interacted locally when in the same station, then an extension of the no-pumping theorem to locally interacting many-particle systems would apply [66] and we would not be able to generate directional flow with those kinds of protocols.
To show more rigorously that the way in which directed flow can be induced in the [3]-catenane does not contradict the no-pumping theorem, we now map our system into an equivalent [2]-catenane that produces directional flow in compliance with the no-pumping theorem. The idea is to describe our system in terms of the vacant station, i.e., the _hole_. Indeed, the transition \(A\to B\) can be alternatively thought of as the hole jumping from \(a\to b\) (Fig. 2a).
We can assign some effective free energies \(\varepsilon_{i}^{h}\) and barriers \(\mathcal{B}_{ij}^{h}\) describing the hole's dynamics so that it exactly reproduces that of the original system. For this purpose, it is sufficient to choose \(\varepsilon_{i}^{h}\) and \(\mathcal{B}_{ij}^{h}\) such that the transition rates for the hole \(k_{IJ}^{h}\) in the Arrhenius form coincide with the \(k_{IJ}\) of the original system:
\[k_{IJ}^{h}=\mathcal{A}\,e^{-(\mathcal{B}_{ij}^{h}-\varepsilon_{j}^{h})}= \mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{i})}=k_{IJ} \tag{11}\]
Up to a constant, the correct choice reads:
\[\varepsilon_{a}^{h} =-\varepsilon_{a}\] \[\varepsilon_{b}^{h} =-\varepsilon_{b}\] \[\varepsilon_{c}^{h} =-\varepsilon_{c}\] \[\mathcal{B}_{ab}^{h} =\mathcal{B}-\varepsilon_{a}-\varepsilon_{b}\] \[\mathcal{B}_{bc}^{h} =\mathcal{B}-\varepsilon_{b}-\varepsilon_{c}\] \[\mathcal{B}_{ca}^{h} =\mathcal{B}-\varepsilon_{c}-\varepsilon_{a}\]
The effective free energies \(\varepsilon_{i}^{h}\) experienced by the hole are the opposite of the ones experienced by the rings: if the hole is in one station, there is no ring, so that the contribution of that station's binding energy is absent. Finally, by imagining the hole as a single ring interlocked to the track, the mapping is effectively between a [3]-catenane and a [2]-catenane [67]. A specific example of this mapping is shown in Fig. 2b, where the two potential energy surfaces experienced by the respective rings are sketched: increasing the free energy of the \(c\)-station in the [3]-catenane by \(\varepsilon\) is equivalent to lowering the free energy of the same station and the two adjacent barriers in the [2]-catenane by \(\varepsilon\). As this example shows, a driving protocol which only varies the free energies of the stations in the [3]-catenane corresponds to a driving that varies _both_ the free energies and the barriers' heights in the equivalent [2]-catenane. Crucially, if a driving produces directional flow in the original system, the equivalent driving produces the same flow also in the [2]-catenane, where the no-pumping theorem applies. Therefore, the generation of current in the [3]-catenane motor is in compliance with the no-pumping theorem. This explanation is conceptually equivalent to the one already given in [60] for such a system; here, we made it explicit by leveraging the mapping.
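The consistency of this mapping is easy to check mechanically. The short sketch below verifies Eq. (11) for randomly drawn station energies; the numerical values are arbitrary assumptions.

```python
import numpy as np

# Sketch: verify Eq. (11) numerically, i.e., that the hole's Arrhenius
# rates built from the effective energies/barriers listed above reproduce
# the original rates k_IJ. Station energies below are random assumptions.
rng = np.random.default_rng(0)
barrier = 5.0
eps = dict(zip("abc", rng.uniform(0.0, 2.0, 3)))

def k_orig(i):
    # ring in station i jumps: k_IJ = A * exp(-(B - eps_i)), with A = 1
    return np.exp(-(barrier - eps[i]))

def k_hole(i, j):
    # hole jumps j -> i over barrier B^h_ij, starting from energy -eps_j
    B_h = barrier - eps[i] - eps[j]
    return np.exp(-(B_h - (-eps[j])))

for i, j in [("a", "b"), ("b", "c"), ("c", "a"), ("b", "a")]:
    assert np.isclose(k_hole(i, j), k_orig(i))
print("hole description reproduces all original rates")
```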
## III Free dynamics
In this section, we analyze the dynamics and thermodynamics of the [3]-catenane when the free energies of the states are modified according to a periodic driving protocol of the kind described in the previous section. Interesting quantities to characterize the motor's performance under driving are the average current \(J_{cyc}\) generated and the average work \(W_{cyc}\) done on the system over a cycle of driving. The former can be expressed as
\[J_{cyc}=\frac{\Phi_{cyc}}{\tau} \tag{12}\]
where \(\Phi_{cyc}\) is the rings' flux over a cycle. We also introduce a dimensional coefficient of performance (COP) that measures how effective a driving is in producing directional flow:
\[\text{COP}=\frac{\Phi_{cyc}}{W_{cyc}}=\text{number of laps per unit joule spent} \tag{13}\]
Figure 2: **a)** Two equivalent ways of looking at the \(A\to B\) transition. The jump of a small yellow ring from station \(b\) to station \(a\) (left) can be equivalently looked at as a jump of the _hole_ from station \(a\) to station \(b\) (right). **b)** Pictorial representation of the mapping between the [3]-catenane and the [2]-catenane at the level of the potential energy surfaces. Increasing the energy of the \(c\)-station in the [3]-catenane by \(\varepsilon\) (left) is equivalent to lowering the energy of the \(c\) station and the two adjacent barriers in the equivalent [2]-catenane by the same amount (right).
The idea behind this coefficient is that, at a fixed period, the higher the COP for a certain driving protocol, the less work is required to generate the same rings' flux.
We start by exploring one interesting limiting case, namely the limit of adiabatic (i.e., quasi-static) driving, then we look at one specific type of protocol that models typical experiments [47, 25, 41] and is exactly solvable for every period \(\tau\); namely, the step protocol.
### Adiabatic driving
We consider a driving as adiabatic whenever the system's relaxation rate, \(\lambda\) in eq. (9), is much faster than the driving protocol, so that the system can be considered to always be in thermodynamic equilibrium with respect to the instantaneous values of stations' free energies. In this regime, the [3]-catenane behaves as a reversible pump [53], that is, a finite directional flux is generated at the cost of vanishing input work over a period of driving:
\[\Phi_{cyc}\propto\text{constant}\,,\quad W_{cyc}\propto 1/\tau\,. \tag{14}\]
In addition, the flux \(\Phi_{cyc}\) becomes a purely _geometric phase_[53, 34] that does not depend on \(\tau\) but only on the loop swept by the driving protocol in the space of parameters (i.e., the free energies of the stations). This property is analogous to the Berry phase in quantum mechanics [68], namely the geometric phase difference acquired by an eigenstate for a cyclical and adiabatic variation of the Hamiltonian's parameters.
In order to formally derive the geometric phase induced in the [3]-catenane by an adiabatic driving protocol, we use eq. (9) and (10) to recast the probability current as:
\[J_{IJ}(t)= k_{IJ}\left[p_{J}(t)-p_{J}^{eq}\right]-k_{JI}\left[p_{I}(t)-p_{I}^{ eq}\right]= \tag{15}\] \[= -\frac{k_{IJ}\,\dot{p}_{J}(t)-k_{JI}\,\dot{p}_{I}(t)}{\lambda} \tag{16}\]
Therefore, the current from \(A\to B\) can be written as:
\[J_{BA}(t)=\mathbf{V}_{BA}\cdot\dot{\mathbf{p}}(t)\,, \tag{17}\]
with
\[\mathbf{V}_{BA}=\frac{1}{\lambda}(-k_{BA},k_{AB},0)\,. \tag{18}\]
The flux over a cycle can then be expressed as
\[\Phi_{cyc}=\Phi_{cyc}^{BA}=\int\mathbf{V}_{BA}\cdot\dot{\mathbf{p}}(t)\,dt\,. \tag{19}\]
By implementing the adiabaticity condition (i.e., the probability distribution is at any instant the equilibrium one defined in Eq. (8)), the above equation boils down to
\[\Phi_{cyc}=\int\mathbf{V}_{BA}\cdot\dot{\mathbf{p}}_{eq}\,dt=\oint\mathbf{V}_{BA}\cdot d\mathbf{p}_{eq}\,. \tag{20}\]
The last term on the right-hand side is the purely geometric phase. Note that, since the driving is adiabatic, a finite flux is generated despite the work performed over a cycle is null, yielding a divergent COP. This is not in violation of the second law of thermodynamics because no work can be extracted out of this finite yet quasi-static directional flux. To gain intuition, the line integral in Eq. (20) can be converted into a more visualizable surface integral by using Stokes theorem. A preliminary substitution simplifying the next passages is the following:
\[\begin{split} e^{\varepsilon_{a}}&\to x>0\\ e^{\varepsilon_{b}}&\to y>0\\ e^{\varepsilon_{c}}&\to z>0\end{split} \tag{21}\]
By evaluating the vector field \(\mathbf{V}_{BA}\) and \(d\mathbf{p}^{eq}\) in terms of the new variables \(x\), \(y\), \(z\), the line integral becomes
\[\Phi_{cyc}=\oint\mathbf{V}_{BA}\cdot d\mathbf{p}_{eq}=\oint\mathbf{A}(\mathbf{r})\cdot d\mathbf{ r}\,, \tag{22}\]
and by exploiting Stokes theorem we then have
\[\Phi_{cyc}=\int(\nabla\times\mathbf{A})\cdot d\mathbf{S}\,, \tag{23}\]
with
\[\nabla\times\mathbf{A}\left(\mathbf{r}\right)=\frac{2}{(x+y+z)^{3}}\,\mathbf{r}\,. \tag{24}\]
Detailed calculations are reported in Appendix B. We note that the rotor in Eq. (24) is different from the ones that correspond to Eqs. (6) and (7) of [34] or Eqs. (3) and (4) of [58]. The latter rotors turn out to be nonsymmetric in \(x,y\) and \(z\), which is inconsistent. As a matter of fact, the system's symmetry in the three stations \(a,b\) and \(c\) demands \(\Phi_{cyc}\) to be symmetric in \(x,y\) and \(z\) which, in turn, demands the same symmetry for \(\nabla\times\mathbf{A}\). A graphical illustration of the surface integral in Eq. (23) is given in Fig. 3. The main advantage of this representation is that, since \(\nabla\times\mathbf{A}\) is radial, one can easily tell which of the driving protocols give rise to a nonzero average current. Furthermore, we can easily identify the protocols maximizing rings' flux as the ones collecting all of the outgoing rotor field \(\nabla\times\mathbf{A}\) (light-blue loop in Fig. 3). For these protocols, the rings complete an entire cycle with unitary probability after each period. However, they cannot be performed in reality since they would require, for example, that
\[\frac{x}{y}\to 0\implies e^{\varepsilon_{b}-\varepsilon_{a}}\to\infty\,. \tag{25}\]
Nevertheless, they can be approximated arbitrarily well, giving a practical method to optimize adiabatic driving protocols in terms of the induced directional flux.
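These statements lend themselves to a direct numerical test. The sketch below compares the geometric line integral of Eq. (22), with \(\mathbf{A}\) taken from Eq. (B5), to the flux per cycle obtained by integrating the dynamics of Eq. (10) at a large period; the circular loop and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: compare the geometric line integral of Eq. (22) (with A from
# Eq. (B5)) against the flux per cycle from the actual dynamics at large
# period. The circular loop and all parameter values are assumptions.
k = 1.0

def loop(s):
    # closed driving loop in (x, y, z) = (e^eps_a, e^eps_b, e^eps_c), 0 <= s < 1
    return np.array([2.0 + np.cos(2.0*np.pi*s), 2.0 + np.sin(2.0*np.pi*s), 1.0])

def A(r):
    x, y, z = r
    return np.array([-y, x, 0.0]) / (x + y + z)**2

def geometric_phase(n=20000):
    phi, ds = 0.0, 1.0 / n
    for m in range(n):
        r = loop(m * ds)
        phi += A(r) @ (loop((m + 1) * ds) - r)
    return phi

def slow_driving_flux(tau=500.0, steps=200000):
    dt = tau / steps
    r0 = loop(0.0)
    p = r0 / r0.sum()                          # start at equilibrium
    phi = 0.0
    for m in range(steps):
        x, y, z = loop(m * dt / tau)
        lam = k * (x + y + z)                  # Eq. (9)
        peq = np.array([x, y, z]) / (x + y + z)
        phi += k * (y * p[0] - x * p[1]) * dt  # J_BA = k_BA p_A - k_AB p_B
        p += -lam * (p - peq) * dt             # Eq. (10)
    return phi

print(geometric_phase(), slow_driving_flux())  # the two should nearly agree
```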
### Step protocol
In this section, we analyze in detail the step protocol, a type of driving protocol that is exactly solvable and close to what is usually implemented in experiments [25; 30; 41; 47]. In contrast to the adiabatic protocols of the previous section, in the step protocol the driving is much faster than the system's relaxation rate, so that the probability distribution has no time to change during an external manipulation. In particular, we focus on a protocol in which, over a period \(\tau\), the free energies of the stations as a function of time are:
\[(\varepsilon_{a},\varepsilon_{b},\varepsilon_{c})=\begin{cases}(0,0, \varepsilon)&\text{if}\quad 0<t<\tau/3\\ (0,\varepsilon,0)&\text{if}\quad\tau/3<t<2\tau/3\\ (\varepsilon,0,0)&\text{if}\quad 2\tau/3<t<\tau\end{cases} \tag{26}\]
where \(\varepsilon>0\) is the _modulation energy_ and the steps between an energy configuration and the successive one are assumed to be effectively instantaneous. As a consequence, the step protocol can never be considered adiabatic, even in the limit of large period \(\tau\).
A graphic illustration of the step protocol is shown in Fig. 4a, with the two rings moving clockwise according to the intuitive idea discussed at the beginning of Section II.1.
#### ii.2.1 Solution
In order to solve for the probability distribution, it is sufficient to find \(p_{A}(t)\). Indeed, \(p_{B}(t)\) and \(p_{C}(t)\) are equal to \(p_{A}(t)\) modulo a temporal translation:
\[p_{B}(t)=p_{A}(t+\tau/3)\,,\qquad p_{C}(t)=p_{A}(t-\tau/3)\,. \tag{27}\]
By combining Eq. (10), (9), and (8), the time evolution of \(p_{A}(t)\) reads:
\[\dot{p}_{A}(t)=k\,e^{\varepsilon_{a}}-\lambda\,p_{A}(t)\,, \tag{28}\]
where we set \(k=\mathcal{A}\,e^{-\mathcal{B}}\). Note that, in this case, \(\lambda\) is constant throughout the step protocol and equal to
\[\lambda=k\,\left(e^{\varepsilon_{a}}+e^{\varepsilon_{b}}+e^{\varepsilon_{c}} \right)=k\,(2+e^{\varepsilon})\,. \tag{29}\]
The solution of Eq. (28) reads:
\[p_{A}(t)=\begin{cases}e^{-\lambda\,t}\,p_{A}(0)+\frac{k}{\lambda}\,\,\left(1- e^{-\lambda\,t}\right)&\text{if }0\leq t<\frac{2\tau}{3}\\ e^{-\lambda\,t}\,p_{A}(0)+\frac{k}{\lambda}\,\left(e^{\varepsilon}+\left(1-e^{ \varepsilon}\right)e^{-\lambda\,(t-2\tau/3)}-e^{-\lambda\,t}\right)&\text{if }\frac{2\tau}{3}\leq t< \tau\end{cases} \tag{30}\]
with
\[p_{A}(0)=\frac{1}{2+e^{\varepsilon}}\,\frac{e^{\varepsilon}+x+x^{2}}{1+x+x^{ 2}}\qquad x=\exp\left(-\frac{\lambda\,\tau}{3}\right)\,. \tag{31}\]
In Fig. 4b, \(p_{A}(t)\) is plotted for three different values of \(\lambda\,\tau\) and contrasted with the energy of state \(A\) during the step protocol. As expected, \(p_{A}(t)\) peaks whenever \(E_{A}\) is minimum. However, we can appreciate how \(p_{A}(t)\) straightens out when \(\lambda\tau\) gets smaller, as the system has less time to relax between one step and the other.
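The closed-form solution can be cross-checked against a brute-force integration of Eq. (28); the sketch below does so in the periodic regime, with all parameter values chosen as illustrative assumptions.

```python
import numpy as np

# Sketch: check the closed-form p_A(t) of Eqs. (30)-(31) against a
# brute-force integration of Eq. (28). All parameter values are assumptions.
k, eps, tau = 1.0, 3.0, 0.5
lam = k * (2.0 + np.exp(eps))                      # Eq. (29)
x = np.exp(-lam * tau / 3.0)
pA0 = (np.exp(eps) + x + x**2) / ((2.0 + np.exp(eps)) * (1.0 + x + x**2))

def pA_exact(t):                                   # Eq. (30)
    if t < 2.0 * tau / 3.0:
        return np.exp(-lam * t) * pA0 + (k / lam) * (1.0 - np.exp(-lam * t))
    return np.exp(-lam * t) * pA0 + (k / lam) * (
        np.exp(eps) + (1.0 - np.exp(eps)) * np.exp(-lam * (t - 2.0 * tau / 3.0))
        - np.exp(-lam * t))

def pA_numeric(t, cycles=50, steps_per_cycle=6000):
    dt = tau / steps_per_cycle
    p = 1.0 / 3.0                                  # arbitrary initial condition
    for n in range(cycles * steps_per_cycle + int(round(t / dt))):
        s = (n * dt) % tau
        eps_a = eps if s >= 2.0 * tau / 3.0 else 0.0   # protocol of Eq. (26)
        p += (k * np.exp(eps_a) - lam * p) * dt        # Euler step of Eq. (28)
    return p

for t in (0.05, 0.25, 0.45):
    print(t, pA_exact(t), pA_numeric(t))   # columns should agree closely
```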
#### ii.2.2 Work
The total work done in a cycle can be calculated, according to stochastic thermodynamics [2; 4; 6], as
\[W_{cyc}=\sum_{I}\int_{0}^{\tau}p_{I}\dot{E}_{I}\,dt\,. \tag{32}\]
Calculations, reported in Appendix C.1, yield the following expression:
\[W_{cyc}=3\,\varepsilon\,\frac{1-e^{-\varepsilon}}{1+2e^{-\varepsilon}}\, \frac{1-x^{2}}{1+x+x^{2}}\qquad x=\exp\left(-\frac{\lambda\,\tau}{3}\right)\,. \tag{33}\]
Figure 3: Graphical representation of the geometric phase in terms of the flux of \(\nabla\times\mathbf{A}\). The blue closed loops are possible periodic driving protocols in the parameters space \((x,y,z)\). The orange arrows represent the flux across these loops generated by the radial \(\nabla\times\mathbf{A}\). The big light blue loop corresponds to the driving which collects all of the outgoing flux of \(\nabla\times\mathbf{A}\).
The average input power supplied by the driving is then
\[\dot{W}_{cyc}=\frac{W_{cyc}}{\tau}\,, \tag{34}\]
which is plotted in the upper graphs of Figs. 4c-d as a function of the period \(\tau\) and the modulation energy \(\varepsilon\). The average input power \(\dot{W}_{cyc}\) decreases monotonically as a function of the period \(\tau\) (Fig. 4c), as the work is delivered over a longer time. It also increases monotonically with the modulation energy \(\varepsilon\) (Fig. 4d) due to the higher work required to change the free energies of the stations.
#### iii.2.3 Current
The average current in a cycle can be found by integrating, over a period, the current through an arbitrary edge of the network in Fig. 1a, e.g., the \(A\to B\) edge:
\[J_{cyc}\ =-\frac{1}{\tau}\int_{0}^{\tau}\,J_{BA}(t^{\prime})\,dt^{\prime}\,, \tag{35}\]
where the negative sign comes from the fact that \(J_{BA}\) represents the flow of the _hole_ (the \(A\) state is the one in which station \(a\) is unoccupied), which is opposite to the flow of the two rings (Fig. 2a). After the calculations reported in Appendix C.2, the expression of the current boils down to:
\[J_{cyc}=\frac{1}{\tau}\,\frac{(1-e^{-\varepsilon})^{2}}{(1+2\,e^{-\varepsilon })^{2}}\,\frac{(1-x)^{3}}{1-x^{3}}\qquad x=\exp\left(-\frac{\lambda\,\tau}{3} \right)\,, \tag{36}\]
which is plotted in the middle graphs of Figs. 4c-d as a function of the period \(\tau\) and the modulation energy \(\varepsilon\). Interestingly, there is an optimal period \(\tau\) for the driving protocol that maximizes the output current \(J_{cyc}\) (Fig. 4c). This optimal period corresponds to the best
Figure 4: Free dynamics under step protocol driving. **a)** Each step in the driving can be imagined as a rigid instantaneous rotation of the depicted potential free energy surface by \(2\pi/3\). **b)** Top: Probability \(p_{A}\) as a function of the time \(t\) over three periods \(\tau=10\) a.u.(arbitrary units) for three different relaxation rates \(\lambda\). The dotted line represents the equilibrium value of \(p_{A}\) as a function of time. Bottom: the energy of the \(A\)-state during the step protocol: \(E_{A}(t)=\varepsilon_{b}(t)+\varepsilon_{c}(t)\). The modulation energy is \(\varepsilon=3\)\(k_{B}T\). **c), d)** Average input power \(\dot{W}_{cyc}/k_{B}T\) (time\({}^{-1}\), Eq. (34)), current \(J_{cyc}\)(time\({}^{-1}\), Eq. (36)) and COP (\((k_{B}T)^{-1}\), Eq. (37)) as a function of the period \(\tau\) with \(\varepsilon=3\)\(k_{B}T\) (c) and of the modulation energy \(\varepsilon\) with \(\tau=0.14\) a.u. (d). The relaxation rate is set to \(\lambda=22\) time\({}^{-1}\) in both cases.
trade-off between a too fast driving, which does not allow the system to relax between one step and the other, and a too slow driving, which waits too much time after the system relaxed. Furthermore, when the modulation energy \(\varepsilon\) is below a certain threshold, the output current is almost null, and it reaches a plateau very quickly when the threshold is passed (Fig. 4d). This on/off behavior can be explained by the fact that the modulation energy \(\varepsilon\) must be high enough to beat thermal fluctuations and make the rings able to discriminate the least energetic stations. At the same time, \(e^{-\varepsilon}\) quickly becomes negligible in Eq. (36), yielding a constant current.
#### iii.3.4 Coefficient of performance
The analytical expression for the COP as defined in Eq. (13) can be derived from Eqs. (33) and (36):
\[\text{COP}=\frac{J_{cyc}\,\tau}{W_{cyc}}=\frac{1}{3\varepsilon}\frac{1-e^{- \varepsilon}}{1+2e^{-\varepsilon}}\frac{1-x}{1+x}\qquad x=\exp\left(-\lambda\, \tau/3\right)\,. \tag{37}\]
The COP is plotted in the bottom graphs of Figs. 4c-d as a function of the period \(\tau\) and the modulation energy \(\varepsilon\). As it does not take into account the speed of operation but only how efficiently a directional flux is produced, it is maximized for long periods (Fig. 4c). Furthermore, contrary to the case of adiabatic driving, the COP remains finite in the limit of large \(\tau\). This is due to the fact that, as previously mentioned, the step protocol is never adiabatic, even for large \(\tau\). Finally, the bottom plot of Fig. 4d reveals the presence of an optimal modulation energy \(\varepsilon\) maximizing the COP for a fixed period \(\tau\). The presence of such a maximum can be explained intuitively by considering that, on the one hand, a too high modulation energy \(\varepsilon\) is counterproductive because more work is performed without increasing the current; on the other hand, a too small modulation energy \(\varepsilon\) does not sufficiently promote forward transitions over backward ones.
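Since Eqs. (33), (36) and (37) are in closed form, the trends of Figs. 4c-d are straightforward to reproduce numerically; the following sketch evaluates the three observables and locates the current-maximizing period (the parameter values mirror Fig. 4 and are otherwise assumptions).

```python
import numpy as np

# Sketch: evaluate the closed-form observables of the step protocol
# (Eqs. (33), (36), (37)) and locate the current-maximizing period.
lam, eps = 22.0, 3.0   # assumed values, as in Fig. 4

def observables(tau):
    x = np.exp(-lam * tau / 3.0)
    pref = (1.0 - np.exp(-eps)) / (1.0 + 2.0 * np.exp(-eps))
    W = 3.0 * eps * pref * (1.0 - x**2) / (1.0 + x + x**2)   # Eq. (33)
    J = pref**2 * (1.0 - x)**3 / ((1.0 - x**3) * tau)        # Eq. (36)
    return W, J, J * tau / W                                 # COP, Eq. (37)

taus = np.linspace(0.01, 2.0, 2000)
J = np.array([observables(t)[1] for t in taus])
print("current-maximizing period:", taus[J.argmax()])
print("W, J, COP there:", observables(taus[J.argmax()]))
```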
## IV Dynamics with applied load
In this section, we study the non-autonomous operation of the [3]-catenane motor under driving and in the presence of a load. The latter is modeled as an opposing force \(f\) applied to each transition (see Fig. 5a) so that the total force applied to the three-state motor is \(3f\). Here, we are interested in quantifying the output power and the efficiency with which the input work is converted into the output work done against the force due to the rings moving ahead. If the forward current is \(J_{cyc}\), the average output power delivered by the motor will be
\[P_{out}=3f\,J_{cyc}\,, \tag{38}\]
and the efficiency
\[\eta=\frac{P_{out}}{\dot{W}_{in}}\,, \tag{39}\]
where \(\dot{W}_{in}\) is the average work per unit of time performed by driving the molecular motor.
For any transition, for instance the \(A\to B\) transition, the local detailed-balance condition now requires
\[\frac{k_{AB}}{k_{BA}}=e^{E_{B}-E_{A}-f}\,. \tag{40}\]
In general, there are no constraints on how the force \(f\) modifies each rate constant; this will depend on the specific system under study. Here, we assume that \(f\) only modifies the backward rates, meaning for example:
\[k_{AB}=\mathcal{A}\,e^{-(\mathcal{B}-\varepsilon_{a})},\quad k_{BA}=\mathcal{ A}\,e^{-(\mathcal{B}-\varepsilon_{b})+f} \tag{41}\]
According to this choice, the transition matrix \(\mathbb{W}\) in Eq. (7) is replaced by:
\[\mathbb{W}^{f}(t)=\mathcal{A}\,e^{-\mathcal{B}}\,\mathbb{M}^{f}(t)\,, \tag{42}\]
with
\[\mathbb{M}^{f}(t)=\begin{pmatrix}-(e^{(\varepsilon_{b}+f)}+e^{\varepsilon_{c}})&e^{\varepsilon_{a}}&e^{(\varepsilon_{a}+f)}\\ e^{(\varepsilon_{b}+f)}&-(e^{\varepsilon_{a}}+e^{(\varepsilon_{c}+f)})&e^{\varepsilon_{b}}\\ e^{\varepsilon_{c}}&e^{(\varepsilon_{c}+f)}&-(e^{(\varepsilon_{a}+f)}+e^{\varepsilon_{b}})\end{pmatrix}\,. \tag{43}\]
Contrary to \(\mathbb{W}\), \(\mathbb{W}^{f}\) does not have two identical eigenvalues. This prevents us from finding a simple solution to the master equation as done in the previous sections and makes numerics necessary to obtain quantitative results.
### Adiabatic driving with applied load
When subjected to a load, the regime of adiabatic driving is not interesting because the output work vanishes. As a matter of fact, to produce output work, we must have \(J_{cyc}>0\). This means that the contribution to \(J_{cyc}\) coming from the driving must prevail over the negative contribution arising from the opposing force. The former scales \(\propto 1/\tau\) from Sec. III.1, while the latter scales \(\propto f\) for small forces. Therefore, in this regime, to observe a current in the direction opposite to the force, the latter must scale as
\[f\propto\frac{1}{\tau}\,. \tag{44}\]
Since the output power is proportional to the product of the current and the force, the above scaling implies
\[P_{out}\propto\frac{1}{\tau^{2}}\,, \tag{45}\]
which translates into a vanishing output work over a period:
\[W_{cyc}\propto\frac{1}{\tau}\,. \tag{46}\]
### Step protocol with applied load
For the step protocol, we report the results obtained by lengthy analytical calculations done in Mathematica [69]. To make such calculations feasible, the modulation energy \(\varepsilon\) of the step protocol and \(k=\mathcal{A}\,e^{-\mathcal{B}}\) were fixed to specific values (\(\varepsilon=\log 4\), \(k=1\)), and we only kept track of the analytical dependencies of the motor performance on the period \(\tau\) and the force \(f\). In Fig. 5b, the current \(J_{cyc}\) is plotted as a function of these two parameters, with the yellow dotted line delimiting the area of parameter space in which \(J_{cyc}>0\), that is, where we can produce output work. For any given \(\tau\), there is a value of the force, called stopping force \(f_{\text{stop}}\), above which the current becomes negative. We also see that, at fixed \(f\), there is a finite range of intermediate periods in which \(J_{cyc}>0\). The reason is that for small and large periods the forward current produced by the driving tends to zero (see Fig. 4c), and thus the backward current generated by the opposite force dominates. In Fig. 5c, we plotted \(P_{out}\) and the efficiency \(\eta\) as a function of both \(\tau\) and \(f\). The plots show that there is good overlap of the regions in which they are maximum, a feature that can emerge only when systems are operated far from the linear regime [70; 71]. Our analysis allows one to identify regions of good tradeoff between power and efficiency for the non-autonomously operated [3]-catenane motor.
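Although the analytical results above were obtained with Mathematica, the same quantities follow from a few lines of numerics: under the step protocol the generator \(\mathbb{W}^{f}\) is piecewise constant, so the periodic state is the fixed point of a product of matrix exponentials. The sketch below implements this and scans the force to bracket \(f_{\text{stop}}\); the discretization choices and parameter values are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: periodic state and average current of the loaded step protocol.
# Parameter values are assumptions matching the text (eps = log 4, k = 1).
k, eps, tau = 1.0, np.log(4.0), 1.0
segs = [(0.0, 0.0, eps), (0.0, eps, 0.0), (eps, 0.0, 0.0)]  # Eq. (26)

def W_f(e, f):
    ea, eb, ec = e
    M = np.array([
        [-(np.exp(eb + f) + np.exp(ec)), np.exp(ea), np.exp(ea + f)],
        [np.exp(eb + f), -(np.exp(ea) + np.exp(ec + f)), np.exp(eb)],
        [np.exp(ec), np.exp(ec + f), -(np.exp(ea + f) + np.exp(eb))],
    ])
    return k * M                                 # Eqs. (42)-(43)

def J_cyc(f, n_seg=2000):
    U = np.eye(3)
    for e in segs:                               # one-cycle propagator
        U = expm(W_f(e, f) * tau / 3.0) @ U
    w, v = np.linalg.eig(U)                      # periodic state: U p = p
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    p /= p.sum()
    J, dt = 0.0, tau / (3.0 * n_seg)
    for e in segs:
        G = W_f(e, f)
        kBA, kAB = k * np.exp(e[1] + f), k * np.exp(e[0])
        for _ in range(n_seg):
            J -= (kBA * p[0] - kAB * p[1]) * dt / tau   # Eq. (35)
            p += G @ p * dt
    return J

for f in (0.0, 0.2, 0.4, 0.6):
    print(f, J_cyc(f))    # the sign change of J_cyc brackets f_stop
```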
### Estimating the stopping force
As noticed above, for any period \(\tau\), a stopping force \(f_{\text{stop}}\) can be identified such that the motor is stalled (i.e., \(J_{cyc}=0\)). Knowing the value of \(f_{\text{stop}}\) can be useful, as it sets an upper bound to the ability of the motor to perform work against a force under non-autonomous driving. However, as we discussed, while the free dynamics of the motor can be easily solved, the dynamics in the presence of a load is analytically much more involved. Therefore, the question we ask in this section is: can we
Figure 5: Dynamics with applied load under step protocol driving. **a)** [3]-catenane motor with an applied load. The load is modelled as an opposite force pushing each ring anticlockwise. **b), c)** Average current \(J_{cyc}\), output power \(P_{out}\), and efficiency \(\eta\) as a function of the period \(\tau\) and the applied force \(f\). In each plot, the dotted yellow line delimits the region in which work is performed by the motor, i.e., the \(J_{cyc}>0\) region. The dashed black lines correspond to the regions in which \(J_{cyc}>0.1\) b), \(P_{out}>0.33\) and \(\eta>0.21\) c). The modulation energy \(\varepsilon=1.39\)\(k_{B}T\) and relaxation rate \(\lambda=6\) time\({}^{-1}\) are fixed. **d)** Comparison of the numerical stopping force \(f_{\text{stop}}\) with the estimate \(f_{\text{eff}}\) as a function of the period \(\tau\). This was done for the step protocol with modulation energy \(\varepsilon=1.39\)\(k_{B}T\) and relaxation rate \(\lambda=6\) (time\({}^{-1}\)).
estimate \(f_{\rm stop}\) from the free dynamics studied in Sec. III? We start from the intuition that the greater the current pumped by a certain driving in absence of any load, the greater \(f_{\rm stop}\) will be for that driving protocol. We then notice that the exact stopping force would be easily deducible from the free dynamics if our molecular motor were autonomously driven. In that case, the stopping force would be the log-ratio of the product of forward and backward autonomous rates. Based on these considerations, we can proceed as follows: (i) starting from the free dynamics of the non-autonomous [3]-catenane motor, we construct an ancillary autonomous dynamics [62] that, at steady state, has the same probability distribution, current and traffic (\(t_{IJ}=k_{IJ}p_{J}+k_{JI}p_{I}\)) as the original one; (ii) we compute the driving affinity of the ancillary dynamics and take it as an estimate (\(f_{\rm eff}\)) for the stopping force of the non-autonomous dynamics; finally, (iii) we compare the estimated \(f_{\rm eff}\) with the real stopping force \(f_{\rm stop}\) in the regimes that we solved. The construction in point (i) can be easily carried out. Indeed, it is enough to choose the rates for the autonomous ancillary dynamics in the following way
\[k_{IJ}^{\rm Aut}=\langle k_{IJ}p_{J}\rangle/\langle p_{J}\rangle\,, \tag{47}\]
where, on the right-hand side, the brackets denote the average over a period in the original non-autonomous dynamics. This choice ensures that the average probability distribution, current and traffic of the non-autonomous molecular motor are exactly reproduced by the ancillary dynamics:
\[p_{I}^{\rm Aut}=\langle p_{I}\rangle\quad J_{IJ}^{\rm Aut}=\langle J_{IJ} \rangle\quad t_{IJ}^{\rm Aut}=\langle t_{IJ}\rangle \tag{48}\]
Intuitively, the ancillary autonomous dynamics represents a stroboscopic version of the non-autonomous one where just the average motion over a period is observed. Moving to point (ii), we estimate the stopping force as the driving affinity of the ancillary dynamics:
\[f_{\rm eff}=\log\left(\frac{\prod_{\rho}k_{+\rho}^{\rm Aut}}{\prod_{\rho}k_{- \rho}^{\rm Aut}}\right)\,, \tag{49}\]
where \(k_{+\rho}^{\rm Aut},k_{-\rho}^{\rm Aut}\) represent the forward and backward rates of the \(\rho\) transition, respectively. Finally, following point (iii), in Fig. 5d we compare \(f_{\rm eff}\) with the real stopping force \(f_{\rm stop}\) as a function of the period \(\tau\) of the step protocol. We find that \(f_{\rm eff}\) is a lower bound for \(f_{\rm stop}\) that qualitatively reproduces its behavior. In the limit \(\tau\to 0\), the two curves converge. The reason is that in this limit, the driving becomes so fast that the opposite force effectively only perceives its average effect, which is exactly reproduced by the ancillary dynamics.
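Steps (i)-(ii) can be condensed into a short numerical sketch. In it, the identification of the "forward" direction with the transitions not enhanced by the load in Eq. (43), as well as all parameter values, are our own illustrative assumptions.

```python
import numpy as np

# Sketch: build the ancillary autonomous rates of Eq. (47) from the free
# (f = 0) step-protocol dynamics and evaluate f_eff via Eq. (49).
k, eps, tau, n_seg = 1.0, np.log(4.0), 1.0, 2000   # assumed values
segs = [(0.0, 0.0, eps), (0.0, eps, 0.0), (eps, 0.0, 0.0)]   # Eq. (26)
lam = k * (2.0 + np.exp(eps))
dt = tau / (3.0 * n_seg)

p = np.ones(3) / 3.0
traj_p, traj_e = [], []
for cycle in range(80):                    # relax into the periodic regime
    for e in segs:
        peq = np.exp(e) / np.exp(e).sum()
        for _ in range(n_seg):
            p += -lam * (p - peq) * dt     # Eq. (10)
            if cycle == 79:                # record the last cycle only
                traj_p.append(p.copy())
                traj_e.append(e)
traj_p, traj_e = np.array(traj_p), np.array(traj_e)

def k_aut(i, J):
    # ancillary rate out of state J when the ring in station i jumps, Eq. (47)
    kIJ = k * np.exp(traj_e[:, i])
    return (kIJ * traj_p[:, J]).mean() / traj_p[:, J].mean()

A, B, C = 0, 1, 2
fwd = k_aut(0, B) * k_aut(1, C) * k_aut(2, A)   # k_AB, k_BC, k_CA (no e^f)
bwd = k_aut(1, A) * k_aut(2, B) * k_aut(0, C)   # k_BA, k_CB, k_AC
print("f_eff =", np.log(fwd / bwd))             # Eq. (49)
```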
We conclude this section with some remarks. The same procedure that we applied to the step protocol can be followed for any other driving. However, the fact that our estimate lower bounds the exact stopping force has only been tested for the step protocol and for a specific value of the modulation energy (\(\varepsilon=\log 4\)); it is not obvious whether it holds in general. Moreover, there are other similar ways of estimating the stopping force from the free dynamics; the one adopted here ensures that, in the limit \(\tau\to 0\), the estimate becomes exact.
## V Conclusions
Artificial non-autonomous molecular motors are currently in the spotlight of the experimental community working on molecular machines [50; 25; 40]. In this paper, we applied the tools of stochastic thermodynamics to build a comprehensive understanding of the dynamics and thermodynamics of a simple model epitomizing the functional elements of catenane-based non-autonomous synthetic motors [47; 25; 43]. Our main results can be summarized as follows. First, we discussed how the current generation in a [3]-catenane relates to the no-pumping theorem [60] leveraging a mapping with an equivalent [2]-catenane. Second, we corrected and further elaborated on a previously derived formula for the adiabatic limit's geometric flux [58; 34]. Finally, we went beyond the linear and adiabatic regime by studying a step-wise driving protocol that resembles those used in experiments. We did so by solving for the molecular motor's behavior both in the absence and presence of a load. In the former case, we quantified its performance by introducing an additional non-thermodynamic coefficient, which we denoted as COP. In the latter case, we studied the transduction efficiency, the output power and the stopping force. In both situations, we found optimal protocols that maximize specific molecular motor performance quantifiers. Our study will help the experimental community develop a more in-depth intuition on optimally designing and operating non-autonomous molecular motors.
## VI Acknowledgements
This research was supported by AFR PhD grant 15749869 and project ChemComplex (C21/MS/16356329), both funded by the FNR (Luxembourg). For open access, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.
## VII Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A Coarse-graining
In Sec. II, we introduced a coarse-grained model of the molecular motor in terms of discrete mesostates. Each of these mesostates is a collection of all the different microscopic configurations in which the rings occupy a given pair of stations. In this appendix, we explain when such an effective description works and define the free energies of mesostates [6].
A reliable coarse-graining of a physical system is possible whenever different sets of microscopic configurations (microstates) can be collected into mesostates such that the equilibration at the level of the mesostates is much slower than that of the microstates inside each mesostate. Under this condition, the microscopic configurations collected into a mesostate can be considered to always be in thermodynamic equilibrium while focusing on the dynamics at the level of the mesostates. In our coarse-grained treatment of the [3]-catenane motor, we therefore assumed that jumps between stations occur on a much slower time scale than the microscopic dynamics inside the stations.
In this scenario, the occupation probability at equilibrium of a given mesostate \(I\) in terms of the microscopic states is given by
\[p_{I}^{eq}=\sum_{\xi\in I}p_{\xi}^{eq}\propto\sum_{\xi\in I}e^{-\varepsilon_{\xi}/k_{B}T}=e^{-E_{I}/k_{B}T}\,, \tag{A1}\]
where the index \(\xi\) runs over all the microstates in the mesostate \(I\), \(\varepsilon_{\xi}\) labels the energy of microstate \(\xi\), and \(E_{I}\) is precisely the free energy of the mesostate \(I\) defined as
\[E_{I}=-k_{B}T\,\log\left(\sum_{\xi\in I}e^{-\varepsilon_{\xi}/k_{B}T}\right)\,. \tag{A2}\]
## Appendix B Geometric phase calculations
The substitution
\[\begin{array}{l}e^{\varepsilon_{a}}\to x>0\\ e^{\varepsilon_{b}}\to y>0\\ e^{\varepsilon_{c}}\to z>0\end{array} \tag{B1}\]
leads to
\[p_{A}^{eq}=\frac{x}{x+y+z}\,,\quad p_{B}^{eq}=\frac{y}{x+y+z}\,, \tag{B2}\]
and
\[\begin{array}{l}\lambda=k\left(x+y+z\right),\\ k_{AB}=k\,x\,,\quad\text{and}\quad k_{BA}=k\,y\,,\end{array} \tag{B3}\]
where we set \(k=\mathcal{A}\,e^{-\mathcal{B}}\). By evaluating \(\mathbf{V}_{BA}\) and \(d\mathbf{p}^{eq}\) in terms of the new variables \(x\), \(y\) and \(z\) we get
\[\Phi_{cyc}=\oint\mathbf{A}(\mathbf{r})\cdot d\mathbf{r} \tag{B4}\]
with
\[\mathbf{A}(\mathbf{r})=\frac{(-y,x,0)}{(x+y+z)^{2}} \tag{B5}\]
The calculation of \(\nabla\times\mathbf{A}\) yields
\[\nabla\times\mathbf{A}=\frac{2}{(x+y+z)^{3}}\,\mathbf{r} \tag{B6}\]
## Appendix C Step protocol
### Calculation of the work
From symmetry arguments the work in eq. (32) is equal to:
\[W_{cyc}=3\int_{0}^{\tau}p_{A}\dot{E}_{A}dt \tag{C1}\]
During the step protocol, the energy of state \(A\) is:
\[E_{A}=\varepsilon_{b}+\varepsilon_{c}=\begin{cases}\varepsilon&\text{if }0\leq t<\frac{2\tau}{3}\\ 0&\text{if }\frac{2\tau}{3}\leq t<\tau\end{cases} \tag{C2}\]
Therefore, \(W_{cyc}=3\,\varepsilon\left(p_{A}(\tau)-p_{A}(2\tau/3)\right)\), and using eq. (30) for \(p_{A}(t)\) we get
\[W_{cyc}=3\,\varepsilon\,\frac{1-e^{-\varepsilon}}{1+2e^{-\varepsilon}}\,\frac{1-x^{2}}{1+x+x^{2}}\qquad x=\exp\left(-\frac{\lambda\,\tau}{3}\right) \tag{C3}\]
### Calculation of the current
From eq. (35) the average current in a cycle is:
\[\begin{array}{l}J_{cyc}=-\frac{1}{\tau}\int_{0}^{\tau}\,k_{BA}\,p_{A}(t^{\prime})-k_{AB}\,p_{B}(t^{\prime})\,dt^{\prime}\\ \phantom{J_{cyc}=}=-\frac{k}{\tau}\int_{0}^{\tau}\,e^{\varepsilon_{b}}p_{A}(t^{\prime})-e^{\varepsilon_{a}}p_{B}(t^{\prime})\,dt^{\prime}\end{array} \tag{C4}\]
keeping in mind that \(p_{B}(t)=p_{A}(t+\tau/3)\) and the behavior of \(\varepsilon_{a}\) and \(\varepsilon_{b}\) in the step protocol, we get:
\[J_{cyc}=\frac{k}{\tau}(e^{\varepsilon}-1)\left[\int_{0}^{\tau/3}p_{A}(t)\,dt-\int_{\tau/3}^{2\tau/3}p_{A}(t)\,dt\right] \tag{C5}\]
using eq. (30) for \(p_{A}(t)\) we finally have:
\[J_{cyc}=\frac{1}{\tau}\frac{(1-e^{-\varepsilon})^{2}}{(1+2\,e^{-\varepsilon})^{2}}\frac{(1-x)^{3}}{1-x^{3}}\qquad x=\exp\left(-\frac{\lambda\,\tau}{3}\right) \tag{C6}\]

2302.04564 | The minimal length: a cut-off in disguise? | The minimal-length paradigm, a possible implication of quantum gravity at low energies, is commonly understood as a phenomenological modification of Heisenberg's uncertainty relation. We show that this modification is equivalent to a cut-off in the space conjugate to the position representation, i.e. the space of wave numbers, which does not necessarily correspond to momentum space. This result is generalized to several dimensions and noncommutative geometries once a suitable definition of the wave number is provided. Furthermore, we find a direct relation between the ensuing bound in wave-number space and the minimal-length scale. For scenarios in which the existence of the minimal length cannot be explicitly verified, the proposed framework can be used to clarify the situation. Indeed, applying it to common models, we find that one of them does, against all expectations, allow for arbitrary precision in position measurements. In closing, we comment on general implications of our findings for the field. In particular, we point out that the minimal length is purely kinematical such that, effectively, there is only one model of minimal-length quantum mechanics. | Pasquale Bosso, Luciano Petruzziello, Fabian Wagner | 2023-02-09T11:03:31Z | http://arxiv.org/abs/2302.04564v1

# The minimal length: a cut-off in disguise?
###### Abstract
The minimal-length paradigm, a possible implication of quantum gravity at low energies, is commonly understood as a phenomenological modification of Heisenberg's uncertainty relation. We show that this modification is equivalent to a cut-off in the space conjugate to the position representation, _i. e._ the space of wave numbers, which does not necessarily correspond to momentum space. This result is generalized to several dimensions and noncommutative geometries once a suitable definition of the wave number is provided. Furthermore, we find a direct relation between the ensuing bound in wave-number space and the minimal-length scale. For scenarios in which the existence of the minimal length cannot be explicitly verified, the proposed framework can be used to clarify the situation. Indeed, applying it to common models, we find that one of them does, against all expectations, allow for arbitrary precision in position measurements. In closing, we comment on general implications of our findings for the field. In particular, we point out that the minimal length is purely kinematical such that, effectively, there is only one model of minimal-length quantum mechanics.
When regularizing in quantum field theory, it is often (if somewhat naively) concluded that a finite cut-off in relativistic momentum space regularizing UV-divergences implies the existence of an underlying lattice structure. The corresponding lattice spacing provides a minimal length. In the literature on conventional minimal-length theories, on the other hand, it is common to interpret the minimal-length scale not as a physical length, but as a limit to the physically attainable resolution in distance measurements [1, 2, 3, 4, 5, 6]. In quantum mechanics, for example, this corresponds to a minimum for the standard deviation of the position operator
\[\Delta x_{a}\geq\ell, \tag{1}\]
with the newly introduced length scale \(\ell\). This interpretation attributes a fundamental "fuzziness" to the background spacetime itself owing to a modification of the Heisenberg algebra. Notwithstanding the apparent difference from the conventional cut-off, following [7] this kind of assumption has been used frequently to regularize integrals in phenomenological applications such as the brick wall model of black hole thermodynamics [8, 9, 10]. One may thus wonder in which way the minimal-length idea differs from a physical cut-off in momentum space.
In this paper, we show that a minimal-length scale as given in (1) is indeed equivalent to a hard cut-off. However, this cut-off does not bound momentum space, but rather the space of wave numbers, which we define as the space conjugate to the position representation. As a matter of fact, it is possible to explicitly relate the bound in wave-number space to the minimal-length scale. Yet, a deformation of the Heisenberg algebra immediately implies a modification of the de Broglie relation such that wave numbers and momenta cease to be proportional to each other [5, 11]. Therefore, momentum space may be unbounded even though wave-number space is not. This, it turns out, is the subtle difference in interpretation between applying a cut-off and deforming the Heisenberg algebra. Bear in mind, however, that the definition of a "physical" momentum cannot be motivated from the minimal length itself.
The interpretational difference becomes all the more pronounced once the coordinates become noncommutative. To cover this possibility, we generalize the concept of wave number to deformed Heisenberg algebras which entail a noncommutative geometry. The resulting (anisotropic) wave-number space continues to be bounded under the assumption of a minimal length. Similarly, the relation between this bound and the minimal-length scale can be generalized.
The present approach is far from being only of conceptual interest. It can be used as a tool to identify deformed Heisenberg algebras which possess a minimal localization and those that do not - also in situations where this may not be possible by other means. Applying this reasoning to the most commonly used models of the field, we indeed find one which, contrary to claims in the literature [12, 13], does not encompass a minimal-length scale.
To comply with the above purposes, the paper is structured as follows: first, we propose the argument for a bound in wave-number space for one spatial dimension (or equivalently for multiple commutative dimensions) in Section I.
This result is then generalized to noncommutative geometries in Section II. Section III is devoted to the application of the general framework to existing models. In section IV we comment on the general implications of our results for minimal-length models. Finally, we summarize and discuss our findings in Section V.
Throughout the work we will use natural units \(\hbar=c=1.\)
## I No cut-off, no minimal length
Let us first consider one spatial dimension, and assume the position of the system at hand to be given by the operator \(\hat{x}.\) Then, we may always find a conjugate wave number operator \(\hat{k}\) such that the ordinary Heisenberg algebra
\[[\hat{x},\hat{k}]=i, \tag{2}\]
is satisfied. While \(\hat{k}\) is not regarded as the physical momentum operator in conventional minimal-length theories, it is bound to exist, and can be used to construct a representation of the underlying deformed Heisenberg algebra.
In this Section, we will show that a lower bound of type (1) is equivalent to a bounded spectrum for \(\hat{k}\). This means that the minimal-length constraint imposes a cut-off on the conjugate wave-number space. Intuitively, one would expect that to happen: given a pair of observables satisfying the Heisenberg algebra (2), if the spectrum of \(\hat{k}\) is continuous and unbounded, it is a simple exercise to construct states which violate the inequality (1).
Consider, thus, a quantum system confined to a box of length \(2B\) in wave-number space, _i. e._ \(\mathrm{spec}(\hat{k})=\{k:k\in[-B,B]\}.\) To achieve this, we apply Dirichlet boundary conditions at \(k=\pm B\). Clearly, we can express any state \(\psi\) in terms of the eigenstates of \(\hat{x}^{2}\) as
\[\psi=\sum_{n=0}^{\infty}\left[a_{n}\frac{\sin[(n+1)\pi k/B]}{\sqrt{B}}+b_{n} \frac{\cos\left[(2n+1)\pi k/2B\right]}{\sqrt{B}}\right], \tag{3}\]
with the complex coefficients \(a_{n}\), \(b_{n}\) satisfying \(\sum_{n=0}^{\infty}(|a_{n}|^{2}+|b_{n}|^{2})=1.\) Since \(\hat{x}\) obeys the Heisenberg algebra with \(\hat{k}\), it can be represented as a derivative with respect to \(k\), _i. e._ \(\hat{x}\psi=i\partial_{k}\psi.\) We thus obtain for a generic state
\[\Delta x^{2}\equiv \langle\psi|\hat{x}^{2}\psi\rangle-\langle\psi|\hat{x}\psi\rangle ^{2}\leq\langle\hat{x}^{2}\rangle=\left(\frac{\pi}{2B}\right)^{2}\sum_{n=0}^ {\infty}\left[4|a_{n}|^{2}(1+n)^{2}+|b_{n}|^{2}(1+2n)^{2}\right]. \tag{4}\]
The right-hand side of this inequality is clearly minimal if \(|b_{0}|=1\) while all other coefficients vanish. Assuming a minimal length of the kind (1), we then obtain
\[\ell^{2}\leq\Delta x^{2}\leq\left(\frac{\pi}{2B}\right)^{2}. \tag{5}\]
Standard quantum mechanics would be recovered in the limit \(B\to\infty.\) However, this would violate inequality (5), _i. e._ it is impossible in the presence of a minimal length. Hence, a theory characterized by a minimal length cannot be described in terms of an unbounded wave number.
Starting from the above premises, the argument can be refined even more: it is possible to relate the bound \(B\) of the wave-number spectrum to the minimal length \(\ell\). To this aim, we first notice that the above model does not yield a preferred position. Therefore, there have to be states of smallest possible position uncertainty for every \(\langle\hat{x}\rangle\), all of which produce the same value for \(\Delta x.\) Then, it is sufficient to consider states satisfying \(\langle\hat{x}\rangle=0.\) Under these circumstances, the smallest possible position uncertainty is indeed given by
\[\Delta x=\frac{\pi}{2B}. \tag{6}\]
This quantity is bounded by the minimal length, thereby leading to the fundamental bound in wave-number space
\[B=\frac{\pi}{2\ell}. \tag{7}\]
Thence, provided that there is a minimal length for the position operator, the spectrum of the corresponding conjugate wave number operator is
\[\mathrm{spec}(\hat{k})=\{k:k\in[-\pi/2\ell,\pi/2\ell]\}. \tag{8}\]
In turn, a quantum theory in which the wave number conjugate to the position does not have a bounded spectrum does not have a minimal length. This result can be straightforwardly generalized to several spatial dimensions as long as the underlying geometry is commutative. As we will see in the following Section, noncommutative geometries are slightly more involved to deal with.
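Before moving on, we note that the bound (7) is easy to verify numerically by discretizing the box in wave-number space; in the sketch below, the grid size and the value of \(B\) are arbitrary assumptions.

```python
import numpy as np

# Sketch: discretize the box [-B, B] in wave-number space and verify that
# the pure-cosine ground mode of Eq. (3) saturates Delta x = pi/(2B),
# Eqs. (6)-(7). Grid size and the value of B are arbitrary assumptions.
B = 2.0
kk = np.linspace(-B, B, 40001)
dk = kk[1] - kk[0]
psi = np.cos(np.pi * kk / (2.0 * B)) / np.sqrt(B)   # the |b_0| = 1 state

dpsi = np.gradient(psi, dk)        # the position acts as x = i d/dk here
d2psi = np.gradient(dpsi, dk)

x_mean = np.trapz(psi * dpsi, dx=dk)        # <x> = i * this integral = 0
x2_mean = -np.trapz(psi * d2psi, dx=dk)     # <x^2> = -int psi psi'' dk
print(x_mean, np.sqrt(x2_mean), np.pi / (2.0 * B))  # last two nearly equal
```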
## II Noncommutative geometry
In general, minimal-length models are not understood in terms of conjugate variables \(\hat{x}\) and \(\hat{k}.\) Instead, they are based on a modified Heisenberg algebra expressed with the help of a "physical" momentum \(\hat{P}_{a},\) say
\[[\hat{x}_{a},\hat{P}_{b}]=i\left[f(\hat{P}^{2})\delta_{ab}+\bar{f}( \hat{P}^{2})\frac{\hat{P}_{a}\hat{P}_{b}}{\hat{P}^{2}}\right], [\hat{P}_{a},\hat{P}_{b}]=0, \tag{9}\]
with the two functions \(f,\)\(\bar{f}\) constrained to reduce to \(1\) and \(0\) in the low-energy limit, that is \(P^{2}\to 0,\) so as to guarantee the recovery of the Heisenberg algebra. Unless these two functions satisfy the relation [14; 15]
\[\bar{f}=\frac{2(\log f)^{\prime}\hat{P}^{2}}{1-2(\log f)^{\prime}\hat{P}^{2}}f, \tag{10}\]
the coordinates \(\hat{x}_{a}\) of the model (9) fail to commute. In this scenario, one can verify that
\[[\hat{x}_{a},\hat{x}_{b}]\propto 2\hat{x}_{[b}\hat{P}_{a]}, \tag{11}\]
where the proportionality factor depends on \(\hat{P}^{2},\) and is related to the functions \(f\) and \(\bar{f}\) via Jacobi identities. If coordinates are noncommutative in this way, there is no possibility to recover the undeformed \(d\)-dimensional Heisenberg algebra by merely choosing a new wave number-like variable while keeping the coordinates \(\hat{x}_{a}\) as they are; clearly, if we continue to use the \(\hat{x}_{a},\) the noncommutativity cannot be forced to disappear. Nevertheless, we can find a set of wave numbers which are conjugate to the respective coordinates.
In order to achieve this, it is instructive to follow a two-step procedure: firstly, we diagonalize the deformed Heisenberg algebra; secondly, we find a transformation which restores the undeformed Heisenberg algebra on the diagonal.
In that vein, we define another momentum coordinate \(\hat{p}_{a}=\bar{g}(\hat{P}^{2})\hat{P}_{a}.\) After some algebra, it can be shown that
\[[\hat{x}_{a},\hat{p}_{b}]=if\bar{g}\delta_{ab}+i\left[\bar{g}\bar{f}+2\left(f +\bar{f}\right)\bar{g}^{\prime}\hat{P}^{2}\right]\frac{\hat{P}_{a}\hat{P}_{b} }{\hat{P}^{2}}. \tag{12}\]
Here, we choose the second term of this equation to vanish. Accordingly, \(\bar{g}\) assumes the form
\[\bar{g}=\exp\left(-\int_{0}^{\hat{P}^{2}}\frac{\bar{f}(\Pi)}{2\Pi\left[f(\Pi )+\bar{f}(\Pi)\right]}\mathrm{d}\Pi\right), \tag{13}\]
where as usual \(\bar{g}(0)=1,\) implying that the momenta \(\hat{p}_{a}\) and \(\hat{P}_{a}\) are equal in the low-energy limit. As a result, we obtain the diagonal deformed Heisenberg algebra
\[[\hat{x}_{a},\hat{p}_{b}]=i\delta_{ab}\ g\circ\hat{P}^{2}(\hat{p}^{2}), \tag{14}\]
where we defined \(g\equiv f\bar{g}.\) For the sake of conciseness, henceforth we will omit the composition with \(\hat{P}^{2}\). Next, imposing the Jacobi identities, one can check that a diagonal algebra of this kind implies a commutator of the coordinates given by
\[[\hat{x}_{a},\hat{x}_{b}]=2g^{\prime}\hat{x}_{[b}\hat{p}_{a]}\equiv\theta\hat{ x}_{[b}\hat{p}_{a]}, \tag{15}\]
where we introduced the noncommutativity \(\theta(\hat{p}^{2})=2g^{\prime}(\hat{p}^{2})\). We note that, in the case of a commutative geometry, this first step would have already led to the Heisenberg algebra, _i. e._\(\theta=g^{\prime}=0\) would immediately imply \(g=1\). As a result, we could directly employ the reasoning laid out above for the one-dimensional case to conclude that the space spanned by the \(\hat{p}_{a}\) is to be bounded for a minimal length to appear.
For noncommutative geometries, however, we have to resort to a second transformation. To better convey the reason behind this step, it is instructive to consider the one-dimensional counterpart of the algebra (14). In this case, the wave number is related to the momentum \(\hat{p}\) as [16]
\[\hat{k}=\int_{0}^{\hat{p}}\frac{\mathrm{d}p^{\prime}}{g(p^{\prime 2})}. \tag{16}\]
In several dimensions, we can introduce the analogous transformation
\[\hat{k}_{a}=\int_{0}^{\hat{p}_{a}}\frac{\mathrm{d}p^{\prime}_{a}}{g\left(p^{ \prime 2}_{a}+\sum_{b\neq a}\hat{p}_{b}^{2}\right)}, \tag{17}\]
where we have explicitly separated the dependence on the component \(\hat{p}_{a}\) from the other components, with \(b\neq a\), in the function \(g(p^{2})\). This transformation is particularly nontrivial, because it does not preserve the isotropy of the underlying uncertainty relations: the integration is performed along a specific axis in momentum space, which introduces a preferred coordinate system. Therefore, a rotation in wave-number space does not correspond to a rotation of positions or momenta. This can be seen from the Jacobian
\[J_{ab}=\frac{\partial\hat{k}_{a}}{\partial\hat{p}_{b}}=\begin{cases}g^{-1}(\hat{p}^{2})&a=b,\\ -\hat{p}_{b}\int_{0}^{\hat{p}_{a}}\frac{\theta}{g^{2}}\,\mathrm{d}p^{\prime}_{a}&a\neq b,\end{cases} \tag{18}\]
which is nontrivial if the noncommutativity is different from \(0\), and cannot be expressed covariantly. Despite this, we still have \(\hat{p}_{a}|_{k_{a}=0}=0\) such that \(J_{ab}|_{k_{a}=0}=J_{ab}|_{k_{b}=0}=\delta_{ab}/g.\) Consequently, the algebra of observables becomes
\[[\hat{x}_{a},\hat{k}_{b}]=igJ_{ba}=\begin{cases}i&a=b,\\ -ig\hat{p}_{a}\int_{0}^{\hat{p}_{b}}\frac{\theta}{g^{2}}\mathrm{d}p^{\prime}_{ b}&a\neq b.\end{cases} \tag{19}\]
On the diagonal, the positions and the effective wave numbers satisfy the one-dimensional Heisenberg algebra (_i. e._ the wave numbers are conjugate to the respective coordinates). In general, the wave-number spectrum may be bounded to some domain \(D\) which, due to the anisotropic nature of the transformation (17), may not be isotropic. We will see some examples of this in Section III. The anisotropies crucially depend on the noncommutativity \(\theta\) of the coordinates and vanish for commutative backgrounds.
In light of the above, a question naturally arises: does the minimal length still imply a cut-off in this wave-number space? More precisely, is the lowest eigenvalue of the squared position in a given direction, say \((\hat{x}_{d})^{2}\), related to such a bound? We answer both questions in a representation-independent fashion in Appendix A. Here, we provide a simplified argument.
First, let us make an observation: if the background possesses non-vanishing spatial non-commutativity \(\theta\), the coordinates satisfy the uncertainty relation
\[\Delta x_{a}\Delta x_{b}\geq\frac{1}{2}|\langle\theta(\hat{p}^{2})\hat{p}_{[ a}\hat{x}_{b]}\rangle|. \tag{20}\]
To minimize the uncertainty along the direction \(x_{d}\), we need to consider states with large uncertainties in all orthogonal directions. In other words, we require \(\Delta x_{b}\to\infty\) for all \(b\neq d\), thus demanding that a state characterized by the smallest uncertainty \(\Delta x_{d}\) be infinitely peaked in momentum space in those directions. By virtue of Eq. (17), the property of being peaked in the origin carries over to the wave numbers. To further minimize the effect of the noncommutativity, whose absolute value (at least around the origin in momentum space) increases monotonically with \(\hat{p}^{2}\), it is to be expected that the peak should be situated in the origin of the respective directions. Indeed, for those states infinitely peaked in the origin, it can be shown that the right-hand-side of Eq. (20) always vanishes, _i. e._ they are not affected by the coordinate noncommutativity. This way, we can study the minimal length independently of the influence of the noncommutativity.
Furthermore, as effects of the geometry cease to play a role, the wave function saturates the uncertainty relations involving positions or wave numbers in the directions normal to \(p_{d}\) (this can also be inferred from the Jacobian (18) being diagonal at vanishing involved wave numbers). Investigating the states saturating uncertainty relations, in turn, is equivalent to investigating the underlying uncertainty relations themselves.
In momentum space, such a projection on the \(d-\)th axis can be obtained by reducing the state space to wave functions
\[\psi\simeq\psi_{d}(p_{d})\prod_{j=1}^{d-1}\frac{e^{-\frac{p_{j}^{2}}{2\epsilon}}}{\sqrt{2\pi\epsilon}}. \tag{21}\]
In the end, we will take the limit \(\epsilon\to 0\), thereby imposing that the involved Gaussians are infinitely peaked in the origin of momentum space. In the following, we intend to evaluate the position uncertainty in the \(d-\)th direction given the states (21).
As every modified Heisenberg algebra can be reduced to the diagonal type (14) by mere redefinition of momenta, we assume it to be the starting point. As a result, we may consider the momentum representation of the position operator
\[\hat{x}_{a}\psi=ig(p^{2})\dot{\partial}_{a}\psi, \tag{22}\]
where we introduced the momentum derivative \(\dot{\partial}_{a}=\partial/\partial p_{a}.\) The position operator is symmetric with respect to the integration measure \(\mathrm{d}^{d}p/g.\) Without loss of generality, we consider states with vanishing expected position \(\langle x_{d}\rangle\) (again there is no preferred position in the model). Therefore, we can write
\[\Delta x_{d}^{2}=-\int_{\mathcal{D}_{p}}\frac{\mathrm{d}^{d}p}{g(p^{2})}\psi^{ *}\left[g(p^{2})\dot{\partial}_{d}\right]^{2}\psi=-\int_{\mathcal{D}_{p}}\frac {\psi_{d}^{*}\left[g(p^{2})\dot{\partial}_{d}\right]^{2}\psi_{d}}{g(p^{2})} \left(\prod_{j=1}^{d-1}\frac{e^{-\frac{p_{j}^{2}}{2\epsilon}}}{\sqrt{2\pi \epsilon}}\mathrm{d}p_{j}\right)\mathrm{d}p_{d}, \tag{23}\]
where the domain of integration \(\mathcal{D}_{p}\) depends on the choice of the model. For vanishing \(\epsilon,\) the product in brackets just becomes a product of Dirac delta-distributions
\[\lim_{\epsilon\to 0}\Delta x_{d}^{2}=-\int_{\mathcal{D}_{p}}\frac{\psi_{d}^{*}\left[g(p^{2})\dot{\partial}_{d}\right]^{2}\psi_{d}}{g(p^{2})}\left(\prod_{b=1}^{d-1}\delta(p_{b})\,\mathrm{d}p_{b}\right)\mathrm{d}p_{d}, \tag{24}\]
where we applied the definition of the Dirac delta-distribution as an infinitely peaked Gaussian, _i. e._\(\delta(x)=\lim_{\epsilon\to 0}e^{-x^{2}/2\epsilon}/\sqrt{2\pi\epsilon}.\) Consequently, the integration is trivial, and we may immediately project on the \(d-\)th dimension to obtain
\[\lim_{\epsilon\to 0}\Delta x_{d}^{2}=-\int_{-\bar{p}_{d}}^{\bar{p}_{d}}\frac{ \mathrm{d}p_{d}}{g(p_{d}^{2})}\psi_{d}^{*}\left[g(p_{d}^{2})\dot{\partial}_{d }\right]^{2}\psi_{d}, \tag{25}\]
where, according to the model at hand, the effective bound to momentum space in the \(d-\)th dimension \(\bar{p}_{d}\) may be finite or infinite. At this point, redefining the integration variable as \(\mathrm{d}\bar{k}_{d}=\mathrm{d}p_{d}/g(p_{d}^{2})=\mathrm{d}k|_{p_{b}=0}\) for all \(b\neq d\) (which is indeed just the transformation (17) for vanishing transverse momenta), we obtain
\[\lim_{\epsilon\to 0}\Delta x_{d}^{2}=-\left.\int_{-B}^{B}\mathrm{d}k_{d}\psi_{d}^{ *}\frac{\partial^{2}\psi_{d}}{\partial k_{d}^{2}}\right|_{p_{b}=0}, \tag{26}\]
where, similarly to the one-dimensional case, \(B\) may be finite or infinite. The effective one-dimensional operator \(\hat{x}_{d}\psi=i\partial/\partial k_{d}\psi_{d}|_{p_{b}=0}\) is clearly unmodified with respect to the case of commuting coordinates. In short, for vanishing spread of the wave function \(\psi\) (given in Eq. (21)) in the transverse directions of momentum space (the limit \(\epsilon\to 0\)), the position uncertainty in the longitudinal direction is not affected by the presence of coordinate noncommutativity.
Hence, recalling the argument outlined in Section I, if \(B\) is infinite the position uncertainty can be made arbitrarily small. If it is not, the effective value of the bound can be related to the minimal length as
\[B=\lim_{\hat{p}_{d}\to\bar{p}_{d}}\left(\prod_{b=1}^{d-1}\lim_{\hat{p}_{b}\to 0}\right)\hat{k}_{d}=\frac{\pi}{2\ell}. \tag{27}\]
In a nutshell, the fact that a minimal length requires a bounded wave-number space holds true also for noncommutative scenarios if we define the wave numbers by the transformation (17).
Having shown that the approach of the present paper is valid also in several, possibly noncommutative dimensions, we are ready to apply it to existing models in the literature in order to check for the existence of minimal-length scales.
## III To bound or not to bound
Given a model of the shape (14), we have shown that the domain of the wave number defined in Eq. (17) has to be bounded for the model to have a minimal length. This is especially the case in the limit of vanishing transverse wave numbers as seen in Eq. (27), which is in complete correspondence to Eq. (5).
Let us first consider the one-dimensional counterpart of the model (14), namely
\[[\hat{x},\hat{p}]=ig(\hat{p}^{2}). \tag{28}\]
This algebra can be brought into a canonical form by finding the corresponding \(\hat{k}(\hat{p})\) which is conjugate to the \(\hat{x}\), _i. e._ finding Darboux-coordinates without modifying \(\hat{x}\). This has already been done in all generality in Eq. (16). By virtue of Eq. (8), whether the model at hand possesses a minimal length depends on the image of the function \(\hat{k}(\hat{p})\) being bounded for allowed values of \(\hat{p}\).2 Thus, we can immediately obtain the exact value of the minimal length.
Footnote 2: Some models also predict a maximal momentum \(\hat{p}\), _i. e._ a bounded momentum space. This can be read off from the preimage within which \(\hat{k}(\hat{p})\) is an invertible map, meaning that the Jacobian (see Eq. (18)) is non-degenerate.
A short inspection of Eq. (16) shows that it is equivalent to Eq. (17), say in direction \(d\), at vanishing transverse momenta
\[\left(\prod_{a\neq d}\lim_{p_{a}\to 0}\right)\hat{k}_{d}=\int_{0}^{\hat{p}_{ d}}\frac{\mathrm{d}\hat{p}^{\prime}_{d}}{g(\hat{p}^{\prime 2}_{d})}. \tag{29}\]
Domain and image of Eqs. (29) and (16), and with them the respective bounds (27) and (7), are clearly the same. Therefore, it is sufficient to consider all models in one dimension to search for the minimal length.
### One class of common models
Typically, the majority of the models investigated in the literature on deformed Heisenberg algebras [3; 12; 13; 17] belongs to one class, which is characterized by a relation of the form
\[[\hat{x},\hat{p}]=i\hbar(1+\beta\hat{p}^{2})^{\alpha}, \tag{30}\]
where \(\alpha>0\) identifies the model at hand while \(\beta\), having units of \([l^{2}]\), provides a length scale. This length scale is commonly associated with the minimal length. However, it is only in the case \(\alpha=1\), yielding \(\ell=\sqrt{\beta}\) (see [3]), that this connection can be worked out explicitly by applying the Robertson-Schrödinger relation [18; 19].3
Footnote 3: While in [13] it has been claimed to have been shown for the case \(\alpha=1/2\) as well, there is a flawed step between Eqs. (23) and (24) in that reference, explaining the divergence of this conclusion from our results below.
This is where the strength of the present approach comes in. Given a model of the kind (30), all we have to do is find the wave number \(\hat{k}\) and investigate its domain. By resorting to Eq. (16), the wave number and momentum operators are related by the expression [3; 11]
\[\hat{k}(\hat{p})=\int_{0}^{\hat{p}}\frac{\mathrm{d}\hat{p}^{\prime}}{(1+\beta\hat{p}^{\prime 2})^{\alpha}}=\frac{\hat{p}}{\sqrt{1+\beta\hat{p}^{2}}}\ {}_{2}F_{1}\left(\frac{1}{2},\frac{3}{2}-\alpha;\frac{3}{2};\frac{\beta\hat{p}^{2}}{1+\beta\hat{p}^{2}}\right), \tag{31}\]
where \({}_{2}F_{1}\) is the Gaussian hypergeometric function. To evaluate the limit \(\hat{p}\rightarrow\infty\) for any positive value of \(\alpha\), it is convenient to differentiate the models with \(\alpha\leq 1/2\) from the ones where \(\alpha>1/2\).
* \(\alpha\leq\frac{1}{2}:\) for these models we find \[\hat{k}(\hat{p})\geq\int_{0}^{\hat{p}}\frac{\mathrm{d}\hat{p}^{\prime}}{\sqrt {1+\beta\hat{p}^{\prime 2}}}=\frac{\mathrm{arcsinh}\left(\sqrt{\beta}\hat{p} \right)}{\sqrt{\beta}}.\] (32) Both image and domain of this function are unbounded. In other words, \(\hat{p}\)-space is unbounded and \(\hat{k}\) diverges in the limit \(\hat{p}\rightarrow\infty\). Thus, these models do not incorporate a minimal length.
* \(\alpha>1/2:\) in this case, using Gauss' summation theorem [20], we have \[\lim_{p\rightarrow\infty}\frac{p}{\sqrt{1+\beta p^{2}}}\ {}_{2}F_{1}\left(\frac{1}{2},\frac{3}{2}-\alpha;\frac{3}{2};\frac{\beta p^{2}}{1+\beta p^{2}}\right)=\frac{\sqrt{\pi}\Gamma(\alpha-\frac{1}{2})}{2\sqrt{\beta}\Gamma(\alpha)}\] (33) which is finite for \(\alpha>\frac{1}{2}\), implying the minimal length \[\ell=\frac{\sqrt{\pi\beta}\Gamma(\alpha)}{\Gamma(\alpha-\frac{1}{2})}.\] (34)
This function is displayed in Fig. 1. As can be gathered from there, the minimal length decreases for decreasing \(\alpha\) and vanishes at the boundary value \(\alpha=1/2.\) Furthermore, for \(\alpha=1\), we obtain \(\ell=\sqrt{\beta}\), in exact correspondence with the result derived from the Robertson-Schrödinger relation [3].
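These closed-form results admit a quick numerical cross-check. The sketch below (with the illustrative choice \(\beta=1\)) integrates Eq. (16) for \(g=(1+\beta p^{2})^{\alpha}\) and compares the resulting minimal length with Eq. (34):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

beta = 1.0
for alpha in (0.75, 1.0, 2.0):        # representative models with alpha > 1/2
    # B = lim_{p -> inf} k(p) for g = (1 + beta p^2)^alpha, cf. Eqs. (16) and (33)
    B, _ = quad(lambda p: (1 + beta * p**2) ** (-alpha), 0, np.inf)
    ell = np.pi / (2 * B)             # Eq. (7)
    ell_closed = np.sqrt(np.pi * beta) * gamma(alpha) / gamma(alpha - 0.5)  # Eq. (34)
    print(alpha, ell, ell_closed)     # the two values agree
```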
To show how the case \(\alpha=1\)[3] plays out in two dimensions, the region of allowed wave numbers is displayed in Fig. 2. It is clearly bounded. In particular, at vanishing transverse wave number, _i. e._ on the axes, the bound equals exactly \(\pi/2\sqrt{\beta}\) as expected. Furthermore, it is possible to see the anisotropy of the wave-number representation reflected in the star-like shape of the region.
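The star-like region itself can be reproduced by evaluating the transformation (17) numerically. A sketch for \(\alpha=1\) (Python, \(\beta=1\), with a finite momentum grid standing in for the full plane):

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
g = lambda p2: 1 + beta * p2          # the case alpha = 1 of Eq. (30)

def k_of_p(p1, p2):
    # Eq. (17): integrate along each momentum axis at fixed transverse momentum
    k1, _ = quad(lambda q: 1 / g(q**2 + p2**2), 0, p1)
    k2, _ = quad(lambda q: 1 / g(q**2 + p1**2), 0, p2)
    return k1, k2

ps = np.linspace(-40, 40, 81)         # image of this grid approximates Fig. 2
ks = np.array([k_of_p(p1, p2) for p1 in ps for p2 in ps])
print(np.abs(ks).max(), np.pi / 2)    # on-axis values approach pi/(2 sqrt(beta))
```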
The boundary case \(\alpha=1/2\) is of particular interest due to it having been the basis for one of the very foundational works of the field [12]. We show the domain of its wave-number space in two dimensions in Fig. 3. In contrast to the example \(\alpha=1\) shown in Fig. 2, this region is clearly unbounded. To support our finding that this model does not possess a minimal length, we have explicitly constructed states which satisfy the proposed uncertainty relation and at the same time allow for infinite localizability in Appendix B.
Figure 1: Value of the minimal length for the one-parameter family of minimal-length models (30) as a function of the model classifier \(\alpha\) evaluated in terms of the model parameter \(\beta\).
Figure 2: Domain of the wave number \(\hat{k}\) for the case \(\alpha=1\) of the family of models in Eq. (30) in two dimensions. The region is bounded. In particular, on the axes the wave numbers do not exceed the value \(\pi/2\sqrt{\beta}\).
Figure 3: Domain of the wave number \(\hat{k}\) for the case \(\alpha=1/2\) of the family of models in Eq. (30) in two dimensions. Notice that such a model is characterized by an unbounded domain. Specifically, when \(k_{1}=0\), \(k_{2}\) can acquire any real value, and vice-versa.
### Other models
There are a number of other common ansätze which are not of the kind (30). These models may even have a bounded momentum (\(\hat{p}\)) space but no minimal length or vice-versa. The results for some of them are summarized in Table 1. We find that, contrary to the claim in [21] by one of the authors of the present paper, the model \(g=\sqrt{1-\beta\hat{p}^{2}}\) does actually predict a minimal length. Apart from that, the bounds reflect what was known in the literature.

| \(g(\hat{p}^{2})\) | wave number \(\hat{k}(\hat{p})\) | maximal momentum (\(\hat{p}\)) | minimal length | Ref. |
| --- | --- | --- | --- | --- |
| \(1-\beta\hat{p}^{2}\) | \(\mathrm{arctanh}(\sqrt{\beta}\hat{p})/\sqrt{\beta}\) | \(1/\sqrt{\beta}\) | none | [22; 23] |
| \(e^{\beta\hat{p}^{2}}\) | \(\frac{\sqrt{\pi}}{2\sqrt{\beta}}\,\mathrm{Erf}(\sqrt{\beta}\hat{p})\) | none | \(\sqrt{\pi\beta}\) | [24] |
| \(\frac{1}{1-\beta\hat{p}^{2}}\) | \(\hat{p}\left(1-\frac{\beta\hat{p}^{2}}{3}\right)\) | \(1/\sqrt{\beta}\) | \(3\pi\sqrt{\beta}/4\) | [25; 26] |
| \(\sqrt{1-\beta\hat{p}^{2}}\) | \(\arcsin(\sqrt{\beta}\hat{p})/\sqrt{\beta}\) | \(1/\sqrt{\beta}\) | \(\sqrt{\beta}\) | [21] |

Table 1: Wave numbers, momentum-space bounds (if existent) and minimal lengths (if existent) for common deformed Heisenberg algebras. The last column indicates the references associated to the models.
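The bounded entries of Table 1 follow from the same one-dimensional integral, Eq. (16). A sketch verifying them (again with \(\beta=1\)):

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
models = {                            # g as a function of p, and the end of momentum space
    "exp(b p^2)":      (lambda p: np.exp(beta * p**2),      np.inf),
    "1/(1 - b p^2)":   (lambda p: 1 / (1 - beta * p**2),    1 / np.sqrt(beta)),
    "sqrt(1 - b p^2)": (lambda p: np.sqrt(1 - beta * p**2), 1 / np.sqrt(beta)),
}
for name, (g, p_max) in models.items():
    B, _ = quad(lambda p: 1 / g(p), 0, p_max)   # wave-number bound, Eq. (16)
    print(name, np.pi / (2 * B))                # sqrt(pi*beta), 3*pi*sqrt(beta)/4, sqrt(beta)

# for g = 1 - beta p^2 the integral (an arctanh) diverges as p -> 1/sqrt(beta),
# so B is infinite and no minimal length arises, as in the first row of the table
```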
Having gathered the results on different minimal-length models, it is time to comment on the minimal length and how the multitude of distinct realizations of it is to be interpreted.
## IV The essence of the minimal length
Throughout this paper, we have aimed at distilling the very foundation of the minimal-length idea. Nevertheless, we have never had to refer to the dynamics of a system, _i. e._ its Hamiltonian. This explicitly shows that the minimal length is to be explained on the level of kinematics. It is a property of the background on top of which we define a quantum theory.
Any interpretation in terms of a "physical" momentum (throughout the paper denoted as \(\hat{p}\) or \(\hat{p}_{a}\)) which satisfies some modified Heisenberg algebra requires additional structure, while the bound in wave-number space is sufficient to fully characterize the minimal length. The ad-hoc definition of an additional momentum (which may indeed be useful from the point of view of interpretation or calculation) has no physical consequences. However, the choice of Hamiltonian made in the foundational papers on minimal-length quantum mechanics (e. g. [3])
\[H=\frac{\hat{p}^{2}}{2m}+V(x), \tag{35}\]
and countless times in the literature since, does inherit a degree of arbitrariness from it. Why, for example, should we not choose the Hamiltonian
\[H=\frac{\hat{k}^{2}}{2m}+V(x) \tag{36}\]
instead, as suggested in [27]? The effect of the minimal length would still be included by the bound in wave-number space. We thus see that all different minimal-length models, while being kinematically equivalent, only differ in their dynamics, and there is no physical reason to prefer one model over the other (as long as both actually predict a minimal length). In other words, the multitude of approaches only adds a layer of modification to the Hamiltonian, which cannot be motivated by the existence of a minimal length itself.4
Footnote 4: In the context of noncommutative backgrounds – themselves an additional assumption – Hamiltonians of the type (36) break isotropy in accordance with Eq. (17). This may indeed be considered a good reason to deform the Hamiltonian such that it is of the form (35).

In short, _there is only one model of minimal-length quantum mechanics._
## V Concluding remarks
While minimal-length models have been investigated for quite some time now in the context of quantum gravity phenomenology, a clear definition of what the minimal length exactly entails had not been given up until now. We have closed this gap by showing that it boils down to a cut-off in the space of wave numbers, _i. e._ the conjugates to the positions. This cut-off is quantitatively related to the minimal length. Providing a suitable definition of wave numbers on noncommutative backgrounds, we have generalized the relation to models including coordinate noncommutativity.
The relation between the minimal-length scale and the bound in wave-number space makes it possible to use the framework introduced here to check specific deformed Heisenberg algebras for the existence of minimal lengths. Considering some of the most common models, we have found that one of the original ansätze [12], contrary to claims in the literature, does not entail a minimal length.
A most important property of the minimal length we have distilled in this paper consists in it being solely kinematical: every model with a bound in wave-number space contains a minimal length, independently of the Hamiltonian underlying the dynamics. Apart from that, introducing a momentum operator \(\hat{p}=\hat{p}(\hat{k})\), while possibly making (especially perturbative) calculations more tractable, just amounts to a change of variables. Making the choice of Hamiltonian dependent on change of variables inherits a degree of arbitrariness. It is not a direct effect of the minimal length.
###### Acknowledgements.
The authors acknowledge networking support by the COST Action CA18108 and would like to thank M. Fadel and M. Maggiore for the helpful conversation. L.P. is grateful to the "Angelo Della Riccia" foundation for the fellowship awarded to support his study at Universität Ulm.
|
2308.11654 | Large Transformers are Better EEG Learners | Pre-trained large transformer models have achieved remarkable performance in
the fields of natural language processing and computer vision. However, the
limited availability of public electroencephalogram (EEG) data presents a
unique challenge for extending the success of these models to EEG-based tasks.
To address this gap, we propose AdaCT, plug-and-play Adapters designed for
Converting Time series data into spatio-temporal 2D pseudo-images or text
forms. Essentially, AdaCT-I transforms multi-channel or lengthy single-channel
time series data into spatio-temporal 2D pseudo-images for fine-tuning
pre-trained vision transformers, while AdaCT-T converts short single-channel
data into text for fine-tuning pre-trained language transformers. The proposed
approach allows for seamless integration of pre-trained vision models and
language models in time series decoding tasks, particularly in EEG data
analysis. Experimental results on diverse benchmark datasets, including
Epileptic Seizure Recognition, Sleep-EDF, and UCI HAR, demonstrate the
superiority of AdaCT over baseline methods. Overall, we provide a promising
transfer learning framework for leveraging the capabilities of pre-trained
vision and language models in EEG-based tasks, thereby advancing the field of
time series decoding and enhancing interpretability in EEG data analysis. Our
code will be available at https://github.com/wangbxj1234/AdaCE. | Bingxin Wang, Xiaowen Fu, Yuan Lan, Luchan Zhang, Wei Zheng, Yang Xiang | 2023-08-20T12:54:17Z | http://arxiv.org/abs/2308.11654v2 | # Large Transformers are Better EEG Learners
###### Abstract
Pre-trained large transformer models have achieved remarkable performance in the fields of natural language processing and computer vision. Since the magnitude of available labeled electroencephalogram (EEG) data is much lower than that of text and image data, it is difficult for transformer models pre-trained on EEG to be developed as large as GPT-4 100T to fully unleash the potential of this architecture. In this paper, we show that transformers pre-trained on images as well as text can be directly fine-tuned for EEG-based prediction tasks. We design AdaCE, plug-and-play **A**dapters for **C**onverting **E**EG data into image as well as text forms, to fine-tune pre-trained vision and language transformers. The proposed AdaCE module is highly effective for fine-tuning pre-trained transformers while achieving state-of-the-art performance on diverse EEG-based prediction tasks. For example, AdaCE on the pre-trained Swin-Transformer achieves 99.6%, an absolute improvement of 9.2%, on the EEG-decoding task of human activity recognition (UCI HAR). Furthermore, we empirically show that applying the proposed AdaCE to fine-tune larger pre-trained models can achieve better performance on EEG-based prediction tasks, indicating the potential of our adapters for even larger transformers. The plug-and-play AdaCE module can be applied to fine-tuning most of the popular pre-trained transformers on many other time-series data with multiple channels, not limited to EEG data and the models we use. Our code will be available at [https://github.com/wangbxj1234/AdaCE](https://github.com/wangbxj1234/AdaCE).
## 1 Introduction
Electroencephalogram, or EEG, a non-invasive way to measure brain activity, contains a wealth of information about the functioning of the brain, which makes it an attractive source of data for deep learning applications. The development of deep learning in EEG feature analysis has shown great promise for new diagnostic and therapeutic tools in the field of neurological disorders. EEG signals are usually measured by placing electrodes on the scalp and represented as a two-dimensional matrix or graph, where time is one dimension and the location of electrodes is another. Most EEG-decoding approach researchers regard multiple channels as the spatial dimension of EEG data, corresponding to the temporal dimension.
Large language models (LLMs), such as GPT [1][2][3][4], have achieved remarkable results in various traditional natural language processing tasks. [5] combined the pre-trained BERT-like model [6] with their local attention modules for the EEG-To-Text decoding and sentence sentiment classification tasks, in which they treated EEG sequences as encoded sentences. However, regarding a sequence of multi-channel EEG data as a sentence may neglect the spatial dimension of EEG signals, which limits the performance of this approach in prediction tasks. Most existing EEG decoding methods construct special CNN or transformer modules to extract features from the spatial dimension, abandoning which could lead to a significant drop in model performance [7][8][9][10].
We are committed to designing a solution that feeds the complete information of EEG, as a two-dimensional matrix, into the pre-trained model without requiring any additional feature-extraction neural networks, which would introduce additional parameters to be trained. During the fine-tuning process, the gradients of these parameters would need to be backpropagated throughout the entire model, significantly increasing the computational cost.
[11] showed that common pre-trained models have low intrinsic dimension, which leads to their better generalization performance when fine-tuned on traditional tasks. Could the powerful generalization ability of pre-trained vision transformers help us complete EEG prediction tasks? To validate this idea, in this paper, we design the EEG-to-Image adapter for converting multi-channel or long single-channel EEG data into image data. We also design an EEG-to-Text adapter for converting moderate single-channel EEG data into text, which effectively solves the overflow issue of regarding EEG as input text for LLMs. These two adapters are collectively referred to as AdaCE, **Adapters** for **C**onverting EEG data into image and text forms, as shown in Figure 1. The proposed AdaCE achieves state-of-the-art performance on diverse EEG-based prediction or classification tasks. We also show that applying the proposed AdaCE to fine-tune larger transformers can achieve even better performance on these tasks. Taking Epilepsy Seizure Recognition [12] as an example, we show that the fine-tuned GPT-2 model [2] achieves 98.7% on the epilepsy seizure prediction task, and that fine-tuned language transformers of the same architecture get an average improvement of 1.3% when the number of parameters changes from about 100 million to 300 million.
The main contributions of our paper are as follows:
* We show that large models (LMs) pre-trained from images as well as text can be directly fine-tuned for EEG-based prediction tasks without introducing extra parameters to be trained. This provides a new application scenario for LMs.
* We propose the AdaCE-to-Image, adapter for converting multi-channel EEG into images, which can directly feed the complete information of two-dimensional EEG data into the pre-trained vision transformers.
* We also design the AdaCE-to-Text, adapter for converting moderate single-channel EEG data into text, which effectively solves the overflow issue of regarding EEG as input text for LLMs.
* The proposed AdaCE module preserves the feature information of each original data point to the maximum extent in two different scenarios, achieving state-of-the-art results in several EEG prediction tasks.
* Furthermore, we empirically show that applying the proposed AdaCE to fine-tune larger pre-trained models can achieve better performance on EEG-based predicting tasks, indicating the potential of our adapters for even larger transformers.
* The plug-and-play AdaCE module could be applied to fine-tuning most of the popular pre-trained large models on other time-series data with multiple channels.
## 2 Related Works
### Transformer Combined with CNN
Whether long-range correlations are essential for decoding EEG data depends on the specific EEG data and decoding task. In some cases, long-range correlation can be important for accurately decoding EEG signals, and the attention-based transformer architecture [13] will be a more suitable choice for decoding EEG data to have better capability to extract long-range dependencies.
Figure 1: Framework: Adapters to convert EEG into images and text for fine-tuning pre-trained large transformers.
[9] used a combined Transformer+CNN model, in which they expected the transformer module to pre-extract temporal and spatial features by calculating attention across time and space separately, while [10] applied a CNN+Transformer architecture, using the CNN module to pre-extract temporal and spatial features by calculating convolution across time and space separately. Most existing Transformer+CNN EEG-decoding methods would face performance degradation without the pre-extraction module [7][8][9][10].
However, adding the attention or CNN pre-extraction module to the pre-trained large transformer will introduce additional parameters to be trained. During the fine-tuning process, the gradients of these parameters need to be backpropagated throughout the entire model, which will significantly increase the computational cost.
### Self-supervised Learning
Considering the cost of labeling and the limited number of labeled EEG data, [8] proposed the time-series self-supervised paradigm by conducting different temporal and contextual data augmentation for contrast. Soon after, they also proposed a Class-Aware semi-supervised learning paradigm [14]. Their method has made a significant contribution towards addressing the problem of limited availability of labeled EEG data, which is much lower in magnitude compared to that of text and image data. However, the absolute prediction accuracy achieved by contrastive learning ([8][14]) on diagnostic tasks remains limited.
How can we get better performance with limited training data? The authors of GPT-2 [2] demonstrated that large language models could perform many traditional language tasks without any parameter or architecture modification, a capability referred to as "zero-shot" learning. Given that large language models have already exhibited the remarkable ability to perform "zero-shot" learning, the issue of limited training samples is no longer a major bottleneck. To further explore LMs' potential, we fine-tuned pre-trained transformers on diverse EEG datasets to pursue better performance than that from the "zero-shot" learning setting.
### Synthesize EEG by GPT
[15] and [16] used GPT to synthesize EEG signals for data augmentation. However, synthetic signals may not necessarily be reliable. When training models for future medical diagnosis, it is preferable to use real data. [2] and [3] have provided evidence that large language models possess the ability to perform few-shot or even zero-shot learning in conventional natural language processing tasks. Therefore, we did not introduce any synthetic biological signals machine-generated by LLMs in our experiments.
### BERT-like Transformers on EEG Dataset
[5] proposed to combine the pre-trained BART [6] with their local attention modules for EEG-To-Text decoding as well as sentence sentiment classification tasks. They assume that the brain has already encoded the data, treating each EEG sequence as an encoded sentence. However, regarding a sequence of multi-channel EEG data as a sentence may neglect the spatial dimension of EEG signals, which limits the performance of this approach in prediction tasks.
Moreover, the local attention modules will introduce additional parameters that need to be trained. As [5] placed them before the pre-trained language model, the gradients of these parameters must be backpropagated throughout the entire architecture. The pre-extraction modules will significantly increase the computational cost when we switch from BART to a larger model.
Figure 2: AdaCE-to-Image: Adapter for Converting EEG into images to maintain the complete feature information.
[17] and [18] both trained BERT-like transformers on their EEG dataset, which was the beginning of pre-training transformer models on EEG datasets. Although it is widely known that GPT-3 achieved better performance than BERT with the help of a larger number of parameters, pre-trained transformers based on EEG datasets can't go larger since the magnitude of publicly available labeled EEG data is much lower than that of text and image data.
Unlike the approach taken by [5] in which multi-channel EEG sequences are treated as encoded sentences, [18] consider each EEG channel as a sentence and process multiple channels as a whole text. Although it preserves all the data points, their approach transforms the parallel relationships between different channels into contextual relationships, which may overlook the parallel nature of the data across channels.
Since the input sequence length of large language transformers is limited due to the quadratic complexity of the attention calculation, connecting multiple channels as one text input usually causes the input to overflow the model's context length.
## 3 AdaCE: Adapters for Converting EEG Data into Images as well as Text
EEG signals are usually measured by placing electrodes on the scalp and represented as a two-dimensional matrix or graph. The magnitude and sign of numbers in EEG data indicate the polarity and strength of the electrical activity of the brain neurons: a positive value generally indicates that the neuron is firing, while a negative value indicates inhibition; the magnitude of the data indicates how strongly the neuron fires or is inhibited. Deep learning models can be used to analyze EEG signals and extract meaningful information for downstream tasks.
Fine-tuning pre-trained large models may be a promising solution for higher prediction accuracy, but these vision or language transformers were not designed for decoding EEG. The existing EEG-to-TEXT methods [5][18] neglected the original information in either spatial dimension or parallel relationship among different channels. When it comes to fine-tuning larger transformers, we also need to address the extra computational cost from local modules and overflow issues caused by long time series.
In this section, we introduce our solution for fine-tuning pre-trained large transformers on EEG datasets. We design AdaCE, plug-and-play **Ada**pters for converting **EEG** data into image as well as text forms, to fine-tune pre-trained vision and language transformers.
We first propose the EEG-to-Image adapter for converting multi-channel EEG into images, which can directly feed the complete information of two-dimensional EEG data into the pre-trained vision transformers. Apart from this, we also design the EEG-to-Text adapter for converting moderate single-channel EEG data into text, which effectively solves the overflow issue of regarding EEG as input text for LLMs.
### AdaCE-to-Image
To maintain the complete feature information of two-dimensional EEG data, we propose to convert multi-channel EEG into images, as shown in Figure 2.
The first step is to construct a dimension of three corresponding to the RGB pattern required by the pre-trained vision transformer, and we propose two schemes for different conditions:
* In the case where the size of EEG's spatial dimension is a multiple of three, we decompose the spatial dimension into two new dimensions, with the first newly generated dimension having a dimensionality of three.
* In other cases, we apply interpolation on the spatial dimension to make its dimensionality multiple of three and apply the same type of decomposition.
In the second step, we apply reshape operation and bilinear interpolation on the last two dimensions to align their dimensionalities with the input height and width of the pre-trained vision transformers. The interpolation can be either upsampling or downsampling, depending on the size of the vector obtained in the first step.
The proposed adapter can also be used to convert single-channel EEG into images. In this case, we modify the first step by inserting a new dimension of size one before the temporal dimension and replicate data along the newly generated dimension to three for alignment with the RGB pattern.
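A minimal PyTorch sketch of this conversion is given below; the function name and the exact interpolation choices are illustrative rather than the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def adace_to_image(eeg: torch.Tensor, size: int = 224) -> torch.Tensor:
    """Convert a (channels, time) EEG matrix into a 3 x size x size pseudo-image."""
    c, t = eeg.shape
    if c == 1:                                    # single channel: replicate to RGB
        x = eeg.unsqueeze(0).repeat(3, 1, 1)      # (3, 1, t)
    else:
        if c % 3 != 0:                            # interpolate channels to a multiple of 3
            c = 3 * ((c + 2) // 3)
            eeg = F.interpolate(eeg.t().unsqueeze(0), size=c,
                                mode="linear", align_corners=False)[0].t()
        x = eeg.reshape(3, c // 3, t)             # first new dimension matches RGB
    # bilinear resampling of the last two dims to the vision transformer's input
    return F.interpolate(x.unsqueeze(0), size=(size, size),
                         mode="bilinear", align_corners=False)[0]

print(adace_to_image(torch.randn(9, 128)).shape)  # UCI HAR: torch.Size([3, 224, 224])
```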
We empirically show that the proposed EEG-to-Image adapter is highly effective for fine-tuning large vision transformers on diverse EEG-based prediction tasks while achieving state-of-the-art performance.
### AdaCE-to-Text
For EEG data with a moderate number of samples and a single channel, we propose another adapter for EEG-to-Text, which effectively solves the overflow issue of regarding EEG as input text for LLMs.
When we treat EEG data as text, the most intuitive idea is to connect all the numerical points from the same instance to formulate a text, where each sampling point at a timestamp is mapped to a word in the language. However, the input sequence lengths of existing pre-trained language transformers are limited due to the quadratic complexity of attention calculation. Under these conditions, we aim to retain the essential information of each sampling point while using the minimum number of digits when converting the EEG signals into numerical sequences.
MinMaxScaler or other normalization methods would lead to too many decimal digits. We propose to concatenate the raw integer data directly into text without the normalization step. For some datasets, the provided data may already be normalized as decimal values. In such cases, we amplify the values by a factor of \(\alpha\) and round them to the nearest integer.
Another issue is that the size of EEG's temporal dimension can be larger than the pre-trained model's input size. Automatic truncation would lead to severe information loss. Since EEG data is often densely sampled over a short period of time (such as a few seconds), its adjacent sampling points on the timestamp exhibit obvious continuity characteristics. Given this feature, we downsample the data using non-overlapping sliding windows to the target language transformer's maximum input length, as shown in Figure 3. We do not choose a 1D CNN layer with finite kernel size, since we believe that using sliding windows better preserves the original information of the EEG data for tuning pre-trained transformers.
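A sketch of this conversion follows; the window average used for downsampling is our illustrative choice, as the pooling operation within each window is not specified above:

```python
import numpy as np

def adace_to_text(signal, max_len=1024, alpha=1000):
    """Render a single-channel EEG trace as a sequence of integer 'words'."""
    x = np.asarray(signal, dtype=float)
    if not np.issubdtype(np.asarray(signal).dtype, np.integer):
        x = x * alpha                             # amplify normalized decimal values
    if len(x) > max_len:                          # non-overlapping sliding windows
        win = int(np.ceil(len(x) / max_len))
        x = np.array([w.mean() for w in np.array_split(x, int(np.ceil(len(x) / win)))])
    return " ".join(str(int(round(v))) for v in x)

print(adace_to_text(np.random.randn(178)))        # one 1-second chunk as 178 integer words
```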
The experimental results confirmed the effectiveness of the proposed adapter for EEG-to-Text. We fine-tuned GPT-2-medium 355M with AdaCE on the Epileptic Seizure Dataset [12] to state-of-the-art performance.
However, it should be noted that based on our experience, when the generated text exceeds three times the acceptable input length of the pre-trained model, using non-overlapping sliding windows for downsampling may result in significant information loss. In such cases, the EEG-to-Image adapter can be used to process this single-channel EEG data.
### Fine-tune Pre-trained Transformers on Converted Dataset
Despite the varying structures of different pre-trained large models, the principle of using most multi-layer transformers for prediction tasks remains the same. We follow the fine-tuning approach of [1]. Suppose we train \(K\) instances at a time; then the fine-tuning objective is to maximize the following function:
\[L=\sum_{k=1}^{K}\log\left(H_{c}\left(z_{k}^{n}\right)\right), \tag{1}\]
where \(z_{k}^{n}\) is the last vector of the transformer's output sequence for the \(k\)-th instance, and \(H_{c}\) is the classification head consisting of a linear layer followed by a softmax layer.
Figure 3: AdaCE-to-Text: Adapter for Converting EEG into text while solving the overflow issue.
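In PyTorch terms, maximizing Eq. (1) amounts to minimizing a negative log-likelihood over the classification head. A sketch with an assumed hidden size of 768:

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """H_c in Eq. (1): a linear layer followed by a (log-)softmax over the classes."""
    def __init__(self, hidden, n_classes):
        super().__init__()
        self.linear = nn.Linear(hidden, n_classes)

    def forward(self, z_n):                 # z_n: (K, hidden) last output vectors
        return torch.log_softmax(self.linear(z_n), dim=-1)

head = ClassificationHead(hidden=768, n_classes=2)
z_n = torch.randn(16, 768)                  # a batch of K = 16 instances
labels = torch.randint(0, 2, (16,))
loss = nn.NLLLoss()(head(z_n), labels)      # minimizing this maximizes Eq. (1)
```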
## 4 Experiments
We load the pre-trained transformer models for image or sequence classification from huggingface [19] and evaluate our method on diverse EEG decoding tasks: UCI HAR [20], Epileptic Seizure Recognition [12], and Sleep-EDF [21]. Following the setting in [8], we split the datasets into 60%, 20%, and 20% for training, validating, and testing. We fine-tune the pre-trained large transformers using Huggingface Trainer [19] with AdamW optimizer, initial learning rate of 5e-5, training batch size per device of 16, and gradient accumulation steps of 4. We report the prediction accuracy and macro-averaged f1 scores. Each experiment we conduct is trained within 20 epochs, and further training does not significantly improve the performance. For experiments involving the conversion of EEG data into text, we set the hyperparameter \(\alpha\) to 1000. We conduct fine-tuning with PyTorch 1.13.0+cu116 and Huggingface transformers 4.31.0 on two NVIDIA Quadro RTX 8000 GPUs.
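A sketch of this setup with the Hugging Face API is shown below; `train_ds` and `val_ds` stand for already-converted dataset splits and are assumed to exist:

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = model.config.eos_token_id   # GPT-2 defines no pad token

args = TrainingArguments(
    output_dir="adace-out",
    learning_rate=5e-5,                     # hyperparameters as stated above
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    num_train_epochs=20,
    evaluation_strategy="epoch",
)
Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=val_ds).train()        # AdamW is the Trainer default
```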
### UCI HAR Dataset
The Human Activity Recognition Dataset [20] was collected from 30 subjects performing six different activities (Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, Laying), consisting of inertial sensor data that was collected using a smartphone carried by the subjects. The data has nine channels: three for acceleration signals of X, Y, and Z; three for body acceleration obtained by subtracting gravity from the total acceleration of X, Y, and Z; three for the angular velocity vector of X, Y, and Z. There are 7352 pieces of EEG data, each piece containing 9*128 data points.
Since this EEG dataset has multiple channels, we choose to fine-tune pre-trained vision transformers with the proposed EEG-to-Image adapter. The pre-trained models we choose are Swin-Transformer [24] base model (87M) pre-trained on ImageNet-21k [25] and Swin-Transformer tiny model (28M) pre-trained on ImageNet-1k.
| Method | Accuracy | Macro-F1 |
| --- | --- | --- |
| SSL-ECG [22] | 65.3 | 63.8 |
| SimCLR [23] | 81.0 | 80.2 |
| TS-TCC [8] | 90.4 | 90.4 |
| AdaCE-to-Image (28M) | 98.2 | 98.1 |
| AdaCE-to-Image (87M) | **99.6** | **99.5** |

Table 1: Comparisons of our AdaCE-to-Image with those of the previous state-of-the-art transformer-based methods on UCI HAR.
Figure 4: Visualization: Display of some images generated from the UCI HAR Dataset.
Table 1 presents the results of our AdaCE-to-Image method, as well as the results of previous state-of-the-art transformer-based methods tested by [8]. The proposed AdaCE-to-Image method noticeably surpasses the previous state-of-the-art transformer-based methods, achieving an improvement of +9.2% in classification accuracy for AdaCE 87M (99.6%) over TS-TCC (90.4%), and an improvement of +7.8% in classification accuracy for AdaCE 28M (98.2%) over TS-TCC (90.4%). Table 1 also shows that AdaCE performs much better than the earlier models [23] and [22].
Through an epoch-wise training process analysis presented in Figure 5, we demonstrate that AdaCE-to-Image on diverse pre-trained transformers outperforms the previous state-of-the-art transformer-based method within only 5 epochs. This highlights the efficiency and universality of the AdaCE-to-Image method.
### Epileptic Seizure Prediction
The Epileptic Seizure Dataset [12] consists of 500 files, with each file representing a single subject. Each file is a recording of brain activity for 23.6 seconds. The corresponding time series was sampled into 4097 data points. Every 4097 data points were divided and shuffled into 23 chunks, and each chunk contains 178 data points for 1 second. There are 23 x 500 = 11500 pieces of information.
Since the temporal dimensionality of this single-channel dataset is below three times the input limit of the pre-trained language transformers, we conduct the experiments with the proposed AdaCE-to-Text method. The pre-trained models we choose are GPT-2 of 124M and GPT-2 of 355M [2].
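That these chunks fit can be checked directly: after tokenization, one 178-point chunk stays well within GPT-2's 1024-token context window (the amplitude range below is illustrative):

```python
import numpy as np
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
chunk = np.random.randint(-500, 500, size=178)       # one 1-second EEG chunk
n_tokens = len(tok(" ".join(map(str, chunk)))["input_ids"])
print(n_tokens)                                      # typically a few hundred, below 1024
```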
Table 2 presents the results of our AdaCE-to-Text method, as well as the results of previous state-of-the-art transformer-based methods tested by [8]. Compared with the previous state-of-the-art transformer-based methods, the proposed AdaCE-to-Text method achieves better accuracy without CNN pre-extraction.
| Method | Accuracy | Macro-F1 |
| --- | --- | --- |
| SSL-ECG [22] | 93.7 | 89.2 |
| SimCLR [23] | 96.0 | 93.5 |
| TS-TCC [8] | 97.2 | 95.5 |
| AdaCE-to-Text (124M) | 97.4 | 96.5 |
| AdaCE-to-Text (355M) | **98.7** | **97.9** |

Table 2: Comparisons of our AdaCE-to-Text with those of the previous state-of-the-art transformer-based methods on Epilepsy Seizure Prediction.
Figure 5: Training Process Analysis of AdaCE-to-Image on UCI HAR Dataset. The baseline refers to TS-TCC [8].
We also show that applying AdaCE to a larger pre-trained model achieves a better performance: +1.3% for GPT-2 355M (98.7%) over GPT-2 124M (97.4%) by prediction accuracy. Inspired by this, we conduct comparative experiments for fine-tuning pre-trained models of different sizes and include them in the ablation study at the end of this section.
### Sleep-EDF
The Sleep-EDF Dataset [21] contains 197 whole-night PolySomnoGraphic sleep recordings, containing EEG, EOG, chin EMG, and event markers. We follow [7], using the first 20 subjects' records out of 78 to construct the single-channel EEG train dataset of 42308 pieces of information, with each piece of information containing 1*3000 data points.
We conduct the experiments using the proposed AdaCE-to-Image method. The pre-trained models we choose are Swin-Transformer-v2 [26] tiny model of 28M pre-trained on ImageNet-1k and Swin-Transformer-v2 base model of 110M pre-trained on ImageNet-21k [25]. Similar to the two datasets discussed above, the proposed method achieves state-of-the-art performance, and applying the proposed AdaCE to fine-tune larger pre-trained models can achieve better performance, as shown in Table 3.
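Each Sleep-EDF instance is a \(1\times 3000\) matrix, so the single-channel branch of the image adapter applies directly; reusing the `adace_to_image` sketch from Section 3:

```python
import torch

img = adace_to_image(torch.randn(1, 3000))   # replicate to RGB, then resample bilinearly
print(img.shape)                             # torch.Size([3, 224, 224])
```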
If we apply the EEG-to-Text method on Sleep-EDF Dataset, the generated text exceeds three times the acceptable input length of the largest pre-trained model we use, even using non-overlapping sliding windows for downsampling may result in significant information loss. To demonstrate this, we conduct comparative experiments for fine-tuning pre-trained models with different approaches and include them in the ablation study at the end of this section.
### Ablation Study
We conduct ablation studies on the UCI HAR Dataset and the Sleep-EDF Dataset to demonstrate the effectiveness of AdaCE.
We first use the conventional method of processing each sequence of EEG data as a text input, and fine-tune BERT and GPT-2 separately. Then we apply the proposed AdaCE-to-Text method on the two pre-trained models separately. For further comparison, we also convert the EEG into images with the proposed AdaCE-to-Image and fine-tune three different pre-trained vision transformers of similar sizes.
Table 4 demonstrates that the AdaCE-to-Image method performs best on the two datasets. Table 4 also shows that using the proposed AdaCE-to-Text method outperforms the previous EEG-to-Text method on the Sleep-EDF Dataset, but is inferior to the proposed AdaCE-to-Image method. The latter performance gap is particularly evident on the UCI HAR Dataset, which further highlights the effectiveness of the proposed EEG-to-Image adapter for processing multi-channel EEG signals.

| Approach (Pre-trained Model) | HAR Accuracy | HAR Macro-F1 | Sleep-EDF Accuracy | Sleep-EDF Macro-F1 |
| --- | --- | --- | --- | --- |
| AdaCE-to-Image (DeiT 86M [27]) | 99.5 | 99.3 | 80.7 | 70.3 |
| AdaCE-to-Image (Swin-Transformer 87M [24]) | **99.6** | **99.5** | **84.7** | **76.2** |
| AdaCE-to-Image (ViT 87M [28]) | 99.5 | 99.2 | 79.9 | 68.7 |
| AdaCE-to-Text (BERT 110M [29]) | | | 77.0 | 65.9 |
| AdaCE-to-Text (GPT-2 124M [2]) | | | 76.6 | 67.6 |
| EEG-to-Text (BERT 110M [29]) | 83.3 | 84.6 | 71.1 | 60.8 |
| EEG-to-Text (GPT-2 124M [2]) | 85.5 | 86.6 | 72.0 | 60.3 |

Table 4: Comparisons of the proposed AdaCE-to-Image, AdaCE-to-Text and the previous EEG-to-Text method on UCI HAR and SLEEP-EDF. Note: The blank cells in the table for HAR indicate that we do not design an AdaCE-to-Text method for multi-channel EEG data.

| Method | Accuracy | Macro-F1 |
| --- | --- | --- |
| SSL-ECG [22] | 74.6 | 65.4 |
| SimCLR [23] | 78.9 | 68.6 |
| TS-TCC [8] | 83.0 | 73.6 |
| AttnSleep [7] | 84.4 | **78.1** |
| AdaCE-to-Image (28M) | 83.4 | 74.5 |
| AdaCE-to-Image (110M) | **85.3** | 76.4 |

Table 3: Comparisons of our AdaCE-to-Image with those of the previous state-of-the-art transformer-based methods on SLEEP-EDF.
It can also be observed from Table 4 that, for the same EEG transformation method, pre-trained transformers with a similar number of trainable parameters but different structures have similar performance on the EEG-decoding tasks.
We then compare the performance of pre-trained models with different sizes on the UCI HAR Dataset, as shown in Table 5. We use the proposed adapters to convert the EEG data into images at resolution \(224\times 224\), RGB pattern. Subsequently, we employ six distinct vision transformers, comprising three types, each with a smaller and a larger variant. Throughout the fine-tuning process of different pre-trained models, we maintain consistent settings. From Table 5 we can find that fine-tuning a larger pre-trained model of the same type leads to a higher accuracy, indicating that our method has the potential to be applied to larger transformers.
## 5 Conclusions and Future Work
Decoding medical diagnosis-related electroencephalograms is a new application scenario for LMs. We have shown that large transformers pre-trained on images as well as text can be directly fine-tuned for EEG-based prediction tasks without introducing extra parameters to be trained. Specifically, we have proposed adapters for converting EEG data into image and text formats for LMs respectively. The proposed AdaCE-to-Image enables input of EEG data into pre-trained vision transformers, and the AdaCE-to-Text solves the overflow issue when regarding EEG as input text for LLMs.
The proposed AdaCE module preserves the feature information of each original data point to the maximum extent, achieving state-of-the-art results in several EEG prediction tasks. Furthermore, we empirically show that applying the proposed AdaCE to fine-tune larger pre-trained models can achieve better performance on EEG-based predicting tasks, indicating the potential of our adapters for even larger transformers.
The plug-and-play AdaCE module can be applied to fine-tuning most of the popular pre-trained transformers on many other time-series data with multiple channels, not limited to EEG data and the models in our paper.
As mentioned above, the Sleep-EDF Dataset also provides ECG (electrocardiogram) and EMG (electromyogram) records, both are also important medical data for diagnosing health conditions and various diseases of the body. AdaCE can also be applied to fine-tuning pre-trained transformers on ECG and EMG data, since they are also multi-channel time-series data measuring the human body's electrical activities.
Moreover, we can also fine-tune pre-trained mobile-friendly transformers [30] with AdaCE, to provide a new technical route for the development of portable disease detectors.
Fine-tuning larger transformers on EEG datasets can also be conducted with parameter-efficient methods (for instance, LoRA [31]), which we leave to future work.
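As an indication of what such a setup could look like, a sketch with the `peft` library (hyperparameters purely illustrative) is:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("gpt2-medium", num_labels=2)
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, lora)
model.print_trainable_parameters()           # only the low-rank adapter weights train
```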
## Acknowledgments
This work was partially supported by the Project of Hetao Shenzhen-HKUST Innovation Cooperation Zone HZQB-KCZYB-2020083.
| Pre-trained Model | Accuracy | Macro-F1 |
| --- | --- | --- |
| DeiT (22M) [27] | 98.7 | 97.6 |
| Swin-Transformer (28M) [24] | 98.2 | 98.1 |
| ViT (22M) [28] | 98.8 | 98.0 |
| DeiT (86M) [27] | 99.5 | 99.3 |
| Swin-Transformer (87M) [24] | **99.6** | **99.5** |
| ViT (87M) [28] | 99.5 | 99.2 |

Table 5: Comparisons of fine-tuning pre-trained vision transformers in different sizes on UCI HAR with our AdaCE-to-Image. |
2306.16497 | Observation of two non-thermal fixed points for the same microscopic
symmetry | Close to equilibrium, the underlying symmetries of a system determine its
possible universal behavior. Far from equilibrium, however, different universal
phenomena associated with the existence of multiple non-thermal fixed points
can be realized for given microscopic symmetries. Here, we study this
phenomenon using a quasi-one-dimensional spinor Bose-Einstein condensate. We
prepare two different initial conditions and observe two distinct universal
scaling dynamics with different exponents. Measurements of the complex-valued
order parameter with spatial resolution allow us to characterize the
phase-amplitude excitations for the two scenarios. Our study provides new
insights into the phenomenon of universal dynamics far from equilibrium and
opens a path towards mapping out the associated basins of non-thermal fixed
points. | Stefan Lannig, Maximilian Prüfer, Yannick Deller, Ido Siovitz, Jan Dreher, Thomas Gasenzer, Helmut Strobel, Markus K. Oberthaler | 2023-06-28T18:45:25Z | http://arxiv.org/abs/2306.16497v1 | # Observation of two non-thermal fixed points for the same microscopic symmetry
###### Abstract
Close to equilibrium, the underlying symmetries of a system determine its possible universal behavior. Far from equilibrium, however, different universal phenomena associated with the existence of multiple non-thermal fixed points can be realized for given microscopic symmetries. Here, we study this phenomenon using a quasi-one-dimensional spinor Bose-Einstein condensate. We prepare two different initial conditions and observe two distinct universal scaling dynamics with different exponents. Measurements of the complex-valued order parameter with spatial resolution allow us to characterize the phase-amplitude excitations for the two scenarios. Our study provides new insights into the phenomenon of universal dynamics far from equilibrium and opens a path towards mapping out the associated basins of non-thermal fixed points.
Universality is a powerful concept for characterizing systems by means of effective models based on features that are independent of microscopic details. This concept led to the identification of universality classes for systems at and near equilibrium [1; 2]. For example, close to a phase transition, they are characterized by a few universal exponents, which describe the relevant properties as functions of the distance to the critical point [3].
In such close-to-equilibrium scenarios, universal exponents are typically associated with the symmetry properties of the underlying Hamiltonian in the respective phase; generically, that means that for a fixed Hamiltonian, there exists a unique set of universal exponents and scaling functions. The situation changes for systems far from equilibrium. Here, possible excitations are richer and not strictly tied to the symmetry properties associated with the ground state of the Hamiltonian. Thus, universal phenomena far from equilibrium are expected to have more defining elements than the underlying microscopic Hamiltonian.
Motivated by renormalization-group ideas, the concept of non-thermal fixed points (NTFPs) has been developed as a means of characterizing universal types of dynamics far from equilibrium [4; 5]. NTFPs imply universal dynamics, that is, time evolution which can be described by rescaling in time and space according to a set of universal exponents [6; 7]. Quantum simulators utilizing ultracold atoms have proven to be a versatile platform for studying non-equilibrium behavior of quantum many-body systems [8; 9; 10]. During recent years, studies of universal phenomena far from equilibrium have intensified, both in theory [4; 5; 6; 7; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] and experiment [31; 32; 33; 34; 35; 36; 37; 38]. Universal scaling associated with NTFPs was observed experimentally in different systems [31; 32; 33; 35]; recently, distinct universal phenomena, depending on the Hamiltonian symmetries, have been reported [37].
Far from equilibrium, there is numerical evidence that the universal behavior can exhibit a dependence of the exponents on the initial condition [18], see also [33] for related experiments. Intuitively, this is due to the possible existence of, e.g., different configurations of linear or nonlinear, such as topological, excitations dominating the dynamics. Nevertheless, to be physically relevant, a range of initial conditions must lead to the same universal dynamics (see Fig. 1). Thus, it is sensible to define, for every NTFP, an associated basin encompassing initial states, from which the system can evolve into the vicinity of that fixed point [13].
In this work, we experimentally confirm the existence of two distinct NTFPs in a system with the same microscopic Hamiltonian. We achieve this by generating initial conditions (ICs) belonging to two different associated basins. Both ICs are obtained by quenching to the same parameter regime. Prior to the quench, one is prepared in the ground state of another phase, while for the other one we additionally generate localized excitations, employing local spin control. For the two scaling evolutions we find distinct sets of universal exponents and the
Figure 1: Schematic representation of the equilibration of a many-body system starting from different initial conditions. During this evolution a transient period of slow self-similar behavior in the vicinity of a NTFP may occur. The associated basins are defined by the set of initial conditions which evolve towards the same NTFP. This is the simplest scenario; more complicated ones are possible, such as associated basins that lead to a subsequent time evolution towards two or more NTFPs.
corresponding universal scaling functions.
In our experiments, we employ a Bose-Einstein condensate of \({}^{87}\)Rb in the \(F=1\) hyperfine manifold. The atom cloud is contained in a quasi-one-dimensional box consisting of a red-detuned dipole trap with blue-detuned end caps. The latter confine the atoms within the central part of the longitudinal harmonic potential created by the red-detuned beam along the \(x\)-direction (the resulting density is shown in Fig. 2(a)). After preparing the different initial conditions, we study their respective dynamics after a quench. For brevity, we denote the ICs without and with additional excitations as the polar IC and the rotation IC, respectively. Absorption pictures of these ICs are shown in Fig. 2(c).
The polar IC is set by a gas with \(\sim 1.6\cdot 10^{5}\) atoms in the polar state, all of them occupying the \(m_{\rm F}=0\) magnetic sub-state (cf. Fig. 2(c)). For the rotation IC, we prepare the polar state with a lower atom number of \(\sim 4\cdot 10^{4}\) and additionally generate excitations via 6 equally spaced local spin rotations. Each rotation angle is chosen greater than \(\pi/2\), which, in the polar phase, leads to the generation of vector solitons [39]. The lower atom number is chosen to ensure a sufficiently long lifetime of excitations.
Following the preparation of each IC, the parameter \(q\), denoting the energy offset between \(m_{\rm F}=0\) and \(m_{\rm F}=\pm 1\) (see Fig. 2(b) for the internal level structure and [40] for the microscopic Hamiltonian), is quenched to a value within the easy-plane phase [41; 42], by instantaneously applying off-resonant microwave dressing [43]. The quench tunes spin-changing collisions into resonance, which redistribute atoms between the magnetic levels for both ICs (cf. Fig. 2(b)) and lead to a build-up of transverse spin \(F_{\perp}=F_{x}+{\rm i}F_{y}\), with only small excitations arising in \(F_{z}\) (see Fig. S1 in [40]). To access the order-parameter field \(F_{\perp}\), we simultaneously measure orthogonal spin projections transverse to the magnetic offset field with spatial resolution along the \(x\)-axis [44].
We observe that, after around 17 s, the system reaches a regime where long-wavelength correlations only change slowly in time (cf. Fig. S2 in [40]), however, excitations in the transverse spin plane differ drastically for the distinct ICs. For the polar IC the ensuing dynamics after the quench causes the transverse spin \(F_{\perp}\) to settle to the bottom of the corresponding Mexican-hat mean-field potential in the transverse-spin observables, with Goldstone excitations along the bottom in angular direction (see the left col. of Fig. 2(d)). Contrary to this, when starting from the rotation IC, large fluctuations filling the transverse plane build up and persist during the subsequent evolution (right col. of Fig. 2(d)). We estimate that, for both ICs, the energy input by the quench leaves the system well below the critical temperature for which the easy-plane phase vanishes [40; 45]. Thus, we conclude that the expected thermal state for these energies has the same properties as a thermal state in the easy
Figure 2: Time evolution from two different initial conditions (ICs) after quenching the control parameter \(q\). (a) Quasi-one-dimensional BEC in a box potential (at the start of the evolution at \(t=0\)) formed by an elongated dipole trap with repulsive walls. (b) The relevant energy difference \(q\) between the \(m_{\rm F}=0\) and \(m_{\rm F}=\pm 1\) levels is controlled with off-resonant microwave dressing. This allows tuning spin-changing collisions into resonance, which redistribute population between the hyperfine levels. (c) Absorption pictures of the atom clouds after a short Stern-Gerlach pulse for the polar (left) and rotation ICs (right). For the rotation IC, six local spin rotations transfer atoms from \(m_{\rm F}=0\) to \(m_{\rm F}=\pm 1\). (d) Time evolution of \(F_{x}\)-\(F_{y}\)-histograms for the polar (left) and rotation ICs (right). The black lines correspond to the spatial profiles of a representative single realization. For the polar IC the system relaxes to an approx. constant transverse spin length with phase excitations along the ring. In contrast, combined spin-length and phase fluctuations persist for the rotation IC.
plane phase [46]; that is, one would expect to find a ring structure in the \(F_{x}\)-\(F_{y}\)-histograms.
For both initial conditions we infer spatial dynamics which shows scaling in space and time with distinct parameters. To investigate this phenomenon, we evaluate the time-evolving structure factor \(f(k,t)=\langle|\mathrm{dFT}[F_{\perp}(x,t)](k)|^{2}\rangle\) from the transverse-spin momentum spectra, cf. Fig. 3(a,d). Here, \(\mathrm{dFT}[\cdot]\) denotes the discrete Fourier transform and \(\langle\cdots\rangle\) the ensemble average. We analyze the evolution of \(f(k,t)\) with respect to spatio-temporal scaling of the form [7; 22]
\[f(k,t)=\left(t/t_{\mathrm{r}}\right)^{\alpha}f_{\mathrm{s}}\left(\left[t/t_{ \mathrm{r}}\right]^{\beta}k\right)\,, \tag{1}\]
where \(\alpha\) and \(\beta\) denote the amplitude and momentum scaling exponents, \(f_{\mathrm{s}}\) represents a time-independent scaling function and \(t_{\mathrm{r}}\) is a reference time within the scaling regime.
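For illustration, the structure factor \(f(k,t)\) defined above reduces, at a fixed time, to an ensemble-averaged power spectrum; a minimal NumPy sketch (the array layout is an assumption):

```python
import numpy as np

def structure_factor(F_perp):
    """f(k) = <|dFT[F_perp(x)](k)|^2>, averaged over realizations.

    F_perp: complex array of shape (n_realizations, n_points),
    the transverse spin F_x + i*F_y sampled along x at one time.
    """
    spectra = np.abs(np.fft.fft(F_perp, axis=-1)) ** 2
    return spectra.mean(axis=0)  # ensemble average <...>

# The matching momentum grid is k = 2*np.pi*np.fft.fftfreq(n_points, d=dx),
# with dx the spatial sampling step.
```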
After an initial period of \(\sim 17\,\mathrm{s}\), the spectra \(f\) have approached the form of the scaling function \(f_{\mathrm{s}}\). Thereafter, we observe, independently for both ICs, a self-similar shift towards lower momenta that is in accordance with Eq. (1) (see Figs. 3(a,d)). Figs. 3(b,e) show the respective rescaled spectra. These coincide well with scaling functions \(f_{\mathrm{s}}(k)\propto 1/[1+(k/k_{\mathrm{s}})^{\zeta}]\) (solid gray lines in Figs. 3(b,e)), with inverse-length scale \(k_{\mathrm{s}}=2\pi/\lambda_{\mathrm{s}}\) (for \(t_{\mathrm{r}}=42\,\mathrm{s}\) we find \(\lambda_{\mathrm{s}}=(147\pm 13_{\mathrm{stat}})\,\mu\mathrm{m}\) for the polar IC and \(\lambda_{\mathrm{s}}=(33.5\pm 1.3_{\mathrm{stat}})\,\mu\mathrm{m}\) for the rotation IC). This matches the previous observations in a harmonic trap [31]. In the re-scaling analysis for the polar IC we find
Figure 3: Scaling evolution of the transverse-spin structure factor \(f\) for the polar (a-c) and rotation ICs (d-f). Unscaled (a,d) and scaled (b,e) \(F_{\perp}\) power spectra as a function of \(k=2\pi/\lambda\) with \(\lambda\) the wavelength and corresponding residuals (insets) with respect to the scaling function \(f_{\mathrm{s}}(k)\) at reference time \(t_{\mathrm{r}}=42\,\mathrm{s}\) are displayed over the time range which shows self-similar evolution. The scaling exponents are extracted via a \(\chi^{2}\) minimization with respect to the function \(f_{\mathrm{s}}\), scaled according to Eq. (1) (see [40] for details). For this analysis, only the points in the gray shaded areas are used, where the spectra have the same shape. All error bars indicate \(1\) s.d. of the mean; where no error bars are visible they are smaller than the plot marker. This leads to the likelihood function \(L\propto\exp(-\chi^{2}/2)\) (normalized to \(1\) at the maximum) of the residuals \(\chi^{2}\) shown in the colored contour plot in (c,f). The black and gray ellipses indicate the \(2\sigma\) and \(5\sigma\) statistical error ranges of the extracted exponents. The optimal scaling exponents given in the text are extracted at the position of the minimal residuals (marked by dashed lines and dots).
the values
\[\alpha=0.64\pm 0.09_{\,\rm stat}\pm 0.50_{\,\rm sys}\,,\quad\beta=0.58\pm 0.04_{\,\rm stat}\pm 0.26_{\,\rm sys}\,,\quad\zeta=2.51\pm 0.06_{\,\rm stat}\,,\]
while the rotation IC gives rise to scaling with
\[\alpha=0.24\pm 0.04_{\,\rm stat}\pm 0.03_{\,\rm sys}\,,\quad\beta=0.28\pm 0.04_{\,\rm stat}\pm 0.05_{\,\rm sys}\,,\quad\zeta=2.87\pm 0.18_{\,\rm stat}\,.\]
Here, we have chosen \(t_{\rm r}=42\,\)s for the extraction of the scaling functions. The statistical error is extracted from the root-mean-square width of the marginal likelihood distributions shown in Fig. 3(c,f) and the systematic errors are estimated from the variability of the exponents when varying the momentum cutoff (see [40] for details). The large systematic error for the exponent \(\alpha\) in the polar IC is a result of the finite system size, which obstructs the observation of a clear infrared plateau. Nevertheless, under the assumption of the transport of conserved \(F_{\perp}\) excitations [21, 47], implying \(\alpha=\beta\), we obtain \(\beta=0.54\pm 0.02_{\,\rm stat}\pm 0.05_{\,\rm sys}\) for the polar IC and \(\beta=0.28\pm 0.03_{\,\rm stat}\pm 0.04_{\,\rm sys}\) for the rotation IC.
For the extraction of the scaling exponents and function parameters we apply a multi-step \(\chi^{2}\) optimization procedure, described in more detail in [40]. First, approximate scaling exponents are extracted by minimizing the squared deviations between measured spectra with respect to the scaling hypothesis (1). Then, the parameters of the scaling function are obtained from a fit to all rescaled data. Finally, these parameters enter into a functional scaling model, which is used to define the squared deviations \(\chi^{2}=\sum_{k,t}[f_{k,t}-f(k,t)]^{2}/\sigma_{k,t}^{2}\) of the unscaled measured values \(f_{k,t}\) from the scaling prediction \(f(k,t)\), relative to the measurement uncertainties \(\sigma_{k,t}\). The resulting likelihood \(L\propto\exp[-\chi^{2}(\alpha,\beta)/2]\) is shown in Figs. 3(c,f), which is used to obtain the scaling exponents with statistical errors as stated above. We select the momentum scaling regime (gray shaded area in Fig. 3) from the lowest \(k>0\) up to \(k_{\rm max}\), where the shape of the measured spectra starts to differ for different times. The insets in Fig. 3 depict the residuals of the spectra, obtained by dividing the measured data by the scaling function.
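Schematically, the final step of this procedure — scoring the unscaled spectra against the rescaled scaling function on a grid of exponents — might look as follows (grids, and the fitted form of \(f_{\mathrm{s}}\), are placeholder choices, not the exact implementation of the paper):

```python
import numpy as np

def chi2(alpha, beta, k, times, f, sigma, f_s, t_r):
    """chi^2 = sum_{k,t} (f[t,k] - f_pred(k,t))^2 / sigma[t,k]^2, with
    f_pred(k,t) = (t/t_r)**alpha * f_s((t/t_r)**beta * k), cf. Eq. (1)."""
    total = 0.0
    for i, t in enumerate(times):
        pred = (t / t_r) ** alpha * f_s((t / t_r) ** beta * k)
        total += np.sum((f[i] - pred) ** 2 / sigma[i] ** 2)
    return total

# Likelihood surface L ~ exp(-chi2/2) on an exponent grid; f_s could be the
# fitted form A / (1 + (k/k_s)**zeta) quoted in the text.  The data arrays
# k, times, f, sigma and the callable f_s are assumed to be defined.
alphas, betas = np.linspace(0.0, 1.0, 51), np.linspace(0.0, 1.0, 51)
L = np.array([[np.exp(-0.5 * chi2(a, b, k, times, f, sigma, f_s, t_r))
               for b in betas] for a in alphas])
```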
To gain more insight into the difference between the two scaling scenarios we evaluate the space-resolved profiles of the transverse spin \(F_{\perp}(x)=|F_{\perp}(x)|e^{i\varphi_{\rm L}(x)}\) of single realizations in Fig. 4(a). The solid lines show the spin length \(|F_{\perp}(x)|\) and the dashed ones the Larmor phase \(\varphi_{\rm L}(x)\) at \(t=29\,\)s. For the polar IC, mostly phase excitations with roughly constant \(|F_{\perp}|\) are present in the system while, for the rotation IC, strong localized phase-amplitude defects are present (marked by vertical blue lines).
To characterize these phase-amplitude excitations, we evaluate the spatial cross-correlation function
\[\mathcal{C}(\Delta x,t)=\sum_{x}\left\langle\Delta\left|F_{\perp}(x,t)\right| \cdot\left|\nabla\varphi_{\rm L}(x+\Delta x,t)\right|\right\rangle \tag{2}\]
between spin-length variations and gradients of the Larmor phase. Specifically, \(\Delta|F_{\perp}|(x,t)=\langle|F_{\perp}|\rangle_{x}(t)-|F_{\perp}|(x,t)\) describes the local deviation of the spin length \(|F_{\perp}|\) from the mean value \(\langle|F_{\perp}|\rangle_{x}\) taken over all positions and realizations. The time evolution of the local correlator amplitude \(\mathcal{C}(\Delta x=0,t)\) in Fig. 4(b) shows a clear distinction between the decay of excitations for the polar IC as compared to the rotation IC: The spatial correlator profiles show a distinct peak at \(\Delta x=0\) in accordance with the structures identified in Fig. 4(a). While, for the polar IC, the local amplitude \(\mathcal{C}(\Delta x=0,t)\) decays over
Figure 4: Spatial structure of the spin excitations in the system. (a) Transverse-spin length \(|F_{\perp}|\) (upper plot: solid black line) and Larmor phase \(\varphi_{\rm L}\) (upper plot: dashed red line) extracted from the same single realization at \(t=42\,\)s shown as black line in Fig. 2. The lower plot shows the argument \(S(x)=\Delta|F_{\perp}(x)|\cdot|\nabla\varphi_{\rm L}(x)|\) of the correlator given in Eq. (2) for the single realizations. All positions with large correlations between spin length reduction and phase gradient with \(S(x)>0.25\) (horizontal blue line) are marked with vertical blue lines. (b) Evolution of correlator amplitude \(\mathcal{C}(\Delta x=0,t)\) and spatial profiles (insets) of the full cross correlator \(\mathcal{C}\). This shows the larger abundance and persistence of excitations for the rotation IC as compared to the polar IC, which shows a decay.
time, it remains approximately unchanged for the rotation IC. We find the conservation of the correlator amplitude to be enhanced for lower total atomic densities, while larger densities lead to a decay.
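A minimal sketch of the correlator in Eq. (2) at a single time, assuming the spin length and Larmor phase are sampled on a common grid (the periodic shift and gradient conventions are illustrative):

```python
import numpy as np

def cross_correlator(F_abs, phi_L, shift):
    """C(dx) = sum_x < d|F_perp|(x) * |grad phi_L(x + dx)| >, cf. Eq. (2).

    F_abs, phi_L: real arrays of shape (n_realizations, n_points).
    """
    dF = F_abs.mean() - F_abs                        # deviation from the mean
    grad = np.abs(np.gradient(np.unwrap(phi_L, axis=-1), axis=-1))
    shifted = np.roll(grad, -shift, axis=-1)         # displacement dx in pixels
    return (dF * shifted).sum(axis=-1).mean()        # sum over x, ensemble mean
```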
Numerical simulations for the rotation IC demonstrate that the phase kinks are rather stable, in line with the conservation of \(\mathcal{C}(\Delta x=0,t)\), and propagate through the system while interacting with the strongly fluctuating spin [40]. The corresponding scaling exponents agree with the experimental results. This is in contrast to simulations for the polar IC, where a similarly slow scaling with exponents \(\alpha=0.27\pm 0.06\), \(\beta=0.25\pm 0.04\) has been observed numerically [48; 49].
We report the measurement of two distinct scaling evolutions characterized by different exponents and scaling functions in a regime with the same microscopic Hamiltonian symmetry of a spinor gas in the easy-plane phase. Our findings show the existence of distinct associated basins of initial conditions, that is, the sets of states which evolve towards their respective NTFP. Thus, for a classification of universal phenomena far from equilibrium, not only the scaling properties in the vicinity of NTFPs are important but also the structure of their associated basins. This hints towards the necessity to include the initial condition for identifying universality classes of far-from-equilibrium dynamics; this is still an unsettled question.
The authors thank S. Erne for useful comments on error estimation and K. Boguslavski, P. Heinen, P.G. Kevrekidis, A.N. Mikheev, and C.M. Schmied for discussions and collaboration on related topics. They acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through SFB 1225 ISOQUANT - 27381115, and GA677/10-1, as well as under Germany's Excellence Strategy - EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster), and by the state of Baden-Wurttemberg through bwHPC and DFG through grants INST 35/1134-1 FUGG, INST 35/1503-1 FUGG, INST 35/1597-1 FUGG, and 40/575-1 FUGG.
|
2303.15998 | Understanding the deflection of the `Cartwheel CME': data analysis and
modeling | We study the low corona evolution of the `Cartwheel' coronal mass ejection
(CME; 2008-04-09) by reconstructing its 3D path and modeling it with
magneto-hydrodynamic simulations. This event exhibits a double-deflection that
has been reported and analyzed in previous works but whose underlying cause
remained unclear. The `Cartwheel CME' travels toward a coronal hole (CH) and
against the magnetic gradients. Using a high-cadence, full trajectory
reconstruction, we accurately determine the location of the magnetic flux rope
(MFR) and, consequently, the magnetic environment in which it is immersed. We
find a pseudostreamer (PS) structure whose null point may be responsible for
the complex evolution of the MFR at the initial phase. From the pre-eruptive
magnetic field reconstruction, we estimate the dynamic forces acting on the MFR
and provide a new physical insight on the motion exhibited by the 2008-04-09
event. By setting up a similar magnetic configuration in a 2.5D numerical
simulation we are able to reproduce the observed behavior, confirming the
importance of the PS null point. We find that the magnetic forces directed
toward the null point cause the first deflection, directing the MFR towards the
CH. Later, the magnetic pressure gradient of the CH produces the reversal
motion of the MFR. | Abril Sahade, Angelos Vourlidas, Laura Balmaceda, Mariana Cecere | 2023-03-28T14:12:51Z | http://arxiv.org/abs/2303.15998v2 | # Understanding the deflection of the 'Cartwheel CME': data analysis and modeling
###### Abstract
We study the low corona evolution of the 'Cartwheel' coronal mass ejection (CME; 2008-04-09) by reconstructing its 3D path and modeling it with magneto-hydrodynamic simulations. This event exhibits a double-deflection that has been reported and analyzed in previous works but whose underlying cause remained unclear. The 'Cartwheel CME' travels toward a coronal hole (CH) and against the magnetic gradients. Using a high-cadence, full trajectory reconstruction, we accurately determine the location of the magnetic flux rope (MFR) and, consequently, the magnetic environment in which it is immersed. We find a pseudostreamer (PS) structure whose null point may be responsible for the complex evolution of the MFR at the initial phase. From the pre-eruptive magnetic field reconstruction, we estimate the dynamic forces acting on the MFR and provide a new physical insight on the motion exhibited by the 2008-04-09 event. By setting up a similar magnetic configuration in a 2.5D numerical simulation we are able to reproduce the observed behavior, confirming the importance of the PS null point. We find that the magnetic forces directed toward the null point cause the first deflection, directing the MFR towards the CH. Later, the magnetic pressure gradient of the CH produces the reversal motion of the MFR.
Sun: coronal mass ejections (CMEs) -- Sun: prominences -- Sun: magnetic fields -- magnetohydrodynamics (MHD)
## 1 Introduction
Coronal mass ejections (CMEs) are the drivers of the strongest geomagnetic storms and a major concern of space weather. They are usually related to the ejection of a magnetic flux rope (MFR) that connects them to the eruptive source region in the lower corona, including prominence/filament
eruptions, flares, and cavities (e.g., Zhang et al., 2001; van Driel-Gesztelyi and Green, 2015; Green et al., 2018; Jiang et al., 2018; Yang et al., 2018; Filippov, 2019). Predicting the occurrence and trajectory of the eruption is crucial for assessing their potential geoeffectiveness. Since the launch of the _Solar TErrestrial RElations Observatory_ (STEREO, Kaiser et al., 2008) twin spacecraft (STA and STB, hereafter), along with the development of various reconstruction tools (e.g., Mierla et al., 2008; Maloney et al., 2009; Temmer et al., 2009; Thernisien et al., 2009; Kwon et al., 2014; Isavnin, 2016; Zhang, 2021), multi-point observations allow the determination of the three-dimensional (3D) path of CMEs and their associated source regions.
Several factors can deflect an eruption from its radial course (MacQueen et al., 1986; Cremades and Bothmer, 2004; Gui et al., 2011; Kay et al., 2015; Sieyra et al., 2020). It is generally accepted that neighboring magnetic structures, such as coronal holes (CHs - e.g., Cremades et al., 2006; Gopalswamy et al., 2009; Sahade et al., 2020, 2021) and active regions (ARs - e.g., Kay et al., 2015; Mostl et al., 2015; Wang et al., 2015), can deflect MFRs in longitude and latitude away from their positions. On the other hand, heliospheric current sheets (e.g., Liewer et al., 2015; Wang et al., 2020), helmet-streamers (e.g., Zuccarello et al., 2012; Yang et al., 2018), and pseudostreamers (PSs - e.g., Cecere et al., 2020; Wang et al., 2020; Karna et al., 2021; Sahade et al., 2022) attract MFRs toward their low magnetic field regions. These responses can be quantified, in strength and direction, by the local and global gradients of the magnetic pressure (Gui et al., 2011; Panasenco et al., 2013; Liewer et al., 2015; Sieyra et al., 2020).
However, there are events that seem to propagate against those gradients, such as the one known as the 'Cartwheel CME'. This event erupted on 2008-04-09, after 8:45 UT, and has been studied extensively from different perspectives (Landi et al., 2010; Savage et al., 2010; Gui et al., 2011; Patsourakos and Vourlidas, 2011; Thompson et al., 2012; Kliem et al., 2012; Capannolo et al., 2017). The eruption followed a non-radial trajectory, according to 3D reconstructions (Landi et al., 2010; Gui et al., 2011; Patsourakos and Vourlidas, 2011; Thompson et al., 2012). Landi et al. (2010) first reconstructed the 3D CME core trajectory at eight different times from \(1.1\,R_{\odot}\) to \(5.1\,R_{\odot}\), noting that the 'Cartwheel CME' initially deviates toward the Earth and later moves away from the Earth direction. Savage et al. (2010) tracked the erupted material in the STA plane-of-sky (POS) with better cadence but in a 2D projection (below \(1.5\,R_{\odot}\)). They investigated the magnetic field configuration, noting that the CME seems to initially move toward the southern open field lines. Later, the trajectory projected in the POS becomes more radial near \(\sim 2.5\,R_{\odot}\). To understand the non-radial evolution of this event, Capannolo et al. (2017) modeled the MFR eruption with ForeCAT (Kay et al., 2013, 2015) and compared it to the trajectory reconstructed by Landi et al. (2010). ForeCAT calculates the deflection and rotation of the simulated MFR (varying initial mass, speed, size, shape, and location) considering the magnetic forces (tension and pressure gradient) from the solar background. Although they are able to reproduce the double deflection, they find that the MFR moves unexpectedly against the magnetic gradients, toward a CH. They need to assume a non-radial initial velocity to propel the MFR in this direction and propose that an asymmetrical reconnection of the footpoints could explain it.
In this paper, we investigate the validity of the previous interpretations of the deflection of the 'Cartwheel CME' and find that the eruption is not unusual but follows the expected trajectory along existing magnetic fields. A detailed reconstruction allows us to properly investigate the magnetic interaction between the environment and the 'Cartwheel CME' and to provide a new insight into the
MFR behavior. In Section 2 we reconstruct the 3D path of the 2008-04-09 event with higher cadence and by different techniques at the low corona level. We reconstruct the surrounding magnetic field with the Potential Field Source Surface model (PFSS, Schrijver & De Rosa, 2003). In Section 3 we present the results of a magnetohydrodynamic (MHD) numerical simulation where a MFR interacts with the main magnetic structure found by the PFSS reconstruction. The simulated event reproduce the 'Cartwheel CME' behavior and allow us to compute the forces acting on the MFR. Conclusions and final comments are presented in Section 4.
## 2 Data Analysis
### Source region
Much of the eruptive material belongs to a prominence located within AR 10989. The prominence is enclosed by an anemone PS, whose southern side is overlaid by the negative open field of a CH and the northern side is overlaid by the negative footpoints of closed field lines. The region is complex and presents more PS structures, defining a PS as twin arcades covered by field lines of the same polarity that produce a single null point and form the spine of the PS (Rachmeler et al., 2014).
Between 2008-03-22 and 2008-03-30 the AR exhibited eruptive activity, with a major CME on 2008-03-25. After that, the region remained quiet until 2008-04-03, when it exhibited brightenings in the EUV 195 channel and two small eruptions on 2008-04-05. The 'Cartwheel CME' is the last and most notable of the eruptions from this region.
### Prominence and CME 3D Reconstruction
Figure 1: 2008-04-09 event in STEREO 171, 195 and 304 filters at 09:56 UT. Rainbow-color dots represent the apex position of the prominence from 8:25 UT (violet dot) to 10:40 UT (red dot), triangulated in EUVI images. Left panels show the STB FOV; dots with black centers are behind the limb and are triangulated with SOHO/EIT. Right panels show the STA FOV.
The 2008-04-09 event was observed on the west limb by the _Solar and Heliospheric Observatory_ (SOHO, Domingo et al., 1995) and STEREO spacecraft. At that time, the STEREO spacecraft were separated by \(\sim 24^{\circ}\) from Earth. We use the data provided by the _Extreme ultraviolet Imaging Telescope_ (SOHO/EIT, Delaboudiniere et al., 1995), the _Large Angle and Spectrometric Coronagraph Experiment_ (SOHO/LASCO, Brueckner et al., 1995), the _Extreme-Ultraviolet Imager_ (STEREO/EUVI, Howard et al., 2008), and COR1 coronagraphs from STEREO spacecraft to reconstruct the trajectory of the prominence and CME. We use _Michelson Doppler Imager_ (SOHO/MDI, Scherrer et al., 1995) data for the days before 2008-04-09 and apply the PFSS model to reconstruct the magnetic field over the solar surface.
Since the source region was located near the western limb of STEREO-A (STA), we reconstruct the initial 3D trajectory from SOHO/EIT and STA/EUVI 195 channels. When the prominence appears in the STEREO-B field-of-view (FOV), we track the ejected material in the 171, 195 and 304 channels from both STEREO spacecraft to ensure we are following the same features and cover the broader time range with high cadence. Finally, we track the prominence in white-light images from STEREO/COR1 while it is bright and compact. The 3D location of the prominence is determined using the tie-pointing technique, which consists of a geometrical reconstruction by considering the position of the same feature in the FOV of two different spacecraft (see, e.g., Inhester, 2006). We use the scc_measure routine, developed by B. T. Thompson, from _SolarSoft_. Figure 1 shows the eruption at 09:56 UT from STA and STB perspective in the different filters. For the 171 and 195 filters we follow the apex of the cold material prominence. In the 304 filter and COR1 images we track the main axis of the prominence, measuring multiple positions each time. The median latitudes and longitudes correspond to the position of the apex. The color-coded dots in Figure 1 summarize the reconstructed trajectory of the prominence apex at each time.
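The geometric core of the tie-pointing reconstruction amounts to intersecting two lines of sight in a least-squares sense; a minimal sketch (the actual scc_measure routine additionally handles spacecraft pointing, ephemerides, and coordinate frames):

```python
import numpy as np

def tie_point(p1, d1, p2, d2):
    """3D point closest to two lines of sight x = p_i + s*d_i.

    p_i: spacecraft positions; d_i: unit view directions toward the same
    feature, all in a common heliocentric frame (assumed inputs).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p1, d1), (p2, d2)):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # least-squares triangulated position
```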
Also, we reconstruct the CME from the EUV 195 and coronagraph images from the three viewpoints (SOHO, STA and STB). To reproduce the evolution of the deflecting CME from the low corona we use, for the first time, a non-radial GCS model. In this way, the CME footpoint coordinates can be fixed while the CME front can vary in latitude and longitude. We use the _SolarSoft_ routine rtcloudwidget; the parameters tuned for the reconstruction are latitude, longitude, tilt angle, height, half angle, ratio, and non-radial tilt. The latter allows changing the angle subtended by the CME axis and the radial plane defined by the footpoints and the solar center. This adds a degree of freedom in the reconstruction, which may produce a new set of solutions. However, as the footpoints are characterized by the latitude, longitude, and tilt angle, these parameters remain unchanged throughout the full evolution of the CME. This is an improvement over previous reconstructions, as the deflection can be better captured and the coordinates of the CME front are more accurately determined while the CME footpoints remain in the source region. Table 1 shows the parameters used for the non-radial GCS reconstruction and Table 2 (Appendix A) shows the coordinates for both GCS and triangulation reconstruction techniques, for the prominence and the CME, in EUV and white-light images, respectively. The prominence is tracked from 08:15 UT to 11:15 UT (\([1.03-3.08]\,R_{\odot}\)), and the CME can be modeled from 09:25 UT to 11:45 UT (\([1.24-4.1]\,R_{\odot}\)).
Figure 3 shows the triangulated prominence trajectory and the surrounding PFSS magnetic field lines. The prominence initially departs from a complex region of closed loops (white lines) at latitude and Carrington longitude (\(-16^{\circ}\),\(195^{\circ}\)). It travels towards the open magnetic field of a southern CH
(blue lines) until reaching \((-29^{\circ},182^{\circ})\). Afterwards, the prominence changes its motion and travels outward along the open magnetic field lines, with coordinates \((-26^{\circ},196^{\circ})\) in the last measured position. The purple line marks the radial direction from the initial position of the eruptive material, with ticks from \(1.2\) to \(2.8\,R_{\odot}\). The prominence apex changes in both latitude and longitude, but it is possible to define a plane intersecting the solar sphere which contains the evolution of the apex (hereafter, plane of eruption - POE). The POE is selected by non-linear least-squares fitting and presents a standard deviation lower than \(1^{\circ}\). The right panel of Fig. 3 shows a rotated view of the eruption in which the POE is parallel to the POS and the radial direction is pointing upwards. We define a Cartesian reference system with the \(x\)-axis being parallel to the solar surface at the initial position of the prominence, the \(y\)-axis pointing in the radial direction, and the \(z\)-axis perpendicular to the POE. In this system, the outward motion is projected in the \(y\)-direction, and the deflections in the \(x\)-direction. For reference, the solar equator is shown in teal color.
By defining the POE we can study the magnetic scenario that produces the non-radial motion in a simpler way, as we reduce the dimension of the problem. Figure 4 shows the magnetic field magnitude in logarithmic scale, the magnetic field lines and the prominence position (rainbow dots), the CME center position (magma dots with gray edges) and the CME cross section (gray circles) projected on the POE, in the Cartesian reference system described above. From the early reconstruction of the prominence path we can see that it heads toward the null point (gray star) located at \((x,y)=(0.16\,R_{\odot},1.05\,R_{\odot})\); then both the CME and the prominence move to the right (in this coordinate system), deviating \(\sim 15^{\circ}\) from the radial direction. At about \(1.8\,R_{\odot}\) they reverse the motion, traveling to the left and aligning with the CH field lines. The final angle of deflection is lower than \(5^{\circ}\). This double-deflection behavior was previously reported in Sahade et al. (2021). Their scenario did not
Figure 2: 2008-04-09 event in STA, SOHO and STB in 195 filter (left) and coronagraph (right) images. Color dots represent the position of the CME center from 9:25 UT (violet dot) to 11:45 UT (yellow dot). The GCS reconstruction at 10:05 UT (magenta wireframe) and its central cross section (white circle) is plotted over the 195 images. The cross sections (grey circles) of the GCS reconstruction and centers (color dots) in 9:25 - 11:45 UT are overplotted on the coronagraph composites in the right panels.
include a PS configuration as in the case here, but the interaction with the null point and the open magnetic field lines is quite similar (further analysis in the next section). It is interesting to note the evolution of the prominence relative to the CME. Initially, the prominence is located close to the right edge of the MFR cross section, exhibiting a larger deflection than the MFR center (the maximum being \(18^{\circ}\) and \(14^{\circ}\), respectively). In the later stages, the prominence apex progressively reaches the MFR center, in both displacement \(x\) and height \(y\). This behavior appears consistent with the prominence material lying at the bottom of the MFR due to the balance of gravity and magnetic forces (e.g., Vourlidas et al., 2013). As the MFR moves non-radially, the prominence follows along the edge of the cavity, experiencing a larger deflection possibly because of its larger inertia. After it expands and loses density it reaches a position closer to the MFR center. The displacement between
the MFR and the prominence is noted in the STA images (see Fig. 1) but 3D measurements give us certainty that the actual trajectories differ and that it is not a projection effect.
### Magnetic forces
The PFSS model is useful for understanding the global magnetic environment and large-scale structures surrounding the eruptive material. However, it cannot account for the magnetic field evolution during an eruption unless the eruption produces photospheric changes, which is observed only in large eruptions. From this reconstruction technique we recover a PS null point which may be "attracting" the MFR and directing it toward the open magnetic field lines of the nearby CH. Figure 5 shows the temporal evolution of the angular alignment between the MFR trajectory and both the magnetic field lines and the gradient of magnetic pressure (with the conventional minus sign in front, i.e., \(\vec{G}=-\nabla\frac{B^{2}}{2\mu_{0}}\)). In the initial phase of the eruption (8:45-9:35 UT, below \(1.2\,R_{\odot}\)), the MFR moves slightly misaligned with the gradient direction, but since the MFR does not stop at the null point the misalignment grows to \(100^{\circ}\). In the second phase (until 10:35 UT and \(1.8\,R_{\odot}\)), the MFR moves along the CH magnetic field lines, aligning with both the gradient and field as it loses speed in the \(x\)-direction. In the third phase of the evolution, the misalignment remains small but with an increasing trend. This can be understood as the dynamical response of the CH, which was compressed by the inertial motion of the MFR and later returns the MFR to the original position of
Figure 3: (a) Triangulated trajectory of the prominence apex with the PFSS magnetic field lines. White magnetic field lines are closed and blue ones are open lines of positive polarity. Rainbow-color dots show the prominence triangulation in EUVI images. Black, violet, pink and beige dots are triangulated in COR1 images at 10:45 UT, 10:55 UT, 11:05 UT and 11:15 UT, respectively. In purple, the radial direction according to the initial position of the prominence, with markers separated by \(0.2\,R_{\odot}\). In light blue, the plane where the trajectory lies; the pink circle shows the intersection of the plane and the solar surface, and the teal circle represents the equator. (b) Rotated position of the eruption and magnetic field lines. Cartesian axes defined from the POE.
the open field lines. At 11:00 UT, and above \(2.5\,R_{\odot}\), the CME stops its \(x\)-displacement, confined to the field lines of the CH.
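The misalignment angles of Fig. 5 reduce to dot products between the local trajectory direction and the field (or pressure-gradient) vector; a minimal sketch, assuming both are available as vectors in the POE:

```python
import numpy as np

def misalignment_deg(v_traj, v_ref):
    """Angle (deg) between a trajectory step and a reference vector,
    e.g., B or G = -grad(B^2 / 2 mu_0), both as 2-vectors in the POE."""
    c = np.dot(v_traj, v_ref) / (np.linalg.norm(v_traj) * np.linalg.norm(v_ref))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```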
To estimate the force exerted by the CH on the MFR, we consider flux conservation of the CH magnetic field lines. Without magnetic reconnection, the field lines should be pushed inwards, reducing the CH area and proportionally increasing its magnetic field strength in the \(B_{y}\) component. Considering this, we estimate the magnetic pressure gradient in the \(x\)-direction. We also obtain
Figure 4: Magnetic field magnitude and field lines with the prominence and CME trajectory in the POE. The gray star indicates the null point position. Rainbow-color dots correspond to the triangulated positions of the prominence with the same time scale as in Fig. 3, the last, larger red dot being the black one in that figure. Gray circles are the cross sections of the non-radial GCS model from 9:25 UT to 11:20 UT, and color dots with the same gray edges are the centers of each cross section, up to 10:45 UT.
Figure 5: Angle of misalignment between the magnetic gradient and the trajectory (G-T), and between the magnetic field and the trajectory (B-T). The grey dashed line indicates the height at which the PS spine is crossed.
a polynomial fitting for the trajectory, deriving from it the radial (\(y\)-direction) and non-radial (\(x\)-direction) velocity and acceleration. Figure 6 compares the normalized magnitude of the force produced by the magnetic pressure gradient and the normalized magnitude of the MFR acceleration in the \(x\)-direction. In this evolution period the acceleration is directed in the \(-x\)-direction and increases until \(\sim 1.7\,R_{\odot}\) (10:20 UT), then gradually reduces to zero. Since the trend of both curves is quite similar, we suspect that the magnetic pressure gradient contributes to accelerating the MFR out of the CH during their interaction.
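The kinematics quoted here follow from differentiating a polynomial fit of the reconstructed positions; schematically (the polynomial degree and variable names are assumptions):

```python
import numpy as np

# Assumed inputs: measurement times [s] and the non-radial POE coordinate.
t = np.asarray(times_s)
x = np.asarray(x_positions)

coeffs = np.polyfit(t, x, deg=4)              # degree chosen for illustration
vx = np.polyval(np.polyder(coeffs, 1), t)     # non-radial velocity
ax = np.polyval(np.polyder(coeffs, 2), t)     # non-radial acceleration
```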
## 3 Numerical Simulation
In Sahade et al. (2022, hereafter S22) we modeled a MFR immersed in a PS magnetic field and studied the dynamical interaction between both structures while changing the parameters. We adapt the model used in that work to simulate the magnetic configuration of the 2008-04-09 event. The 'Cartwheel CME' seems to interact with the null point of a nearby small PS and then with the CH overlying the southern lobe of that PS. By adjusting the model parameters and modifying equations (11a-b) of S22 to obtain a bent PS spine, we establish an initial magnetic field configuration that has a topology and magnetic field strength similar to those shown in Fig. 4. The new equations for the background magnetic field allow a shift of the central position of the potential magnetic field overlying the PS:
\[B_{x}(x,y)=\frac{2\sigma B_{\rm PS}(x-x_{\rm PS})(y-y_{\rm PS})}{\left((x-x_{\rm PS})^{2}+(y-y_{\rm PS})^{2}\right)^{2}}+B_{0}\,\sin\left(\frac{x-x_{\rm CH}}{H}\right)\exp[-y/H]\,, \tag{1}\]

\[B_{y}(x,y)=-\frac{2\sigma B_{\rm PS}(x-x_{\rm PS})^{2}}{\left((x-x_{\rm PS})^{2}+(y-y_{\rm PS})^{2}\right)^{2}}+\frac{\sigma B_{\rm PS}}{(x-x_{\rm PS})^{2}+(y-y_{\rm PS})^{2}}+B_{0}\,\cos\left(\frac{x-x_{\rm CH}}{H}\right)\exp[-y/H]\,, \tag{2}\]
Figure 6: Normalized magnetic gradient pressure (\(\tilde{G}_{x}\)) and normalized acceleration (\(\tilde{a}_{x}\)) in the \(x\)-direction during the observed interaction between the MFR and the CH.
where \(B_{\rm PS}=-0.7\) G is the magnetic field strength due to a single line dipole (\(\sigma=3\times 10^{19}\) is a dimensionless scaling factor) located at \((x,y)=(x_{\rm PS}=120\,{\rm Mm},y_{\rm PS}=-10\,{\rm Mm})\), \(B_{0}=1\) G is the background field strength at \((x,y)=(x_{\rm CH}=200\,{\rm Mm},0)\), and \(H=400\,{\rm Mm}\) is the height decay factor. The rest of the simulation parameters are set as in S22 except for the current densities, being here: \(j_{0}=-700\) statA cm\({}^{-2}\), \(j_{1}=516\) statA cm\({}^{-2}\).
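For reference, Eqs. (1)-(2) with the quoted parameters can be evaluated directly on a grid; a sketch in which the unit bookkeeping (G, Mm) is left schematic:

```python
import numpy as np

SIGMA_BPS = 3e19 * (-0.7)          # sigma * B_PS (sigma is a dimensionless factor)
X_PS, Y_PS = 120.0, -10.0          # line-dipole position [Mm]
B0, X_CH, H = 1.0, 200.0, 400.0    # background strength [G], offset and decay [Mm]

def background_field(x, y):
    """B_x, B_y of Eqs. (1)-(2); x, y in Mm."""
    r2 = (x - X_PS) ** 2 + (y - Y_PS) ** 2
    Bx = (2 * SIGMA_BPS * (x - X_PS) * (y - Y_PS) / r2**2
          + B0 * np.sin((x - X_CH) / H) * np.exp(-y / H))
    By = (-2 * SIGMA_BPS * (x - X_PS) ** 2 / r2**2
          + SIGMA_BPS / r2
          + B0 * np.cos((x - X_CH) / H) * np.exp(-y / H))
    return Bx, By
```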
Figure 7 shows the background magnetic field resulting from (1)-(2), where the MFR is added. The turquoise dot and the gray star indicate the prominence initial position and null point position from the observational data. The null point position, the width of the PS, and the magnetic field strength are well reproduced (compare to Fig. 4) by the parameter selection. We note that above \(y=2\,R_{\odot}\) and beyond \(x=0.4\,R_{\odot}\), the field lines behave differently than in Fig. 4. This is expected since the model assumes a simpler configuration than the actual solar magnetic field. However, it is not necessary to modify the magnetic field configuration to fit those farther regions since our intention is to understand the initial behavior of the MFR. The blue-scaled dots and gray circles show the evolution of the MFR center and cross section, respectively. While the trajectory and the MFR size are comparable, the simulation evolves faster than the observed case.
Figure 8 shows the gas pressure distribution and magnetic field lines of the simulation at \(t=1900\,\)s. We note that the MFR (delimited by the low-pressure cavity) is displaced to the right and expanded compared to its initial position and volume (see the full evolution in the animated version). During its ascent the MFR develops a super-Alfvenic shock ahead of the cavity. The shock interacts with the CH, bending its field lines. In the MFR separatrix there is a flux cancellation region (bottom-right diagonal of the MFR) that leads to reconnection and magnetic island formation. See in the
Figure 7: Background magnetic field magnitude and lines of the simulated case. The turquoise dot indicates the initial position of the FR, the gray star indicates the observed null point position. The MFR cross section and center represented by gray circles and blue-scaled dots, respectively.
animated version, for example, the magnetic island located at \((x,y)\sim(0.4\,R_{\odot},1.4\,R_{\odot})\) at \(t=1500\,\)s: it rotates along the MFR edge by almost \(90^{\circ}\) in \(900\,\)s, after which it is lost within the turbulence.
The simulated scenario reproduces the behavior observed in the 'Cartwheel CME'. The MFR travels toward the null point location and it continues the lateral motion pushing the CH field lines. The CH lines are initially bent by the shock and the MFR, but they eventually stop the rightward motion of the MFR, push back, and guide the MFR back to the original position of the CH lines. From the dynamic evolution of the magnetic field we can calculate the forces exerted on the MFR during its ascent. The upper panel of Figure 9 shows the evolution of the MFR forces per unit length (since the simulation is 2.5D) in the \(x\)-direction. The magnetic forces have a larger contribution than the gas pressure, and both the magnetic pressure gradient and tension accelerate the MFR to the right (toward the null point) in the initial phase of the evolution. The magnetic forces drive the MFR toward the null point position. When the MFR begins to interact with the open magnetic field lines of the CH, the magnetic pressure gradient becomes negative, stopping the rightward motion. The maximum MFR displacement occurs at \(y\sim 1.6\,R_{\odot}\) (and \(x\sim 0.3\,R_{\odot}\), as observed). After that, the negative magnetic pressure gradient increases linearly to zero, as the magnetic tension decreases to zero. Thereafter, the magnetic pressure exerted by the open magnetic field lines restores the MFR to the force-free direction. For comparison with Fig. 6, the lower panel of Fig. 9 shows a time zoom-in of the normalized magnetic gradient pressure (\(\tilde{G}_{x}\)) and normalized fit acceleration (\(\tilde{a}_{x}\)) in the \(x\)-direction during the interaction with the open field lines.
## 4 Discussion and Conclusions
Figure 8: Gas pressure distribution and magnetic field lines at \(t=1900\,\)s. An animated version of this figure, showing the magnetic field lines and pressure evolution, is available in the HTML version.
The 'Cartwheel CME' is a well-studied event with a dynamic behavior that appears to run contrary to the current understanding of the interaction between MFRs and ambient magnetic structures (Capannolo et al., 2017). To investigate whether the CME's behavior was indeed unusual, we first focus on obtaining a more precise reconstruction of the event than previous attempts (Landi et al., 2010; Gui et al., 2011; Patsourakos and Vourlidas, 2011; Thompson et al., 2012). To achieve this, we use data from three different viewpoints (SOHO, STA and STB) and two different techniques to reconstruct the evolution of different components of the magnetic system within \(4\,R_{\odot}\). Furthermore, we reconstruct the CME using the non-radial GCS model to better capture the non-radial motion of this event. Our reconstructions are consistent with the assumption that the prominence material is located at the bottom of the MFR. The measurements indicate that the prominence undergoes a larger deflection than the MFR center, due possibly to the higher momentum of the heavier prominence material.
Although the deflection produces changes in both latitude and longitude for the prominence apex and MFR, the entire evolution of them can be projected onto a 2D-plane, which simplifies the analysis of the magnetic field configuration in which the MFR moves (see Fig. 3 and 4). As established before(Savage et al., 2010; Capannolo et al., 2017), the MFR is interacting with the open magnetic field lines of a CH and, traveling toward them and against the magnetic gradients. Our thorough reconstruction of the initial rising phase and magnetic field allows the identification of a pseudostreamer null point located between the initial position of the MFR and the CH. Considering the action of null points in trajectory (e.g., Panasenco et al., 2013; Wang et al., 2020; Sahade et al., 2021, 2022), we presume that this null point is responsible for attracting the MFR toward the CH, which then stops the MFR deviation and guides it parallel to its magnetic field lines. Then, the peculiar behavior of
Figure 9: Upper panel: Forces per length unit in \(x\)-direction exerted on the MFR. Magnetic pressure gradient (\(G_{x}\)), magnetic tension (\(T_{x}\)) and gas pressure gradient (\(-\nabla_{x}p\)) for the first 4000 s of the simulation. Lower panel: Normalized magnetic gradient pressure (\(\tilde{G}_{x}\)) and normalized fit acceleration (\(\tilde{a}_{x}\)) in \(x\)-direction for the simulated MFR during the interaction with the open field lines.
the 2008-04-09 CME can be explained without assuming an asymmetric magnetic reconnection (Capannolo et al., 2017), simply as a response to the interaction with the magnetic environment near the source region.
We analyze the alignment between the trajectory and the magnetic pressure gradient (see Figure 5) and observe different phases in the evolution. Initially, as expected, the angle is small as the MFR travels toward the null point; then it increases abruptly as the MFR crosses the null point location. Once inside the CH, the MFR trajectory smoothly aligns with both the magnetic gradient and magnetic fields. Finally, we see that the misalignment angles increase, presumably because the CH is reacting to the MFR displacement and pushing it back toward a more radial path. In conclusion, our analysis indicates that the MFR tries to align with the magnetic pressure gradient. However, it should be noted that null points can lead to stronger deflections producing misalignment and, at later stages of the evolution, the angles calculated from the static magnetic field extrapolation may not reflect the response of the magnetic structures. We estimate the dynamic magnetic pressure gradient exerted by the CH on the MFR (Fig. 6) and find that it correlates with the non-radial acceleration of the MFR. Consequently, we conclude that the magnetic pressure gradient is at least one of the restoring forces producing the reversal deflection.
We perform ideal MHD simulations to model the dynamics of the 2008-04-09 event, adapting the magnetic scenario explored in S22. The simulation considers a MFR interacting with a PS structure similar to the observed one (see Figures 7 and 8); other magnetic structures, such as the nearby AR, are excluded from the modeling. The simulated MFR presents the same double-deflection behavior as the 'Cartwheel CME', validating the relevance of the null point and magnetic configuration. We calculate the forces acting on the MFR by considering the dynamic evolution of the environment. Initially, the magnetic tension and the magnetic pressure gradient are responsible for deviating the MFR in a non-radial direction and toward the null point. Then, the magnetic pressure gradient is the restoring force that stops the MFR deflection and pushes it back toward a direction more aligned with the original CH lines, in agreement with the data analysis.
In summary, we find, observationally and numerically, that the behavior of the 'Cartwheel CME' can be explained once the trajectory and magnetic environment are well described. The evolution can be divided into three phases. The first one is driven by the presence of the PS null point (deflection to the south until 10:09 UT), the second one consists of the response to the CH (reversal deflection), and the third one concerns the MFR outward propagation parallel to the magnetic field lines following the least resistance path (near radial trajectory after 11:05 UT).
The most important conclusions drawn from this work are:
* The dynamic behavior of the CME was not unusual but rather as expected. The CME escapes through the nearest null point, as expected from physical considerations. The apparent 'rolling' behavior and sharp direction change were due to the topological configuration in the vicinity of the eruption.
* Multi-viewpoint observations of the low coronal evolution of an eruption are key for understanding the topological environment around the erupting MFR. They can provide essential information to understand unexpected behaviors.
* Null points play a key role in the early evolution of erupting MFRs. Identifying their presence and, more generally, determining the ambient magnetic topology, will greatly improve our understanding of the early development and trajectory of eruptions.
* 2.5D MHD numerical simulations provide a useful tool to study different scenarios in which a MFR can evolve. They are computationally less expensive than, for example, data-driven models and allow us to test interpretations that cannot be easily verified with data.
Recent developments in instrumentation and observations promise great opportunities for further understanding the early evolution of CMEs. EUV and white-light observations from Solar Orbiter provide a 'third eye' to the observations from STEREO and Earth-based assets (e.g., SDO, GOES/SUVI, and SWFO-L1 in 2025+) from widely variable viewpoints. The future addition of magnetograms off the Sun-Earth line (via ESA's Vigil mission, currently in development) will further enhance the reliability of magnetic field extrapolations and, consequently, of topological maps of the solar corona.
AS is a doctoral fellow of CONICET. AS, AV and LAB are supported by NASA grant 80NSSC19K0069. MC is a member of the Carrera del Investigador Científico (CONICET). AS and MC acknowledge support from SECYT-UNC grant number PC No. 33620180101147CB, and support from PIP under grant number No. 11220200103150CO. Also, we thank the Centro de Cómputo de Alto Desempeño (UNC), where the simulations were carried out. The work presented here was carried out at JHU/APL and GMU as part of a research internship. AS thanks JHU/APL and GMU for their hospitality during her visit.
## Appendix A Prominence and CME 3D Reconstruction
Table 2 displays the 3D coordinates at each timestamp for the different techniques, instruments, and features of the erupting structure.
|
2305.11870 | Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D
Diffusion Probabilistic Models | We propose a 3D generation pipeline that uses diffusion models to generate
realistic human digital avatars. Due to the wide variety of human identities,
poses, and stochastic details, the generation of 3D human meshes has been a
challenging problem. To address this, we decompose the problem into 2D normal
map generation and normal map-based 3D reconstruction. Specifically, we first
simultaneously generate realistic normal maps for the front and backside of a
clothed human, dubbed dual normal maps, using a pose-conditional diffusion
model. For 3D reconstruction, we "carve" the prior SMPL-X mesh to a detailed 3D
mesh according to the normal maps through mesh optimization. To further enhance
the high-frequency details, we present a diffusion resampling scheme on both
body and facial regions, thus encouraging the generation of realistic digital
avatars. We also seamlessly incorporate a recent text-to-image diffusion model
to support text-based human identity control. Our method, namely, Chupa, is
capable of generating realistic 3D clothed humans with better perceptual
quality and identity variety. | Byungjun Kim, Patrick Kwon, Kwangho Lee, Myunggi Lee, Sookwan Han, Daesik Kim, Hanbyul Joo | 2023-05-19T17:59:18Z | http://arxiv.org/abs/2305.11870v3 | # _Chupa_: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models
###### Abstract
We propose a 3D generation pipeline that uses diffusion models to generate realistic human digital avatars. Due to the wide variety of human identities, poses, and stochastic details, the generation of 3D human meshes has been a challenging problem. To address this, we decompose the problem into 2D normal map generation and normal map-based 3D reconstruction. Specifically, we first simultaneously generate realistic normal maps for the front and backside of a clothed human, dubbed dual normal maps, using a pose-conditional diffusion model. For 3D reconstruction, we "carve" the prior SMPL-X mesh to a detailed 3D mesh according to the normal maps through mesh optimization. To further enhance the high-frequency details, we present a diffusion resampling scheme on both body and facial regions, thus encouraging the generation of realistic digital avatars. We also seamlessly incorporate a recent text-to-image diffusion model to support text-based human identity control. Our method, namely, Chupa, is capable of generating realistic 3D clothed humans with better perceptual quality and identity variety.
## 1 Introduction
The creation of clothed 3D human characters, which we refer to as "digital avatars", has become an essential part of many fields including gaming, animation, virtual/mixed reality, and the 3D industry in general. These digital avatars allow users to use their own virtual representation for a range
of purposes, thus enhancing the user immersion within such services. However, creating high-quality digital avatars often requires specialized 3D artists using a sophisticated creation pipeline [26, 46], making it a laborious process.
The recent advances in deep generative models [24, 28, 41] have enabled the creation of high-quality images that accurately reflect the semantics of the textual input [53, 65]. However, the usage of such generative models for 3D creation has mainly focused on object generation [60, 76, 94, 97, 99] and shown rather limited performance in generating full-body, realistic 3D human avatars due to the difficulty of collecting a large-scale ground-truth dataset. Many previous 3D generative models [2, 7, 8, 25, 30, 56, 98] focus on training generative models on large-scale image datasets along with implicit 3D shape representations and differentiable volume rendering [55, 87]. However, those approaches are rather limited in generating full-body humans with realistic details and rely on computationally expensive volume rendering. Another approach [10] directly uses high-quality 3D datasets [64, 96] to train generative models based on auto-decoding frameworks [57], but the resulting stochastic details tend to be unrealistic, due to the usage of an adversarial loss [24].
In this paper, we decompose the problem of 3D generation into 2D normal map generation and 3D reconstruction, bridging the power of generative models in the image domain towards 3D generation. Following the intuition of "sandwich-like" approaches for single image-based 3D human reconstruction [21, 77, 92], we generate normal maps for the frontal and backside regions of a human mesh to obtain rich details while mitigating the computational cost of 3D representations. We adopt a diffusion model [28, 65] to simultaneously create consistent normal maps for both frontal and backside regions, which we call _dual_ normal maps, conditioned on a posed SMPL-X [47, 58]. Since diffusion models are well known for their mode coverage [90], we find them suitable for generating diverse 3D digital avatars. The dual normal maps are then used as input for our 3D reconstruction pipeline, in which we _carve_ the initial posed SMPL-X mesh into a clothed, realistic human mesh with normal map-based mesh optimization inspired by NDS [89]. During optimization, the initial mesh is gradually deformed to match the generated normal maps through a differentiable rasterization pipeline [43] and geometric regularization, including a loss function for plausible side views. Our dual normal map-based 3D generation pipeline alleviates the difficulty of generating consistent multi-views, which is the fundamental reason that diffusion-based 3D generative models [60, 84, 94] suffer from slow convergence or fail to generate multi-view consistent results. We show that the diffusion model can generate consistent dual normal maps and that they are sufficient to generate plausible 3D humans along with the SMPL-X prior. We can then further improve the generated mesh by using a resampling scheme motivated by SDEdit [51], in which we use separate diffusion models for the body and facial regions to refine the perceptual quality of the rendered normals in different viewpoints, while preserving view and identity consistency. The refined normal maps are subsequently used as inputs for the mesh optimization, thus creating a realistic 3D digital avatar with high-frequency details.
As shown in Fig. 1, our pipeline, which we dub _Chupa_, can be extended to text-based generation for further controllability over the human identity (_e.g_., gender, clothing, hair, etc.), by leveraging the power of a pre-trained text-to-image diffusion model, e.g., Stable Diffusion [65]. Specifically, we modify and fine-tune the text-to-image model [95, 4] to enable conditioning on posed SMPL-X, such that the model creates detailed normal maps according to both the pose information and textual descriptions. Afterwards, we pass the generated frontal normal map as guidance to the dual normal map generator to complete the dual normal maps, seamlessly connecting text-based generation to our original pipeline.
Trained on posed 3D scans only, Chupa is capable of generating various digital avatars from pose and textual information, with realistic, high-fidelity features such as wrinkles and a large variety of human identities and clothing. We evaluate our method through established benchmarks along with a perceptual study, and show that our method outperforms the previous baseline. In summary, our contributions are:
* A 3D generation pipeline which directly leverages the 2D image generation capability of diffusion models towards 3D reconstruction.
* A diffusion-based normal map generation and refinement strategy for view-consistent normal maps, targeted for 3D generation.
* A method to effectively allow text-based 3D full-body digital avatar creation, providing an intuitive scenario for digital avatar creation.
## 2 Related Work
3D Generative Models. Leveraging the success of generative models in producing realistic 2D images [15, 18, 19, 24, 35, 36, 37], several efforts have been made to build 3D generative models from 2D datasets while ensuring view consistency [7, 8, 25, 56]. To achieve this, 3D neural implicit representations [52, 57, 87] are employed to represent 3D targets, along with volume rendering to project the 3D scenes into 2D images [7, 8, 25, 56]. While early methods in this direction were mainly focused on rigid objects [7, 54, 74] or human faces [8, 25, 56], recent work has extended to human bodies by using LBS-based canonicalization [9] with SMPL to handle articulated pose changes [2, 30, 98]. However, these approaches suffer from low-quality 3D outputs and high computational costs due to the volume rendering.
Other methods [13, 49] utilized SMPL models with latent codes to represent clothing information. However, these
methods tend to be limited in geometric detail. gDNA [10] was the first generative model-based approach, built on a neural implicit representation [59], to create diverse 3D humans with varying identities, poses, and clothing. gDNA further leverages the adversarial loss [24] to generate detailed surface normals. However, the adversarial loss made the model susceptible to mode collapse, leading to unnatural stochastic details. In contrast, our approach is based on diffusion probabilistic models, which alleviate mode collapse while producing state-of-the-art quality.
3D Human Reconstruction. The reconstruction of 3D humans has been a long-standing problem in the field of 3D computer vision. Traditional multi-view approaches tended to rely on calibrated multi-camera systems [5, 14, 20, 22, 31, 33, 50, 81, 83]. Several 3D parametric human body models [1, 32, 47, 93] have been presented to represent the shape and pose variation of humans through parametric control, and they are widely used in human pose estimation [34, 42, 66]. Building upon such parametric models, single image-based 3D clothed human reconstruction methods with implicit 3D representation [72, 73] show outstanding results with high-frequency details. Such models, however, tend to show disembodied or broken limbs for unseen poses due to the lack of a topological prior. To address the problem, recent works [91, 100] combine implicit representation [52] and parametric models [47, 58]. Inspired by sandwich-like approaches [21, 77], ECON [92] exploits front and back normal maps to build partial surfaces through normal integration [6] and stitches them with a mesh from IF-Net [11] and the SMPL mesh through Poisson surface reconstruction [39, 40]. Our approach achieves realistic 3D human generation via normal map-based mesh optimization with the SMPL-X mesh as a prior. Rather than using the parametric model as an implicit guidance [91, 100] or stitching it with separate surfaces [92], we directly deform the SMPL-X mesh to be consistent with the input normal maps, using a differentiable rasterizer [43].
Diffusion Models. Diffusion Probabilistic Models [78] are a group of generative models that have achieved state-of-the-art results in perceptual image quality and mode coverage [15, 29, 48, 69, 71, 80]. Recent diffusion models for text-to-image generation [53, 63, 65, 70] have demonstrated the ability to produce high-quality images based on textual input. Among them, Rombach et al. [65] enhance the efficiency of diffusion models by operating in a latent space that has a lower dimension than the image space while being perceptually equivalent. We provide details on the inner workings of diffusion models in the supplementary material.
Previous methods [60, 84, 88, 94] focused on text-to-shape tasks, where the output is a small 3D object lacking photorealistic quality. Among such methods, 3DiM [88] presents view-consistent generation through stochastic conditioning, but is limited to expressing 3D objects at a resolution of \(128\). DiffuStereo [75] was one of the first methods to achieve high-quality 3D human reconstruction through diffusion models, but its usage of diffusion models was limited to refining details, while ours better utilizes their generation capability and mode coverage in generating diverse 3D models. Other works such as Rodin [85] also use textual conditions to generate human 3D models, but are limited to the upper body and unable to represent various human poses.
## 3 Method
Our model is capable of generating 3D full body human models by conditioning on a front normal map rendered from a SMPL-X [47, 58] mesh \(\mathcal{M}\), which provides pose information, and an optional textual description that includes other identity-related information. The resulting 3D clothed human models display realistic details, while maintaining consistency to the input pose and textual description.
Conditioned on the normal map rendered from SMPL-X, we first utilize a diffusion-based generative model to create full body normal maps for both frontal (observed) and backside (occluded) regions (Sec. 3.1). We then employ a normal map-based mesh optimization method inspired by NDS [89] to deform the posed SMPL-X mesh into a detailed human mesh (Sec. 3.2). To enhance the quality of our mesh, we render the normal maps from the resulting human mesh at multiple viewpoints and refine them through a diffusion-based resampling strategy [51], where we use separate diffusion models for the full body and facial regions (Sec. 3.3). The refined normal maps are subsequently used as inputs to our mesh optimization method, creating a high-quality 3D clothed digital avatar. Our pipeline also accepts additional text information to further control the identity of the digital avatar using a text-to-image diffusion model [65] (Sec. 3.4). Fig. 2 shows the overall pipeline of our method.
### Dual Normal Map Generation
Following the intuition of "sandwich-like" approaches for single image-based 3D human reconstruction [21, 77, 92], we generate both the frontal and backside normal maps \((\mathbf{x}^{F},\mathbf{x}^{B})\) of clothed humans, dubbed _dual_ normal maps, with the front-view SMPL-X normal map \(\mathbf{c}_{N}\) as a pose condition. We demonstrate that dual normal maps carry sufficient information to generate plausible 3D humans with our normal map-based mesh reconstruction method. By generating dual normal maps, we mitigate the difficulty and computational cost of directly generating a 3D representation (_e.g_., voxels, point clouds, etc.) or multi-view consistent 2D representations (_e.g_., RGB images, normal maps, etc.). Since dual normal maps can be represented as images, we can exploit a diffusion model renowned for its image generation capability. We employ a latent diffusion model [65] and adapt it to generate the dual normal maps.
Following the procedure of the latent diffusion model [65], we first train a vector-quantized autoencoder \((\mathcal{E},\mathcal{D})\)[17, 82] to support normal maps with alpha channels, which make it easy to obtain the foreground masks of generated normal maps. Specifically, given an RGB image with alpha channel \(\mathbf{x}\in\mathbb{R}^{H\times W\times 4}\), the encoder \(\mathcal{E}\) encodes \(\mathbf{x}\) into the latent representation \(\mathbf{z}\in\mathbb{R}^{h\times w\times 4}\), and the decoder \(\mathcal{D}\) reconstructs an image back from the latent \(\mathbf{z}\). We train our autoencoder on normal maps rendered from views with different yaw angles, so that the autoencoder efficiently encodes these normal maps into a perceptually equivalent latent space, i.e., \(\mathbf{z}^{F}=\mathcal{E}(\mathbf{x}^{F})\) and \(\mathbf{z}^{B}=\mathcal{E}(\mathbf{x}^{B})\). For simultaneous generation, we concatenate the two latent codes \(\mathbf{z}^{F}\) and \(\mathbf{z}^{B}\) into a single latent code \(\mathbf{z}\), and treat it as an 8-channel image. We list additional details in the supplementary material.
During training, the latent code \(\mathbf{z}\) is perturbed by the forward diffusion process according to a timestep \(t\), producing a noisy latent code \(\mathbf{z}_{t}\). The diffusion model \(\mathbf{\epsilon}_{\theta}\) then learns to predict the perturbation noise \(\mathbf{\epsilon}\) of \(\mathbf{z}_{t}\), given the SMPL normal map condition \(\mathbf{c}_{N}\in\mathbb{R}^{H\times W\times 4}\). In practice, the SMPL normal map is also encoded \((\mathcal{E}(\mathbf{c}_{N}))\) and concatenated with \(\mathbf{z}_{t}\) channel-wise. The corresponding objective function becomes
\[L_{\text{dual}}=\mathbb{E}_{\mathbf{x}^{F},\mathbf{x}^{B},\mathbf{c}_{N},\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),t}[\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t}^{F},\mathbf{z}_{t}^{B},t,\mathcal{E}(\mathbf{c}_{N}))\|_{2}^{2}]. \tag{1}\]
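For concreteness, the training objective of Eq. (1) amounts to the following step, sketched here with hypothetical `diffusion` and `encoder` helpers; this is an illustration of the procedure, not the released training code.

```python
import torch
import torch.nn.functional as F

def dual_training_loss(eps_model, encoder, x_front, x_back, c_n, diffusion):
    """One training step for Eq. (1): encode the dual normal maps, perturb
    the concatenated 8-channel latent to a random timestep, and regress the
    injected noise given the encoded SMPL-X condition."""
    z = torch.cat([encoder(x_front), encoder(x_back)], dim=1)   # 8-channel latent
    t = torch.randint(0, diffusion.num_timesteps, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    z_t = diffusion.q_sample(z, t, noise)                        # forward diffusion
    return F.mse_loss(eps_model(z_t, t, encoder(c_n)), noise)
```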
At inference time, we start from the Gaussian noise \(\mathbf{z}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and iteratively sample from the previous step until \(\mathbf{z}_{0}\), then we decode \(\mathbf{z}_{0}\) to get the final frontal and backside normal maps. We use classifier-free guidance [27] to boost the sample quality during conditional generation. To enable classifier-free guidance, we randomly assign blank latent embeddings to the conditional image \(\mathbf{c}_{N}\) with \(10\)% probability during training. Then, for each inference step we use the following modification to predict the denoised latent code:
\[\hat{\mathbf{\epsilon}}_{\theta}(\mathbf{z}_{t},t,\mathcal{E}(\mathbf{c}_{N}))= \lambda\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathcal{E}(\mathbf{c}_{N}))+( 1-\lambda)\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t), \tag{2}\]
where \(\lambda\) specifies the guidance strength that can be controlled during inference, and \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathcal{E}(\mathbf{c}_{N}))\) and \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t)\) correspond to the conditional and unconditional predictions, respectively. Fig. 3 shows that our simultaneous dual generation scheme produces frontal and backside normal maps that are more consistent than those generated separately.
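A minimal sketch of the guided noise prediction of Eq. (2) follows; `eps_model` and the zero latent standing in for the blank (unconditional) embedding are illustrative assumptions rather than the released implementation.

```python
import torch

def guided_eps(eps_model, z_t, t, cond_latent, guidance=2.0):
    """Classifier-free guidance of Eq. (2): blend the conditional and
    unconditional noise predictions for the 8-channel dual latent z_t.
    The blank conditioning is approximated here by a zero latent."""
    eps_cond = eps_model(z_t, t, cond_latent)                      # eps(z_t, t, E(c_N))
    eps_uncond = eps_model(z_t, t, torch.zeros_like(cond_latent))  # eps(z_t, t)
    return guidance * eps_cond + (1.0 - guidance) * eps_uncond
```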
### Mesh Reconstruction with Front/Back Normals
Given the initial posed SMPL-X mesh \(\mathcal{M}\) and the generated clothed normal maps \((\mathbf{x}^{F},\mathbf{x}^{B})\), we deform the initial mesh into a detailed 3D human mesh through iterative optimization. Our mesh reconstruction method is motivated by Neural Deferred Shading (a.k.a. NDS) [89], which reconstructs geometry from multi-view RGB images using a differentiable rasterizer and a neural shader. Unlike NDS, we remove the neural shader, as the generated normal maps provide supervision for geometry, and directly optimize the 3D geometry by comparing the normal maps with the geometry buffers rendered from a differentiable rasterizer [43].
Figure 2: **Overview.** Chupa takes a posed SMPL-X mesh \(\mathcal{M}\) and its front normal map \(\mathbf{c}_{N}\) as input. At the first stage, Chupa generates frontal and backside clothed normal maps, \(\mathbf{x}^{F},\mathbf{x}^{B}\), conditioned on \(\mathbf{c}_{N}\). These normals are then used as reference to “carve” \(\mathcal{M}\) through our normal map-based mesh optimization process. To further increase the quality, we separately refine the multi-view normal maps rendered from the full body and facial regions through a resampling procedure, and perform a second optimization to create \(\mathcal{M}_{\text{final}}\). Our pipeline can also support identity control through a text description by leveraging the power of a text-to-image generation model.
In general, mesh reconstruction from two normal maps is an ill-posed problem. Using the SMPL-X mesh as an initial mesh, which is a strong geometric prior, and introducing a novel sidewise loss \(L_{\mathrm{sides}}\) for regularizing side views, we can reconstruct plausible 3D geometry of humans while mitigating the difficulty of generating multi-view consistent images at once. Our total objective function is defined as
\[\begin{split} L&=\lambda_{\mathrm{normal}}L_{\mathrm{ normal}}+\lambda_{\mathrm{mask}}L_{\mathrm{mask}}+\lambda_{\mathrm{sides}}L_{ \mathrm{sides}}\\ &\quad+\lambda_{\mathrm{laplacian}}L_{\mathrm{laplacian}}+ \lambda_{\mathrm{normal}}^{\mathrm{reg}}L_{\mathrm{normal}}^{\mathrm{reg}}. \end{split} \tag{3}\]
Normal map loss. We minimize the difference between the input normal maps \((\mathbf{x}^{F},\mathbf{x}^{B})\) and the normal maps rendered from the front/back views of the human mesh \((\mathbf{N}^{F},\mathbf{N}^{B})\) through an \(L_{1}\) loss, denoted as \(L_{\mathrm{normal}}\). We also minimize the discrepancy between the masks of the normal maps through an \(L_{2}\) loss, \(L_{\mathrm{mask}}\), to match the silhouette of the mesh. Note that we can acquire the masks of the generated normal maps by simple thresholding on the alpha channel.
Sidewise loss. Since our initial 3D reconstruction is based on frontal/backside normal maps, the left/right side regions of the human body tend to contain depth ambiguity [61]. We therefore introduce a novel sidewise loss, which ensures that the body masks rendered from the side views \((\hat{\mathbf{M}}_{\text{left}},\hat{\mathbf{M}}_{\text{right}})\) do not shrink inside the side views of the initial SMPL mesh \((\mathbf{M}_{\text{left}}^{\text{smpl}},\mathbf{M}_{\text{right}}^{\text{smpl}})\). The loss function becomes
\[L_{\mathrm{sides}}=\sum_{\mathbf{M}_{\text{view}}^{\text{smpl}}[h,w]=1}\| \mathbf{M}_{\text{view}}^{\text{smpl}}[h,w]-\hat{\mathbf{M}}_{\text{view}}[ h,w]\|_{2}^{2}, \tag{4}\]
where \([h,w]\) denotes indexing with the pixel \((h,w)\) of the mask \(\mathbf{M}\in\mathbb{R}^{H\times W}\) and view \(\in\{\text{Left},\text{Right}\}\). Even though the 3D prior from the initial SMPL-X already mitigates the problem to some extent, this loss further prevents the optimized mesh from having unrealistic side views.
Geometric regularization. As noted by NDS [89], optimizing the mesh based only on the aforementioned loss terms can lead to a degenerated mesh due to unconstrained vertex movement. To overcome this issue, we use geometric regularization terms following NDS [89]. Given a matrix \(\mathbf{V}\in\mathbb{R}^{n\times 3}\) with the vertex positions of mesh \(\mathcal{M}\) as rows, the Laplacian term is defined as \(L_{\mathrm{laplacian}}=\frac{1}{n}\sum_{i=1}^{n}\|\boldsymbol{\delta}_{i}\|_{2}^{2}\), where \(\boldsymbol{\delta}_{i}=(L\mathbf{V})_{i}\in\mathbb{R}^{3}\) are the differential coordinates of vertex \(i\) with the Laplacian graph \(L\). Since the differential coordinates are the sum of positional differences between a vertex and its neighbors, minimizing this loss leads to a smoother mesh. We also introduce a normal consistency term, defined as \(L_{\mathrm{normal}}^{\mathrm{reg}}=\frac{1}{|\bar{\mathcal{F}}|}\sum_{(i,j)\in\bar{\mathcal{F}}}(1-\boldsymbol{n}_{i}\cdot\boldsymbol{n}_{j})^{2}\), where \(\bar{\mathcal{F}}\) is the set of mesh face pairs with a shared edge and \(\boldsymbol{n}_{i}\in\mathbb{R}^{3}\) is the normal of triangle \(i\). Penalizing the deviation of neighboring face normals from alignment encourages further smoothness.
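For reference, the objective of Eq. (3) can be sketched over precomputed tensors as follows; the dictionary layout, mask convention, and variable names are illustrative assumptions, not the actual training code.

```python
import torch.nn.functional as F

def chupa_loss(pred, target, smpl_side_masks, delta, n_i, n_j, w):
    """Total loss of Eq. (3). `pred`/`target` map view names to rendered vs.
    generated normal maps and masks; `delta` holds the differential (Laplacian)
    coordinates of the vertices, shape (n, 3); (n_i, n_j) are unit normals of
    face pairs sharing an edge, shape (|F|, 3)."""
    l_normal = sum(F.l1_loss(pred["normal"][v], target["normal"][v])
                   for v in ("front", "back"))
    l_mask = sum(F.mse_loss(pred["mask"][v], target["mask"][v])
                 for v in ("front", "back"))

    # Sidewise loss (Eq. 4): the side silhouettes must keep covering the
    # pixels set in the SMPL-X side silhouettes.
    l_sides = 0.0
    for v in ("left", "right"):
        m_smpl = smpl_side_masks[v]
        diff = (m_smpl - pred["mask"][v])[m_smpl > 0.5]
        l_sides = l_sides + (diff ** 2).sum()

    l_lap = (delta ** 2).sum(dim=1).mean()                # Laplacian term
    l_nc = ((1.0 - (n_i * n_j).sum(dim=1)) ** 2).mean()   # normal consistency

    return (w["normal"] * l_normal + w["mask"] * l_mask + w["sides"] * l_sides
            + w["laplacian"] * l_lap + w["normal_reg"] * l_nc)
```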
### Refine by Resampling
Resampling multi-view normal maps. Although the initial mesh reconstruction already yields a plausible mesh, we can improve it further. We refine the 3D human mesh by refining multi-view normal maps rendered from the reconstructed mesh, without losing view consistency. The refined maps are then used as inputs to the 3D reconstruction pipeline, creating an improved, realistic 3D human mesh.
Our pipeline is inspired by SDEdit [51], which proposes an image translation method that progressively denoises a noise-perturbed image. The amount of noise perturbation is decided by a timestep \(0<t_{0}<1\); as \(t_{0}\) gets closer to 0, the operation focuses on editing finer details. We repeat this process \(K\) times to improve fidelity without harming the original information. To preserve the original structure while adjusting any unrealistic information, we set \(t_{0}=0.02\) and \(K=2\), which we empirically found to be sufficient.
Figure 4: **Body Resampling. The initial 3D mesh displays undesired visual artifacts, such as unnatural cloth wrinkles and depth misprediction. By resampling, those artifacts are moderated to produce more natural results.**
Figure 3: **Separate generation vs. dual generation.** Comparison between (a) separate sampling of frontal/backside normal maps and (b) our dual sampling. When generated separately, the attributes of the two normal maps often differ. Generating the dual normal maps at once ensures that the maps share the same semantics.
In practice, we first render a collection of \(n\)-view normal maps \(\{\mathbf{I}^{1},\mathbf{I}^{2},...,\mathbf{I}^{n}\}\) by evenly rotating the yaw camera angle around the 3D mesh. For refinement, we use the same dual normal map generation model as in Sec. 3.1, which uses the normal map of the posed SMPL-X as spatial guidance. We pair the rendered normal maps so that each pair is rendered from the backside of one another, and use the SMPL-X normal map corresponding to the frontal normal map as the condition to the diffusion model. This perturb-and-denoise process, which we call _resampling_, drives the normal maps rendered from the optimized mesh toward the distribution of normal maps rendered from the 3D scans on which our diffusion model is trained; thus, the normal maps become more realistic without losing their overall semantics. Once the resampling is complete, we pass the refined normal maps as inputs to the 3D reconstruction stage (Sec. 3.2) to produce a refined 3D human model. Fig. 4 shows that our resampling-based refinement produces more natural details.
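A minimal sketch of the resampling loop, assuming a diffusion wrapper that exposes the forward process (`q_sample`) and the reverse sampling loop from an intermediate timestep (`denoise_from`); both helper names are hypothetical.

```python
import torch

def resample(z, cond_latent, diffusion, t0=0.02, K=2):
    """SDEdit-style refinement: perturb the latent of a rendered dual normal
    map pair to timestep t0 via forward diffusion, then denoise back; repeat
    K times. A small t0 restricts the edit to fine details."""
    for _ in range(K):
        t = int(t0 * diffusion.num_timesteps)
        z_t = diffusion.q_sample(z, t, noise=torch.randn_like(z))  # perturb
        z = diffusion.denoise_from(z_t, t, cond_latent)            # denoise to t=0
    return z
```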
Facial resampling. We enhance the facial details of the optimized mesh by refining the normal maps rendered from the facial regions of the mesh. We train a latent diffusion model which shares the same architecture as the dual normal map generation model in Sec. 3.1, but is trained on normal maps with face close-ups. The close-up is done with respect to the head vertices of SMPL-X based on the pre-defined part segmentation [58]. With the face close-up views, we can render the facial regions of the 3D scans and of the aligned SMPL-X mesh.
Given the aligned facial normal maps, we train the diffusion model to generate the frontal and backside facial normal maps, with the facial normal maps of SMPL-X as a condition. We then apply the same resampling technique used for the full body to refine the multi-view facial normal maps rendered from the optimized mesh. Fig. 5 shows how the facial region is perceptually refined without harming the original structure. Unlike the method of Frühstück et al. [18], which performs offline optimization to blend a full-body image and a face image, we simply run the normal map-based optimization (Sec. 3.2) with the refined normal maps of both body and face, which aggregates the refined normal maps directly in 3D to generate a 3D human mesh with better details.
### Text-guided Normal Map Generation
In addition to the main, pose-conditional 3D generation pipeline, we also include an optional pose-and-text conditional pipeline to further control the identity of the resulting human mesh. In order to generate 3D human mesh based on a textual description, we adopt a powerful text-to-image diffusion model, _e.g_., Stable Diffusion [65], and fine-tune its weights to generate normal maps that are consistent to the text description and the posed SMPL-X normal map.
As the method of Wang et al. [86] displayed the effectiveness of fine-tuning large diffusion models for image translation tasks, we initialize the weights of our model based on a pre-trained Stable Diffusion checkpoint, leveraging its renowned generation capabilities. Following previous work [4, 95] to allow image conditioning, we add additional input channels to the first layer of the U-Net [67] and initialize their weights to zero. We also use the same text conditioning based on a pre-trained CLIP model [62]. We provide additional training details in the supplementary material.
As shown in Fig. 6, our model supports the generation of detailed normal maps based on the textual description and the posed SMPL-X. Ours is the first method to support text-based full-body normal map generation built on Stable Diffusion.
Frontal normal map-guided generation. To obtain _dual_ normal maps based on the frontal normal map generated by the text-based normal map generation model, we follow the intuitions of RePaint [48]. Since we already know and want to preserve the frontal shape, the goal here is to predict the
Figure 5: **Face close-up resampling**. Both images are aligned according to the SMPL vertices of the facial region. We can observe that the perceptibility of the faces is clearly improved.
Figure 6: **Text-based normal map generation.** Note that our model is capable of generating normal maps consistent in gender, clothing, and hair style. Moreover, our guided generation method can create a view-consistent back normal map from the initial frontal map, making it possible to use it in our original pipeline.
unknown backside normal map, based on the frontal normal map. For each inference step, we sample the intermediate frontal latent code \(z_{t}^{F}\) from the original latent \(z^{F}\) at any timestep \(t\), owing to the fact that the forward diffusion process is defined by a Gaussian Markov chain. In contrast, we sample the unknown, intermediate backside latent code \(z_{t}^{B}\) through reverse diffusion, and it is concatenated channel-wise to \(z_{t}^{F}\). Since we consider both \(z_{t}^{F}\) and \(z_{t}^{B}\) as a single, 8-channel latent code, the diffusion model leverages the context of the known frontal normal map while generating the unknown backside normal map, making this a _channel-wise inpainting approach_. Fig. 6 shows that our approach helps generate backside normal maps that match the original frontal map. Through frontal normal map-guided dual normal map generation, we can seamlessly connect the generative powers of a text-to-image model with our main pipeline.
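The frontal-guided generation can be sketched as the following channel-wise inpainting loop; `q_sample`/`p_sample` are assumed helpers for one forward perturbation and one reverse step, and the first four latent channels are taken to hold the frontal map.

```python
import torch

def dual_from_front(z_front, cond_latent, diffusion):
    """Generate the backside latent while keeping the known frontal latent:
    at every step the frontal channels are re-noised to the current timestep
    via forward diffusion, while the backside channels follow the reverse
    process, so the model sees a consistent 8-channel latent."""
    z_back = torch.randn_like(z_front)                 # unknown half starts as noise
    for t in reversed(range(diffusion.num_timesteps)):
        z_front_t = diffusion.q_sample(z_front, t)     # known half: forward diffusion
        z_t = torch.cat([z_front_t, z_back], dim=1)    # one 8-channel latent
        z_back = diffusion.p_sample(z_t, t, cond_latent)[:, 4:]  # keep backside half
    return torch.cat([z_front, z_back], dim=1)
```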
## 4 Experiments
In this section, we validate Chupa's effectiveness in generating realistic 3D humans. We first compare Chupa with the previous state-of-the-art through an image quality metric and a perceptual user study. We also conduct ablation studies to illustrate the effectiveness of each parts of our pipeline. Fig. 7 shows comparison of generated results from our method and the baseline [10]. Further qualitative evaluations on Chupa are available in the supplementary material.
Datasets. We train and test our model with the Renderpeople [64] and THuman 2.0 [96] datasets, which consist of \(500\) and \(526\) scans, respectively, with various identities and clothing. We split both datasets with a 9:1 train/test ratio. For training, we render \(36\) multi-view normal maps of the train-split scans with a \(10^{\circ}\) yaw rotation interval. To create text pairs from
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Method & FID\({}_{\mathrm{normal}}\downarrow\) & FID\({}_{\mathrm{shade}}\downarrow\) \\
\hline
gDNA\({}_{\mathrm{coarse}}\) [10] & \(53.74\) & \(68.14\) \\
gDNA\({}_{\mathrm{fine}}\) [10] & \(36.43\) & \(45.57\) \\
Ours & \(\mathbf{21.90}\) & \(\mathbf{36.58}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Quantitative Evaluation.** We report two types of FID scores for the test split of Renderpeople and THuman 2.0.
Figure 7: **Generation Comparison**. We display visual comparisons between gDNA [10] and Chupa with the same SMPL input. Note that gDNA tends to amplify the unnatural artifacts from its coarse stage to its fine stage, while ours produces more natural results.
normal maps for Stable Diffusion fine-tuning, we adopt an off-the-shelf image tagger model [44] based on ViT [16].
Baseline. We compare our method with gDNA [10] as a baseline. gDNA is the state-of-the-art method for generating a 3D human mesh given SMPL-X parameters \(\beta\), \(\Theta\) and a shape latent code \(z_{\mathrm{shape}}\) and detail latent code \(z_{\mathrm{detail}}\) randomly sampled from its learned latent space.
### Quantitative Results
We conduct a quantitative evaluation of the quality of generated meshes, based on given SMPL parameters. We generate 3D human meshes with SMPL parameters fitted to \(103\) test scans, \(50\) from Renderpeople and \(53\) from THuman \(2.0\), for both our method and gDNA [10]. Following previous work [99, 10, 76], we render normal maps [10] and shading images [99, 76] of ground-truth scans and generated meshes in \(18\) views with a \(20^{\circ}\) yaw interval, and compute FID scores with them, denoted as \(\text{FID}_{\mathrm{normal}}\) and \(\text{FID}_{\mathrm{shade}}\), respectively. Tab. 1 shows that our method achieves lower FID for both image types than the baseline.
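The FID protocol can be reproduced along the following lines; the torchmetrics implementation is used here purely for illustration, since the text does not state which FID implementation was used, and the multi-view rendering step is omitted.

```python
from torchmetrics.image.fid import FrechetInceptionDistance

def fid_over_views(real_renders, fake_renders):
    """FID between multi-view renders (18 views, 20-degree yaw steps) of
    ground-truth scans and generated meshes; inputs are uint8 tensors of
    shape (N, 3, H, W) collecting all views of all test subjects."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_renders, real=True)    # accumulate ground-truth statistics
    fid.update(fake_renders, real=False)   # accumulate generated statistics
    return fid.compute()
```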
### User Preference
We carry out a perceptual study with 78 subjects, asking their preference between the meshes from our method and gDNA. We randomly select \(40\) SMPL-X parameter sets from those fitted to the \(103\) test scans. We randomly generate meshes based on them with our method and gDNA, and render shading images in \(3\) views, \(0^{\circ},120^{\circ},240^{\circ}\) for full-body images and \(0^{\circ},40^{\circ},-40^{\circ}\) for face images. Note that we use a narrower field of view to better compare facial details. Tab. 2 shows that the users preferred meshes from our method for both full-body and face images. We present more details in the supplementary material.
### Ablation Study
We validate the building blocks of our pipeline through an ablation study. The evaluation is based on the same test split. The results are summarized in Tab. 3. |
2304.01792 | Close encounters of black hole - star binaries with stellar-mass black
holes | Dynamical interactions involving binaries play a crucial role in the
evolution of star clusters and galaxies. We continue our investigation of the
hydrodynamics of three-body encounters, focusing on binary black hole (BBH)
formation, stellar disruption, and electromagnetic (EM) emission in dynamical
interactions between a BH-star binary and a stellar-mass BH, using the
moving-mesh hydrodynamics code {\small AREPO}. This type of encounters can be
divided into two classes depending on whether the final outcome includes BBHs.
This outcome is primarily determined by which two objects meet at the first
closest approach. BBHs are more likely to form when the star and the incoming
BH encounter first with an impact parameter smaller than the binary's semimajor
axis. In this case, the star is frequently disrupted. On the other hand, when
the two BHs encounter first, frequent consequences are an orbit perturbation of
the original binary or a binary member exchange. For the parameters chosen in
this study, BBH formation, accompanied by stellar disruption, happens in
roughly 1 out of 4 encounters. The close correlation between BBH formation and
stellar disruption has possible implications for EM counterparts at the
binary's merger. The BH that disrupts the star is promptly surrounded by an
optically and geometrically thick disk with accretion rates exceeding the
Eddington limit. If the debris disk cools fast enough to become long-lived, EM
counterparts can be produced at the time of the BBH merger. | Taeho Ryu, Ruggero Valli, Rudiger Pakmor, Rosalba Perna, Selma E. de Mink, Volker Springel | 2023-04-04T13:42:01Z | http://arxiv.org/abs/2304.01792v4 | # Close encounters of black hole - star binaries with stellar-mass black holes
###### Abstract
Dynamical interactions involving binaries play a crucial role in the evolution of star clusters and galaxies. We continue our investigation of the hydrodynamics of three-body encounters, focusing on binary black hole (BBH) formation, stellar disruption, and electromagnetic (EM) emission in dynamical interactions between a BH-star binary and a stellar-mass BH, using the moving-mesh hydrodynamics code arepo. This type of encounter can be divided into two classes, depending on whether the final outcome includes a BBH. This outcome is primarily determined by which two objects meet at the first closest approach. BBHs are more likely to form when the star and the incoming BH encounter first with an impact parameter smaller than the binary's semimajor axis. In this case, the star is frequently disrupted. On the other hand, when the two BHs encounter first, frequent consequences are an orbit perturbation of the original binary or a binary member exchange. For the parameters chosen in this study, BBH formation, accompanied by stellar disruption, happens in roughly 1 out of 4 encounters. The close correlation between BBH formation and stellar disruption has possible implications for EM counterparts at the binary's merger. The BH that disrupts the star is promptly surrounded by an optically and geometrically thick disk with accretion rates exceeding the Eddington limit. If the debris disk cools fast enough to become long-lived, EM counterparts can be produced at the time of the BBH merger.
keywords: black hole physics - gravitation - stellar dynamics
## 1 Introduction
Dynamical interactions between stars and the compact objects they leave behind play an important role in dense environments, such as globular and nuclear star clusters and disks of Active Galactic Nuclei (AGNs). On global, large scales, they can influence cluster thermodynamics (Hut et al., 1992), while on local, small scales, close interactions can alter the original birth composition of isolated stars and binaries.
Dynamical formation of binary black holes (BBHs) is one of the leading pathways (e.g., Downing et al., 2010; Portegies Zwart and McMillan, 2000; Samsing et al., 2014; Rodriguez et al., 2015; Antonini et al., 2016; Askar et al., 2017; Banerjee, 2018; Perna et al., 2019; Fragione et al., 2019; Di Carlo et al., 2019; Rodriguez et al., 2019; Arca Sedda et al., 2020; Mapelli et al., 2021) to forming the binaries which have been observed via gravitational wave (GW) emission at their merger by the LIGO and Virgo observatories (The LIGO Scientific Collaboration et al., 2021). Subsequent dynamical encounters between BBHs and tertiary BHs can further influence the orbital parameters of the binaries, and hence their timescale to merger by GW radiation (e.g., Trani et al., 2019; Samsing et al., 2020; Wang et al., 2021; Arca Sedda et al., 2021).
Despite the relatively high fractions of stars and compact objects in binaries, hydrodynamic simulations of close encounters involving binaries have begun only recently. Lopez et al. (2019) and Ryu et al. (2022, Paper 1 in the following) studied close encounters between BBHs and single stars. They found that, in addition to altering the spin of the accreting BHs, tidal disruption events (TDEs) can have a significant impact on the binary BH's orbit, in ways which can be quantitatively different from the case of pure scattering. The EM signatures produced by these close encounters can also differ significantly from those of TDEs by isolated BHs: depending on the geometry of the encounter, the accretion rate can display periodic modulations with the orbital period. Detections of such events can provide constraints on the formation of BBH mergers (Samsing et al., 2019).
More recently, Ryu et al. (2023, Paper 2 in the following) performed the first investigation of close encounters between binary stars and single BHs. Their hydrodynamic simulations showed a variety of possible outcomes, from full disruptions of both stars, to a full disruption of one star and a partial disruption of the other, to dissociation into bound and unbound single stars. Among these cases of dissociation, interesting outcomes include the formation of a runaway star, and of a fast-moving BH that accretes the tidally
disrupted debris of the other star. In other outcomes, the binary stars are dissociated, and one of the stars is exchanged with the intruding BH, resulting in the formation of an X-ray binary.
Here we extend the line of investigation begun in Paper 1 and Paper 2 by performing a suite of hydrodynamic simulations of nearly parabolic close encounters between BH-star binaries and single BHs. Similarly to Paper 2, we use the moving-mesh code AREPO (Springel, 2010; Pakmor et al., 2016; Weinberger et al., 2020), whose quasi-Lagrangian approach to hydrodynamics is well adapted to the problem. Our study aims to elucidate how such encounters can lead to a variety of outcomes, including EM transients due to the disruption of the star and the formation (via member exchange) of tight BBHs surrounded by debris material, potentially leading to a situation in which the BBH merger could be accompanied by an EM counterpart.
Our paper is organized as follows: § 2 presents the estimate for the rate of this type of encounter in globular clusters. § 3 describes the details of the numerical simulations and the initial conditions. We present our simulation results in § 4, with particular emphasis on a classification of the outcomes and its dependence on encounter parameters. We discuss these results in the context of their possible EM counterparts in § 5, and we finally summarize our work and conclude in § 6.
## 2 Encounter Rate in Globular Clusters
We begin by estimating the encounter rate of three-body interactions between BH-star binaries and single BHs in globular clusters. Following Paper 2, we first calculate the differential rate of a single BH encountering a BH-star binary as \(\mathrm{d}\mathcal{R}/\mathrm{d}N_{\bullet}\simeq n\Sigma v_{\mathrm{rel}}\). Here, \(n\) is the binary number density in the vicinity of the BH, \(n\simeq f_{\mathrm{b}}n_{\mathrm{s}}\), where \(f_{\mathrm{b}}\) is the non-interacting star - BH binary fraction, \(f_{\mathrm{b}}\simeq 10^{-4}-10^{-5}\) (Morscher et al., 2015; Kremer et al., 2018), and \(n_{\mathrm{s}}\) is the number density of stellar-mass objects near the center of the clusters. The variable \(v_{\mathrm{rel}}\) represents the relative velocity between the binary and the BH, while \(\Sigma\) is the encounter cross-section. For \(v_{\mathrm{p}}=\sqrt{2G(M_{\bullet}+M_{\star})/r_{\mathrm{p}}}\gg\sigma\), we can write \(\Sigma\simeq\pi G(M_{\bullet}+M_{\star})r_{\mathrm{p}}/\sigma^{2}\), where \(\sigma\) is the velocity dispersion. Adopting our result that strong encounters occur when \(r_{\mathrm{p}}<a\), and assuming that this relation applies to binaries of any size and mass ratio, we can approximate \(\Sigma\simeq\pi G(M_{\bullet}+M_{\star})a\sigma^{-2}\). Then, we find that \(\mathrm{d}\mathcal{R}/\mathrm{d}N_{\bullet}\) can be expressed as
\[\frac{\mathrm{d}\mathcal{R}}{\mathrm{d}N_{\bullet}}\simeq\frac{\pi nG(M_{\bullet}+M_{\star})a}{\sigma}\simeq 4\times 10^{-13}\ \mathrm{yr}^{-1}\left(\frac{f_{\mathrm{b}}}{10^{-4}}\right)\left(\frac{n_{\mathrm{s}}}{10^{5}\,\mathrm{pc}^{-3}}\right)\left(\frac{M_{\bullet}+M_{\star}}{20\,\mathrm{M}_{\odot}}\right)\left(\frac{a}{100\,\mathrm{R}_{\odot}}\right)\left(\frac{\sigma}{15\ \mathrm{km\,s^{-1}}}\right)^{-1}. \tag{1}\]
Assuming more than a few tens of single stellar-mass black holes exist in dense clusters at the present day (Morscher et al., 2015; Askar et al., 2018; Kremer et al., 2018), and \(\simeq\)150 globular clusters in the Milky Way (Harris, 2010), the rate of strong three-body encounters per Milky Way-like galaxy is,
\[\mathcal{R}\simeq 6\times 10^{-9}\ \mathrm{yr}^{-1}\left(\frac{N_{\bullet}}{15000}\right)\left(\frac{f_{\mathrm{b}}}{10^{-4}}\right)\left(\frac{n_{\mathrm{s}}}{10^{5}\,\mathrm{pc}^{-3}}\right)\left(\frac{M_{\bullet}+M_{\star}}{20\,\mathrm{M}_{\odot}}\right)\left(\frac{a}{100\,\mathrm{R}_{\odot}}\right)\left(\frac{\sigma}{15\ \mathrm{km\,s^{-1}}}\right)^{-1}. \tag{2}\]
Two outcomes that can be produced in this type of encounter are EM transients due to the disruption of the star and the formation of BBHs. In particular, because BBHs would likely form in \(\simeq 25\%\) of all encounters (see § 4.4), the rate \(\mathcal{R}\) of _BBH-forming_ events would be \(O(10^{-9})\ \mathrm{yr}^{-1}\). The rate for encounters involving massive stars would be relatively high compared to that for low-mass stars, before all the massive stars turn into compact objects. However, as discussed in § 5.3, the total number of this type of encounter over the full cluster lifetime would be higher for less massive stars because of their longer lifetime and higher abundance. Note that \(f_{\mathrm{b}}\) depends on cluster parameters such as the initial binary fraction and the cluster age (Morscher et al., 2015), and calculating \(\mathcal{R}\) requires a detailed modeling of cluster evolution, as well as of the star formation history. Thus, a more precise estimate of \(\mathcal{R}\) requires a more careful consideration of the cluster evolution history.
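The scalings of Eqs. (1)-(2) are straightforward to evaluate; the short script below simply re-expresses the quoted normalizations, so it reproduces the order-of-magnitude estimates only.

```python
def rate_per_bh(f_b=1e-4, n_s=1e5, m_tot=20.0, a=100.0, sigma=15.0):
    """dR/dN_bullet in yr^-1 (Eq. 1); units: n_s [pc^-3],
    m_tot = M_bh + M_star [Msun], a [Rsun], sigma [km/s]."""
    return 4e-13 * (f_b / 1e-4) * (n_s / 1e5) * (m_tot / 20.0) * (a / 100.0) \
        / (sigma / 15.0)

def rate_per_galaxy(n_bh=15000, **kwargs):
    """R per Milky Way-like galaxy (Eq. 2), for N_bullet single BHs in total."""
    return n_bh * rate_per_bh(**kwargs)

print(rate_per_galaxy())         # ~6e-9 yr^-1 for the fiducial parameters
print(0.25 * rate_per_galaxy())  # BBH-forming subset: ~O(1e-9) yr^-1
```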
## 3 Simulation Details
### Numerical methods
Our numerical methods and setup are essentially the same as in Paper 2. We perform a suite of 3D hydrodynamic simulations of close encounters using the massively parallel gravity and magnetohydrodynamics moving-mesh code AREPO (Springel, 2010; Pakmor et al., 2016; Weinberger et al., 2020), which combines advantages of the two conventional hydrodynamical schemes, the Eulerian finite-volume method and the Lagrangian smoothed particle method, such as shock capturing without introducing an artificial viscosity, low advection errors, an efficient tracking of supersonic flows, and an automatically adaptive adjustment of the spatial resolution. We use the HELMHOLTZ equation of state (Timmes & Swesty, 2000), which accounts for radiation pressure, assuming local thermodynamic equilibrium. We include 8 isotopes (n, p, \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O, \({}^{20}\)Ne, \({}^{24}\)Mg; Pakmor et al., 2012). We follow the advection of the elements, which are then used for the update of the thermodynamic quantities (e.g., pressure). We do not follow the nuclear reactions, which should be fine given the short duration of the simulations and the reaction rates expected for the temperatures and densities that occur in our simulations.
Figure 1: _Top_ panel: The radial density profile of the main-sequence star with \(M_{\star}=10\,\mathrm{M}_{\odot}\) (red), relaxed for five stellar dynamical times. _Bottom_ panel: The relative error with respect to the MESA model, as a function of mass. The dashed grey line in the _top_ panel indicates the profile of the MESA model, which lies just below the solid line.
### Stellar model
The initial state of the star was taken as an evolved main-sequence (MS) star computed using the stellar evolution code MESA (version r22.05.1) (Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023). The star has an initial mass of \(10\,\mathrm{M}_{\odot}\) and a metallicity \(Z=0.006\), which is lower than solar and consistent with what is found for globular clusters (e.g., VandenBerg et al., 2013), whose high stellar density facilitates dynamical interactions.1 Convection is modeled according to the mixing length theory with a mixing length parameter of 1.5. We use the Ledoux (1947) criterion to determine the boundary of the convective regions and include an exponential overshoot prescription (Herwig, 2000) with parameters \(f=0.014\) and \(f_{0}=0.004\). We treat semiconvection as in Langer et al. (1983, 1985) assuming it is fully efficient. Wind mass loss is modeled with the prescription from Vink et al. (2001). We evolve the star until about halfway through the main sequence, which we choose as the time when the central hydrogen mass fraction drops to 0.3, at which point the star has developed a \(2.6\,\mathrm{M}_{\odot}\) convective helium core with a radius of \(0.8\,\mathrm{R}_{\odot}\). The stellar radius is \(R_{\star}=5.4\,\mathrm{R}_{\odot}\), and the central density is \(\simeq 11\,\mathrm{g}\,\mathrm{cm}^{-3}\). Since we evolve the stellar model as a single star, we neglect possible past interactions that could have affected the structure. For example, if the black hole binary system formed through binary evolution, the star may have accreted mass (e.g., Renzo and Götberg, 2021). If the system formed through dynamical capture, the star may have lost mass. We expect such effects to be small and not affect our results significantly.
Footnote 1: Note that the exact value of the metallicity does not affect our main results because the two most often cases in our simulations are fly-bys of stars around the BHs, or near-collisional disruptions of stars, and the outcomes of these two cases are not significantly affected by a slight change in the stellar internal structure due to a different metallicity.
We use the MESA stellar model as the initial state for the AREPO simulation. After mapping the 1D MESA model onto a 3D AREPO grid with \(N\simeq 5\times 10^{5}\) cells, the 3D single star is first relaxed. It usually takes up to five stellar dynamical times until it is fully relaxed. The stellar dynamical time is defined as \(\sqrt{R_{\star}^{3}/G\,M_{\star}}\), where \(R_{\star}\) and \(M_{\star}\) are the radius and mass of the star, respectively. Note that we increase the resolution for each single star by almost a factor of 2 compared to that in Paper 2, which showed that the results were converged with \(N\gtrsim 2.5\times 10^{5}\). This is to more conservatively guarantee the convergence of our results, and to better resolve stars that may be partially disrupted during encounters. In addition, the resolution becomes finer over time by adopting refinement near the BHs (§ 3.4): some simulations with violent interactions have \(10^{7}\) cells at the end of the simulations. The density profiles of the relaxed stars considered in our simulations are depicted in Figure 1. As shown in the figure, the relative difference of our 3D star with respect to the MESA model is 1% for the inner region up to \(2\,\mathrm{M}_{\odot}\). The match is better than 10% throughout most of the star, except for the surface. We expect that this is sufficient for the aims of this study.
### Black holes
As in Paper 2, we model the BH using a sink particle assuming it is not rotating initially. It only interacts gravitationally with gas and grows in mass via accretion of gas. We set the gravitational softening length of the BH (\(\simeq 0.01\,\mathrm{R}_{\odot}\)) to be ten times the minimum softening length of the cells of the stars.
We follow the same procedure for accretion described in Paper 2. However, we significantly improve the resolution near the BH using refinement (see § 3.4), which leads to more accurate estimates of the accretion rate with stricter conditions for accretion than in Paper 2. We search for cells bound to the BH (i.e., with negative orbital energy relative to the BH) within \(10^{3}\,r_{\mathrm{g}}\) (cf. \(1.5\times 10^{4}\,r_{\mathrm{g}}\) in Paper 2), where \(r_{\mathrm{g}}=GM_{\bullet}/c^{2}\) is the gravitational radius of the BH and \(M_{\bullet}\) denotes the mass of the BH. We still apply the same inverse-distance kernel (Monaghan and Lattanzio, 1985) to put more weight onto closer cells. Although the change in the momentum and the mass of the BH due to accretion is taken into account, our simulations do not include potential radiative feedback produced by accretion.
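The bound-cell selection can be sketched as follows; the weighting is simplified to a plain \(1/r\) form as a stand-in for the kernel described above, and the array layout is an illustrative assumption, not the AREPO implementation.

```python
import numpy as np

def accretion_weights(pos, vel, bh_pos, bh_vel, bh_mass, r_g, G=6.674e-8):
    """Select gas cells bound to the BH within 1e3 r_g (negative orbital
    energy relative to the BH) and weight them by inverse distance.
    `pos`/`vel` are (N, 3) per-cell arrays in cgs units."""
    d = np.linalg.norm(pos - bh_pos, axis=1)
    e_orb = 0.5 * np.linalg.norm(vel - bh_vel, axis=1) ** 2 - G * bh_mass / d
    sel = (d < 1e3 * r_g) & (e_orb < 0.0)   # close to the BH and bound to it
    w = np.zeros_like(d)
    w[sel] = 1.0 / d[sel]                   # closer cells get more weight
    return w / w.sum() if w.sum() > 0 else w
```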
### Mesh refinement
The simulation code can adjust the local mesh resolution by adaptively splitting or merging cells if certain prescribed refinement criteria are satisfied (for more details, see § 6 in Springel, 2010). We apply the refinement technique to cells in the vicinity of each BH to better resolve the stream structure there. At every time step, the code refines cells near a BH if all of the following conditions are met:
1. the distance from the BH fulfils \(r<5000\,r_{\mathrm{g}}\),
2. the cell density is \(\rho>2\times 10^{-4}\,\mathrm{g}\,\mathrm{cm}^{-3}\),
3. the cell mass exceeds \(>6\times 10^{22}\) g,
4. and \(\Delta d/r>0.26\) for \(500\,r_{\mathrm{g}}<r<5000\,r_{\mathrm{g}}\) and \(\Delta d>500\,r_{\mathrm{g}}\) for \(r<500\,r_{\mathrm{g}}\), where \(\Delta d\) is the cell size.
The refinement radius in condition (i) is chosen to be larger than the accretion radius (1000 \(r_{\mathrm{g}}\) in this work) to ensure that gas streams inside the accretion radius are well resolved. Condition (ii) is designed to apply the refinement only to the cells that represent "real" gas, not vacuum regions. Criterion (iii) avoids a runaway creation of low-mass cells. Finally, the resolution limit imposed through condition (iv) guarantees that there are at least \(O(10^{2})\) cells within the accretion radius. On the other hand, at every time step, the code can also derefine cells within \(r<5000\,r_{\mathrm{g}}\) around each BH if the cell mass is \(<1.5\times 10^{22}\) g, meaning the mass of cells within \(r<5000\,r_{\mathrm{g}}\) never becomes smaller than this mass resolution limit.
We ran a few simulations with five different resolution limits within the range \(0.05\leq\Delta d/r\leq 0.4\). This confirmed that the global evolution of the systems, such as their final interaction outcomes, is not affected by the refinement. The accretion rate is converged when the cell size fulfils \(\Delta d/r<0.3\). Note that the number of cells within a volume at distance \(r\) increases approximately by a factor of 8 when \(\Delta d\) decreases by a factor of 2.
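The refinement test of conditions (i)-(iv) can be transcribed as a simple predicate; this sketches the logic only, not the AREPO code.

```python
def should_refine(r, rho, cell_mass, cell_size, r_g):
    """Return True if a cell at distance r from a BH should be split,
    following conditions (i)-(iv); rho in g/cm^3, cell_mass in g, and
    r, cell_size, r_g in a common length unit."""
    if r >= 5000.0 * r_g:            # (i) only refine inside the refinement region
        return False
    if rho <= 2e-4:                  # (ii) only refine "real" gas, not near-vacuum
        return False
    if cell_mass <= 6e22:            # (iii) avoid runaway creation of low-mass cells
        return False
    if r > 500.0 * r_g:              # (iv) relative size limit for 500 r_g < r < 5000 r_g
        return cell_size / r > 0.26
    return cell_size > 500.0 * r_g   # (iv) absolute size limit for r < 500 r_g
```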
### Star - black hole binary
Before we carry out our encounter experiments, we relax binaries consisting of a fully relaxed star and a BH for 10 stellar dynamical times. We parameterize the binary's semimajor axis \(a\) using an approximate analytic estimate of the Roche lobe radius (Eggleton, 1983),
\[\frac{r_{\mathrm{RL}}}{a}=\frac{0.49\,q^{2/3}}{0.6\,q^{2/3}+\ln(1+q^{1/3})}, \tag{3}\]
where \(r_{\mathrm{RL}}\) is the volume-averaged Roche lobe radius of the star, \(q=M_{\star}/M_{\bullet}\) is the mass ratio, and \(a\) is the orbital separation. We define \(a_{\mathrm{RL}}\equiv a(r_{\mathrm{RL}}=R_{\star})\) as the separation at which the star fills its Roche lobe. For \(q=0.5\) and \(r_{\mathrm{RL}}=R_{\star}\), \(a_{\mathrm{RL}}\simeq 3.12\,R_{\star}\simeq 16.9\,\mathrm{R}_{\odot}\).
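Equation (3) and the quoted numbers can be checked with a few lines; the stellar radius and masses below are the fiducial values from the text.

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) fit (Eq. 3): volume-averaged Roche-lobe radius
    divided by the orbital separation."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

R_star = 5.4          # Rsun
q = 10.0 / 20.0       # M_star / M_bh
a_RL = R_star / roche_lobe_fraction(q)
print(a_RL / R_star, a_RL)   # ~3.12 R_star, ~16.9 Rsun
```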
We have performed this binary relaxation process for every binary with different orbital parameters (3 different binaries in total). The
semimajor axis and the eccentricity of the relaxed binaries differ by less than 1% from their initial values.
### Initial conditions
Following the same terminology as in Paper 2, we refer to quantities with a subscript containing \(b-\bullet\) as those relating to the orbit between a binary and a single BH. We assume a nearly parabolic encounter with eccentricity \(1-e_{b-\bullet}=10^{-5}\) between a single 10 M\({}_{\odot}\) BH and a binary consisting of a 20 M\({}_{\odot}\) BH and a 10 M\({}_{\odot}\) star. The exact choices of the system parameters are somewhat arbitrary, but BHs with such masses have been found in X-ray binaries (e.g., Binder et al., 2021). Encounters between objects of comparable masses are expected in the dense centers of young, mass-segregated star clusters. We later discuss potential effects of different masses and orbits of the encountering objects in § 5.3, based on our simulation results. We consider three semi-major axes for the initial binary systems: \(a/a_{\rm RL}=2\), 4 and 6, corresponding to orbital periods of 4, 12, and 22 days, respectively. We assume the binaries are circular at the start of our simulations. This is primarily to simplify the initial conditions, but it may not be unreasonable given that close binaries are often found to be circular (Almeida et al., 2017).
The distance between the binary's center of mass and the BH at the first closest approach \(r_{\rm p,b-\bullet}\) is parameterized using the impact parameter \(b\), i.e., \(r_{\rm p,b-\bullet}=0.5\,ba\) where \(a\) is the binary semimajor axis. We consider \(b=1/4\), \(1/2\), 1, and 2 for \(a/a_{\rm RL}=4\), and \(1/2\) for \(a/a_{\rm RL}=2\) and 6. The binary's angular momentum direction is always along the \(z\)-axis in our simulations. We illustrate the initial configuration of the stellar binary and the BH in Figure 2.
We investigate the dependence of encounter outcomes on the key encounter parameters, that is, the inclination angle \(i=0\), \(30^{\circ}\), \(60^{\circ}\), \(120^{\circ}\), \(150^{\circ}\) and \(180^{\circ}\), the impact parameter \(b=1/4\), \(1/2\), 1, and 2, and the phase angle \(\phi=0^{\circ}-315^{\circ}\) with \(\Delta\phi=45^{\circ}\). We define \(\phi\) as the initial angle between the line connecting the two members of the binary and the coordinate \(x\)-axis (see Figure 2). We start by studying the dependence on two phase angles of the binary (\(\phi=0^{\circ}\) and \(180^{\circ}\)) while fixing all the other parameters. To achieve this, we initially rotate the binary while the initial separation between the center of mass of the binary and the BH is fixed at \(r=5\,a\). This allows us to examine the outcomes when the single BH first contacts a different member of the binary. However, given the relatively high computational costs, instead of simulating encounters with every combination of \(i\) and \(b\), we perform simulations for the encounters of the intermediate-size binaries (\(a/a_{\rm RL}=4\)) with different combinations of \(b=1/4\), 1/2, 1 and 2, \(i=30^{\circ}\), \(150^{\circ}\), and \(\phi=0^{\circ}\) and \(180^{\circ}\). For the smallest and largest binaries (\(a/a_{\rm RL}=2\) and 6), we only consider \(i=30^{\circ}\) and \(150^{\circ}\) while \(b=1/2\). In addition, we further examine the dependence of the outcome properties on \(i\) by considering \(i=0\), \(60^{\circ}\), \(120^{\circ}\) and \(180^{\circ}\) (for \(b=1/2\)). Lastly, we also study the impact of the phase angle \(\phi\) on the encounter outcomes by simulating encounters with six additional phase angles (\(\phi=45^{\circ}\), \(90^{\circ}\), \(135^{\circ}\), \(225^{\circ}\), \(270^{\circ}\), and \(315^{\circ}\)).
In Table 1, we summarize the initial parameters considered in our simulations. Each of the models is integrated in time up to a few hundred \(t_{\rm p}\), as needed to identify the final outcomes. Here, \(t_{\rm p}=\sqrt{r_{\rm p}^{3}/GM}\) is the dynamical time at \(r=r_{\rm p}\), where \(M\) is the total mass of the three objects (40 M\({}_{\odot}\)). The value of \(t_{\rm p}\) for each model is given in Table 1.
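The dynamical time can be evaluated directly from this definition; a minimal sketch, where the particular \((a,b)\) combination is an illustrative choice of ours:

```python
import numpy as np

G, MSUN, RSUN = 6.674e-11, 1.989e30, 6.957e8   # SI units

def t_p_hours(a_rsun, b, m_tot_msun=40.0):
    """Dynamical time t_p = sqrt(r_p^3 / (G*M)) at r_p = 0.5*b*a,
    with M the total mass of the three bodies (40 Msun here)."""
    r_p = 0.5 * b * a_rsun * RSUN
    return np.sqrt(r_p**3 / (G * m_tot_msun * MSUN)) / 3600.0

# e.g. the intermediate binary (a/a_RL = 4, a ~ 68 Rsun) with b = 1/2:
print(f"t_p ~ {t_p_hours(68.0, 0.5):.1f} hours")   # a few hours
```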
The total computational cost for each run varies, mainly depending on how long the interactions last until the final outcomes are produced. Using 200-300 CPU cores of the Intel Xeon CascadeLake-AP processor (Xeon Platinum 9242), the total compute time per run has been around 70,000-100,000 core-hours.
Figure 2: Schematic diagrams for the initial configuration of the BH-star binary (blue solid circle and red solid star) and single BH (black circle) for a prograde case with an inclination angle \(i<90^{\circ}\) and phase angle \(\phi=0^{\circ}\), projected onto the \(x-y\) plane (_left_) and the \(x-z\) plane (_right_). The arrows indicate the instantaneous direction of motion. The open symbols, on the same circle as the solid symbols, indicate the case with \(\phi>0^{\circ}\).
## 4 Results
### Classification of outcomes
The outcomes of three-body encounters between BH-star binaries and single BHs can be divided into three classes, depending on the final products.
1. _BBH-forming encounters_: This class refers to encounters in which a BBH emerges. In this case, the impact parameter is mostly \(\lesssim 1/2-1\). The incoming single BH frequently interacts first with the star by the time the binary's center of mass and the single BH arrive at pericenter (models with "Yes" in the fifth column in Table 2). In this situation, the star in the binary _nearly collides_ with the incoming single BH. We show one example for this type of encounter in Figure 3. The incoming single BH loses a significant amount of its kinetic energy and is gravitationally captured by the other BH initially in the binary. Because of the member exchange due to a violent star-removing encounter, the size of the final binary is not necessarily correlated with the size of the initial binary. To illustrate this, we compare in the _top-left_ panel of Figure 4 the final \(a\) of the BBHs with the semimajor axis of the initial BH-star binaries. The final \(a\) covers a wide range of values and is not necessarily comparable to \(a\) of the initial binary. These violent interactions can lead to the formation of _merging_ BBHs, as illustrated in the _top-right_ panel: the GW-driven merger time scale of 5 out of 14 final BBHs is less than a Hubble time. Note that the absolute magnitude of the binding energy of the merging BBHs is much larger than the typical kinetic energy of stars in both globular and nuclear clusters (\(\simeq\sigma^{2}\) where \(\sigma\) is the velocity dispersion). This suggests that subsequent interactions with other stars would not dissociate these "hard" binaries, but rather make them more compact (Heggie, 1975) and more eccentric (Valtonen and Karttunen, 2006), which would facilitate their mergers. In addition, the disruption of the star prior to the BBH formation means that at least one member of the BBH is frequently surrounded by gas upon binary formation. When the BBH is compact, both BHs accrete gas.
2. _Non-BBH-forming encounters_: In this class, the outcomes are member exchanges between the two BHs or perturbations of the initial binary's orbit (models with "No" in the fifth column of Table 2). This mostly occurs when the two BHs interact at the first contact
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Model number & Model name & \(a\) [\(a_{\rm RL}\), R\({}_{\odot}\)] & \(b\) & \(\phi\) [\({}^{\circ}\)] & \(i\) [\({}^{\circ}\)] & \(t_{\rm p}\) [hours] & \(P\) [days] & \(v_{\rm orb}\) [\(\rm km\,s^{-1}\)] \\ \hline \end{tabular}
\end{table}
Table 1: Initial parameters of all simulated encounters. The tabulated values were not recovered from the source.
between the binary and the single BH. We show one example for this type of encounter in Figure 5, resulting in an orbit perturbation. The impact of the encounters is relatively weak compared to the _BBH-forming encounters_. As shown in the _bottom-left_ panel of Figure 4, the final value of \(a\) scatters within less than a factor of 2 around the initial \(a\). The eccentricity of the final BH-star binary is widely distributed between 0.1 and 0.9 (_bottom-right_ panel), similarly to those of the final BBHs (_top-right_ panel). In this type of encounter, EM transient phenomena, such as tidal disruption events, collisions, or interacting binaries, can be created (e.g., Model 17. \(a2b1/2\phi 0i30\)). In addition, the single BHs are ejected at \(\gtrsim 60\) km s\({}^{-1}\), comparable to or higher than the escape velocity of globular clusters (i.e., tens of km s\({}^{-1}\); Gnedin et al., 2002; Antonini and Rasio, 2016).
3. _Undetermined_: This class refers to cases where the final outcomes are not determined (12 models in total; models with "-" in the fifth column in Table 2 and with superscript \(\star\) or \(\star\star\)). Among these 12 models, there are eight encounters (models designated with superscript \(\star\)) in which the three objects form an unstable hierarchical triple, which we define as a triple where the outer binary is on a very wide, eccentric orbit such that the pericenter distance of the outer binary is smaller than the semimajor axis of the inner binary. In the table, we provide the orbital parameters of the inner binary. In the rest (models with superscript \(\star\star\)), the interactions become extremely prolonged, so that a final outcome has not (yet) emerged.
From now on, we will focus on the first two classes, i.e., _BBH-forming_ and _Non-BBH-forming_ encounters. These types of final outcomes and their properties are summarized in Table 2.
### Dynamical processes
The two most crucial factors determining the outcomes for the parameter space considered in our simulations are 1) the types of objects that meet at the closest encounter (BH - BH or BH - star), and 2) the net direction of the momentum kick, relative to the bystander object (i.e., the object in the binary that does not interact with the incoming BH at the first closest approach), imparted by the interaction between the two meeting objects. As explained in the previous section, the first aspect substantially affects the chances of the survival of the star. The second aspect determines which objects end up in the final binary (i.e., member exchange or binary perturbation) and what the final binary's orbit looks like.
For the _non-BBH-forming_ encounters with \(b\simeq 1/2-1\), the most frequent outcomes are either a member exchange or a perturbation of the original binary. The latter happens in retrograde encounters. This case can be categorized into three configurations, which are illustrated in Figure 6. In the first configuration (_top_ panels), the incoming BH strongly interacts with the other BH and turns around at a small pericenter distance compared to the binary semimajor axis. The initially single BH is rapidly ejected from the system in the direction roughly opposite to the incoming direction. This interaction imparts a momentum kick to the BH that perturbs the binary orbit. In the other two configurations, the incoming BH either moves around or passes through the binary, without significant interactions with any of the binary members (_bottom_ panels).
The dominant channel for member exchange is depicted in Figure 7. For the _non-BBH-forming_ encounters in a prograde orbit, the two BHs meet first and pass through their points of closest approach. As in the first configuration for orbit perturbation, their relative motion gives a momentum kick to the motion of the BH originally in the binary, relative to the star. The momentum kick gives an additional acceleration in the BH's receding motion from the star. The initially single BH, after turning around the other BH, moves in a similar direction to the star and gravitationally captures it.
For the _BBH-forming_ encounters, the star and the initially single
Figure 3: An example of a _BBH-forming_ encounter, Model \(20.a2b1/2\phi 180i150\), showing the density distribution in the binary orbital plane at a few different times in units of \(t_{\rm p}\). The color bar gives the logarithmic density in g cm\({}^{-3}\). The time is measured since the expected pericenter passage between the binary’s center of mass and the single BH. At \(t/t_{\rm p}\simeq-16\) (top-1\({}^{\rm st}\)), the binary (star - green dot) and the single BH (yellow dot) approach each other. At \(t/t_{\rm p}=-3.21\) (top-3\({}^{\rm rd}\)), the incoming BH strongly encounters the star in the binary, followed by the collision with the star (top-4\({}^{\rm th}\)). The BH that disrupted the star is gravitationally captured by the other BH, forming a merging binary with \(a\simeq 6.70\) R\({}_{\odot}\) and \(e\simeq 0.943\), corresponding to \(t_{\rm GW}\simeq 10^{4}\) yr (bottom panels).
BH undergo close encounters, naturally resulting in a tidal disruption event or stellar collision. Both events can also impart a momentum kick to the disrupting BH. In our simulations, the momentum kick is not large enough to prevent the two BHs from forming a bound pair. For example, if the star and the incoming BH undergo a head-on collision, the incoming BH dramatically slows down and forms a merging BBH with the other BH (e.g., Model 20. \(a2b1/2\phi 180i150\)). We caution that the head-on collision between the two equal-mass objects in our simulations is an extreme case yielding a dramatic drop in the BH's kinetic energy. The net effect of such star-removing events on the motion of the disrupting BH and the subsequent formation of a BBH depends on the mass ratio, relative velocity, and the direction of the momentum kick.
### Binary black hole formation
Typical semimajor axes of BBHs formed in the _BBH-forming_ encounters range within \(10-400\,\mathrm{R}_{\odot}\) while eccentricities vary within \(0.1-0.97\). Correspondingly, the GW-driven merger time scales of these binaries are in the range \(10^{4}-10^{13}\) yr. Five of the BBHs formed in these encounters are merging binaries with \(a\simeq 7-150\,\mathrm{R}_{\odot}\) and \(e\simeq 0.6-0.97\). As explained in § 4.2, the dominant formation channel is the gravitational capture of the incoming BH by the BH originally in the binary after strong interactions between the incoming BH and the star. Naturally, a disruption event or a collision precedes the BBH formation. As a result, either the disrupting BH or both BHs accrete matter by the time they form a stable binary.
\begin{table}
\begin{tabular}{c c c c|c c c c c|c c} \hline Model number & Model name & BBH & Star & Binary type & \(a\) & \(e\) & \(\log_{10}t_{\rm GW}\) & \(v\) & Single type & \(v\) \\ \hline - & - & - & - & - & R\({}_{\odot}\) & - & yr & km/s & - & km/s \\ \hline
18 & \(a2b1/2\phi 180i30\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 61.0 & 0.406 & 11.7 & 64.0 & - & - \\
19 & \(a2b1/2\phi 0i150\) & No & Destroyed & - & - & - & - & - & \(\bullet(20),\mathrm{O}(10)\) & 60.8, 234 \\
20 & \(a2b1/2\phi 180i150\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 6.70 & 0.943 & 4.36 & 47.6 & - & - \\
21 & \(a6b1/2\phi 0i30^{\star}\) & - & - & - & - & - & - & - & - & - \\
22 & \(a6b1/2\phi 180i30^{\star}\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 354 & 0.787 & 13.2 & 69.1 & - & - \\
23 & \(a6b1/2\phi 0i150\) & No & Survived & \(\bullet(20)-\star(10)\) & 76.0 & 0.728 & - & 65.7 & \(\mathrm{O}(10)\) & 199 \\
24 & \(a6b1/2\phi 180i150\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 149 & 0.972 & 8.64 & 79.2 & - & - \\ \hline
25 & \(a4b1/2\phi 0i0^{\star}\) & - & - & - & - & - & - & - & \(\bullet(20)\) & - \\
26 & \(a4b1/2\phi 0i60^{\star}\) & - & - & - & - & - & - & - & - & - \\
27 & \(a4b1/2\phi 0i120\) & No & Survived & \(\bullet(20)-\star(10)\) & 68 & 0.488 & - & 42.6 & \(\mathrm{O}(10)\) & 126 \\
28 & \(a4b1/2\phi 180i30\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 370 & 0.831 & 13 & 38.6 & - & - \\
29 & \(a4b1/2\phi 180i60\) & Yes & Destroyed & \(\bullet(20)-\mathrm{O}(10)\) & 155 & 0.713 & 12.3 & 38.0 & - & - \\ \hline \end{tabular}
\end{table}
Table 2: Final outcomes and properties of the final binaries and single objects (\(\bullet\): 20 M\({}_{\odot}\) BH; O: 10 M\({}_{\odot}\) BH; \(\star\): 10 M\({}_{\odot}\) star; superscript \(\star\): unstable triple; superscript \(\star\star\): outcome not yet determined). Rows for Models 1-17 were not recovered from the source.
### Dependence of outcomes on parameters
We examine the dependence of outcomes on a few key encounter parameters, phase angle \(\phi\), impact parameter \(b\), inclination angle \(i\), and semimajor axis \(a\), by varying one parameter at a time, keeping the rest of them fixed. Our simulations suggest that the two most important parameters that affect the formation of BBHs in this scenario of three-body encounters are the impact parameter and the phase angle.
1. _Phase angle \(\phi\)_: this is found to be one of the key parameters that separates _BBH-forming_ encounters from _non-BBH-forming_ encounters. For the former, very likely outcomes are BBHs, frequently accompanied by a disruption of the star. On the other hand, for the latter, frequent outcomes are eccentric BH-star binaries produced via member exchange or weak tidal perturbations of the initial stellar orbit. In addition, even for the _BBH-forming_ encounters, the direction of the encounter between the initially isolated BH and the star at the first closest approach, relative to the other BH, determines the size of the semimajor axis of the BBH: if the momentum kick imparted on the encountering BH adds to the encountering BH's momentum, a large binary forms (e.g., Model 6. \(a4b1\phi 180i30\) and Model 22. \(a6b1/2\phi 180i30\)). Although our study is not appropriate for rate estimates, the dependence of outcomes on the phase angle may indicate that roughly 25% of these three-body encounters between objects of similar mass with \(b\lesssim 1\) may possibly lead to BBH formation, with a high chance of creating EM transients.
2. _Impact parameter \(b\)_: in general, the initial binary and the single BH can interact significantly (member exchange or stellar collisions) at the first closest approach when \(r_{\rm p}\lesssim a\), which is also found in Ryu et al. (2022, 2023). A fly-by only occurs at \(r_{\rm p}>a\) (Model 1. \(a4b2\phi 0i30\) and Model 9. \(a4b2\phi 0i150\)). For this case, the initial binary orbit is weakly perturbed, resulting in a 10 - 20% change in the semimajor axis. Relatively weak interactions also take place when the impact parameter is too small compared to the size of the binary, i.e., \(r_{\rm p}<a/8\) (e.g., Model 4. \(a4b1/4\phi 0i30\) and Model 16. \(a4b1/4\phi 180i150\)), as the single BH penetrates through the binary without interacting strongly with any of the binary members (see the _bottom-right_ panel of Figure 6).
3. _Inclination angle \(i\)_: prograde encounters tend to result in strong interactions between the first two encounter objects, frequently
Figure 4: The orbital properties of the final binaries, for binary black holes (_top_ panels) and black hole – star binaries (_bottom_ panels). The _left_ panels compare the initial semimajor axis with the final one, and the _right_ panels show the semimajor axis and the eccentricity of the final binaries. The black diagonal lines in the _left_ panels depict the cases where the sizes of the initial and final binaries are identical. The grey curves in the _top-right_ panel indicate gravitational wave-driven merger time scales of 14 Gyr, 1 Gyr and 1 Myr for a binary consisting of 10 M\({}_{\odot}\) and 20 M\({}_{\odot}\) black holes. In the _top_ panels, the solid (hollow) markers indicate BBHs that would (not) merge in a Hubble time. On the other hand, the hollow markers in the _bottom_ panels indicate the models where the final outcome is an unstable triple; for this case, the orbital properties of the inner binary are presented.
leading to outcomes that involve member exchange (e.g., models with \(\phi=0^{\circ}\) and \(i=30^{\circ}\)) or stellar collisions (e.g., models with \(\phi=180^{\circ}\) and \(i=150^{\circ}\)). This is because the relative velocity between the two encountering objects is smaller, implying a larger gravitational focusing cross section (\(\propto v_{\rm esc}^{2}/\sigma^{2}\) in the strong-focusing limit, where \(v_{\rm esc}\) is the escape velocity at the distance of closest approach and \(\sigma\) is a typical relative velocity at infinity; a quantitative sketch is given after this list). A typical configuration for member exchange in prograde encounters is drawn in Figure 7. On the other hand, the first interactions in retrograde orbits are relatively weak due to the large relative speed between the two encountering objects. As a result, frequent outcomes are perturbations of the initial binary orbit, as depicted in Figure 6.
4. _Semimajor axis \(a\)_: given the same pericenter distance relative to \(a\) (\(r_{\rm p}\simeq 0.25a\)) for the simulations with varying \(a\), the type of the final outcomes does not show a strong dependence on \(a\). However, the size of the final binary is closely correlated with that of the initial binary, e.g., \(a\geq 76\,{\rm R}_{\odot}\) for final binaries in models with \(a/a_{\rm RL}=6\) (or \(a=101\,{\rm R}_{\odot}\)) and \(a\lesssim 61\,{\rm R}_{\odot}\) in models with \(a/a_{\rm RL}=2\) (or \(a=32\,{\rm R}_{\odot}\)).
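To make the gravitational-focusing argument in item 3 quantitative, the standard two-body capture cross section can be sketched as follows; this is a back-of-the-envelope estimate under our own assumed masses and velocity dispersion, not a calculation from the paper:

```python
import numpy as np

G, MSUN, RSUN = 6.674e-11, 1.989e30, 6.957e8   # SI units

def focusing_cross_section(r_p_rsun, m1_msun, m2_msun, sigma_kms):
    """Cross section for two bodies to pass within r_p:
    Sigma = pi * r_p^2 * (1 + v_esc^2 / sigma^2), with
    v_esc^2 = 2*G*(m1+m2)/r_p.  In the strong-focusing limit
    (v_esc >> sigma) Sigma grows as v_esc^2 / sigma^2."""
    r_p = r_p_rsun * RSUN
    v_esc2 = 2.0 * G * (m1_msun + m2_msun) * MSUN / r_p
    sigma = sigma_kms * 1e3
    return np.pi * r_p**2 * (1.0 + v_esc2 / sigma**2)

# e.g. a 10 Msun BH meeting a 10 Msun star at r_p ~ 17 Rsun, sigma ~ 10 km/s:
area = focusing_cross_section(17.0, 10.0, 10.0, 10.0)
print(f"enhancement over geometric: {area / (np.pi*(17*RSUN)**2):.0f}x")
```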
### Accretion
Our simulations show that stars can be disrupted in three-body interactions between BH-star binaries and single BHs via strong interactions with very small impact parameters, i.e., collisions. In such events, a merging BBH can subsequently form, and at least one of the BHs is surrounded by an accretion disk which can create EM transient phenomena. To zeroth order, the disk structure and the features of the accretion rate can be imprinted onto the light curves of such events.
The refinement scheme adopted for the simulations allows us to resolve the gas structure down to \(0.01\,{\rm R}_{\odot}\simeq 10^{3}\,r_{\rm g}\) from the BH. Although the regions that we can resolve are still too far from the BH to be directly related to the accretion process, we can provide an accurately resolved large-scale structure of the disks formed in star-destroying events, which can be used as initial conditions for detailed disk simulations. Here, we define a disk as a group of gas cells tightly bound to the BH and coherently orbiting in the azimuthal direction. The outer edge of the disk is defined as the radius containing 99% of the total bound mass orbiting at a velocity exceeding 1% of the local Keplerian speed \(v_{\rm kep}(r)=\sqrt{G[M(<r)+M_{\bullet}]/r}\), where \(M(<r)\) is the mass enclosed within \(r\).
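A minimal sketch of this disk-edge bookkeeping applied to per-cell arrays; the array names and data layout are our own assumptions, not the analysis code used for the simulations:

```python
import numpy as np

def disk_outer_edge(r, mass, v_phi, v_kep, bound, frac=0.99, vcut=0.01):
    """Outer disk edge: the radius enclosing `frac` of the total bound
    mass whose azimuthal velocity exceeds `vcut` of the local Keplerian
    speed.  Inputs are per-cell arrays: radius r from the BH, cell mass,
    azimuthal velocity v_phi, precomputed v_kep(r), and a boolean
    `bound` flag (negative total energy with respect to the BH)."""
    in_disk = bound & (v_phi > vcut * v_kep)
    order = np.argsort(r[in_disk])
    r_sorted = r[in_disk][order]
    m_cum = np.cumsum(mass[in_disk][order])
    # radius at which the cumulative bound mass reaches `frac` of the total
    return np.interp(frac * m_cum[-1], m_cum, r_sorted)
```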
We show in Figure 8 both the face-on (_left_ panels) and edge-on (_right_ panels) density distributions of the disks around the BH that destroys the star at the first encounter in four example models, and in Figure 9 the radial profiles of the aspect ratio, the density, the temperature, and the rotational velocity for all models where an accretion disk forms. The aspect ratio \(h/r\) is defined as the ratio of the first-moment density scale height, averaged over a given cylindrical radius, to the cylindrical radius. Here, we excluded Model 20. \(a2b1/2\phi 180i150\) from this analysis because the BH in that model is surrounded by a nearly spherical gas cloud, not by a disk. We nevertheless provide the accretion rate for that model as well, shown in Figure 10.
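Similarly, the first-moment scale height entering \(h/r\) can be sketched as a density-weighted average of \(|z|\) in cylindrical-radius bins; again schematic, with assumed array names:

```python
import numpy as np

def aspect_ratio_profile(R_cyl, z, rho, r_bins):
    """h/r per cylindrical-radius bin, with the first-moment scale
    height h(R) = sum(rho_i * |z_i|) / sum(rho_i) over cells in the bin."""
    idx = np.digitize(R_cyl, r_bins)
    h_over_r = np.full(len(r_bins) - 1, np.nan)
    for k in range(1, len(r_bins)):
        sel = idx == k
        if sel.any():
            h = np.sum(rho[sel] * np.abs(z[sel])) / np.sum(rho[sel])
            h_over_r[k - 1] = h / (0.5 * (r_bins[k - 1] + r_bins[k]))
    return h_over_r
```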
We find that the disks are thick and pressure-supported, and mostly confined within \(r\simeq 30\,{\rm R}_{\odot}\). In general, the aspect ratio \(h/r\) (_top-left_ panel of Figure 9) is comparable to or greater than order unity up to the outer edge of the disks; \(h/r\) declines from \(h/r\simeq 3-5\) to \(h/r\simeq 1\) outwards. The rotational velocity \(v^{\phi}\) near the mid-plane is sub-Keplerian (\(v^{\phi}/v_{\rm kep}\simeq 0.1-0.6\)), indicating the disk is not rotationally supported. The velocity ratio remains the same out to the outer disk edge. The density of the inner region stays flat at \(\rho\simeq(0.1-5)\) g cm\({}^{-3}\) up to 0.1 - 0.2 of the disk size, then declines steeply following an \(r^{-4}\) power law. On the other hand, the temperature does not show such flatness at \(r\lesssim 1\,{\rm R}_{\odot}\), but continuously decreases following an \(r^{-1}\) power law.
Finally, we present in Figure 10 the accretion rate of the initially
Figure 5: An example of a _non-BBH-forming_ encounter, Model \(17.a2b1/2\phi 0i30\). We depict the density distribution in the binary orbital plane at a few different times in units of \(t_{\rm p}\). At \(t/t_{\rm p}\simeq-16\) (top-1\({}^{\rm st}\)), the binary (star - green dot) and the single BH (yellow dot) approach each other. At \(t/t_{\rm p}=-3.21\) (top-3\({}^{\rm rd}\)), the two BHs encounter each other, resulting in the ejection of the initially single BH, while the initial binary orbit is significantly perturbed to become an interacting binary (bottom panels). Because of periodic interactions at pericenter, the binary orbit continues to evolve until the end of the simulation. The semimajor axis and eccentricity measured at the end of the simulation are \(\simeq 18\,{\rm R}_{\odot}\) and \(\simeq 0.4\), respectively.
single BHs that fully destroy the star at the first closest encounter. The general trend is that, upon disruption or collision, the accretion rate dramatically increases up to \(\dot{M}\simeq(10^{-6}-10^{-5})\) M\({}_{\odot}\) s\({}^{-1}\) and it takes around 80-100 hours until \(\dot{M}\) declines by a factor of 100 from its peak. When the binary is eccentric and the pericenter distance is sufficiently close, a periodic perturbation from the other BH at periastron results in periodic bursts on a time scale \(\simeq\) the orbital period (e.g., Model 15. \(a4b1/2\phi 180i150\) and Model 20. \(a2b1/2\phi 180i150\)). Although the accretion rate is super-Eddington, the total accreted mass is at most 0.1 M\({}_{\odot}\) (\(\lesssim 0.4\)%) until the end of the simulation, and the magnitude of the BH spin driven by accretion can be as large as 0.01.
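For scale, comparing these peak rates with the Eddington accretion rate (assuming, as our own choice, a 10% radiative efficiency) shows how strongly super-Eddington the flows are:

```python
MSUN = 1.989e33                      # g
C = 2.998e10                         # cm/s
L_EDD_PER_MSUN = 1.26e38             # erg/s per solar mass

def mdot_edd(m_bh_msun, eta=0.1):
    """Eddington accretion rate Mdot_Edd = L_Edd / (eta c^2), in Msun/s."""
    return L_EDD_PER_MSUN * m_bh_msun / (eta * C**2) / MSUN

ratio = 1e-5 / mdot_edd(10.0)        # peak simulated rate vs Eddington
print(f"Mdot_peak / Mdot_Edd ~ {ratio:.1e}")   # ~1e9: hyper-Eddington
```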
We have to caution that such extremely high accretion rates for stellar-mass BHs would result in strong outflows (e.g., Sadowski et al., 2014), which would regulate the accretion rate. Although we have
Figure 6: Schematic diagram showing three dominant configurations resulting in the perturbation of the original binary’s orbit in our simulations. The black arrows indicate the direction of motion of objects, and long grey arrows the trajectory of the incoming BH (black circle). In the first configuration (_top_ panels), for the prograde encounter with \(\phi<90^{\circ}\) and \(b\lesssim 1\), the incoming BH undergoes a strong encounter with the BH (blue circle) initially in the binary (small closest approach distance compared to the binary semimajor axis), quickly turns around and advances to the left (_top-right_ panel). This quick turn-around motion gives a momentum kick (green arrow) to the blue circle to the right with respect to the star (red circle). The orbit of the initial binary is perturbed. The second configuration (_bottom-left_ panel) is a distant fly-by where the incoming BH does not significantly interact with any of the binary members, and this happens when \(b\gtrsim 1\). The last configuration (_bottom-right_ panel) shows the case where the incoming BH passes through the binary without strong interactions with any of the binary members (e.g., \(b\simeq 1/4\) and \(a/a_{\rm RL}=4\)).
realized a significant improvement in resolving gas motions near the BHs compared to Paper 2 thanks to the use of refinement, feedback from the BHs is not included in our simulations, so it is likely that our accretion rates are overestimated. Nonetheless, if the luminosity is mostly driven by accretion, the features revealed in the accretion rate (e.g., periodic bursts) could possibly be imprinted in the light curves.
## 5 Discussion
### Formation of merging binary black holes
Our simulations show that close three-body encounters between a BH-star binary and a single BH can create a merging BBH (see the _top-left_ panel of Figure 4). One possibly dominant formation process we identified is the close interaction between the star and the incoming BH at the first closest approach, resulting in a stellar disruption, followed by the formation of a BBH. 5 out of 11 BBHs formed in our simulations would merge in a Hubble time via GW emission. The semimajor axes of the merging BBHs are \(\lesssim 114\,\mathrm{R}_{\odot}\) and their eccentricities are quite high, \(0.66\lesssim e\lesssim 0.97\). If the required conditions are met (\(r_{\mathrm{p}}\simeq 0.5\,a\), an encounter between the star and the incoming BH at the first closest approach), this type of encounter can form, albeit likely rarely, a very compact eccentric BBH: \(t_{\mathrm{GW}}\simeq 10^{4}\) yr in Model 20. \(a2b1/2\phi 180i150\).
To see whether the merging BBHs can have residual eccentricities when they enter the frequency band of LIGO (10 Hz to 10 kHz), we evolve the five binaries assuming their orbits evolve purely via GW emission until \(t_{\mathrm{GW}}=P\), where \(P\) is the binary orbital period. We solve Equations 5.6 and 5.7 in Peters (1964) simultaneously using a 4th-order Runge-Kutta method with an adaptive step size of \(10^{-3}t_{\mathrm{GW}}\). As a sanity check, we confirmed that our numerical solutions are consistent with the analytic solution (Equation 5.11 in Peters 1964) within fractional errors of \(\lesssim 10^{-8}\). Figure 11 shows the evolution of \(a\) and \(e\) of the five merging BBHs, starting from the values found in our simulations (marked by the symbols near the top-left corner). As shown in the figure, by the time the BBHs enter the LIGO frequency band, their residual eccentricities would be very small (\(e<10^{-5}\)).
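A minimal sketch of this orbit-averaged integration of Peters (1964, Eqs. 5.6 and 5.7); the fixed fractional step below is a simplified stand-in for the adaptive stepping described above:

```python
import numpy as np

G, C = 6.674e-11, 2.998e8            # SI units
MSUN, RSUN = 1.989e30, 6.957e8

def peters_rhs(a, e, m1, m2):
    """Orbit-averaged da/dt, de/dt from GW emission (Peters 1964)."""
    k = G**3 * m1 * m2 * (m1 + m2) / C**5
    da = -(64.0/5.0) * k / (a**3 * (1 - e**2)**3.5) * (1 + 73/24*e**2 + 37/96*e**4)
    de = -(304.0/15.0) * e * k / (a**4 * (1 - e**2)**2.5) * (1 + 121/304*e**2)
    return da, de

def evolve(a_rsun, e, m1_msun=20.0, m2_msun=10.0, f_stop=10.0):
    """RK4 evolution until the rest-frame GW frequency 2/P exceeds f_stop [Hz]."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_rsun * RSUN
    while 2.0 / (2*np.pi*np.sqrt(a**3 / (G*(m1 + m2)))) < f_stop:
        da, _ = peters_rhs(a, e, m1, m2)
        dt = 1e-3 * a / abs(da)      # small fraction of the decay time a/|da/dt|
        k1 = peters_rhs(a, e, m1, m2)
        k2 = peters_rhs(a + 0.5*dt*k1[0], e + 0.5*dt*k1[1], m1, m2)
        k3 = peters_rhs(a + 0.5*dt*k2[0], e + 0.5*dt*k2[1], m1, m2)
        k4 = peters_rhs(a + dt*k3[0], e + dt*k3[1], m1, m2)
        a += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        e = max(e + dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]), 0.0)
    return a / RSUN, e

print(evolve(6.70, 0.943))   # Model 20-like binary; residual e at 10 Hz is tiny
```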
Nonetheless, the circumbinary gas produced by the disruption of the star may affect the (at least early) orbital evolution, which may hence deviate from the purely GW-driven evolution considered above. The gas-binary interaction and the resulting binary evolution remain an active topic of study. A growing number of numerical works have suggested that a binary surrounded by a circumbinary disk can expand (e.g., Miranda et al. 2017; Munoz et al. 2019; Duffell et al. 2020) and can be driven into an eccentric orbit (e.g., Zrake et al. 2021; D'Orazio and Duffell 2021), depending on the disk and binary parameters, as opposed to the predictions from the commonly held picture of surrounding gas driving binaries into shrinking circular orbits (e.g., Armitage and Natarajan 2002). However, given the limited parameter space explored in previous work, it is not straightforward to predict the evolution of our unequal-mass, very eccentric BBHs surrounded by a possibly misaligned disk based on those results.
The remaining 6 BBHs with GW-driven merger timescales longer than a Hubble time are hard binaries in typical stellar cluster environments. This means that those binaries could become potential GW event candidates via weak interactions with other objects and a few strong interactions like the ones considered in this study.
### Electromagnetic counterparts of binary black hole merger
The close association of BBHs and stellar disruptions can have important implications for EM counterparts of BBH mergers. At the
Figure 7: Schematic diagram showing the dominant configuration resulting in an exchange of binary members. The black arrows indicate the direction of motion of objects, and long grey arrows (_right_ panel) the trajectory of the incoming BH (black circle). In retrograde encounters with \(\phi<90^{\circ}\), like configuration 1 for the orbit perturbation (Figure 6), the incoming BH strongly interacts with the BH (blue circle) initially in the binary, quickly turns around and advances to the right. This motion results in a momentum kick (green arrow) to the blue circle to the left with respect to the star (red circle). The initially single BH gravitationally captures the star and forms a binary.
Figure 8: Face-on (_left_) and edge-on (_right_) density distribution of disks around the BH that disrupts the star at the first closest approach in four selected models with \(i=30^{\circ}\) or \(150^{\circ}\): Model 6. \(a4b1\phi 180i30\) (1\({}^{\rm st}\) row), Model 15. \(a4b1/2\phi 180i150\) (2\({}^{\rm nd}\) row), Model 18. \(a2b1/2\phi 180i30\) (3\({}^{\rm rd}\) row), and Model 22. \(a6b1/2\phi 180i30\) (4\({}^{\rm th}\) row), at the end of the simulations. The white horizontal bar at the bottom-left corner of each panel shows the spatial scale, 4 R\({}_{\odot}\), except for the second row of panels where it is 2 R\({}_{\odot}\).
time the BBH forms, there would be a prompt EM transient phenomenon due to the stellar disruption. The very high accretion rate (Figure 10), along with the accretion-driven BH spin and magnetic field of debris inherited from the star, suggests that a jet can be launched. For such a case, the luminosity powered by the jet would track the accretion rate as \(\propto\dot{M}c^{2}\) with an uncertain efficiency factor. We also found that both BHs can be surrounded by the stellar debris and accrete, possibly suggesting that both BHs may be able to launch jets simultaneously, potentially leading to a unique observational signature.
In addition to the prompt EM emission, the existence of the surrounding gas when the BBH forms may result in a possible EM counterpart at the time of merger. This is quite similar to the situation found in Ryu et al. (2022), where an initially hard BBH encounters a single star and becomes surrounded by gas debris after disrupting the star. Perna et al. (2016) studied the evolution of an initially hyper-Eddington accretion disk which cools and shuts down the magnetorotational instability before the disk material is fully accreted. Under these conditions, the "dead disk" is expected to survive until the BBH merges, and to heat up and re-ignite during the merger process, hence yielding a possible EM counterpart to the GW event.
Figure 10: The accretion rates of the initially single BHs that fully destroy the star in BBH-forming simulations with \(\phi=180^{\circ}\), and \(i=30^{\circ}\) or \(150^{\circ}\).
Figure 9: Profiles of the structure of the disks in simulations with \(i=30^{\circ}\) or \(150^{\circ}\) where a BBH forms, including the four models shown in Figure 8: the aspect ratio, defined as the ratio of the density scale height to the cylindrical radius \(r\) (top-left), the ratio of the mass-weighted average of the azimuthal velocity along the midplane within the scale height to the Keplerian velocity \(v_{\rm kep}\) (top-right), the average density along the midplane within the scale height (bottom-left), and the mass-weighted average of the temperature along the midplane within the scale height (bottom-right). All the reported quantities are measured at the end of the simulations.
### Varieties of encounters
Although we consider three-body encounters between a circular binary and a single object with similar masses (the largest mass ratio is \(0.5\)), there could be a variety of these types of events involving, e.g., initially eccentric binaries and a wide range of masses of encountering objects.
Encounters involving massive stars (i.e., \(10\,\mathrm{M}_{\odot}\)) are likely to occur during the early evolutionary stages of star clusters unless there is another episode of star formation, since stars with mass \(>10\,\mathrm{M}_{\odot}\) would collapse to compact objects in tens of Myrs. Therefore, over the full cluster lifetime, the overall rate would indeed be higher for encounters involving less massive MS stars because such binaries would survive longer. Using Monte Carlo simulations of globular clusters, Kremer et al. (2018) showed that up to \(10\) detached BH-MS binaries can exist in clusters at an age of \(10-12\) Gyr, and the typical mass of the companion MS stars is \(\lesssim 1-2\,\mathrm{M}_{\odot}\), depending on the cluster properties. Even for this case, strong interactions between a low-mass MS star and the incoming BH at the first closest approach would have higher chances of forming BBHs than for the other cases where two BHs meet first.
At later times when all massive stars collapse to compact objects, interactions between stars and BHs with significantly different masses would be more probable. If the star is significantly less massive than both BHs, the interactions would be effectively two-body with small perturbations of the BH orbits by the star. However, if the star was disrupted by the incoming BH as in the _BBH-forming_ encounters, the stellar disruption would generate bright EM flares. Furthermore, resulting momentum kicks and gas dynamical friction would facilitate the formation of BBHs, unless the momentum kick is given to increase the relative kinetic energy of the BHs. This process would be most efficient when the star and the incoming BH have comparable masses.
If the encountering binary is eccentric, the binary members would spend most of their orbital time near apocenter, indicating that the cross-section would be enhanced by a factor of \(1+e\). Whereas the eccentricity increased in our models with initially circular binaries, initially eccentric binaries can instead be circularized, depending on the direction of the momentum kick associated with close interactions between the two objects at the first closest approach. We already demonstrated in Figures 3 and 5 that the momentum kick acts to add to or remove momentum from the BH in the binary, depending on whether the orbit is initially in a prograde or retrograde direction.
## 6 Summary and conclusions
In this work we have investigated the outcomes of three-body encounters between a \(20\,\mathrm{M}_{\odot}\) BH - \(10\,\mathrm{M}_{\odot}\) star circular binary and a \(10\,\mathrm{M}_{\odot}\) stellar-mass BH, using a suite of hydrodynamical simulations with the moving-mesh code AREPO. We have focused on the formation of BBHs, the conditions required for their formation, and the EM emission from those systems. We have considered a wide range of encounter parameters, i.e., varying the binary size (\(a\simeq 34\), \(68\), \(101\,\mathrm{R}_{\odot}\)), the impact parameter (\(b=1/4-2\)), the inclination angle (\(i=0^{\circ}\), \(30^{\circ}\), \(60^{\circ}\), \(120^{\circ}\), \(150^{\circ}\), and \(180^{\circ}\)), and the phase angle (\(\phi=0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\), \(135^{\circ}\), \(180^{\circ}\), \(225^{\circ}\), \(270^{\circ}\), and \(315^{\circ}\)), while we have kept fixed the masses of the star and of the BHs.
We have categorized the encounters into two classes depending on their outcomes. This classification is primarily determined by which types of objects meet at the first closest approach. When the star and the incoming single BH encounter first, their close interaction imparts a momentum kick to the BH, resulting in a dramatic decrease in the BH's speed. The BH is subsequently captured by the other bystander BH, forming a BBH. In this case, the star is frequently destroyed due to its close encounter with the BH. On the other hand, when two BHs encounter first, either the original binary's orbit is simply perturbed (prograde encounters), or the originally single BH captures the star, forming a new binary (member exchange, retrograde encounters). Although the most frequent outcomes are BH-star binaries, a disruption of the star and BBH formation are still possible.
The most important factors that determine the outcomes are the phase angle and the impact parameter. As explained above, the phase angle primarily demarcates the boundary between "BBH-forming" encounters and "non-BBH-forming" encounters. The impact parameter on the other hand affects the strength of interactions: for \(r_{\mathrm{p}}>a\), the incoming BH interacts weakly with the binary. As a result, the binary orbit is perturbed, or the binary members are exchanged. For \(r_{\mathrm{p}}\lesssim a\), interactions can become significant, possibly resulting in a disruption of the star when the star and the BH meet at the first closest approach. Although our simulations do not cover the entire parameter space for this type of encounters, the key dynamical processes can be extrapolated within this class of encounters to other initial parameters, and possibly also to other astrophysical systems (e.g., three-body encounters involving a massive black hole having a stellar companion and an isolated BH, forming extreme mass ratio inspirals).
The close correlation between BBH formation and stellar disruption in our systems has interesting implications for the formation channel of BBHs and EM counterparts of their merger. We confirm that three-body encounters between a BH-star binary and a BH can produce merging BBHs. In addition, we find that the BH that disrupts the star in the _BBH-forming_ encounters is promptly surrounded by an optically and geometrically thick disk with accretion flows towards the BH exceeding the Eddington limit. If a jet is launched from the system, the jet luminosity would likely track the accretion rate. If the disk remains long-lived and revives at merger, EM counterparts can be produced at the time of the BBH merger.
Our order-of-magnitude estimate for the encounter rate suggests that this type of encounters may be rarer than other types of three
Figure 11: The evolution of \(a\) and \(e\) of the five merging BBHs formed in three-body interactions due to GW emission. The markers depict \(a\) and \(e\) of the final merging binary black holes. The four grey horizontal lines indicate the semimajor axes at which the rest-frame GW frequency (twice the orbital frequency) is \(f_{\rm GW}=10^{-4}\), \(10^{-2}\), \(1\), and \(10^{2}\) Hz, respectively.
body encounters considered in Paper 1 (i.e., between binary BHs and single stars) and Paper 2 (i.e., between stellar-binaries and single BHs). However, given the simplified assumptions made here, more detailed estimates should be made for these encounters, taking their specific astrophysical environments accurately into account.
## Acknowledgements
The authors are grateful to the referee for constructive comments and suggestions. This research project was conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. The simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universitat Erlangen-Nurnberg (FAU) under the NHR project b166ea10. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683. R. Perna acknowledges support by NSF award AST-2006839.
## Data Availability
Any data used in this analysis are available on reasonable request from the first author.
|
2307.09846 | Enhanced bipartite entanglement and Gaussian quantum steering of
squeezed magnon modes | We theoretically investigate a scheme to entangle two squeezed magnon modes
in a double cavity-magnon system, where both cavities are driven by a two-mode
squeezed vacuum microwave field. Each cavity contains an optical parametric
amplifier as well as a macroscopic yttrium iron garnet (YIG) sphere placed near
the maximum bias magnetic fields such that this leads to the excitation of the
relevant magnon mode and its coupling with the corresponding cavity mode. We
have obtained optimal parameter regimes for achieving the strong magnon-magnon
entanglement and also studied the effectiveness of this scheme towards the
mismatch of both the cavity-magnon couplings and decay parameters. We have also
explored the entanglement transfer efficiency including Gaussian quantum
steering in our proposed system | Shaik Ahmed, M. Amazioug, Jia-Xin Peng, S. K. Singh | 2023-07-19T09:07:01Z | http://arxiv.org/abs/2307.09846v1 | # Enhanced bipartite entanglement and Gaussian quantum steering of squeezed magnon modes
###### Abstract
We theoretically investigate a scheme to entangle two squeezed magnon modes in a double cavity-magnon system, where both cavities are driven by a two-mode squeezed vacuum microwave field. Each cavity contains an optical parametric amplifier as well as a macroscopic yttrium iron garnet (YIG) sphere placed near the maximum bias magnetic fields such that this leads to the excitation of the relevant magnon mode and its coupling with the corresponding cavity mode. We have obtained optimal parameter regimes for achieving the strong magnon-magnon entanglement and also studied the effectiveness of this scheme towards the mismatch of both the cavity-magnon couplings and decay parameters. We have also explored the entanglement transfer efficiency including Gaussian quantum steering in our proposed system.
## I Introduction
Quantum entanglement [1] and Gaussian quantum steering [2; 3] are two major resources in the fields of quantum computing [4], quantum cryptography [5], quantum teleportation [6] and quantum information processing [7]. Many microscopic as well as macroscopic quantum systems have been proposed over the past decades for the study of quantum entanglement and other nonclassical quantum correlations, including superconducting qubits [8], atomic ensembles [9], cavity optomechanics [10; 11; 12; 13; 14; 15; 16] and cavity magnomechanical (CMM) systems [17; 18; 19; 20; 21; 22], which paves the way for advancements in the present era of quantum technology. In CMM systems, the magnons, defined as the collective excitations of a large number of spins in ferrimagnetic materials, play a very important role in the study of light-matter interactions due to their tunability, low damping, high spin density [23; 24] as well as their strong coupling with microwave photons [25; 26; 27; 28]. Moreover, other important macroscopic quantum phenomena such as magnon-induced effects [30], tunable magnomechanically induced transparency and absorption [31; 32], slow light [33], four-wave mixing [34], squeezed states [35; 36; 37; 38], nonclassical quantum correlations [39; 40], microwave-to-optical carrier conversion [41] and quantum sensing [42; 43] have also been successfully investigated in cavity magnomechanical systems.
To quantify the bipartite entanglement between the magnon and the microwave photon in CMM systems, one uses a well-known witness of bipartite entanglement, the logarithmic negativity [44]. A recent theoretical work [45] explored the logarithmic negativity between two magnon modes, where the optimal condition for achieving strong magnon-magnon entanglement is the resonant coupling of the microwave cavity with both magnon modes, whereas in the case of two microwave cavities [46; 47; 48] it is found that the detuning between the cavity and magnon modes significantly affects the bipartite entanglement. These works also found the presence of both one-way and two-way Gaussian quantum steering. Thus, all these studies demonstrate that the bipartite entanglement and quantum steering in CMM systems can be significantly controlled through various physical parameters. All this recent progress broadens our understanding of quantum correlations and facilitates further exploration of secure quantum protocols in such macroscopic quantum systems.
Motivated by these works, we study the quantum correlations and Gaussian quantum steering between two squeezed magnon modes of two yttrium iron garnet (YIG) spheres in a system of two spatially separated microwave cavities. Each cavity also contains an optical parametric amplifier (OPA) as well as a macroscopic YIG sphere placed near the maximum bias magnetic field, such that this leads to the excitation of the corresponding magnon mode and its coupling with the cavity mode. In addition, both cavities are simultaneously driven by a two-mode squeezed vacuum microwave field in our proposed system [49]. In this work, we find the generation of considerable bipartite entanglement between the two magnon modes, which we study as a function of the squeezing parameter and the mean thermal magnon number. Moreover, it can be seen clearly from our work that not all entangled states allow for quantum steering, whereas any state that can be steered must necessarily be entangled.
This paper is organized as follows. In Section II, we introduce the model Hamiltonian and derive the corresponding quantum Langevin equations (QLEs) together with their steady-state solution. In Section III, we discuss in detail the mathematical formulation of the bipartite entanglement and Gaussian quantum steering between the two magnon modes. Numerical results and related discussion are given in Section IV, and we conclude in Section V.
## II The model Hamiltonian
Our proposed system, shown in Fig. 1, consists of two microwave cavities and two magnon modes in two YIG spheres, which are respectively placed inside the cavities near the maximum magnetic fields of the cavity modes and simultaneously in uniform bias magnetic fields, such that both magnon modes couple strongly to their respective cavity modes [49; 50]. Each cavity contains a degenerate optical parametric amplifier (OPA) to produce squeezed light [51]. The Hamiltonian of the system in a rotating frame at frequency \(\omega_{j}\) can be written as
\[\mathcal{H}=\sum_{j=1,2}\Bigl\{\hbar\Delta_{c_{j}}c_{j}^{\dagger}c_{j}+\hbar\Delta_{m_{j}}m_{j}^{\dagger}m_{j}+\hbar g_{j}\bigl(c_{j}m_{j}^{\dagger}+c_{j}^{\dagger}m_{j}\bigr)+i\hbar\lambda_{j}\bigl(e^{i\theta}c_{j}^{\dagger 2}-e^{-i\theta}c_{j}^{2}\bigr)+i\hbar\mu_{j}\bigl(e^{i\nu}m_{j}^{\dagger 2}-e^{-i\nu}m_{j}^{2}\bigr)\Bigr\}, \tag{1}\]
where \(\Delta_{c_{j}}=\omega_{c_{j}}-\omega_{j}\), \(\Delta_{m_{j}}=\omega_{m_{j}}-\omega_{j}\), and \(c_{j}\) (\(c_{j}^{\dagger}\)), \(m_{j}\) (\(m_{j}^{\dagger}\)) are the annihilation (creation) operators of the \(j^{th}\) cavity and magnon modes, respectively, with \(\bigl[\mathcal{O},\mathcal{O}^{\dagger}\bigr]=1\) (\(\mathcal{O}\!=\!c_{j},m_{j}\)). \(\omega_{c_{j}}\) (\(\omega_{m_{j}}\)) is the resonance frequency of the \(j^{th}\) cavity mode (magnon mode). The frequency of the magnon mode, \(\omega_{m_{j}}\), is determined by the external bias magnetic field \(H_{j}\) and the gyromagnetic ratio \(\beta\) via \(\omega_{m_{j}}=\beta H_{j}\), and thus can be flexibly adjusted, and \(g_{j}\) is the coupling rate between the \(j^{th}\) cavity and magnon modes. The parameters \(\lambda_{j}\) and \(\theta\) represent, respectively, the nonlinear gain of the OPA and the phase of the driving field, while \(\mu_{j}\) is the magnon squeezing parameter and \(\nu\) the phase of the \(j^{th}\) squeezed magnon mode. The magnon squeezing can be achieved by transferring squeezing from a squeezed-vacuum microwave field [34], by the intrinsic nonlinearity of the magnetostriction (the so-called ponderomotive-like squeezing) [52], or by the anisotropy of the ferromagnet [53; 54].
In the frame rotating at the frequency \(\omega_{j}\), i.e., the frequency of the \(j^{th}\) mode of the input two-mode squeezed field, the QLEs of this model Hamiltonian are given by
\[\dot{c}_{j} = -(\kappa_{c_{j}}+i\Delta_{c_{j}})c_{j}-ig_{j}m_{j}+2\lambda_{j}e^{i\theta}c_{j}^{\dagger}+\sqrt{2\kappa_{c_{j}}}c_{j}^{in}, \tag{2}\] \[\dot{m}_{j} = -(\kappa_{m_{j}}+i\Delta_{m_{j}})m_{j}-ig_{j}c_{j}+2\mu_{j}e^{i\nu}m_{j}^{\dagger}+\sqrt{2\kappa_{m_{j}}}m_{j}^{in},\]
where \(\kappa_{c_{j}}\) (\(\kappa_{m_{j}}\)) is the decay rate of the \(j\)th cavity mode (magnon mode), \(\Delta_{c_{j}}=\omega_{c_{j}}-\omega_{j}\), \(\Delta_{m_{j}}=\omega_{m_{j}}-\omega_{j}\), and \(c_{j}^{in}\) (\(m_{j}^{in}\)) is the input noise operator for the \(j^{th}\) cavity mode (magnon mode). The two cavity input noise operators \(c_{j}^{in}\) are quantum correlated due to the injection of the two-mode squeezed field, and have the following non-zero correlations
Figure 1: Schematic diagram of a double cavity-magnon system where both the cavities are driven by a two-mode squeezed vacuum microwave field. Two YIG spheres are respectively placed inside the microcavities near the maximum magnetic fields of the cavity modes and simultaneously in uniform bias magnetic fields.
in the time domain [64]:
\[\langle c_{j}^{in}(t)c_{j}^{in\dagger}(t^{\prime})\rangle=(\mathcal{N}+1)\delta(t -t^{\prime}) \tag{3}\]
\[\langle c_{j}^{in\dagger}(t)c_{j}^{in}(t^{\prime})\rangle=\mathcal{N}\delta(t-t ^{\prime}) \tag{4}\]
\[\langle c_{j}^{in}(t)c_{j^{\prime}}^{in}(t^{\prime})\rangle=\mathcal{M}e^{-i \omega_{\mathcal{M}}(t+t^{\prime})}\delta(t-t^{\prime})\quad;\quad j\neq j^{\prime} \tag{5}\]
\[\langle c_{j}^{in\dagger}(t)c_{j^{\prime}}^{in\dagger}(t^{\prime})\rangle= \mathcal{M}e^{i\omega_{\mathcal{M}}(t+t^{\prime})}\delta(t-t^{\prime})\quad; \quad j\neq j^{\prime} \tag{6}\]
Here \(\mathcal{N}=\sinh^{2}r\), \(\mathcal{M}=\sinh r\cosh r\), and \(r\) is the squeezing parameter of the two-mode squeezed vacuum field, whereas the magnon input noise operators \(m_{j}^{in}\) have zero mean and are correlated as follows:
\[\langle m_{j}^{in}(t)m_{j}^{in\dagger}(t^{\prime})\rangle=(N_{m_{j}}+1)\delta (t-t^{\prime}) \tag{7}\]
\[\langle m_{j}^{in\dagger}(t)m_{j}^{in}(t^{\prime})\rangle=N_{m_{j}}\delta(t-t ^{\prime}) \tag{8}\]
where \(N_{m_{j}}=\left[\exp\left(\frac{\hbar\omega_{m_{j}}}{k_{B}T}\right)-1\right]^{-1}\) is the equilibrium mean thermal magnon number of the \(j^{th}\) mode, with \(T\) the environmental temperature and \(k_{B}\) the Boltzmann constant.
Writing each operator as the sum of its classical steady-state value and a small fluctuation and keeping terms to first order, the linearized QLEs for the fluctuations of the system read
\[\delta\dot{c}_{j} = -(\kappa_{c_{j}}+i\Delta_{c_{j}})\delta c_{j}-ig_{j}\delta m_{j}+2\lambda_{j}e^{i\theta}\delta c_{j}^{\dagger}+\sqrt{2\kappa_{c_{j}}}c_{j}^{in}, \tag{9}\] \[\delta\dot{m}_{j} = -(\kappa_{m_{j}}+i\Delta_{m_{j}})\delta m_{j}-ig_{j}\delta c_{j}+2\mu_{j}e^{i\nu}\delta m_{j}^{\dagger}+\sqrt{2\kappa_{m_{j}}}m_{j}^{in}.\]
To obtain explicit expressions for the degrees of freedom of the optical and magnon modes, we consider the EPR-type quadrature fluctuation operators of the two subsystems, defined as \(\delta Q_{j}=(\delta c_{j}+\delta c_{j}^{\dagger})/\sqrt{2}\), \(\delta P_{j}=i(\delta c_{j}^{\dagger}-\delta c_{j})/\sqrt{2}\), \(\delta q_{j}=(\delta m_{j}+\delta m_{j}^{\dagger})/\sqrt{2}\), \(\delta p_{j}=i(\delta m_{j}^{\dagger}-\delta m_{j})/\sqrt{2}\) (with analogous definitions for the input noises \(Q_{j}^{in},P_{j}^{in}\) and \(q_{j}^{in},p_{j}^{in}\)) [55; 56; 57; 58; 59; 60].
The above QLEs can be simplified as
\[\delta\dot{Q}_{j} = -\kappa_{c_{j}}\delta Q_{j}+\Delta_{c_{j}}\delta P_{j}+g_{j}\delta p_{j}+2\lambda\cos(\theta)\delta Q_{j}+2\lambda\sin(\theta)\delta P_{j}+\sqrt{2\kappa_{c_{j}}}Q_{j}^{in}, \tag{10}\] \[\delta\dot{P}_{j} = -\kappa_{c_{j}}\delta P_{j}-\Delta_{c_{j}}\delta Q_{j}-g_{j}\delta q_{j}-2\lambda\cos(\theta)\delta P_{j}+2\lambda\sin(\theta)\delta Q_{j}+\sqrt{2\kappa_{c_{j}}}P_{j}^{in},\] \[\delta\dot{q}_{j} = -\kappa_{m_{j}}\delta q_{j}+\Delta_{m_{j}}\delta p_{j}+g_{j}\delta P_{j}+2\mu\cos(\nu)\delta q_{j}+2\mu\sin(\nu)\delta p_{j}+\sqrt{2\kappa_{m_{j}}}q_{j}^{in},\] \[\delta\dot{p}_{j} = -\kappa_{m_{j}}\delta p_{j}-\Delta_{m_{j}}\delta q_{j}-g_{j}\delta Q_{j}-2\mu\cos(\nu)\delta p_{j}+2\mu\sin(\nu)\delta q_{j}+\sqrt{2\kappa_{m_{j}}}p_{j}^{in}.\]
Equation (10) takes the following compact matrix form
\[\dot{V}(t)=\mathcal{A}V(t)+\chi(t), \tag{11}\]
Here \(V(t)=[\delta Q_{1},\delta P_{1},\delta Q_{2},\delta P_{2},\delta q_{1},\delta p _{1},\delta q_{2},\delta p_{2}]^{T}\), \(\mathcal{A}\) is the drift matrix
\[\mathcal{A}=\begin{pmatrix}\mathcal{A}_{1}&\mathcal{A}_{3}\\ \mathcal{A}_{3}&\mathcal{A}_{2}\end{pmatrix} \tag{12}\]
where
\[\mathcal{A}_{1}=\begin{pmatrix}-\kappa_{c_{1}}+2\lambda\cos(\theta)&\Delta_{c_ {1}}+2\lambda\sin(\theta)&0&0\\ -\Delta_{c_{1}}+2\lambda\sin(\theta)&-\kappa_{c_{1}}-2\lambda\cos(\theta)&0&0 \\ 0&0&-\kappa_{c_{2}}+2\lambda\cos(\theta)&\Delta_{c_{2}}+2\lambda\sin(\theta)\\ 0&0&-\Delta_{c_{2}}+2\lambda\sin(\theta)&-\kappa_{c_{2}}-2\lambda\cos(\theta) \end{pmatrix} \tag{13}\]
and
\[\mathcal{A}_{2}=\begin{pmatrix}-\kappa_{m_{1}}+2\mu\cos(\nu)&\Delta_{m_{1}}+2\mu \sin(\nu)&0&0\\ -\Delta_{m_{1}}+2\mu\sin(\nu)&-\kappa_{m_{1}}-2\mu\cos(\nu)&0&0\\ 0&0&-\kappa_{m_{2}}+2\mu\cos(\nu)&\Delta_{m_{2}}+2\mu\sin(\nu)\\ 0&0&-\Delta_{m_{2}}+2\mu\sin(\nu)&-\kappa_{m_{2}}-2\mu\cos(\nu)\end{pmatrix} \tag{14}\]
and
\[\mathcal{A}_{3}=\begin{pmatrix}0&g_{1}&0&0\\ -g_{1}&0&0&0\\ 0&0&0&g_{2}\\ 0&0&-g_{2}&0\end{pmatrix} \tag{15}\]
and \(\chi(t)=[\sqrt{2\kappa_{c_{1}}}Q_{1}^{in},\sqrt{2\kappa_{c_{1}}}P_{1}^{in},\sqrt{2\kappa_{c_{2}}}Q_{2}^{in},\sqrt{2\kappa_{c_{2}}}P_{2}^{in},\sqrt{2\kappa_{m_{1}}}q_{1}^{in},\sqrt{2\kappa_{m_{1}}}p_{1}^{in},\sqrt{2\kappa_{m_{2}}}q_{2}^{in},\sqrt{2\kappa_{m_{2}}}p_{2}^{in}]^{T}\). The system is stable when all eigenvalues of the drift matrix \(\mathcal{A}\) in Eq. (12) have negative real parts, which corresponds to the so-called Routh-Hurwitz criterion [61]. The steady state of the system is completely characterized by an \(8\times 8\) covariance matrix (CM) \(\Sigma\), defined as \(\Sigma_{ij}=\langle V_{i}(t)V_{j}(t)+V_{j}(t)V_{i}(t)\rangle/2\) in the limit \(t\to\infty\). The solution for \(\Sigma\) can be obtained by directly solving the Lyapunov equation [62]
\[\mathcal{A}\Sigma+\Sigma\mathcal{A}^{T}=-\mathcal{D}, \tag{16}\]
where \(\mathcal{D}\) is the diffusion matrix defined by \(\mathcal{D}_{ij}\delta(t-t^{\prime})=\langle\chi_{i}(t)\chi_{j}(t^{\prime})+\chi_{j}(t^{\prime})\chi_{i}(t)\rangle/2\), given by
\[\mathcal{D}=\begin{pmatrix}\kappa^{\prime}&0&\sqrt{\kappa_{c_{1}}\kappa_{c_{2}}}\mathcal{M}&0&0&0&0&0\\ 0&\kappa^{\prime}&0&-\sqrt{\kappa_{c_{1}}\kappa_{c_{2}}}\mathcal{M}&0&0&0&0\\ \sqrt{\kappa_{c_{1}}\kappa_{c_{2}}}\mathcal{M}&0&\kappa^{\prime\prime}&0&0&0&0&0\\ 0&-\sqrt{\kappa_{c_{1}}\kappa_{c_{2}}}\mathcal{M}&0&\kappa^{\prime\prime}&0&0&0&0\\ 0&0&0&0&\gamma^{\prime}&0&0&0\\ 0&0&0&0&0&\gamma^{\prime}&0&0\\ 0&0&0&0&0&0&\gamma^{\prime\prime}&0\\ 0&0&0&0&0&0&0&\gamma^{\prime\prime}\end{pmatrix} \tag{17}\]
where \(\kappa^{\prime}=\kappa_{c_{1}}\big{(}\mathcal{N}+\frac{1}{2}\big{)}\), \(\kappa^{\prime\prime}=\kappa_{c_{2}}\big{(}\mathcal{N}+\frac{1}{2}\big{)}\), \(\gamma^{\prime}=\kappa_{m_{1}}\big{(}2N_{m_{1}}+1\big{)}\) and \(\gamma^{\prime\prime}=\kappa_{m_{2}}\big{(}2N_{m_{2}}+1\big{)}\).
The covariance matrix \(\Sigma_{(mm)}\) associated with the two magnon modes is given by
\[\Sigma_{(mm)}=\begin{pmatrix}\mathcal{X}&\mathcal{Z}\\ \mathcal{Z}^{T}&\mathcal{Y}\end{pmatrix} \tag{18}\]
The \(2\times 2\) sub-matrices \(\mathcal{X}\) and \(\mathcal{Y}\) in Eq. (18) describe the autocorrelations of the two magnon modes and \(2\times 2\) sub-matrix \(\mathcal{Z}\) in Eq. (18) denotes the cross-correlations of the two magnon modes.
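A minimal numerical sketch of this procedure, i.e., building \(\mathcal{A}\) and \(\mathcal{D}\), checking stability, solving the Lyapunov equation and extracting the magnon-magnon block; the decay rates and couplings are those quoted in Section IV, while the values of \(\lambda\), \(\mu\), \(r\) and the resonant detunings are illustrative choices of ours:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def drift_matrix(kc, km, Dc, Dm, g, lam, th, mu, nu):
    """Assemble the 8x8 drift matrix A of Eq. (12) for identical cavities
    and magnons, with ordering V = (Q1, P1, Q2, P2, q1, p1, q2, p2)."""
    A1 = np.zeros((4, 4)); A2 = np.zeros((4, 4)); A3 = np.zeros((4, 4))
    for j in (0, 2):
        A1[j:j+2, j:j+2] = [[-kc + 2*lam*np.cos(th),  Dc + 2*lam*np.sin(th)],
                            [-Dc + 2*lam*np.sin(th), -kc - 2*lam*np.cos(th)]]
        A2[j:j+2, j:j+2] = [[-km + 2*mu*np.cos(nu),   Dm + 2*mu*np.sin(nu)],
                            [-Dm + 2*mu*np.sin(nu),  -km - 2*mu*np.cos(nu)]]
        A3[j:j+2, j:j+2] = [[0.0, g], [-g, 0.0]]
    return np.block([[A1, A3], [A3, A2]])

kc = 2*np.pi*5e6; km = kc/5; g = 5*kc                 # Section IV values
lam, mu = 0.1*kc, 0.4*km                              # illustrative gains
th, nu, r, Nm = np.pi, 0.9*np.pi, 1.0, 0.0
N, M = np.sinh(r)**2, np.sinh(r)*np.cosh(r)

A = drift_matrix(kc, km, 0.0, 0.0, g, lam, th, mu, nu)  # resonant case
assert np.all(np.linalg.eigvals(A).real < 0)            # Routh-Hurwitz stability

D = np.zeros((8, 8))
D[:4, :4] = kc*(N + 0.5)*np.eye(4)                    # cavity noises, Eq. (17)
D[0, 2] = D[2, 0] = kc*M                              # Q1-Q2 squeezing correlations
D[1, 3] = D[3, 1] = -kc*M                             # P1-P2 squeezing correlations
D[4:, 4:] = km*(2*Nm + 1)*np.eye(4)                   # magnon thermal noises

Sigma = solve_continuous_lyapunov(A, -D)              # solves A S + S A^T = -D
Sigma_mm = Sigma[4:, 4:]                              # two-magnon covariance block
```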
## III Quantum correlations
### Quantum entanglement
The logarithmic negativity \(E_{m}\) is a measure or witness of entanglement in bipartite continuous-variable (CV) systems [63]. Mathematically, it can be expressed as:
\[E_{m}=\max[0,-\log(2\psi^{-})] \tag{19}\]
with \(\psi^{-}\) being the smallest symplectic eigenvalue of the partially transposed covariance matrix \(\Sigma_{(mm)}\) of the two magnon modes
\[\psi^{-}=\sqrt{\frac{\Gamma-\sqrt{\Gamma^{2}-4\det\Sigma_{(mm)}}}{2}} \tag{20}\]
where \(\Gamma=\det\mathcal{X}+\det\mathcal{Y}-2\det\mathcal{Z}\). The two magnon modes are entangled if and only if \(\psi^{-}<1/2\) (i.e., \(E_{m}>0\)).
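The logarithmic negativity can then be evaluated directly from the \(4\times 4\) block; a minimal sketch, consistent with the vacuum-variance-\(1/2\) convention used here and applicable, e.g., to the \(\Sigma_{(mm)}\) computed in the sketch of Section II:

```python
import numpy as np

def log_negativity(S):
    """Logarithmic negativity E_m = max(0, -ln(2*psi_minus)) for a 4x4
    two-mode covariance matrix S = [[X, Z], [Z^T, Y]], Eqs. (19)-(20)."""
    X, Y, Z = S[:2, :2], S[2:, 2:], S[:2, 2:]
    gamma = np.linalg.det(X) + np.linalg.det(Y) - 2.0*np.linalg.det(Z)
    psi_minus = np.sqrt((gamma - np.sqrt(gamma**2 - 4.0*np.linalg.det(S))) / 2.0)
    return max(0.0, -np.log(2.0*psi_minus))

# e.g., applied to the magnon-magnon block of the Lyapunov solution:
# E_m = log_negativity(Sigma_mm)
```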
### Gaussian quantum steering
Quantum steering is the ability of one observer to affect, through local measurements on her subsystem, the state of a distant subsystem. Gaussian quantum steering quantifies this asymmetric property between two entangled observers (here the two magnon modes), Alice (\(A\): magnon \(M_{1}\)) and Bob (\(B\): magnon \(M_{2}\)); it thus quantifies how much the two magnon modes are steerable. Using the covariance matrix \(\Sigma_{(mm)}\) of the two magnon modes, the Gaussian steering \(A\to B\) and \(B\to A\) are written as [2; 3]
\[S^{A\to B}=\max\left[0,\frac{1}{2}\ln\left(\frac{\det\mathcal{X}}{4\det\Sigma_{(mm)}}\right)\right],\qquad S^{B\to A}=\max\left[0,\frac{1}{2}\ln\left(\frac{\det\mathcal{Y}}{4\det\Sigma_{(mm)}}\right)\right]. \tag{21}\]
For our symmetric choice of parameters \(\det\mathcal{X}=\det\mathcal{Y}\), so \(S^{A\to B}=S^{B\to A}\). There are then two possibilities for steerability between \(A\) and \(B\): (i) no-way steering if \(S^{A\to B}=S^{B\to A}=0\), i.e., Alice cannot steer Bob and vice versa, even if the two modes are not separable; and (ii) two-way steering if \(S^{A\to B}=S^{B\to A}>0\), i.e., Alice can steer Bob and vice versa. Indeed, a non-separable state is not always a steerable state, while a steerable state is always non-separable.
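The steering measures follow from the same \(4\times 4\) block; a minimal sketch, again assuming the vacuum-variance-\(1/2\) convention:

```python
import numpy as np

def gaussian_steering(S):
    """One-way Gaussian steerings S_{A->B} and S_{B->A} for a 4x4 two-mode
    covariance matrix S = [[X, Z], [Z^T, Y]] (Eq. 21); the two coincide
    whenever det X = det Y, as for our symmetric parameter choices."""
    X, Y = S[:2, :2], S[2:, 2:]
    dS = np.linalg.det(S)
    s_ab = max(0.0, 0.5*np.log(np.linalg.det(X) / (4.0*dS)))
    s_ba = max(0.0, 0.5*np.log(np.linalg.det(Y) / (4.0*dS)))
    return s_ab, s_ba

# e.g.: S_AB, S_BA = gaussian_steering(Sigma_mm)
```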
## IV Results and discussion
In this section, we discuss the steady-state quantum correlations of the two magnon modes in the parameter regime reported experimentally [49; 64]: \(\omega_{c_{1}}/2\pi=10\) GHz, \(\kappa_{c}/2\pi=5\) MHz, \(\kappa_{m}=\kappa_{c}/5\), \(g_{1}=g_{2}=5\kappa_{c}\), \(\theta=\pi\) and \(\nu=0.9\pi\). For simplicity, we take \(N_{m_{1}}=N_{m_{2}}=N_{m}\), \(\kappa_{c_{1}}=\kappa_{c_{2}}=\kappa_{c}\) and \(\kappa_{m_{1}}=\kappa_{m_{2}}=\kappa_{m}\). Additionally, each YIG sphere used in our study has a diameter of 0.5 mm; these spheres are specifically chosen for their size and contain more than \(10^{17}\) spins.
In Fig. 2, we have plotted the logarithmic negativity \(E_{m}\) of the magnon-magnon subsystem as a function of \(\Delta_{c_{j}}\) and \(\Delta_{m_{j}}\) (\(j=1,2\)), with all other parameters fixed. It can be seen in Figs. 2(a) and (b) that when \(\Delta_{c_{j}}=\Delta_{m_{j}}=0\) (\(j=1,2\)), the entanglement between the two magnon modes is optimal. This observation can be attributed to the resonant transfer of quantum correlations from the input fields to the two magnon modes, facilitated by the linear cavity-magnon coupling.
Fig. 3(a) shows that the bipartite entanglement \(E_{m}\) increases with the squeezing parameter \(r\) of the two-mode input squeezed vacuum field and decreases with the temperature \(T\). The effect of the temperature is due to the decoherence phenomenon [65]. Furthermore, the logarithmic negativity \(E_{m}\) reaches its maximum when \(r\) falls within the range 1-1.5, whereas when \(r\) goes to zero the two magnon modes remain separable (\(E_{m}=0\)), as illustrated in Fig. 3(a). This shows the dependence of the bipartite entanglement of the two magnon modes on the squeezing parameter \(r\). In Fig. 3(b), we plot \(E_{m}\) as a function of the squeezing parameter \(r\) and of the ratio \(g_{2}/g_{1}\). We find that bipartite entanglement between the two magnon modes is generated as \(r\) increases, even for a wide range of mismatch between the two coupling strengths \(g_{2}\) and \(g_{1}\).
In Fig. 4, we plot the Gaussian steering \(S^{A\to B}\), \(S^{B\to A}\) and the logarithmic negativity \(E_{m}\) for the magnon-magnon subsystem as functions of the equilibrium mean thermal magnon number \(N_{m}\) for various values of the parameters \(\lambda\) and \(\mu\), with the other parameters fixed. It can be seen that \(S^{A\to B}\), \(S^{B\to A}\) and the entanglement \(E_{m}\) share the same evolution behavior. This figure studies the effect of \(N_{m}\) (temperature \(T\)) and of the parameters \(\lambda\) and \(\mu\) on the bipartite entanglement and the quantum steerings. Due to decoherence, both quantities decrease very quickly with increasing \(N_{m}\). Moreover, when we enhance \(\lambda\) and \(\mu\), the magnon-magnon entanglement as well as the two-way quantum steering remain finite over a wide range of temperature \(T\) (equivalently, of \(N_{m}\)). In addition, as depicted in Fig. 4, an entangled state is not always a steerable state, but a steerable state must be entangled: \(S^{A\to B}=S^{B\to A}>0\) implies \(E_{m}>0\) and hence witnesses the existence of Gaussian two-way steering, meaning that the two magnon modes are entangled and steerable from \(A\) to \(B\) and from \(B\) to \(A\). However, we get no-way steering when \(S^{A\to B}=S^{B\to A}=0\) while \(E_{m}>0\), so the measure of Gaussian steering always remains bounded by the bipartite entanglement \(E_{m}\).
In Fig. 5, we have plotted the logarithmic negativity \(E_{m}\) of the two magnon modes versus \(\kappa_{c}\) and \(\Delta_{m_{1}}\), with all other parameters fixed. The entanglement between the two magnon modes is maximum when \(\Delta_{m_{1}}=0\) and \(\kappa_{c}=3\times 10^{7}\) Hz, and the bipartite entanglement \(E_{m}\) decreases with decreasing decay rate \(\kappa_{c}\) and increasing detuning \(\Delta_{m_{1}}\).
## V Conclusions
We have theoretically investigated a scheme for the generation of bipartite entanglement and Gaussian quantum steering in a double microwave cavity-magnon hybrid system where a two-mode squeezed microwave vacuum field is transferred simultaneously into both cavities. We have obtained optimal parameter regimes for achieving strong magnon-magnon entanglement and explored the robustness of the scheme against a mismatch of the two cavity-magnon couplings, including the entanglement transfer efficiency. Our study of bipartite entangled states of two magnon modes in coupled microwave resonators has important applications in the coherent control of various nonclassical correlations in macroscopic quantum systems, and in further applications of such systems in quantum information processing as well as quantum communication.
|
2301.04034 | Existence and Blow-up of solutions for Stochastic Modified Two-component
Camassa-Holm System | In this paper, we consider the modified two-component Camassa-Holm System
with multiplicative noise. For these SPDEs, we first establish the local
existence and pathwise uniqueness of the pathwise solutions in Sobolev spaces
$H^{s}\times H^{s}, s>\frac{3}{2}$. Then we show that strong enough noise can
actually prevent blow-up with probability 1. Finally, we analyse the effects of
weak noise and present conditions on the initial data that lead to the global
existence and the blow-up in finite time of the solutions, and their associated
probabilities are also obtained. | Wujun Lv, Xing Huang | 2023-01-10T15:38:01Z | http://arxiv.org/abs/2301.04034v1 | # Existence and Blow-up of solutions for Stochastic Modified Two-component Camassa-Holm System
Wujun Lv
Department of Statistics, College of Science, Donghua University
201620, Shanghai, P. R. China
[email protected]
Xing Huang
Center for Applied Mathematics, Tianjin University
300072, Tianjin, P. R. China
[email protected]
Corresponding author
**Abstract:** In this paper, we consider the modified two-component Camassa-Holm System with multiplicative noise. For these SPDEs, we first establish the local existence and pathwise uniqueness of the pathwise solutions in Sobolev spaces \(H^{s}\times H^{s},s>\frac{3}{2}\). Then we show that strong enough noise can actually prevent blow-up with probability \(1\). Finally, we analyse the effects of weak noise and present conditions on the initial data that lead to the global existence and the blow-up in finite time of the solutions, and their associated probabilities are also obtained.
_Keywords_: Stochastic modified two-component Camassa-Holm system (SMCH2); Pathwise solutions; Global existence; Blow-up criterion; Blow-up scenarios.
## 1 Introduction
Consider the following integrable two-component Camassa-Holm (CH2) shallow water system
\[\left\{\begin{array}{ll}(u-u_{xx})_{t}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx}+\rho \rho_{x}=0,&t>0,\ x\in\mathbb{R},\\ \rho_{t}+(\rho u)_{x}=0,&t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),&x\in\mathbb{R},\\ \rho(0,x)=\rho_{0}(x),&x\in\mathbb{R}.\end{array}\right.\]
This system appeared initially in [1]. In 2008, it was derived by Constantin and Ivanov [2], who demonstrated its derivation, from the hydrodynamic point of view, within the shallow water theory for fluids. Similar to the Camassa-Holm equation, this system possesses peakon and multi-kink solutions and a bi-Hamiltonian structure [3, 4], and it is integrable. Well-posedness and the wave-breaking mechanism were discussed in [5, 6, 7], and the existence of global solutions was analyzed in [2, 6, 8].
Obviously, under the constraint \(\rho(x,t)=0\), this system reduces to the celebrated Camassa-Holm (CH) equation, which was derived physically by Camassa and Holm in [9] (found earlier by Fokas and Fuchssteiner [10] as a bi-Hamiltonian generalization of the KdV equation) by directly approximating the Hamiltonian for Euler's equation in the shallow water regime, with \(u(x,t)\) representing the free surface above a flat bottom. The CH equation is completely integrable [11, 12] and has infinitely many conservation laws [13]. Local well-posedness for initial data \(u_{0}\in H^{s}\) with \(s>3/2\) was proved in [14, 15]. One of the remarkable features of the CH equation is the presence of breaking waves as well as global solutions
in time. Wave breaking for a large class of initial data has been established in [14, 15, 16, 17, 18, 19]. Global solutions were also explored in [14, 16]. The solitary waves of the CH equation are peaked solitons and are orbitally stable [20]. If \(\rho(x,t)\neq 0\), this CH2 system is actually an extension of the CH equation.
However, a modified version of the two-component Camassa-Holm (MCH2) system allows a dependence on the average density \(\overline{\rho}\) as well as the pointwise density \(\rho\), and it is written as
\[\left\{\begin{array}{ll}(u-u_{xx})_{t}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx}+\rho \overline{\rho}_{x}=0,&t>0,\ x\in\mathbb{R},\\ \rho_{t}+(\rho u)_{x}=0,&t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),&x\in\mathbb{R},\\ \rho(0,x)=\rho_{0}(x),&x\in\mathbb{R},\end{array}\right. \tag{1.1}\]
where \(u\) denotes the velocity field, \(\rho=(1-\partial_{x}^{2})(\overline{\rho}-\overline{\rho}_{0})\) with some constant \(\overline{\rho}_{0}\). This system was introduced by Holm et al. in [21], and it does admit peaked solutions in the velocity and average density. The authors analytically identified the steepening mechanism that allows singular solutions to emerge from smooth, spatially confined initial data. They found that wave breaking in the fluid velocity does not imply a singularity in the pointwise density \(\rho\) at the point of vertical slope. Unlike the CH2 system, (1.1) may not be integrable. Its characteristic feature is that it amounts to strengthening the norm for \(\overline{\rho}\) from \(L^{2}\) to \(H^{1}\) in the potential energy term. Letting \(\gamma=\overline{\rho}-\overline{\rho}_{0}\), this leads to the conserved quantity \(\int_{\mathbb{R}}(u^{2}+u_{x}^{2}+\gamma^{2}+\gamma_{x}^{2})dx=\|u\|_{H^{1}}^{2}+\|\gamma\|_{H^{1}}^{2}\), which is absent in the CH2 system. This property inspired a series of interesting works providing deeper insight into the MCH2 system in recent years. The Cauchy problem for (1.1) has been studied in many works [22, 23, 21, 24]. It has been shown that this system is locally well-posed on the line [22] and on the circle [24]. Moreover, the authors presented several blow-up results [22, 23, 24, 25]. In addition, based on a conserved quantity, the authors established global existence results for strong solutions to the system [26].
Before introducing our model, we recall some theory of infinite dimensional stochastic analysis. Let
\[\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}, \mathcal{W}_{1},\mathcal{W}_{2}),\]
where \((\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0})\) is a complete filtered probability space, and \(\mathcal{W}_{1},\mathcal{W}_{2}\) are two cylindrical Wiener processes on some separable Hilbert space \(U\) with \(d\langle\mathcal{W}_{1},\mathcal{W}_{2}\rangle_{t}=\kappa dt,-1\leq\kappa\leq 1\). To be precise, we consider a separable Hilbert space \(U\) as well as a larger Hilbert space \(U_{0}\) such that the canonical embedding \(U\hookrightarrow U_{0}\) is Hilbert-Schmidt. Therefore we have
\[\mathcal{W}_{i}=\sum_{k=1}^{\infty}W_{k}^{i}e_{k}\in C([0,\infty),U_{0}),\ \ i=1,2,\]
where \(\{W_{k}^{i}\}_{k\geq 1}\) is a sequence of mutually independent one-dimensional Brownian motions and \(\{e_{k}\}_{k\in\mathbb{N}}\) is a complete orthonormal basis of \(U\).
To define the Ito stochastic integral
\[\int_{0}^{t}Gd\mathcal{W}_{i}=\sum_{k=1}^{\infty}\int_{0}^{t}Ge_{k}dW_{k}^{i},\ \ i=1,2\]
on \(H^{s}\), it is required in [27, 28] for the predictable stochastic process \(G\) to take values in the space of \(L_{2}(U;H^{s})\), the Hilbert-Schmidt operators from \(U\) to \(H^{s}\). We have
\[\bigg{(}\int_{0}^{t}Gd\mathcal{W}_{i},v\bigg{)}_{H^{s}}=\sum_{k=1}^{\infty} \int_{0}^{t}(Ge_{k},v)_{H^{s}}dW_{k}^{i},\ \ i=1,2.\]
Moreover, the Burkholder-Davis-Gundy inequality
\[\mathbb{E}\bigg{(}\sup_{t\in[0,T]}\bigg{\|}\int_{0}^{t}Gd\mathcal{W}_{i} \bigg{\|}_{H^{s}}^{p}\bigg{)}\leq C(p,s)\mathbb{E}\bigg{(}\int_{0}^{T}\|G\|_{L _{2}(U;H^{s})}^{2}ds\bigg{)}^{\frac{p}{2}},\ p\geq 1,\ i=1,2\]
holds for some constant \(C(p,s)>0\).
In this paper, we are interested in stochastic variants of the MCH2 system to model energy consuming/exchanging mechanisms in (1.1) that are driven by external stochastic influences. Adding multiplicative noise has also been connected to the prevailing hypotheses that the onset of turbulence in fluid models involves randomness [29, 30, 31]. Precisely, we consider stochastic modified two-component Camassa-Holm (SMCH2) system
\[\left\{\begin{array}{l}(u-u_{xx})_{t}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx}+\rho \overline{\rho}_{x}=(1-\partial_{x}^{2})h_{1}(t,u,\rho)\dot{\mathcal{W}}_{1}, \ \ t>0,\ x\in\mathbb{R},\\ \rho_{t}+(\rho u)_{x}=(1-\partial_{x}^{2})h_{2}(t,u,\rho)\dot{\mathcal{W}}_{2},\ \ t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),\ \ x\in\mathbb{R},\\ \rho(0,x)=\rho_{0}(x),\ \ x\in\mathbb{R},\end{array}\right. \tag{1.2}\]
where \(h_{1}(t,u,\rho),h_{2}(t,u,\rho)\) are typically nonlinear functions.
Let \(\gamma=\bar{\rho}-\bar{\rho}_{0}\), then \((1-\partial_{x}^{2})^{-1}\rho=\gamma\). Notice that the deterministic MCH2 type equations with the weakly dissipative term \(\lambda_{2}(1-\partial_{x}^{2})h_{1}(t,u,\rho),\lambda_{2}(1-\partial_{x}^{2} )h_{2}(t,u,\rho)\) have been introduced and studied by many scholars [32, 33, 34]. In order to model more general random energy exchange, we consider the possibly nonlinear noise term \((1-\partial_{x}^{2})h_{1}(t,u,\rho)\dot{\mathcal{W}}_{1},(1-\partial_{x}^{2} )h_{2}(t,u,\rho)\dot{\mathcal{W}}_{2}\) in (1.2), which will be used to compare with deterministic weakly dissipative MCH2 type equations.
In (1.2), the operator \((1-\partial_{x}^{2})^{-1}\) can be expressed by it's associated Green's function \(G(x)=e^{-|x|}/2\) with
\[[(1-\partial_{x}^{2})^{-1}f](x)=[G*f](x)=\frac{1}{2}\int_{\mathbb{R}}e^{-|x-y| }f(y)dy.\]
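As a quick numerical sanity check (ours, on a periodic grid wide enough that the tails of \(G\) are negligible): the Fourier symbol of \(G\) is \(1/(1+\xi^{2})\), so \((1-\partial_{x}^{2})^{-1}\) can be applied spectrally.

```python
import numpy as np

L, n = 40 * np.pi, 4096
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)

# the discrete Fourier transform of G(x) = e^{-|x|}/2 matches 1/(1 + xi^2)
G = 0.5 * np.exp(-np.abs(x))
Ghat = dx * np.fft.fft(np.fft.ifftshift(G)).real
print(np.max(np.abs(Ghat - 1.0 / (1.0 + xi**2))))   # ~1e-4, aliasing at the kink

def inv_helmholtz(f):
    """Apply (1 - d^2/dx^2)^{-1} f = G * f spectrally."""
    return np.fft.ifft(np.fft.fft(f) / (1.0 + xi**2)).real

f = np.exp(-x**2)
u = inv_helmholtz(f)
u_xx = np.fft.ifft(-(xi**2) * np.fft.fft(u)).real
print(np.max(np.abs(u - u_xx - f)))                  # ~ machine precision
```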
So the system (1.2) is equivalent to the following one
\[\left\{\begin{array}{l}du+[uu_{x}+F_{1}(u,\gamma)]dt=h_{1}(t,u,\gamma)d \mathcal{W}_{1},\ t>0,\ x\in\mathbb{R},\\ d\gamma+[u\gamma_{x}+F_{2}(u,\gamma)]dt=h_{2}(t,u,\gamma)d\mathcal{W}_{2},\ t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),\ x\in\mathbb{R},\\ \gamma(0,x)=\gamma_{0}(x),\ x\in\mathbb{R},\end{array}\right. \tag{1.3}\]
where \(F_{1}(u,\gamma)=\partial_{x}(1-\partial_{x}^{2})^{-1}(u^{2}+\frac{1}{2}u_{x}^ {2}+\frac{1}{2}\gamma^{2}-\frac{1}{2}\gamma_{x}^{2})\), \(F_{2}(u,\gamma)=(1-\partial_{x}^{2})^{-1}((u_{x}\gamma_{x})_{x}+u_{x}\gamma)\).
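For illustration, the nonlocal terms \(F_{1},F_{2}\) can be evaluated on a periodic grid with spectral derivatives; the sketch below (ours, with smooth sample fields of our choosing) mirrors the definitions above.

```python
import numpy as np

n, L = 1024, 16 * np.pi
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])

d_x = lambda f: np.fft.ifft(1j * xi * np.fft.fft(f)).real          # d/dx
inv_h = lambda f: np.fft.ifft(np.fft.fft(f) / (1.0 + xi**2)).real  # (1 - d_x^2)^{-1}

def F1(u, g):
    # F1(u, gamma) = d_x (1 - d_x^2)^{-1} (u^2 + u_x^2/2 + gamma^2/2 - gamma_x^2/2)
    return d_x(inv_h(u**2 + 0.5 * d_x(u)**2 + 0.5 * g**2 - 0.5 * d_x(g)**2))

def F2(u, g):
    # F2(u, gamma) = (1 - d_x^2)^{-1} ((u_x gamma_x)_x + u_x gamma)
    return inv_h(d_x(d_x(u) * d_x(g)) + d_x(u) * g)

u0, g0 = np.exp(-x**2), np.exp(-(x - 1.0)**2)
print(np.abs(F1(u0, g0)).max(), np.abs(F2(u0, g0)).max())
```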
The purpose of this paper is as follows:
\(\bullet\) The first goal of the present paper is to analyze the existence and uniqueness of pathwise solutions and to determine a possible blow-up criterion for the Cauchy problem (1.3). Under generic assumptions on \(h_{1}(t,u,\gamma),h_{2}(t,u,\gamma)\), we will show that (1.3) has a unique local pathwise solution (see Theorem 3.5 below).
\(\bullet\) The second goal of this work is to study the case of strong nonlinear noise and consider its effect. As we will see in (3.3) below, for the solution to (1.3), its \(H^{s}\times H^{s}\)-norm blows up if and only if its \(W^{1,\infty}\times W^{1,\infty}\)-norm blows up. This suggests choosing a noise coefficient involving the \(W^{1,\infty}\times W^{1,\infty}\)-norm of \((u,\gamma)\). Therefore in this work we consider the case that \(h_{1}(t,u,\gamma)d\mathcal{W}_{1}=h_{1}(t,u,\gamma)d(\sum_{i=1}^{\infty}W_{i }(t)e_{i})=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}udW_ {1}\), where \(\{e_{i}\}_{i\in\mathbb{N}^{*}}\) denote an orthonormal basis, \(\{W_{i}\}_{i\in\mathbb{N}^{*}}\) is a family of independent standard real-valued Wiener processes. Similarly, we consider the case that \(h_{2}(t,u,\gamma)d\mathcal{W}_{2}=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}\gamma dW_{1}\), where \(\theta>0,0<a_{*}\leq a^{2}(t)\leq a^{*}\). To simplify the model, we write \(W_{1}\) as \(W\), and we will try to determine the range of \(\theta\) and \(a^{*},a_{*}\) such that the solution exists globally in time.
\(\bullet\) The third goal of this paper is to consider weak linear noise effects associated with the phenomenon of wave breaking. Due to Theorem 3.6 below, we see that if wave breaking occurs, the noise term does not grow fast. Hence we consider \(\theta=0\) in (1.3), namely a non-autonomous pre-factor depending on time \(t\). Precisely, we consider the MCH2 equation with linear multiplicative noise. We will study the conditions that lead to the global existence and the blow-up in finite time of the solution, and then analyze the associated probabilities.
## 2 Notation and preliminaries
In this section, we introduce some notation and recall some elementary results; for completeness, we list the lemmas and skip some of their proofs for conciseness.
Let \(L^{2}\) be the usual space of square-integrable functions on \(\mathbb{R}\). For any real number \(s\in\mathbb{R}\), \(D^{s}=(1-\partial_{x}^{2})^{s/2}\) is defined by \(\widehat{D^{s}f}(x)=(1+x^{2})^{\frac{s}{2}}\widehat{f}(x)\), where \(\widehat{f}\) is the Fourier transform of \(f\). The Sobolev space \(H^{s}\) is defined as
\[H^{s}\triangleq\{f\in L^{2}(\mathbb{R}):\|f\|_{H^{s}}^{2}=\int_{ \mathbb{R}}(1+x^{2})^{s}|\widehat{f}(x)|^{2}dx<\infty\},\]
and the inner product
\[(f,g)_{H^{s}}:=\int_{\mathbb{R}}(1+x^{2})^{s}\widehat{f}(x)\overline{\widehat{g}(x)}dx=(\widehat{D^{s}f},\widehat{D^{s}g})_{L^{2}}.\]
In addition, \(x\lesssim y\), \(x,y\in\mathbb{R}\) means that there exists \(C>0\), which may vary from line to line and depend on various parameters, such that \(x\leq Cy\). Hereafter, \(C\) denotes a positive constant, whose value may change from one place to another.
Firstly, we summarize some auxiliary results, which will be used to prove our main results. Define the regularizing operator \(T_{\epsilon}\) on \(\mathbb{R}\) as
\[T_{\epsilon}f(x):=(1-\epsilon^{2}\Delta)^{-1}f(x)=\int_{\mathbb{R }}\frac{e^{i\xi x}\hat{f}(\xi)}{1+\epsilon^{2}|\xi|^{2}}d\xi,\ \ \epsilon\in(0,1). \tag{2.1}\]
Since \(T_{\epsilon}\) can be characterized by its Fourier multipliers, see [35], it is easy to see that
\[[D^{s},T_{\epsilon}]=0,\] \[(T_{\epsilon}f,g)_{L^{2}}=(f,T_{\epsilon}g)_{L^{2}},\] \[\|T_{\epsilon}u\|_{H^{s}}\leq\|u\|_{H^{s}}. \tag{2.2}\]
where \([D^{s},T_{\epsilon}]=D^{s}T_{\epsilon}-T_{\epsilon}D^{s}\). We therefore have the following lemma.
**Lemma 2.1**: _[_35_]_ _Let \(f,g:\mathbb{R}\rightarrow\mathbb{R}\) such that \(g\in W^{1,\infty}\) and \(f\in L^{2}\). Then for some \(C>0\),_
\[\|[T_{\epsilon},(g\cdot\nabla)]f\|_{L^{2}}\leq C\|g\|_{W^{1,\infty}}\|f\|_{L^{ 2}}.\]
Furthermore, we also need to recall some useful commutator estimates.
**Lemma 2.2**: _[_36_]_ _If \(r>0\), then \(H^{r}\bigcap L^{\infty}\) is an algebra. Moreover, \(\|uv\|_{H^{r}}\lesssim\|u\|_{L^{\infty}}\|v\|_{H^{r}}+\|u\|_{H^{r}}\|v\|_{L^{ \infty}}\)._
**Lemma 2.3**: _[_36_]_ _Let \(r>0\), if \(u\in H^{r}\bigcap W^{1,\infty}\) and \(v\in H^{r-1}\bigcap L^{\infty}\), then_
\[\|[D^{r},u]v\|_{L^{2}}\lesssim\|\partial_{x}u\|_{L^{\infty}}\|D^{r-1}v\|_{L^{ 2}}+\|D^{r}u\|_{L^{2}}\|v\|_{L^{\infty}},\]
_where \([D^{r},u]v=D^{r}uv-uD^{r}v\)._
A direct application of Lemma 2.2-2.3 gives the following estimates and we omit the proof here.
**Lemma 2.4**: _For the \(F_{1},F_{2}\) defined in (1.3) and for any \(u,\gamma,u_{1},u_{2},\gamma_{1},\gamma_{2}\in H^{s}\) with \(s>1/2\), we have_
\[\|F_{1}(u,\gamma)\|_{H^{s}}\lesssim (\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|_{H^{s}}+\| \gamma\|_{H^{s}}), s>3/2,\] \[\|F_{2}(u,\gamma)\|_{H^{s}}\lesssim (\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|_{H^{s}}+\| \gamma\|_{H^{s}}), s>3/2,\] \[\|F_{1}(u_{1},\gamma_{1})-F_{1}(u_{2},\gamma_{2})\|_{H^{s}}\lesssim (\|u_{1}\|_{H^{s}}+\|u_{2}\|_{H^{s}}+\|\gamma_{1}\|_{H^{s}}+\| \gamma_{2}\|_{H^{s}})\] \[(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}}), s>3/2,\] \[\|F_{1}(u_{1},\gamma_{1})-F_{1}(u_{2},\gamma_{2})\|_{H^{s}}\lesssim (\|u_{1}\|_{H^{s+1}}+\|u_{2}\|_{H^{s+1}}+\|\gamma_{1}\|_{H^{s+1}}+\| \gamma_{2}\|_{H^{s+1}})\]
\[(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}}), 1/2<s<3/2,\] \[\|F_{2}(u_{1},\gamma_{1})-F_{2}(u_{2},\gamma_{2})\|_{H^{s}}\lesssim (\|u_{1}\|_{H^{s}}+\|\gamma_{2}\|_{H^{s}})(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}}), s>3/2,\] \[\|F_{2}(u_{1},\gamma_{1})-F_{2}(u_{2},\gamma_{2})\|_{H^{s}}\lesssim (\|u_{1}\|_{H^{s+1}}+\|\gamma_{2}\|_{H^{s+1}})(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}}), 1/2<s<3/2.\]
In addition, we provide the following algebraic inequality, which will be used in the proof of Theorem 3.6.
**Lemma 2.5**: _Let \(c,M_{1},M_{2}>0\). Assume \(a,b_{*},b^{*}>0\),_
\[\mbox{either }\eta>1,\ 0<\sqrt{b_{*}}<b(t)<\sqrt{b^{*}}\mbox{ and }2b_{*}>b^{*}\] \[\mbox{or }\eta=1,0<\sqrt{b_{*}}<b(t)<\sqrt{b^{*}}\mbox{ and }2b_{*}>a+b^{*}.\]
_Then there is a constant \(C>0\) such that for all \(0\leq x_{1}\leq M_{1}y_{1}<\infty,0\leq x_{2}\leq M_{2}y_{2}<\infty\),_
\[\frac{a(x_{1}+x_{2})(y_{1}^{2}+y_{2}^{2})+b(t)(1+x_{1}+x_{2})^{ \eta}(y_{1}^{2}+y_{2}^{2})}{1+y_{1}^{2}+y_{2}^{2}}- \frac{2b(t)(1+x_{1}+x_{2})^{\eta}(y_{1}^{2}+y_{2}^{2})^{2}}{(1+y_{1}^{2}+y_{2} ^{2})^{2}}\] \[+\frac{cb(t)(1+x_{1}+x_{2})^{\eta}(y_{1}^{2}+y_{2}^{2})^{2}}{(1+y_ {1}^{2}+y_{2}^{2})^{2}(1+\log(1+y_{1}^{2}+y_{2}^{2}))}\leq C.\]
**Proof:** Since \(0\leq\frac{x_{1}}{M_{1}}\leq y_{1}<\infty,0\leq\frac{x_{2}}{M_{2}}\leq y_{2}<\infty\), we obtain
\[\frac{a(x_{1}+x_{2})(y_{1}^{2}+y_{2}^{2})+b(t)(1+x_{1}+x_{2})^{ \eta}(y_{1}^{2}+y_{2}^{2})}{1+y_{1}^{2}+y_{2}^{2}}-\frac{2b(t)(1+x_{1}+x_{2})^ {\eta}(y_{1}^{2}+y_{2}^{2})^{2}}{(1+y_{1}^{2}+y_{2}^{2})^{2}}\] \[+\frac{cb(t)(1+x_{1}+x_{2})^{\eta}(y_{1}^{2}+y_{2}^{2})^{2}}{(1+y_ {1}^{2}+y_{2}^{2})^{2}(1+\log(1+y_{1}^{2}+y_{2}^{2}))}\] \[\leq a(x_{1}+x_{2})+b^{*}(1+x_{1}+x_{2})^{\eta}-2b_{*}(1+x_{1}+x_{2})^ {\eta}\frac{(y_{1}^{2}+y_{2}^{2})^{2}}{(1+y_{1}^{2}+y_{2}^{2})^{2}}+\frac{cb^{* }(1+x_{1}+x_{2})^{\eta}}{1+\log(1+(\frac{x_{1}}{M_{1}})^{2}+(\frac{x_{2}}{M_{2} })^{2})}.\]
When \(\eta>1\) and \(2b_{*}>b^{*}\) or \(\eta=1\) and \(2b_{*}>a+b^{*}\), we find that the inequality will tend to \(-\infty\) for \(x_{1}\to\infty\) or \(x_{2}\to\infty\) (namely \(y_{1}\to\infty\) or \(y_{2}\to\infty\)), which completes the proof.
\(\Box\)
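A brute-force numerical scan (ours; the constants are illustrative choices satisfying \(2b_{*}>a+b^{*}\) for \(\eta=1\), with \(b(t)\) frozen at a constant in \((\sqrt{b_{*}},\sqrt{b^{*}})\)) is consistent with the boundedness asserted by Lemma 2.5.

```python
import numpy as np

# illustrative constants (ours): eta = 1, a = 1, b_* = 1.9, b^* = 2.1,
# so 2*b_* > a + b^*; b(t) frozen at b = 1.4 in (sqrt(1.9), sqrt(2.1))
a, b, c_const, eta = 1.0, 1.4, 0.5, 1.0
M1 = M2 = 1.0

def expr(x1, x2, y1, y2):
    s, q = x1 + x2, y1**2 + y2**2
    t1 = (a * s * q + b * (1.0 + s)**eta * q) / (1.0 + q)
    t2 = 2.0 * b * (1.0 + s)**eta * q**2 / (1.0 + q)**2
    t3 = c_const * b * (1.0 + s)**eta * q**2 / ((1.0 + q)**2 * (1.0 + np.log1p(q)))
    return t1 - t2 + t3

rng = np.random.default_rng(0)
y = rng.uniform(0.0, 1e3, size=(2, 200_000))        # y1, y2
x = rng.uniform(0.0, 1.0, size=(2, 200_000)) * y    # 0 <= x_i <= M_i * y_i
print(expr(x[0], x[1], y[0], y[1]).max())           # finite: bounded above
```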
Finally, we present the following lemma, which will be used to establish Theorem 3.7 on the global existence of solutions.
**Lemma 2.6**: _Let \(\alpha(t)\) be a deterministic and locally bounded function. Suppose that \(\lambda>0\), and \(x(t)\) satisfies_
\[x(t)=e^{\int_{0}^{t}\alpha(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}-\int_{0}^{t} \lambda\alpha^{2}(t^{{}^{\prime}})dt^{{}^{\prime}}}.\]
_For \(R>1\) define \(\tau_{R}=\inf\{t\geq 0:x(t)>R\}\). Then we have_
\[\mathbb{P}\{\tau_{R}=\infty\}\geq 1-R^{-2\lambda}.\]
**Proof:** Noting that
\[x(t)^{2\lambda}=e^{\int_{0}^{t}2\lambda\alpha(t^{{}^{\prime}})dW_{t^{{}^{ \prime}}}-\frac{1}{2}\int_{0}^{t}(2\lambda)^{2}\alpha^{2}(t^{{}^{\prime}})dt^{ {}^{\prime}}}\]
is an exponential martingale, by the martingale stopping theorem, we can derive \(\mathbb{E}x(t\wedge\tau_{R})^{2\lambda}=1\). This yields
\[\mathbb{P}(\tau_{R}=\infty)=\lim_{n\to\infty}\mathbb{P}(\tau_{R}>n)=\lim_{n\to \infty}\mathbb{P}(x(n\wedge\tau_{R})^{2\lambda}<R^{2\lambda})\geq\lim_{n\to \infty}\left(1-\frac{\mathbb{E}x(n\wedge\tau_{R})^{2\lambda}}{R^{2\lambda}} \right)=1-R^{-2\lambda},\]
which finishes the proof.
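A simple Monte Carlo check of Lemma 2.6 (ours; constant \(\alpha\), finite horizon and discrete monitoring, so the estimate is slightly biased upward) illustrates the bound; for constant \(\alpha\) the bound is in fact an equality over an infinite horizon.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, R = 1.0, 1.0, 3.0
T, n_steps, n_paths = 20.0, 2_000, 4_000
dt = T / n_steps

# exponent of x(t) = exp(alpha*W_t - lam*alpha^2*t) on a time grid
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
expo = np.cumsum(alpha * dW - lam * alpha**2 * dt, axis=1)
frac_never = (expo.max(axis=1) <= np.log(R)).mean()

print(frac_never, 1.0 - R ** (-2.0 * lam))   # MC estimate vs 1 - R^{-2*lambda}
```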
**Lemma 2.7**: _Let \(s>3/2\), \(F_{1},F_{2}\) and \(T_{\epsilon}\) be given in (1.3) and (2.1) respectively. Then there is a constant \(K=K(s)>0\) such that for all \(\epsilon>0\),_
\[|(T_{\epsilon}[uu_{x}],T_{\epsilon}u)_{H^{s}}|+|(T_{\epsilon}F_{1 }(u,\gamma),T_{\epsilon}u)_{H^{s}}|+|(T_{\epsilon}[u\gamma_{x}],T_{\epsilon} \gamma)_{H^{s}}|+|(T_{\epsilon}F_{2}(u,\gamma),T_{\epsilon}\gamma)_{H^{s}}|\] \[\leq K(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|_{H^{s}}^{ 2}+\|\gamma\|_{H^{s}}^{2}).\]
**Proof:** According to (2.2), we derive
\[(T_{\epsilon}[uu_{x}],T_{\epsilon}u)_{H^{s}} = (D^{s}T_{\epsilon}[uu_{x}],D^{s}T_{\epsilon}u)_{L^{2}}\] \[= ([D^{s},u]u_{x},D^{s}T_{\epsilon}^{2}u)_{L^{2}}+([T_{\epsilon},u ]D^{s}u_{x},D^{s}T_{\epsilon}u)_{L^{2}}+(uD^{s}T_{\epsilon}u_{x},D^{s}T_{ \epsilon}u)_{L^{2}},\] \[(T_{\epsilon}[u\gamma_{x}],T_{\epsilon}\gamma)_{H^{s}} = (D^{s}T_{\epsilon}[u\gamma_{x}],D^{s}T_{\epsilon}\gamma)_{L^{2}}\] \[= ([D^{s},u]\gamma_{x},D^{s}T_{\epsilon}^{2}\gamma)_{L^{2}}+([T_{ \epsilon},u]D^{s}\gamma_{x},D^{s}T_{\epsilon}\gamma)_{L^{2}}+(uD^{s}T_{ \epsilon}\gamma_{x},D^{s}T_{\epsilon}\gamma)_{L^{2}}.\]
_Then by Lemma 2.1, Lemma 2.2, (2.2) and Sobolev embedding Theorem, we have_
\[|(T_{\epsilon}[uu_{x}],T_{\epsilon}u)_{H^{s}}|+|(T_{\epsilon}[u\gamma_{x}],T_ {\epsilon}\gamma)_{H^{s}}|\lesssim(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})(\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}).\]
_In addition, using Lemma 2.4 and (2.2), we obtain that_
\[|(T_{\epsilon}F_{1}(u,\gamma),T_{\epsilon}u)_{H^{s}}|+|(T_{\epsilon}F_{2}(u, \gamma),T_{\epsilon}\gamma)_{H^{s}}|\lesssim(\|u\|_{W^{1,\infty}}+\|\gamma\|_ {W^{1,\infty}})(\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}).\]
_Combining the above two inequalities, we complete the proof._\(\Box\)
## 3 Local existence and uniqueness for SMCH2
In this section, we consider the following stochastic system with multiplicative noise (1.3) in \(H^{s}(\mathbb{R})\times H^{s}(\mathbb{R})\).
### Assumptions
For the main results in this paper, we rely on the following assumptions concerning the random perturbation term in (1.3). We assume that \((h_{1},h_{2}):[0,\infty)\times(H^{s}\times H^{s})\ni(t,u,\gamma)\to(h_{1}(t,u,\gamma),h_{2}(t,u,\gamma))\in L_{2}(U,H^{s}\times H^{s})\) are continuous in \((t,u,\gamma)\). Moreover, we assume
**Assumption 3.1**: _(H.1) There exists some non-decreasing function \(f:[0,\infty)\to[0,\infty)\) with \(f(0)=0\) such that for all \((u,\gamma)\in H^{s}\times H^{s},s>1/2\),_
\[\sum_{i=1}^{2}\|h_{i}(t,u,\gamma)\|_{L_{2}(U,H^{s})}\leq f(\|u\|_{W^{1,\infty}}+\| \gamma\|_{W^{1,\infty}})(1+\|u\|_{H^{s}}+\|\gamma\|_{H^{s}}). \tag{3.1}\]
_(H.2) There exists some non-decreasing function \(g:[0,\infty)\to[0,\infty)\) such that for all \((u_{1},\gamma_{1}),(u_{2},\gamma_{2})\in H^{s}\times H^{s},s>1/2\),_
\[\sup_{\|u_{1}\|_{H^{s}},\|\gamma_{1}\|_{H^{s}},\|u_{2}\|_{H^{s}},\|\gamma_{2}\|_{H^{s}}\leq N} \sum_{i=1}^{2}\|h_{i}(t,u_{1},\gamma_{1})-h_{i}(t,u_{2},\gamma_{2})\|_{L_{2 }(U,H^{s})}\] \[\leq g(N)\cdot(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^ {s}}),N\geq 1.\]
**Assumption 3.2**: \(h_{1}(t,u,\gamma)d\mathcal{W}_{1}=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})^{\theta}udW,h_{2}(t,u,\gamma)d\mathcal{W}_{2}=a(t)(1+\|u\|_{W^{1, \infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}\gamma dW\) _for a standard 1-D Brownian motion \(W\) and \(\theta>0\), \(0<a_{*}\leq a^{2}(t)\leq a^{*}\) for all \(t\)._
**Assumption 3.3**: \(h_{1}(u,\gamma)d\mathcal{W}_{1}=b(t)udW\)_, \(h_{2}(u,\gamma)d\mathcal{W}_{2}=b(t)\gamma dW\) for a standard 1-D Brownian motion \(W\), and there are constants \(b_{*},b^{*}>0\) such that \(0<b_{*}\leq b^{2}(t)\leq b^{*}\) for all \(t\)._
### Definitions of the solutions.
Next, we give the definition of pathwise solution to (1.3).
**Definition 3.4**: _(Pathwise solutions). Let \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},\mathcal{W}_{1},\mathcal{W}_{2})\) be a fixed stochastic basis. Let \(s>3/2\) and \(z_{0}=(u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. 1. A local pathwise solution to (1.3) is a pair \((z,\tau)\), where \(\tau\geq 0\) is a stopping time satisfying \(\mathbb{P}\{\tau>0\}=1\) and \(z=(u,\gamma):\Omega\times[0,\tau)\to H^{s}\times H^{s}\) is an \(\mathcal{F}_{t}\)-adapted \(H^{s}\times H^{s}\)-valued process satisfying \(\mathbb{P}-a.s.\)_
\[z\in C([0,\tau);H^{s}\times H^{s}), \tag{3.2}\]
_and \(\mathbb{P}-a.s.\),_
\[u(t)-u(0)+\int_{0}^{t}[uu_{x}+F_{1}(u,\gamma)]dt^{{}^{\prime}}= \int_{0}^{t}h_{1}(t^{{}^{\prime}},u,\gamma)d\mathcal{W}_{1},\] \[\gamma(t)-\gamma(0)+\int_{0}^{t}[u\gamma_{x}+F_{2}(u,\gamma)]dt^{ {}^{\prime}}=\int_{0}^{t}h_{2}(t^{{}^{\prime}},u,\gamma)d\mathcal{W}_{2},\ \ t\in[0,\tau).\]
_2. Local pathwise uniqueness: if given any two local pathwise solutions \((z_{1},\tau_{1})\) and \((z_{2},\tau_{2})\) with \(\mathbb{P}\{z_{1}(0)=z_{2}(0)\}=1\), we have_
\[\mathbb{P}\{z_{1}(t)=z_{2}(t),\ \ t\in[0,\tau_{1}\wedge\tau_{2})\}=1.\]
_3. Additionally, \((z,\tau^{*})\) is called a maximal pathwise solution to (1.3) if \(\tau^{*}>0\) almost surely and there is an increasing sequence \(\tau_{n}\to\tau^{*}\) such that for any \(n\in\mathbb{N}\), \((z,\tau_{n})\) is a pathwise solution to (1.3) and on the set \(\{\tau^{*}<\infty\}\),_
\[\sup_{t\in[0,\tau_{n}]}(\|u\|_{H^{s}}+\|\gamma\|_{H^{s}})\geq n,\ n\geq 1.\]
_4. If \((z,\tau^{*})\) is a maximal pathwise solution and \(\tau^{*}=\infty\) almost surely, then we call that the pathwise solution exists globally._
### Main results and remarks.
Now, we summarize our major contributions: the existence of pathwise solutions, the global well-posedness of (1.3), and blow-up results. The concrete proofs will be provided in the remainder of the paper.
**Theorem 3.5**: _(Maximal solutions) Let \(s>3/2\), and \(h_{1}(t,u,\gamma),h_{2}(t,u,\gamma)\) satisfy Assumption 3.1. For a given stochastic basis \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}, \mathcal{W}_{1},\mathcal{W}_{2})\), if \((u_{0},\gamma_{0})\) is an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable, then there is a local unique pathwise solution \((z,\tau)\) to (1.3) in the sense of Definition 3.4 with_
\[z\in C([0,\tau);H^{s}\times H^{s}).\]
_Moreover, \((z,\tau)\) can be extended to a unique maximal pathwise solution \((z,\tau^{*})\) and the following blow up scenario satisfies \(\mathbb{P}-a.s.\) on the set \(\{\tau^{*}<\infty\}\),_
\[1_{\{\limsup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})=\infty\}}=1_{\{\limsup_{t\to\tau^{*}}(\|u(t)\|_{W^{1,\infty}}+\|\gamma(t)\|_{W^{1,\infty}})=\infty\}}. \tag{3.3}\]
**Remark 3.1**: _The proof of Theorem 3.5 combines techniques used in the papers [35, 37, 38, 39, 40, 41]: one constructs an approximating sequence for a truncated problem with a cut-off in the \(W^{1,\infty}\times W^{1,\infty}\)-norm. Such a cut-off enforces linear growth of \(u\) and \(\gamma\) and guarantees the global existence of the approximate solutions._
Turning to noise-driven regularization effects, the blow-up scenario (3.3) suggests relating the noise coefficient to the \(W^{1,\infty}\times W^{1,\infty}\)-norm of \((u,\gamma)\). Therefore we consider a scalable noise impact, i.e. we assume \(h_{1}(t,u,\gamma)d\mathcal{W}_{1}=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}udW,h_{2}(t,u,\gamma)d\mathcal{W}_{2}=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}\gamma dW\) for a standard 1-D Brownian motion \(W\), some \(\theta>0\), and \(0<a_{*}\leq a^{2}(t)\leq a^{*}\) for all \(t\). When \(a^{*},a_{*}\) and \(\theta\) satisfy certain stronger conditions, the noise term prevents the formation of singularities.
**Theorem 3.6**: _(Global existence for strong nonlinear noise). Let Assumption 3.2 hold and assume that \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},W)\) is a fixed stochastic basis. Let \(s>5/2\), \((u_{0},\gamma_{0})\in H^{s}\times H^{s}\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Assume that \(\theta\) and \(a^{*},a_{*}(0<a_{*}\leq a^{2}(t)\leq a^{*})\) satisfy_
\[either\ 2a_{*}>a^{*},\ \theta>1/2\ or\ 2a_{*}>K+a^{*},\ \theta=1/2,\]
_where \(K=K(s)\) is the constant introduced in Lemma 2.7. Then the corresponding maximal solution \((z,\tau^{*})\) to (1.3) satisfies_
\[\mathbb{P}\{\tau^{*}=\infty\}=1.\]
**Remark 3.2**: _Theorem 3.6 means that blow-up of pathwise solutions might only be observed if the noise is weak. According to Theorem 3.6, we can see that if wave breaking occurs, the noise term will not bring rapid growth. Therefore, we consider \(\theta=0\) but a non-autonomous pre-factor dependent on time \(t\) is introduced._
To investigate such noise, we analyze the simpler form \(h_{1}(t,u,\gamma)d\mathcal{W}_{1}=b(t)udW,h_{2}(t,u,\gamma)d\mathcal{W}_{2}=b(t)\gamma dW\), where \(W\) is a standard 1-D Brownian motion. Even in this linear noise case the situation is quite interesting, allowing for global existence as well as blow-up of solutions. For global existence, we can identify two cases.
Using Lemma 2.2-Lemma 2.4 and the integration by parts, we conclude that there is a \(C=C(s)>1\) such that
\[-\int_{\mathbb{R}}D^{s}v_{1}D^{s}(v_{1}v_{1x})dx-\int_{\mathbb{R }}D^{s}v_{1}D^{s}F_{1}(v_{1},v_{2})dx-\int_{\mathbb{R}}D^{s}v_{2}D^{s}(v_{1}v_ {2x})dx-\int_{\mathbb{R}}D^{s}v_{2}D^{s}F_{2}(v_{1},v_{2})dx\] \[\leq\frac{1}{2}C(\|v_{1}\|_{W^{1,\infty}}+\|v_{2}\|_{W^{1,\infty} })(\|v_{1}\|_{H^{s}}^{2}+\|v_{2}\|_{H^{s}}^{2}). \tag{3.4}\]
**Theorem 3.7**: _(Global existence for weak noise I). Let \(s>3/2\), Assumption 3.3 be verified and \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},W)\) be a fixed stochastic basis. Assume \((u_{0},\gamma_{0})\) is an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Let \(Q=Q(s)>0\) be the constant such that the embedding inequalities \(\|u\|_{W^{1,\infty}}<Q\|u\|_{H^{s}},\|\gamma\|_{W^{1,\infty}}<Q\|\gamma\|_{H^{s}}\) hold. Let \(C=C(s)>1\) be as in (3.4). If there are \(R>1\) and \(\lambda_{1}>1\) satisfying \(\mathbb{P}-a.s.\)_
\[\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2}<\frac{b_{*}^{2}}{4C^{2}Q^{2} \lambda_{1}^{2}R},\]
_then (1.3) has a maximal solution \((z,\tau^{*})\) satisfying for any \(0<\lambda_{2}<\frac{\lambda_{1}-1}{\lambda_{1}}\) the estimate_
\[\mathbb{P}\bigg{\{}\|u(t)\|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2}<\frac{b_{*}^ {2}}{C^{2}Q^{2}\lambda_{1}^{2}}\ \ for\ all\ t>0\bigg{\}}\geq 1-R^{-2\lambda_{2}}.\]
**Remark 3.3**: _Theorem 3.7 presents a global existence result under the above bound on the initial data. Such a result cannot be observed in the deterministic case._
**Theorem 3.8**: _(Global existence for weak noise II) Let \(s>5/2\), Assumption 3.3 be verified and \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},W)\) be a fixed stochastic basis. \((u_{0},\gamma_{0})\) is an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. If_
\[\mathbb{P}\{(1-\partial_{x}^{2})u_{0}(x)>0,\forall x\in\mathbb{R}\}=p,\ \ \mathbb{P}\{(1-\partial_{x}^{2})u_{0}(x)<0,\forall x\in\mathbb{R}\}=q,\]
_and there exists some \(x_{0}\in\mathbb{R}\) such that_
\[\mathbb{P}\{(1-\partial_{x}^{2})u_{0}(x)\leq 0,\ x\leq x_{0}\ \ and\ \ (1- \partial_{x}^{2})u_{0}(x)\geq 0,\ x\geq x_{0}\}=m,\]
_for some \(p,q,m\in[0,1]\), then the corresponding maximal solution \((z,\tau^{*})\) to (1.3) satisfies_
\[\mathbb{P}\{\tau^{*}=\infty\}\geq p+q+m.\]
**Remark 3.4**: _The proof of Theorem 3.8 depends on the analysis of a PDE with random coefficient. When \(b(t)=0\) and taking \((p,q,m)=(1,0,0)\), \((p,q,m)=(0,1,0)\) or \((p,q,m)=(0,0,1)\) in Theorem 3.8, we obtain the global existence for the deterministic MCH2 system. Therefore, in this sense, Theorem 3.8 covers the deterministic result._
**Theorem 3.9**: _(Wave breaking criterion for weak noise I) Let \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},W)\) be a fixed stochastic basis and \(s>5/2\). Let Assumption 3.3 be verified and \((u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. If for some \(c\in(0,1)\) and \(x_{0}\in\mathbb{R}\),_
\[u_{0x}(x_{0})<-\frac{1}{2}\sqrt{\frac{(b^{*})^{2}}{c^{2}}+4(\|u_{0}\|_{H^{1}}^ {2}+\|\gamma_{0}\|_{H^{1}}^{2})}-\frac{b^{*}}{2c}\ \mathbb{P}-a.s., \tag{3.5}\]
_then the maximal solution \((z,\tau^{*})\) to (1.3) satisfies_
\[\mathbb{P}\{\tau^{*}<\infty\}\geq\mathbb{P}\left\{e^{\int_{0}^{t}b(t^{{}^{ \prime}})dW_{t^{{}^{\prime}}}+\int_{0}^{t}\frac{b^{*}-b^{2}(t^{{}^{\prime}})}{ 2}dt^{{}^{\prime}}}\geq c\ for\ all\ t\right\}>0.\]
**Remark 3.5**: _Theorem 3.9 detects solution singularities in finite time for certain initial data, while Theorem 3.7 provides a global existence result. We stress that these two results do not contain each other. In Theorem 3.7, assuming \(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2}<\frac{b_{*}^{2}}{4C^{2}Q^{2}\lambda_{1}^{2}R}\), \(z\) exists globally with probability greater than \(1-R^{-2\lambda_{2}}\). In Theorem 3.9, (3.5) implies that \(\|u_{0}\|_{H^{s}}^{2}>\frac{1}{Q^{2}}\|u_{0}\|_{W^{1,\infty}}^{2}>\frac{(b^{*})^{2}}{c^{2}Q^{2}}>\frac{b_{*}^{2}}{4C^{2}Q^{2}\lambda_{1}^{2}R}\)._
**Theorem 3.10**: _(Wave breaking criterion for weak noise II) Let \(\mathcal{S}=(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0},W)\) be a fixed stochastic basis and \(s>5/2\). Let Assumption 3.3 be verified and \((u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. If for some \(c\in(0,1)\),_
\[\begin{split}\int_{\mathbb{R}}u_{0x}^{3}(x)dx<-\sqrt{\frac{(b^{* })^{2}}{4c^{2}}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})^{2}+\frac{15 }{8}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})^{3}}\\ -\frac{b^{*}}{2c}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2} )\ \mathbb{P}-a.s.,\end{split} \tag{3.6}\]
_then the maximal solution \((z,\tau^{*})\) to (1.3) satisfies_
\[\mathbb{P}\{\tau^{*}<\infty\}\geq\mathbb{P}\left\{e^{\int_{0}^{t}b(t^{{}^{ \prime}})dW_{t^{{}^{\prime}}}+\int_{0}^{t}\frac{b^{*}-b^{2}(t^{{}^{\prime}})}{ 2}dt^{{}^{\prime}}}\geq c\ for\ all\ t\right\}>0.\]
**Remark 3.6**: _Theorem 3.10 detects solution singularities in finite time for certain initial data. (3.6) implies that \(\int_{\mathbb{R}}u_{0x}^{3}(x)dx<-\frac{b^{*}}{c}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})\), which combined with \(\min_{x\in\mathbb{R}}u_{0x}(x)(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})\leq\int_{\mathbb{R}}u_{0x}^{3}(x)dx\) yields \(\|u_{0}\|_{H^{s}}^{2}>\frac{1}{Q^{2}}\|u_{0}\|_{W^{1,\infty}}^{2}>\frac{(b^{*})^{2}}{c^{2}Q^{2}}>\frac{b_{*}^{2}}{4C^{2}Q^{2}\lambda_{1}^{2}R}\). So, the initial value conditions here and those given in Theorem 3.7 do not contain each other._
## 4 Sketch of the Proof of Theorem 3.5
We consider the initial value problem (1.3). The proof of existence and uniqueness of pathwise solutions can be carried out by standard procedures used in many works, see [35, 37, 39, 40, 42, 43] for more details. Therefore we only give a sketch.
1. (Approximation scheme) The first step is to construct a suitable approximation scheme. For any \(R>1\), we let \(\chi_{R}(x):[0,\infty)\rightarrow[0,1]\) be a \(C_{0}^{\infty}\) function such that \(\chi_{R}(x)=1\) for \(x\in[0,R]\) and \(\chi_{R}(x)=0\) for \(x>2R\). Then we consider the following cut-off problem on \(\mathbb{R}\),
\[\left\{\begin{array}{ll}du+\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})[uu_{x}+F_{1}(u,\gamma)]dt=\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_ {W^{1,\infty}})h_{1}(t,u,\gamma)d{\cal W}_{1},\ t>0,\\ d\gamma+\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})[u\gamma_{x} +F_{2}(u,\gamma)]dt=\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})h_ {2}(t,u,\gamma)d{\cal W}_{2},\ t>0,\\ u(\omega,0,x)=u_{0}(\omega,x),\\ \gamma(\omega,0,x)=\gamma_{0}(\omega,x).\end{array}\right. \tag{4.1}\]
From Lemma 2.4, we observe that the nonlinear terms \(F_{1}(u,\gamma),F_{2}(u,\gamma)\) preserve the \(H^{s}\times H^{s}\)-regularity of \((u,\gamma)\) for any \(s>3/2\). However, in order to apply the stochastic differential equation (SDE) theory in Hilbert space to (4.1), we will mollify the transport terms \(uu_{x},u\gamma_{x}\), since the products \(uu_{x}\) and \(u\gamma_{x}\) lose one order of regularity. For this reason, we consider the following approximation scheme:
\[\left\{\begin{array}{ll}du+G_{1,\epsilon}(u,\gamma)dt=\chi_{R}(\|u\|_{W^{1, \infty}}+\|\gamma\|_{W^{1,\infty}})h_{1}(t,u,\gamma)d{\cal W}_{1},\ t>0,\ x\in \mathbb{R},\\ d\gamma+G_{2,\epsilon}(u,\gamma)dt=\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_ {W^{1,\infty}})h_{2}(t,u,\gamma)d{\cal W}_{2},\ t>0,\ x\in\mathbb{R},\\ G_{1,\epsilon}(u,\gamma)=\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})\big{[}J_{\epsilon}((J_{\epsilon}u)(J_{\epsilon}u)_{x})+F_{1}(u, \gamma)\big{]},\\ G_{2,\epsilon}(u,\gamma)=\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})\big{[}J_{\epsilon}((J_{\epsilon}u)(J_{\epsilon}\gamma)_{x})+F_{2}( u,\gamma)\big{]},\\ u(0,x)=u_{0}(x)\in H^{s},\ \gamma(0,x)=\gamma_{0}(x)\in H^{s},\end{array}\right. \tag{4.2}\]
where \(J_{\epsilon}\) is the Friedrichs mollifier. According to the theory of SDE in Hilbert space (see for example [28, 44]), for a fixed stochastic basis \({\cal S}=(\Omega,{\cal F},\mathbb{P},\{{\cal F}_{t}\}_{t\geq 0},{\cal W}_{1},{ \cal W}_{2})\) and for \((u_{0},\gamma_{0})\in H^{s}\times H^{s}\) with \(s>5/2\), (4.2) admits a unique solution \((u_{\epsilon},\gamma_{\epsilon})\in C([0,T_{\epsilon}),H^{s}\times H^{s})\).
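A minimal sketch (ours) of the Friedrichs mollification \(J_{\epsilon}f=\rho_{\epsilon}*f\), with the standard bump \(\rho(x)=Ce^{-1/(1-x^{2})}\) on \(|x|<1\) and \(\rho_{\epsilon}(x)=\epsilon^{-1}\rho(x/\epsilon)\), discretized on a uniform grid:

```python
import numpy as np

def mollify(f, x, eps):
    """Friedrichs mollification J_eps f = rho_eps * f on a uniform grid,
    with rho(x) = C*exp(-1/(1-x^2)) for |x| < 1, normalized to unit mass."""
    dx = x[1] - x[0]
    u = np.arange(-eps, eps + dx, dx) / eps
    with np.errstate(divide="ignore", over="ignore"):
        rho = np.where(np.abs(u) < 1.0,
                       np.exp(-1.0 / np.maximum(1.0 - u**2, 1e-300)), 0.0)
    rho /= rho.sum() * dx                        # discrete unit mass
    return dx * np.convolve(f, rho, mode="same")

x = np.linspace(-10.0, 10.0, 2001)
f = np.sign(x)                                   # a rough (discontinuous) function
jf = mollify(f, x, 0.5)
mask = (np.abs(x) > 1.0) & (np.abs(x) < 9.0)     # away from the jump and boundary
print(np.abs(jf - f)[mask].max())                # ~ 0: mollification reproduces f there
```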
In addition, the uniform \(L^{\infty}(\Omega;W^{1,\infty}\times W^{1,\infty})\) bound provided by the cut-off function \(\chi_{R}\) enables us to split the expectations \(\mathbb{E}(\|u_{\epsilon}\|_{H^{s}}^{2}\|u_{\epsilon}\|_{W^{1,\infty}}|{\cal F}_{0})\), \(\mathbb{E}(\|u_{\epsilon}\|_{H^{s}}^{2}\|\gamma_{\epsilon}\|_{W^{1,\infty}}|{\cal F}_{0})\) and close an a priori \(L^{2}(\Omega,H^{s}\times H^{s},\mathbb{P}(\cdot|{\cal F}_{0}))\) estimate for \(u_{\epsilon},\gamma_{\epsilon}\). Arguing along the lines of the proof of Lemma 4.1, we find that for each fixed \(\epsilon\), if \(T_{\epsilon}<\infty\), then \(\limsup_{t\to T_{\epsilon}}(\|u_{\epsilon}\|_{W^{1,\infty}}+\|\gamma_{\epsilon}\|_{W^{1,\infty}})=\infty\). Due to the cut-off in (4.2), for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), \(\|u_{\epsilon}\|_{W^{1,\infty}},\|\gamma_{\epsilon}\|_{W^{1,\infty}}\) remain bounded, and hence \((u_{\epsilon},\gamma_{\epsilon})\) is actually a global-in-time solution, that is, \((u_{\epsilon},\gamma_{\epsilon})\in C([0,\infty),H^{s}\times H^{s})\ \mathbb{P}-a.s.\)
2. (Pathwise solution to the cut-off problem in \(H^{s}\times H^{s}\) with \(s>5/2\)) By applying stochastic compactness arguments based on Prokhorov's and Skorokhod's Theorems, we obtain almost sure convergence for a new approximating sequence \(((\tilde{u}_{\epsilon},\tilde{\gamma_{\epsilon}}),\tilde{\cal W}_{1\epsilon},\tilde{\cal W}_{2\epsilon})\) defined on a new probability space. By virtue of a refined martingale representation Theorem [45, Theorem A.1], we may let \(\epsilon\to 0\) in \(((\tilde{u}_{\epsilon},\tilde{\gamma_{\epsilon}}),\tilde{\cal W}_{1\epsilon},\tilde{\cal W}_{2\epsilon})\) to obtain a martingale solution in \(H^{s}\times H^{s}\) with \(s>5/2\). Here, the Gyongy-Krylov characterization [46] of convergence in probability can be used to prove the convergence of the original approximate solutions; one can refer to [35, Theorem 1.7] for more details.
Finally, since \(F_{1}(u,\gamma),F_{2}(u,\gamma)\) satisfy the estimates as in Lemma 2.4 and \(h_{1}(t,u,\gamma),h_{2}(t,u,\gamma)\) satisfies Assumption 3.1, we conclude that \(G_{1,\epsilon},G_{2,\epsilon},\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1, \infty}})h_{1},\chi_{R}(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})h_{2}\) are Lipschitz continuous. So, one can obtain the pathwise uniqueness easily. Then by the Yamada-Watanabe principle, we derive the existence and uniqueness of the pathwise solution to (4.2) denoted by \(z^{R}=(u^{R},\gamma^{R})\).
3. (Remove the cut-off and extend the range of \(s\) to \(s>3/2\))
Let \(\tau_{R}:=R\wedge\inf\{t\geq 0:\|u^{R}(t)\|_{W^{1,\infty}}+\|\gamma^{R}(t)\|_{W^{1,\infty}}>R\}\). By the pathwise uniqueness for (4.2), we conclude that \(z^{R}(t)=z^{\widetilde{R}}(t),\ \ t\in[0,\tau_{R}\wedge\tau_{\widetilde{R}})\). In particular, \(\tau_{R}\) is increasing in \(R\). Let \(\tau^{*}=\lim_{R\rightarrow\infty}\tau_{R}\) and define
\[z=\sum_{R=1}^{\infty}1_{[\tau_{R-1},\tau_{R})}z^{R}.\]
Then \((z,\tau^{*})\) is the unique pathwise solution to (1.3) for \(s>\frac{5}{2}\).
Next, we extend the range of \(s\) to \(s>\frac{3}{2}\). When \(z_{0}\in L^{\infty}(\Omega,H^{s}\times H^{s})\) with \(s>3/2\), by mollifying the initial data, we obtain a sequence of regular solutions \(\{z_{n},\zeta^{n}\}_{n\in\mathbb{N}}\) to (1.3). Motivated by [42], one can prove that there is some stopping time \(\tau\) with \(\mathbb{P}(\tau>0)=1\), a subsequence (still denoted by \(z_{n}\)) and some process \(z\) such that \(\mathbb{P}-a.s.\)
\[\lim_{n\to\infty}\sup_{t\in[0,s]}\|z_{n}-z\|_{H^{s}\times H^{s}}=0,\ \ s<\tau\]
and
\[\sup_{t\in[0,s]}\|z\|_{H^{s}\times H^{s}}\leq\|z_{0}\|_{H^{s}\times H^{s}}+2, \ \ s<\tau. \tag{4.3}\]
Then we can let \(n\to\infty\) to prove that \((z,\tau)\) is a solution to (1.3).
Besides, a cutting argument as in [37, 39, 42] enables us to remove the \(L^{\infty}(\Omega,H^{s}\times H^{s})\) assumption on \((u_{0},\gamma_{0})\). More precisely, consider the decomposition
\[\Omega_{m}=\{m-1\leq\|u_{0}\|_{H^{s}}+\|\gamma_{0}\|_{H^{s}}<m\},\ \ m\geq 1.\]
We conclude \(\sum_{m=1}^{\infty}\mathbb{P}(\Omega_{m})=1\). Therefore we have \(\mathbb{P}-a.s.\)
\[z_{0}(\omega,x)=\sum_{m\geq 1}z_{0}^{m}(\omega,x):=\sum_{m\geq 1}z_{0}(\omega,x)1_ {\Omega_{m}}.\]
For each initial value \(z_{0}^{m}\), we let \((z^{m},\zeta_{m})\) be the pathwise unique solution to (1.3) satisfying (4.3). Moreover, as \(\Omega_{m}\cap\Omega_{m^{{}^{\prime}}}=\emptyset,m\neq m^{{}^{\prime}}\), \(F_{1}(0,0)=0,F_{2}(0,0)=0\) and \(h_{1}(t,0,0)=0,h_{2}(t,0,0)=0\) (see (3.1)), it follows that
\[z:=\sum_{m\geq 1}z^{m}1_{\Omega_{m}},\ \ \zeta=\sum_{m\geq 1}\zeta_{m}1_{ \Omega_{m}}\]
is the unique pathwise solution to (1.3) with corresponding initial condition \(z_{0}\). Since \((z^{m},\zeta_{m})\) satisfies (4.3), we have \(\mathbb{P}-a.s.\)
\[\sup_{t\in[0,s]}(\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}) = \sum_{m=1}^{\infty}1_{\Omega_{m}}\sup_{t\in[0,s]}(\|u^{m}\|_{H^{ s}}^{2}+\|\gamma^{m}\|_{H^{s}}^{2})\] \[\leq C\sum_{m=1}^{\infty}1_{\Omega_{m}}(4+\|u_{0}^{m}\|_{H^{s}}^{2}+ \|\gamma_{0}^{m}\|_{H^{s}}^{2})\] \[= C(4+\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2}),\ \ s<\zeta.\]
So, (3.2) holds. Since the passage from \((z,\zeta)\) to a unique maximal pathwise solution \((z,\tau^{*})\) in the sense of Definition 3.4 can be carried out as in [37, 42, 47], we omit the details. To finish the proof of Theorem 3.5, we only need to prove the blow-up scenario (3.3). Motivated by [43, 48], we consider the relationship between the explosion time of \(\|z(t)\|_{H^{s}\times H^{s}}\) and the explosion time of \(\|z(t)\|_{W^{1,\infty}\times W^{1,\infty}}\) in the next lemma.
**Lemma 4.1**: _(Blow-up scenario 1) Let \((z,\tau^{*})\) be the unique maximal solution to (1.3). Then the real-valued stochastic processes \(\|z(t)\|_{W^{1,\infty}\times W^{1,\infty}},\ \|z(t)\|_{H^{s}\times H^{s}}\) are also \(\mathcal{F}_{t}\)-adapted. Besides, for any \(m,n\in\mathbb{Z}^{+}\), define_
\[\tau_{1,m}=\inf\{t\geq 0:\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}}\geq m\},\ \tau_{2,n}=\inf\{t\geq 0:\|u(t)\|_{W^{1, \infty}}+\|\gamma(t)\|_{W^{1,\infty}}\geq n\}.\]
_For \(\tau_{1}:=\tau^{*}=\lim_{m\to\infty}\tau_{1,m}\) and \(\tau_{2}=\lim_{n\to\infty}\tau_{2,n}\), we have_
\[\tau_{1}=\tau_{2}\ \ \ \mathbb{P}-a.s.\]
Consequently, \(1_{\{\limsup_{t\to\tau^{*}}(\|u(t)\|_{W^{1,\infty}}+\|\gamma(t)\|_{W^{1,\infty}})=\infty\}}=1_{\{\tau^{*}<\infty\}}\;\;\mathbb{P}-a.s.\)
**Proof:** Since \(z\in C([0,\tau^{*});H^{s}\times H^{s})\) almost surely, by the continuous embedding \(H^{s}\times H^{s}\hookrightarrow W^{1,\infty}\times W^{1,\infty}\) for \(s>3/2\), we conclude that \(\|z(t)\|_{W^{1,\infty}\times W^{1,\infty}}\) is \(\mathcal{F}_{t}\)-adapted. Moreover, the same embedding implies that \(\tau_{1}\leq\tau_{2}\ \mathbb{P}-a.s.\) Now we only need to prove \(\tau_{2}\leq\tau_{1}\ \mathbb{P}-a.s.\) We cannot directly apply the Ito formula to \(\|u(t)\|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2}\), since we only have \(u,\gamma\in H^{s}\) and \(uu_{x},u\gamma_{x}\in H^{s-1}\); hence the Ito formula in Hilbert space cannot be applied directly, see ([27], Theorem 4.32) or ([49], Theorem 2.10). Instead, we will use the mollifier operator \(T_{\epsilon}\) defined in (2.1) to overcome this difficulty. We apply \(T_{\epsilon}\) to (1.3), and then use the Ito formula for \(\|T_{\epsilon}u\|_{H^{s}}^{2},\|T_{\epsilon}\gamma\|_{H^{s}}^{2}\) to derive
\[d\|T_{\epsilon}u(t)\|_{H^{s}}^{2} = 2(T_{\epsilon}h_{1}(u,\gamma)d\mathcal{W}_{1},T_{\epsilon}u)_{H^{s}}-2(D^{s}T_{\epsilon}[uu_{x}],D^{s}T_{\epsilon}u)_{L^{2}}dt-2(D^{s}T_{\epsilon}F_{1}(u,\gamma),D^{s}T_{\epsilon}u)_{L^{2}}dt+\|T_{\epsilon}h_{1}(u,\gamma)\|_{L_{2}(U,H^{s})}^{2}dt,\] \[d\|T_{\epsilon}\gamma(t)\|_{H^{s}}^{2} = 2(T_{\epsilon}h_{2}(u,\gamma)d\mathcal{W}_{2},T_{\epsilon}\gamma)_{H^{s}}-2(D^{s}T_{\epsilon}[u\gamma_{x}],D^{s}T_{\epsilon}\gamma)_{L^{2}}dt-2(D^{s}T_{\epsilon}F_{2}(u,\gamma),D^{s}T_{\epsilon}\gamma)_{L^{2}}dt+\|T_{\epsilon}h_{2}(u,\gamma)\|_{L_{2}(U,H^{s})}^{2}dt.\]
Therefore for any \(n_{1},m\geq 1\), \(r\geq 0\), and \(t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]\),
\[\|T_{\epsilon}u(t)\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma(t)\|_{H^{s }}^{2}-\|T_{\epsilon}u(0)\|_{H^{s}}^{2}-\|T_{\epsilon}\gamma(0)\|_{H^{s}}^{2}\] \[= 2\sum_{j=1}^{\infty}\int_{0}^{t}(D^{s}T_{\epsilon}h_{1}(u,\gamma )e_{j},D^{s}T_{\epsilon}u)_{L^{2}}dW_{j}^{1}+2\sum_{i=1}^{\infty}\int_{0}^{t} (D^{s}T_{\epsilon}h_{2}(u,\gamma)e_{i},D^{s}T_{\epsilon}\gamma)_{L^{2}}dW_{i }^{2}\] \[-2\int_{0}^{t}(D^{s}T_{\epsilon}[uu_{x}],D^{s}T_{\epsilon}u)_{L^{ 2}}dt^{{}^{\prime}}-2\int_{0}^{t}(D^{s}T_{\epsilon}F_{1}(u,\gamma),D^{s}T_{ \epsilon}u)_{L^{2}}dt^{{}^{\prime}}\] \[+\int_{0}^{t}\|T_{\epsilon}h_{1}(u,\gamma)\|_{L_{2}(U,H^{s})}^{2} dt^{{}^{\prime}}-2\int_{0}^{t}(D^{s}T_{\epsilon}[u\gamma_{x}],D^{s}T_{ \epsilon}\gamma)_{L^{2}}dt^{{}^{\prime}}\] \[-2\int_{0}^{t}(D^{s}T_{\epsilon}F_{2}(u,\gamma),D^{s}T_{\epsilon }\gamma)_{L^{2}}dt^{{}^{\prime}}+\int_{0}^{t}\|T_{\epsilon}h_{2}(u,\gamma)\|_ {L_{2}(U,H^{s})}^{2}dt^{{}^{\prime}}\] \[=: \int_{0}^{t}\sum_{j=1}^{\infty}L_{1,j}dW_{j}^{1}+\int_{0}^{t} \sum_{i=1}^{\infty}L_{2,i}dW_{i}^{2}+\sum_{j=3}^{8}\int_{0}^{t}L_{j}dt^{{}^{ \prime}},\]
where \(\{e_{k}\}\) is the complete orthonormal basis of \(U\). On account of the Burkholder-Davis-Gundy inequality and (3.1), we obtain that
\[\mathbb{E}\bigg{[}\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}\left|\int_{0}^{t}\sum_{j=1}^{\infty}L_{1,j}dW_{j}^{1}\right|\bigg{|}\mathcal{F}_{0}\bigg{]}\leq C\mathbb{E}\bigg{\{}\bigg{(}\sum_{j=1}^{\infty}\int_{0}^{\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}}|L_{1,j}|^{2}dt\bigg{)}^{\frac{1}{2}}\bigg{|}\mathcal{F}_{0}\bigg{\}}\] \[\leq\frac{1}{2}\mathbb{E}\left(\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}\|T_{\epsilon}u\|_{H^{s}}^{2}\bigg{|}\mathcal{F}_{0}\right)+Cf^{2}(n_{1})\int_{0}^{r}\mathbb{E}\bigg{[}\sup_{t^{{}^{\prime}}\in[0,\tau_{2,n_{1}}\wedge t\wedge\tau_{1,m}]}(1+\|u(t^{{}^{\prime}})\|_{H^{s}}^{2}+\|\gamma(t^{{}^{\prime}})\|_{H^{s}}^{2})\bigg{|}\mathcal{F}_{0}\bigg{]}dt,\] \[\mathbb{E}\bigg{[}\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}\left|\int_{0}^{t}\sum_{i=1}^{\infty}L_{2,i}dW_{i}^{2}\right|\bigg{|}\mathcal{F}_{0}\bigg{]}\leq C\mathbb{E}\bigg{\{}\bigg{(}\sum_{i=1}^{\infty}\int_{0}^{\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}}|L_{2,i}|^{2}dt\bigg{)}^{\frac{1}{2}}\bigg{|}\mathcal{F}_{0}\bigg{\}}\] \[\leq\frac{1}{2}\mathbb{E}\left[\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}\|T_{\epsilon}\gamma\|_{H^{s}}^{2}\bigg{|}\mathcal{F}_{0}\right]+Cf^{2}(n_{1})\int_{0}^{r}\mathbb{E}\bigg{[}\sup_{t^{{}^{\prime}}\in[0,\tau_{2,n_{1}}\wedge t\wedge\tau_{1,m}]}(1+\|u(t^{{}^{\prime}})\|_{H^{s}}^{2}+\|\gamma(t^{{}^{\prime}})\|_{H^{s}}^{2})\bigg{|}\mathcal{F}_{0}\bigg{]}dt.\]
For \(L_{3},L_{6}\), using integration by parts, Sobolev's inequality and Lemma 2.3, we have
\[(D^{s}T_{\epsilon}[uu_{x}],D^{s}T_{\epsilon}u)_{L^{2}} = ([D^{s},u]u_{x},D^{s}T_{\epsilon}^{2}u)_{L^{2}}+([T_{\epsilon},u]D^{s}u_{x},D^{s}T_{\epsilon}u)_{L^{2}}+(uD^{s}T_{\epsilon}u_{x},D^{s}T_{\epsilon}u)_{L^{2}}\leq C\|u\|_{W^{1,\infty}}\|u\|_{H^{s}}^{2},\] \[(D^{s}T_{\epsilon}[u\gamma_{x}],D^{s}T_{\epsilon}\gamma)_{L^{2}} = ([D^{s},u]\gamma_{x},D^{s}T_{\epsilon}^{2}\gamma)_{L^{2}}+([T_{\epsilon},u]D^{s}\gamma_{x},D^{s}T_{\epsilon}\gamma)_{L^{2}}+(uD^{s}T_{\epsilon}\gamma_{x},D^{s}T_{\epsilon}\gamma)_{L^{2}}\leq C(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}).\]

Estimating \(L_{4},L_{5},L_{7},L_{8}\) by Lemma 2.4 and (3.1), collecting the above bounds and taking the supremum in time, we arrive at
\[\mathbb{E}\left\{\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}( \|u(t)\|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2})\bigg{|}\mathcal{F}_{0}\right\}\] \[\leq C[\|u(0)\|_{H^{s}}^{2}+\|\gamma(0)\|_{H^{s}}^{2}]+C\int_{0}^{r} \mathbb{E}\bigg{(}1+\sup_{t^{{}^{\prime}}\in[0,\tau_{2,n_{1}}\wedge t\wedge\tau _{1,m}]}(\|u(t^{{}^{\prime}})\|_{H^{s}}^{2}+\|\gamma(t^{{}^{\prime}})\|_{H^{s} }^{2})|\mathcal{F}_{0}\bigg{)}dt.\]
Then Gronwall's inequality shows that for each \(n_{1}\in\mathbb{Z}^{+}\), \(r\in\mathbb{R}^{+}\), there is a constant \(C=C(n_{1},r,u_{0},\gamma_{0})>0\) such that
\[\mathbb{E}\left[\sup_{t\in[0,\tau_{2,n_{1}}\wedge r\wedge\tau_{1,m}]}(\|u(t) \|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2})\bigg{|}\mathcal{F}_{0}\right]<C(n_{1},r,u_{0},\gamma_{0}).\]
So, it follows from Chebyshev's inequality and Fatou's lemma that
\[\mathbb{P}(\tau_{1}\leq\tau_{2,n_{1}}\wedge r|\mathcal{F}_{0}) \leq\lim_{m\to\infty}\mathbb{P}(\tau_{1,m}\leq\tau_{2,n_{1}}\wedge r |\mathcal{F}_{0})\] \[\leq\lim_{m\to\infty}\mathbb{P}\left(\sup_{t\in[0,\tau_{2,n_{1}} \wedge r\wedge\tau_{1,m}]}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})\geq m\bigg{|} \mathcal{F}_{0}\right)\] \[\leq\lim_{m\to\infty}\frac{\mathbb{E}[\sup_{t\in[0,\tau_{2,n_{1}} \wedge r\wedge\tau_{1,m}]}(2\|u(t)\|_{H^{s}}^{2}+2\|\gamma(t)\|_{H^{s}}^{2})| \mathcal{F}_{0}]}{m^{2}}=0.\]
Letting \(r\to\infty\) first and then \(n_{1}\to\infty\), Fatou's lemma yields that
\[\mathbb{P}(\tau_{1}\leq\tau_{2}|\mathcal{F}_{0})=0.\]
Therefore, we conclude
\[\mathbb{P}(\tau_{1}\geq\tau_{2})\geq 1-\mathbb{P}(\tau_{1}\leq\tau_{2})=1-\mathbb{ P}[\mathbb{P}(\tau_{1}\leq\tau_{2}|\mathcal{F}_{0})]=1.\]
This finishes the proof of Lemma 4.1 and establishes the first blow-up scenario (3.3), completing the proof of Theorem 3.5. \(\Box\)
## 5 Proof of Theorem 3.6
Assume \(s>5/2\) and let \((u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Let \(h_{1}(t,u,\gamma)=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}u,h_{2}(t,u,\gamma)=a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta}\gamma\) with \(\theta\geq 1/2\) and \(a(t)\neq 0\).
For \(s>3/2\), the embedding \(H^{s}\times H^{s}\hookrightarrow W^{1,\infty}\times W^{1,\infty}\) implies that
\[\sup_{\|u_{1}\|_{H^{s}},\|\gamma_{1}\|_{H^{s}},\|u_{2}\|_{H^{s}},\|\gamma_{2}\|_{H^{s}}\leq N} \sum_{i=1}^{2}\|h_{i}(t,u_{1},\gamma_{1})-h_{i}(t,u_{2},\gamma_{2})\|_{H^{s}}\] \[\leq g(N)(\|u_{1}-u_{2}\|_{H^{s}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}}),\ \ \ N\geq 1.\]
Hence, by Theorem 3.5, we conclude that (1.3) admits a unique pathwise solution \(z=(u,\gamma)\) in \(H^{s}\times H^{s}\) with \(s>5/2\) and maximal existence time \(\tau^{*}\). Define
\[\tau_{m}=\inf\{t\geq 0:\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}\geq m\}.\]
Applying the Ito formula to \(\|T_{\epsilon}u\|_{H^{s}}^{2},\|T_{\epsilon}\gamma\|_{H^{s}}^{2}\) gives
\[d\|T_{\epsilon}u\|_{H^{s}}^{2}= 2a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta} (T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}dW-2(T_{\epsilon}[uu_{x}],T_{\epsilon}u) _{H^{s}}dt\] \[-2(T_{\epsilon}F_{1}(u,\gamma),T_{\epsilon}u)_{H^{s}}dt+a^{2}(t )(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(T_{\epsilon}u,T_ {\epsilon}u)_{H^{s}}dt,\] \[d\|T_{\epsilon}\gamma\|_{H^{s}}^{2}= 2a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{\theta} (T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}dW-2(T_{\epsilon}[u\gamma_{x}],T_{\epsilon}\gamma)_{H^{s}}dt\] \[-2(T_{\epsilon}F_{2}(u,\gamma),T_{\epsilon}\gamma)_{H^{s}}dt+a^{ 2}(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(T_{\epsilon} \gamma,T_{\epsilon}\gamma)_{H^{s}}dt,\]
Again, applying Ito's formula to \(\log(1+\|T_{\epsilon}u\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2})\) yields
\[d\log(1+\|T_{\epsilon}u\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H ^{s}}^{2})=\frac{2a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{ \theta}(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|_{H^{s}}^{2} +\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dW\] \[-\frac{2(T_{\epsilon}[uu_{x}],T_{\epsilon}u)_{H^{s}}}{1+\|T_{ \epsilon}u\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt-\frac{2(T_{ \epsilon}F_{1}(u,\gamma),T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|_{H^{s}}^{ 2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt\] \[+\frac{a^{2}(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}}) ^{2\theta}(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|_{H^{s}}^ {2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt\] \[+\frac{2a(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{ \theta}(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u\|_{ H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dW\] \[-\frac{2(T_{\epsilon}[u\gamma_{x}],T_{\epsilon}\gamma)_{H^{s}}}{ 1+\|T_{\epsilon}u\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt-\frac{2( T_{\epsilon}F_{2}(u,\gamma),T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u\|_{H^{s}}^ {2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt\] \[+\frac{a^{2}(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}}) ^{2\theta}(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u \|_{H^{s}}^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2}}dt\] \[-2a^{2}(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2 \theta}\] \[\times\frac{[(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}^{2}+(T_{ \epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}^{2}+2(T_{\epsilon}u,T_{\epsilon}u)_{H^ {s}}(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}]}{(1+\|T_{\epsilon}u\|_{H^{s} }^{2}+\|T_{\epsilon}\gamma\|_{H^{s}}^{2})^{2}}dt.\]
By Lemmas 2.1-2.4 and Lemma 2.7, it follows that
\[\mathbb{E}[\log(1+\|T_{\epsilon}u(t\wedge\tau_{m})\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma(t\wedge\tau_{m})\|_{H^{s}}^{2})|\mathcal{F}_{0}]-\log(1+\|T_{\epsilon}u_{0}\|_{H^{s}}^{2}+\|T_{\epsilon}\gamma_{0}\|_{H^{s}}^{2})\]
\[= -2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{(T_{\epsilon}[uu_{x}],T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}-2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{(T_{\epsilon}F_{1}(u,\gamma),T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[+\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[-2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{(T_{\epsilon}[u\gamma_{x}],T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}-2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{(T_{\epsilon}F_{2}(u,\gamma),T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[+\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[-2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}\] \[\times\frac{[(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}^{2}+(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}^{2}+2(T_{\epsilon}u,T_{\epsilon}u)_{H^{s}}(T_{\epsilon}\gamma,T_{\epsilon}\gamma)_{H^{s}}]}{(1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}})^{2}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[\leq \mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{K(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}})}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[+\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}})}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[-2\mathbb{E}\bigg{[}\int_{0}^{t\wedge\tau_{m}}a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}\] \[\times\frac{(\|T_{\epsilon}u\|^{4}_{H^{s}}+\|T_{\epsilon}\gamma\|^{4}_{H^{s}}+2\|T_{\epsilon}u\|^{2}_{H^{s}}\|T_{\epsilon}\gamma\|^{2}_{H^{s}})}{(1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}})^{2}}dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\bigg{]}.\]
Let
\[I^{\epsilon}_{1}(t^{\prime}) =\frac{K(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}})}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}-\frac{K(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})(\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}})}{1+\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}}},\] \[I^{\epsilon}_{2}(t^{\prime}) =\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}})}{1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}}}\] \[\qquad\qquad-\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}})}{1+\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}}},\] \[I^{\epsilon}_{3}(t^{\prime}) =\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|T_{\epsilon}u\|^{4}_{H^{s}}+\|T_{\epsilon}\gamma\|^{4}_{H^{s}}+2\|T_{\epsilon}u\|^{2}_{H^{s}}\|T_{\epsilon}\gamma\|^{2}_{H^{s}})}{(1+\|T_{\epsilon}u\|^{2}_{H^{s}}+\|T_{\epsilon}\gamma\|^{2}_{H^{s}})^{2}}\] \[\qquad-\frac{a^{2}(t^{{}^{\prime}})(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|u\|^{4}_{H^{s}}+\|\gamma\|^{4}_{H^{s}}+2\|u\|^{2}_{H^{s}}\|\gamma\|^{2}_{H^{s}})}{(1+\|u\|^{2}_{H^{s}}+\|\gamma\|^{2}_{H^{s}})^{2}}. \tag{5.1}\]
Notice that for any \(T>0\), \((T_{\epsilon}u,T_{\epsilon}\gamma)\) tends to \((u,\gamma)\) in \(C([0,\tau_{m}\wedge T],H^{s}\times H^{s})\) almost surely as \(\epsilon\to 0\). It follows from the dominated convergence theorem that
\[\lim_{\epsilon\to 0}\mathbb{E}\left[\int_{0}^{t\wedge\tau_{m}}[|I^{\epsilon}_{1}(t^{\prime})|+|I^{\epsilon}_{2}(t^{\prime})|+|I^{\epsilon}_{3}(t^{\prime})|]dt^{{}^{\prime}}\bigg{|}\mathcal{F}_{0}\right]=0.\]
Then, by (2.2) and the dominated convergence theorem, it holds
\[\mathbb{E}[\log(1+\|u(t\wedge\tau_{m})\|^{2}_{H^{s}}+\|\gamma(t\wedge\tau_{m})\|^{2}_{H^{s}})|\mathcal{F}_{0}]-\log(1+\|u_{0}\|^{2}_{H^{s}}+\|\gamma_{0}\|^{2}_{H^{s}})\]
\[\leq C_{1}T+C_{2}\mathbb{E}\bigg{[}\int_{0}^{T\wedge\tau_{m}}\frac{a^{2}(t)(1+\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})^{2\theta}(\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2})^{2}}{(1+\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2})}dt\bigg{|}\mathcal{F}_{0}\bigg{]}\] \[+\mathbb{E}\left[\int_{0}^{T\wedge\tau_{m}}[|I_{1}^{\epsilon}(t)|+|I_{2}^{\epsilon}(t)|+|I_{3}^{\epsilon}(t)|]dt\bigg{|}\mathcal{F}_{0}\right]\] \[\leq \frac{1}{2}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_{m}]}(1+\log(1+\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2}))\bigg{|}\mathcal{F}_{0}\right]+\mathbb{E}\left[\int_{0}^{T\wedge\tau_{m}}[|I_{1}^{\epsilon}(t)|+|I_{2}^{\epsilon}(t)|+|I_{3}^{\epsilon}(t)|]dt\bigg{|}\mathcal{F}_{0}\right]\] \[+C(u_{0},\gamma_{0},C_{1},C_{2},T)+C_{1}T.\]
Thus, we use the dominated convergence theorem, (5.2) and (5.1) to obtain
\[\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_{m}]}\log(1+\|u\|_{H^{s}}^{2}+\| \gamma\|_{H^{s}}^{2})\bigg{|}\mathcal{F}_{0}\right]\leq C(u_{0},\gamma_{0},C_{1},C_{2},T).\]
Since \(\log(1+x)\) is increasing for \(x>0\), we have that for any \(m\geq 1\),
\[\mathbb{P}\{\tau_{m}<T|\mathcal{F}_{0}\}\leq\mathbb{P}\bigg{\{}\sup_{t\in[0,T \wedge\tau_{m}]}\log(1+\|u\|_{H^{s}}^{2}+\|\gamma\|_{H^{s}}^{2})\geq\log(1+m) \bigg{|}\mathcal{F}_{0}\bigg{\}}\leq\frac{C(u_{0},\gamma_{0},C_{1},C_{2},T)}{ \log(1+m)}.\]
Letting \(m\to\infty\) forces \(\mathbb{P}\{\tau^{*}<T|\mathcal{F}_{0}\}=0\) for any \(T>0\), which means \(\mathbb{P}\{\tau^{*}=\infty\}=1\).
## 6 Proof of Theorem 3.7
In this section, we study (1.3) with linear noise satisfying Assumption 3.3. Depending on the strength of the noise in (1.3), we establish the global existence of the maximal pathwise solution. Motivated by [35, 37, 47], we introduce
\[\beta(\omega,t)=e^{\int_{0}^{t}b(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}-\int_{0}^ {t}\frac{b^{2}(t^{{}^{\prime}})}{2}dt^{{}^{\prime}}}.\]
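As a quick numerical sanity check (not part of the argument below), one can verify that \(\beta(\omega,t)\) is a positive exponential martingale with \(\mathbb{E}\beta(\omega,t)=1\); the following minimal sketch assumes a constant coefficient \(b(t)\equiv b\):

```python
import numpy as np

# Minimal sketch (assumption: constant b): simulate the exponential martingale
# beta(t) = exp(int_0^t b dW - int_0^t b^2/2 dt') from Euler-Maruyama increments
# and check that the Monte Carlo mean of beta(T) stays close to 1.
rng = np.random.default_rng(0)
b, T, n_steps, n_paths = 0.5, 1.0, 1000, 20000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
log_beta = np.cumsum(b * dW - 0.5 * b**2 * dt, axis=1)
print(np.exp(log_beta[:, -1]).mean())  # approximately 1.0 up to Monte Carlo error
```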
**Proposition 6.1**: _Let \(s>3/2\) and \(h_{1}(t,u,\gamma)=b(t)u,h_{2}(t,u,\gamma)=b(t)\gamma\) such that \(b(t)\) satisfies Assumption 3.3. Let \((u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable and \((z,\tau^{*})\) be the corresponding unique maximal solution to (1.3). Let \(v_{1}=\beta^{-1}u,v_{2}=\beta^{-1}\gamma\). Then for \(t\in[0,\tau^{*})\), the processes \(v_{1},v_{2}\) solve the following problem_
\[\left\{\begin{array}{l}\partial_{t}v_{1}+\beta v_{1}v_{1x}+ \beta(1-\partial_{x}^{2})^{-1}\partial_{x}(v_{1}^{2}+\frac{1}{2}v_{1x}^{2}+ \frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2})=0,\\ \partial_{t}v_{2}+\beta v_{1}v_{2x}+\beta(1-\partial_{x}^{2})^{-1}((v_{1x}v_{2 x})_{x}+v_{1x}v_{2})=0,\\ v_{1}(\omega,0,x)=u_{0}(\omega,x),v_{2}(\omega,0,x)=\gamma_{0}(\omega,x). \end{array}\right. \tag{6.1}\]
_Moreover, we have \(\mathbb{P}-a.s.\)\((v_{1},v_{2})\in C([0,\tau^{*});H^{s}\times H^{s})\cap C^{1}([0,\tau^{*});H^{s-1} \times H^{s-1})\). In addition, if \(s>5/2\), then it holds_
\[\mathbb{P}\{\|v_{1}\|_{H^{1}}^{2}+\|v_{2}\|_{H^{1}}^{2}=\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2}\ for\ all\ t<\tau^{*}\}=1. \tag{6.2}\]
**Proof:** Since \(b(t)\) satisfies Assumption 3.3, \(h_{1}(t,u,\gamma)=b(t)u\) and \(h_{2}(t,u,\gamma)=b(t)\gamma\) satisfy Assumption 3.1; hence Theorem 3.5 implies that (1.3) has a unique maximal solution \((z,\tau^{*})\). A direct computation with the Ito formula yields
\[d\beta^{-1}=-b(t)\beta^{-1}dW+b^{2}(t)\beta^{-1}dt.\]
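For completeness, this identity can be spelled out: \(\beta\) is the stochastic exponential solving \(d\beta=b(t)\beta dW\), so applying Ito's formula to \(f(\beta)=\beta^{-1}\), with \(f^{\prime}(x)=-x^{-2}\) and \(f^{\prime\prime}(x)=2x^{-3}\), gives

\[d\beta^{-1}=-\beta^{-2}d\beta+\beta^{-3}(d\beta)^{2}=-b(t)\beta^{-1}dW+b^{2}(t)\beta^{-1}dt,\]

where we used \((d\beta)^{2}=b^{2}(t)\beta^{2}dt\).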
Therefore we have
\[dv_{1} = \beta^{-1}du+ud\beta^{-1}+d\beta^{-1}du\] \[= \beta^{-1}\left[-uu_{x}-(1-\partial_{x}^{2})^{-1}\partial_{x}\left(u^{2}+\frac{1}{2}u_{x}^{2}+\frac{1}{2}\gamma^{2}-\frac{1}{2}\gamma_{x}^{2}\right)\right]dt+b(t)\beta^{-1}udW\] \[+u[-b(t)\beta^{-1}dW+b^{2}(t)\beta^{-1}dt]-b^{2}(t)\beta^{-1}udt\] \[= \left[-\beta v_{1}v_{1x}-\beta(1-\partial_{x}^{2})^{-1}\partial_{x}\left(v_{1}^{2}+\frac{1}{2}v_{1x}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2}\right)\right]dt,\] \[dv_{2} = \beta^{-1}d\gamma+\gamma d\beta^{-1}+d\beta^{-1}d\gamma\] \[= \beta^{-1}[-u\gamma_{x}-(1-\partial_{x}^{2})^{-1}((u_{x}\gamma_{x})_{x}+u_{x}\gamma)]dt+b(t)\beta^{-1}\gamma dW\] \[+\gamma[-b(t)\beta^{-1}dW+b^{2}(t)\beta^{-1}dt]-b^{2}(t)\beta^{-1}\gamma dt\] \[= \left[-\beta v_{1}v_{2x}-\beta(1-\partial_{x}^{2})^{-1}((v_{1x}v_{2x})_{x}+v_{1x}v_{2})\right]dt\]
and since \(v_{1}(\omega,0,x)=u_{0}(\omega,x),v_{2}(\omega,0,x)=\gamma_{0}(\omega,x)\), we see that \((v_{1},v_{2})\) satisfies (6.1). Moreover, Theorem 3.5 implies \((u,\gamma)\in C([0,\tau^{*}),H^{s}\times H^{s})\) \(\mathbb{P}-a.s.\), and so is \((v_{1},v_{2})\). From Lemma 2.4 and (6.1), we see that \(\mathbb{P}-a.s.\), \(v_{1t}=-\beta v_{1}v_{1x}-\beta(1-\partial_{x}^{2})^{-1}\partial_{x}(v_{1}^{2}+\frac{1}{2}v_{1x}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2})\) and \(v_{2t}=-\beta v_{1}v_{2x}-\beta(1-\partial_{x}^{2})^{-1}((v_{1x}v_{2x})_{x}+v_{1x}v_{2})\) satisfy \((v_{1t},v_{2t})\in C([0,\tau^{*}),H^{s-1}\times H^{s-1})\). Hence, it holds \(\mathbb{P}\)-a.s. that \((v_{1},v_{2})\in C^{1}([0,\tau^{*}),H^{s-1}\times H^{s-1})\).
In addition, the first two equations of (6.1) are equivalent to
\[v_{1t}-v_{1xxt}+3\beta v_{1}v_{1x}-2\beta v_{1x}v_{1xx}-\beta v_{1}v_{1xxx}+\beta(v_{2}-v_{2xx})v_{2x}=0, \tag{6.3}\]
\[v_{2t}-v_{2xxt}+\beta(v_{1}v_{2x}+v_{1x}v_{2})-\beta(v_{1}v_{2xxx}+v_{1x}v_{2 xx})=0. \tag{6.4}\]
Multiplying both sides of (6.3) by \(v_{1}\) and multiplying both sides of (6.4) by \(v_{2}\), then integrating the equation on \(x\in\mathbb{R}\), and finally adding the two derived equations, we arrive at \(\mathbb{P}\)-a.s.
\[\frac{d}{dt}\int_{\mathbb{R}}(v_{1}^{2}+v_{2}^{2}+v_{1x}^{2}+v_{2x}^{2})dx=0, \ \ t<\tau^{*},\]
which implies (6.2). \(\Box\)
**Proof of Theorem 3.7.** To begin with, we apply the operator \(D^{s}\) to (6.3) and (6.4), multiply both sides of the resulting equation by \(D^{s}v_{1},D^{s}v_{2}\) respectively, and then integrate on \(\mathbb{R}\) to obtain \(\mathbb{P}-a.s.\)
\[\frac{1}{2}\frac{d}{dt}(\|v_{1}\|_{H^{s}}^{2}+\|v_{2}\|_{H^{s}}^{2}) = -\beta(\omega,t)\int_{\mathbb{R}}D^{s}v_{1}D^{s}(v_{1}v_{1x})dx- \beta(\omega,t)\int_{\mathbb{R}}D^{s}v_{1}D^{s}F_{1}(v_{1},v_{2})dx\] \[-\beta(\omega,t)\int_{\mathbb{R}}D^{s}v_{2}D^{s}(v_{1}v_{2x})dx- \beta(\omega,t)\int_{\mathbb{R}}D^{s}v_{2}D^{s}F_{2}(v_{1},v_{2})dx.\]
By (3.4), we conclude that \(\mathbb{P}-a.s.\)
\[\frac{d}{dt}(\|v_{1}\|_{H^{s}}^{2}+\|v_{2}\|_{H^{s}}^{2})\leq C \beta(\omega,t)(\|v_{1}\|_{W^{1,\infty}}+\|v_{2}\|_{W^{1,\infty}})(\|v_{1}\|_{ H^{s}}^{2}+\|v_{2}\|_{H^{s}}^{2}). \tag{6.5}\]
Letting \(w_{1}=e^{-\int_{0}^{t}b(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}}u=e^{-\int_{0}^{ t}\frac{b^{2}(t^{{}^{\prime}})}{2}dt^{{}^{\prime}}}v_{1},w_{2}=e^{-\int_{0}^{t}b(t^{ {}^{\prime}})dW_{t^{{}^{\prime}}}}\gamma=e^{-\int_{0}^{t}\frac{b^{2}(t^{{}^{ \prime}})}{2}dt^{{}^{\prime}}}v_{2}\) and \(\alpha(\omega,t)=e^{\int_{0}^{t}b(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}}\), we obtain
\[\frac{d}{dt}(\|w_{1}\|_{H^{s}}^{2}+\|w_{2}\|_{H^{s}}^{2})+b^{2}(t )(\|w_{1}\|_{H^{s}}^{2}+\|w_{2}\|_{H^{s}}^{2})\] \[\leq C\alpha(\omega,t)(\|w_{1}\|_{W^{1,\infty}}+\|w_{2}\|_{W^{1, \infty}})(\|w_{1}\|_{H^{s}}^{2}+\|w_{2}\|_{H^{s}}^{2}).\]
Assume \(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2}<\frac{b_{s}^{2}}{2C^{2}Q^{2} \lambda_{1}^{2}R}<\frac{b_{s}^{2}}{C^{2}Q^{2}\lambda_{1}^{2}}\) and define
\[\tau_{1}=\inf\bigg{\{}t<\tau^{*}:\alpha(\omega,t)(\|w_{1}\|_{W^{1,\infty}}+\| w_{2}\|_{W^{1,\infty}})=(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})> \frac{b(t)^{2}}{C\lambda_{1}}\bigg{\}}. \tag{6.6}\]
Then it follows from the embedding \((\|u_{0}\|_{W^{1,\infty}}+\|\gamma_{0}\|_{W^{1,\infty}})\leq Q(\|u_{0}\|_{H^{s }}+\|\gamma_{0}\|_{H^{s}})\) that \(\mathbb{P}\{\tau_{1}>0\}=1\), and it holds
\[\frac{d}{dt}(\|w_{1}\|_{H^{s}}^{2}+\|w_{2}\|_{H^{s}}^{2})+\frac{( \lambda_{1}-1)b^{2}(t)}{\lambda_{1}}(\|w_{1}\|_{H^{s}}^{2}+\|w_{2}\|_{H^{s}}^{ 2})\leq 0,\ \ t\in[0,\tau_{1}).\]
This implies that for any \(0<\lambda_{2}<\frac{\lambda_{1}-1}{\lambda_{1}}\), \(\mathbb{P}-a.s.\)
\[\|u(t)\|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2}\] \[\leq(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2})e^{\int_{0}^ {t}b(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}-\int_{0}^{t}\frac{(\lambda_{1}-1)b^{ 2}(t^{{}^{\prime}})}{\lambda_{1}}dt^{{}^{\prime}}}\] \[=(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2})e^{\int_{0}^{t} b(t^{{}^{\prime}})dW_{t^{{}^{\prime}}}-\lambda_{2}\int_{0}^{t}b^{2}(t^{{}^{ \prime}})dt^{{}^{\prime}}}e^{-\int_{0}^{t}\frac{(\lambda_{1}-1)-\lambda_{1} \lambda_{2}}{\lambda_{1}}b^{2}(t^{{}^{\prime}})dt^{{}^{\prime}}},\ \ t\in[0,\tau_{1}). \tag{6.7}\]
Define
\[\tau_{2}=\inf\{t>0:e^{\int_{0}^{t}b(t^{{}^{\prime}})dW_{t^{{}^{ \prime}}}-\lambda_{2}\int_{0}^{t}b^{2}(t^{{}^{\prime}})dt^{{}^{\prime}}}>R\}.\]
Notice that \(\mathbb{P}\{\tau_{2}>0\}=1\). From (6.7), we have
\[2\|u(t)\|_{H^{s}}^{2}+2\|\gamma(t)\|_{H^{s}}^{2} \leq \frac{2b_{s}^{2}}{2C^{2}Q^{2}\lambda_{1}^{2}R}\times R\times e^{- \int_{0}^{t}\frac{(\lambda_{1}-1)-\lambda_{1}\lambda_{2}}{\lambda_{1}}b^{2}(t^{{}^{ \prime}})dt^{{}^{\prime}}}\]
\[= \frac{b_{*}^{2}}{C^{2}Q^{2}\lambda_{1}^{2}}e^{-\int_{0}^{t}\frac{( \lambda_{1}-1)-\lambda_{1}\lambda_{2}}{\lambda_{1}}b^{2}(t^{{}^{\prime}})dt^{{}^ {\prime}}},\ \ t\in[0,\tau_{1}\wedge\tau_{2}). \tag{6.8}\]
By Assumption 3.3, (6.8) and (6.6), we find that on \([0,\tau_{1}\wedge\tau_{2})\), \(\mathbb{P}-a.s.\)
\[(\|u\|_{W^{1,\infty}}+\|\gamma\|_{W^{1,\infty}})\leq Q(\|u\|_{H^{s}}+\|\gamma\|_{H^{s}})\leq\frac{b_{*}}{C\lambda_{1}}e^{-\int_{0}^{t}\frac{(\lambda_{1}-1)-\lambda_{1}\lambda_{2}}{2\lambda_{1}}b^{2}(t^{{}^{\prime}})dt^{{}^{\prime}}}\leq\frac{b^{2}(t)}{C\lambda_{1}}e^{-\int_{0}^{t}\frac{(\lambda_{1}-1)-\lambda_{1}\lambda_{2}}{2\lambda_{1}}b^{2}(t^{{}^{\prime}})dt^{{}^{\prime}}},\]
which together with \(\lambda_{2}<\frac{\lambda_{1}-1}{\lambda_{1}}\) and \(b^{2}>0\) derives
\[\mathbb{P}\{\tau_{1}\geq\tau_{2}\}=1.\]
Therefore it follows from (6.8) and Lemma 2.6 that
\[\mathbb{P}\{\|u(t)\|_{H^{s}}^{2}+\|\gamma(t)\|_{H^{s}}^{2}\leq\frac{b_{*}^{2}}{2C^{2}Q^{2}\lambda_{1}^{2}}\ \ for\ all\ t>0\}\geq\mathbb{P}\{\tau_{2}=\infty\}\geq 1-R^{-2\lambda_{2}},\]
which completes the proof.
## 7 Proofs of Theorems 3.8-3.10

### Proof of Theorem 3.8
By Proposition 6.1, we can proceed to prove Theorem 3.8. Since \(H^{s}\hookrightarrow C^{2}\) for \(s>5/2\), we have \(v_{1},v_{1x},v_{2},v_{2x}\in C^{1}([0,\tau^{*})\times\mathbb{R})\). Then for \(x\in\mathbb{R}\) and \(\mathbb{P}-a.s.\)\(\omega\in\Omega\), the problem
\[\left\{\begin{array}{ll}\frac{dq(\omega,t,x)}{dt}=\beta(\omega,t)v_{1}(\omega,t,q(\omega,t,x)),\ \ t\in[0,\tau^{*}),\\ q(\omega,0,x)=x,\ \ x\in\mathbb{R}\end{array}\right. \tag{7.1}\]
has a unique solution \(q(\omega,t,x)\) such that \(q(\omega,t,x)\in C^{1}([0,\tau^{*})\times\mathbb{R})\) for \(\mathbb{P}-a.s.\)\(\omega\in\Omega\). Moreover, differentiating (7.1) with respect to \(x\) yields that for \(\mathbb{P}-a.s.\)\(\omega\in\Omega\),
\[\left\{\begin{array}{ll}\frac{dq_{x}(\omega,t,x)}{dt}=\beta(\omega,t)v_{1x}(\omega,t,q(\omega,t,x))q_{x},\ \ t\in[0,\tau^{*}),\\ q_{x}(\omega,0,x)=1,\ \ x\in\mathbb{R}.\end{array}\right.\]
For \(\mathbb{P}-a.s.\)\(\omega\in\Omega\), we solve the above equation to obtain
\[q_{x}(\omega,t,x)=\exp\bigg{(}\int_{0}^{t}\beta(\omega,t^{{}^{ \prime}})v_{1x}(\omega,t^{{}^{\prime}},q(\omega,t^{{}^{\prime}},x))dt^{{}^{ \prime}}\bigg{)},\ \ t\in(0,\tau^{*}).\]
Thus for \(\mathbb{P}-a.s.\)\(\omega\in\Omega\), \(q_{x}(\omega,t,x)>0\), \((t,x)\in[0,\tau^{*})\times\mathbb{R}\). Then the momentum variable \(V_{1}=(1-\partial_{x}^{2})v_{1},V_{2}=(1-\partial_{x}^{2})v_{2}\) satisfy \(\mathbb{P}-a.s.\)
\[V_{1t}+\beta v_{1}V_{1x}+2\beta v_{1x}V_{1}+\beta v_{2x}V_{2}=0,\] \[V_{2t}+\beta(v_{1}V_{2})_{x}=0. \tag{7.2}\]
Applying the particle trajectory method (7.1) and the first equation of (7.2), we obtain
\[\frac{d}{dt}\left[e^{\int_{0}^{t}\frac{\beta(\omega,s)V_{2}v_{2x}(\omega,s,q(\omega,s,x))}{V_{1}(\omega,s,q(\omega,s,x))}ds}V_{1}(\omega,t,q(\omega,t,x))q_{x}^{2}(\omega,t,x)\right]\] \[= e^{\int_{0}^{t}\frac{\beta(\omega,s)V_{2}v_{2x}}{V_{1}}ds}\beta(\omega,t)q_{x}^{2}V_{2}v_{2x}+e^{\int_{0}^{t}\frac{\beta(\omega,s)V_{2}v_{2x}}{V_{1}}ds}q_{x}^{2}[V_{1t}+\beta v_{1}V_{1x}+2\beta v_{1x}V_{1}]\] \[= e^{\int_{0}^{t}\frac{\beta(\omega,s)V_{2}v_{2x}}{V_{1}}ds}q_{x}^{2}[V_{1t}+\beta v_{1}V_{1x}+2\beta v_{1x}V_{1}+\beta v_{2x}V_{2}]=0.\]
This and \(q_{x}(\omega,0,x)=1\) imply that
\[e^{\int_{0}^{t}\frac{\beta(\omega,s)V_{2}v_{2x}(\omega,s,q(\omega,s,x))}{V_{1}(\omega,s,q(\omega,s,x))}ds}V_{1}(\omega,t,q(\omega,t,x))q_{x}^{2}(\omega,t,x)=V_{1}(\omega,0,x). \tag{7.3}\]
Consequently, we have \({\rm sign}(V_{1}(\omega,t,x))\)=\({\rm sign}(V_{1}(\omega,0,x))\).
As the next step, we give the following useful lemma, which will be used in the sequel.
**Lemma 7.1**: _(Blow-up scenario 2) Let \(s>3/2\) and \((u_{0},\gamma_{0})\) be an \(H^{s}\times H^{s}\)-valued \({\cal F}_{0}\)-measurable random variable. Assume that \((z,\tau^{*})\) is the corresponding maximal solution. Then \(z\) as a \(W^{1,\infty}\times W^{1,\infty}\)-valued process is \({\cal F}_{t}\)-adapted for \(t<\tau^{*}\) and \({\mathbb{P}}-a.s.\) on the set \(\{\tau^{*}<\infty\}\)_
\[1_{\{\lim\sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})=\infty\} }=1_{\{\lim\sup_{t\to\tau^{*}}\|u(t)\|_{W^{1,\infty}}=\infty\}}. \tag{7.4}\]
**Proof:**_It is clear that \(\{\lim\sup_{t\to\tau^{*}}\|u(t)\|_{W^{1,\infty}}=\infty\}\subset\{\lim \sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})=\infty\}\). It is sufficient to prove \(\{\lim\sup_{t\to\tau^{*}}\|u(t)\|_{W^{1,\infty}}=\infty\}^{C}\subset\{\lim \sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})=\infty\}^{C}\). Notice that_
\[\{\lim\sup_{t\to\tau^{*}}\|u(\omega,t)\|_{W^{1,\infty}}=\infty\}^{C}=\{\exists M (\omega)>0,s.t.\ \|u(\omega,t)\|_{W^{1,\infty}}\leq M(\omega),\ \ \forall t<\tau^{*}\}. \tag{7.5}\]
_By the equation (6.4) and using the identity \(\partial_{x}^{2}G*f=\partial_{x}^{2}(1-\partial_{x}^{2})^{-1}f=(1-\partial_{x }^{2})^{-1}f-f\), we have_
\[\left|\frac{dv_{2x}(\omega,t,q(\omega,t,x))}{dt}\right|\] \[=|v_{2tx}(t,q)+v_{2xx}(t,q)\beta v_{1}|\] \[=|-\beta v_{1x}v_{2x}-\beta\partial_{x}^{2}(1-\partial_{x}^{2})^ {-1}(v_{1x}v_{2x})-\beta\partial_{x}(1-\partial_{x}^{2})^{-1}(v_{1x}v_{2})|\] \[=|\beta(1-\partial_{x}^{2})^{-1}(v_{1x}v_{2x})-\beta\partial_{x}( 1-\partial_{x}^{2})^{-1}(v_{1x}v_{2})|\] \[\leq\beta\|G\|_{L^{\infty}}\|v_{1x}v_{2x}\|_{L^{1}}+\beta\| \partial_{x}G\|_{L^{\infty}}\|v_{1x}v_{2}\|_{L^{1}}\] \[\leq C\beta(2\|v_{1x}\|_{L^{2}}+\|v_{2x}\|_{L^{2}}+\|v_{2}\|_{L^{ 2}})\leq C\beta(\|v_{1}(0)\|_{H^{1}}+\|v_{2}(0)\|_{H^{1}}). \tag{7.6}\]
_For \(m\geq 1\), define_
\[\tau_{m}=\inf\{t<\tau^{*}:\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}}\geq m\}.\]
_By (7.6), Sobolev's embedding and (6.2), we have_
\[\|v_{2}(\omega,t,q(\omega,t,\cdot))\|_{W^{1,\infty}}\leq C\int_{0}^{t}\beta( \omega,t^{{}^{\prime}})dt^{{}^{\prime}}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1 }})+\|\gamma_{0}\|_{W^{1,\infty}},\ \ t\leq\tau_{m}. \tag{7.7}\]
_In addition, we derive from (6.5) that_
\[\frac{d}{dt}(\|v_{1}\|_{H^{s}}^{2}+\|v_{2}\|_{H^{s}}^{2})\leq C(\|u\|_{W^{1, \infty}}+\beta(\omega,t)\|v_{2}\|_{W^{1,\infty}})(\|v_{1}\|_{H^{s}}^{2}+\|v_{2 }\|_{H^{s}}^{2}).\]
_By means of Gronwall's inequality and (7.5), for any \(\omega\in\{\lim\sup_{t\to\tau^{*}}\|u(\omega,t)\|_{W^{1,\infty}}=\infty\}^{C}\), we obtain_
\[\|v_{1}(T\wedge\tau_{m})\|_{H^{s}}^{2}+\|v_{2}(T\wedge\tau_{m})\|_ {H^{s}}^{2}\] \[\leq(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2})\exp\left\{ \int_{0}^{T\wedge\tau_{m}}C(\|u\|_{W^{1,\infty}}+\beta(\omega,t)\|v_{2}\|_{W^{ 1,\infty}})dt\right\},\] \[\leq(\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2})\] \[\times\exp\left\{C\bigg{(}M(T\wedge\tau_{m})+\int_{0}^{T\wedge \tau_{m}}\beta(\omega,t)\bigg{[}\int_{0}^{t}\beta(\omega,t^{{}^{\prime}})dt^{{}^ {\prime}}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1}})+\|\gamma_{0}\|_{W^{1, \infty}}\bigg{]}dt\right)\right\}.\]
_This implies on the set \(\{\tau^{*}<\infty\}\cap\{\lim\sup_{t\to\tau^{*}}\|u(\omega,t)\|_{W^{1,\infty}}= \infty\}^{C}\),_
\[\|u(T\wedge\tau_{m})\|_{H^{s}}^{2}+\|\gamma(T\wedge\tau_{m})\|_{H^{ s}}^{2}\] \[\leq (\|u_{0}\|_{H^{s}}^{2}+\|\gamma_{0}\|_{H^{s}}^{2})\beta(\omega,T \wedge\tau_{m})\] \[\times\exp\left\{C\bigg{(}M\tau_{m}+\int_{0}^{T\wedge\tau_{m}} \bigg{[}\beta(\omega,t)\int_{0}^{t}\beta(\omega,t^{{}^{\prime}})dt^{{}^{\prime }}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1}})+\|\gamma_{0}\|_{W^{1,\infty}} \bigg{]}dt\bigg{)}\right\}\] \[< \infty,\]
_where we used \(\sup_{t>0}\beta(\omega,t)<\infty\) due to \(\sup_{t>0}\mathbb{E}\beta(\omega,t)=1\) and Doob's \(L^{1}\)-inequality. Hence we can see that on the set \(\{\tau^{*}<\infty\}\), \(\{\lim\sup_{t\to\tau^{*}}\|u(t)\|_{W^{1,\infty}}=\infty\}^{C}\subset\{\lim \sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})=\infty\}^{C}\). So, we finish the proof._\(\Box\)
**Lemma 7.2**: _(Blow-up scenario 3) Let \(s>3/2\) and \(z_{0}\) be an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Assume that \((z,\tau^{*})\) is the corresponding maximal solution. Then \(z\) as a \(W^{1,\infty}\times W^{1,\infty}\)-valued process is \(\mathcal{F}_{t}\)-adapted for \(t<\tau^{*}\) and \(\mathbb{P}-a.s.\) on the set \(\{\tau^{*}<\infty\}\),_
\[1_{\{\lim\sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{ s}})=\infty\}}=1_{\{\lim\inf_{t\to\tau^{*}}\min_{x\in\Bbbk}\{u_{x}( \omega,t,x)\}=-\infty\}}. \tag{7.8}\]
**Proof:** It is clear that \(\{\liminf_{t\to\tau^{*}}\min_{x\in\mathbb{R}}\{u_{x}(\omega,t,x)\}=-\infty\} \subset\{\lim\sup_{t\to\tau^{*}}(\|u(t)\|_{H^{s}}+\|\gamma(t)\|_{H^{s}})= \infty\}\).
The rest of the proof is similar to that of Lemma 7.1, replacing equation (7.5) with
\[\{\lim\inf_{t\to\tau^{*}}\min_{x\in\mathbb{R}}\{u_{x}(\omega,t,x)\}=-\infty\} ^{C}=\{\exists M(\omega)>0,s.t.\ u_{x}(\omega,t,x)>-M(\omega),\ \ \forall t<\tau^{*}\}. \tag{7.9}\]
Without loss of generality, we only need to show that this Lemma holds for \(s=2\). Multiplying the first equation in (7.2) by \(V_{1}=(1-\partial_{x}^{2})v_{1}\) and integrating by parts, we get
\[\frac{d}{dt}\int_{\mathbb{R}}V_{1}^{2}dx = -2\beta(w,t)\int_{\mathbb{R}}v_{1}V_{1}v_{1x}dx-4\beta(w,t)\int_{ \mathbb{R}}V_{1}^{2}v_{1x}dx-2\beta(w,t)\int_{\mathbb{R}}V_{1}V_{2}v_{2x}dx \tag{7.10}\] \[= -3\beta(w,t)\int_{\mathbb{R}}V_{1}^{2}v_{1x}dx-2\beta(w,t)\int_{ \mathbb{R}}V_{1}V_{2}v_{2x}dx.\]
Multiplying the second equation in (7.2) by \(V_{2}=(1-\partial_{x}^{2})v_{2}\) and integrating by parts, we obtain
\[\frac{d}{dt}\int_{\mathbb{R}}V_{2}^{2}dx = -\beta(w,t)\int_{\mathbb{R}}v_{1x}V_{2}^{2}dx. \tag{7.11}\]
Thus, in view of (7.9), (7.10), (7.11) and (7.7), for any \(\omega\in\{\liminf_{t\to\tau^{*}}\min_{x\in\mathbb{R}}\{u_{x}(\omega,t,x)\}=- \infty\}^{C}\), we obtain
\[\frac{d}{dt}\int_{\mathbb{R}}(V_{1}^{2}+V_{2}^{2})dx\] \[=-3\beta(w,t)\int_{\mathbb{R}}V_{1}^{2}v_{1x}dx-\beta(w,t)\int_{ \mathbb{R}}v_{1x}V_{2}^{2}dx-2\beta(w,t)\int_{\mathbb{R}}V_{1}V_{2}v_{2x}dx\] \[\leq 3M\int_{\mathbb{R}}(V_{1}^{2}+V_{2}^{2})dx\] \[+\beta(\omega,t)\bigg{(}\int_{0}^{t}\beta(\omega,t^{{}^{\prime}}) dt^{{}^{\prime}}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1}})+\|\gamma_{0}\|_{W^{1, \infty}}\bigg{)}\int_{\mathbb{R}}(V_{1}^{2}+V_{2}^{2})dx\]
By means of Gronwall's inequality, we arrive at
\[\|v_{1}(T\wedge\tau_{m})\|_{H^{2}}+\|v_{2}(T\wedge\tau_{m})\|_{H^{2}}=\|V_{1}(T\wedge\tau_{m})\|_{L^{2}}+\|V_{2}(T\wedge\tau_{m})\|_{L^{2}}<\infty.\]
In addition, since \(q(\omega,t,\cdot)\) is an increasing diffeomorphism of \(\mathbb{R}\) with \(q_{x}(\omega,t,x)>0\) for all \((t,x)\in[0,\tau^{*})\times\mathbb{R}\), by (7.3), it follows that for any \(\omega\in A_{m}\),
\[\left\{\begin{array}{ll}V_{1}(\omega,t,x)\leq 0\ \ if\ \ x\leq q(\omega,t,x_{0}), \\ V_{1}(\omega,t,x)\geq 0\ \ if\ \ x\geq q(\omega,t,x_{0}),\\ V_{1}(\omega,t,q(\omega,t,x_{0}))=0.\end{array}\right. \tag{7.16}\]
Therefore, for any \(\omega\in A_{m}\), when \(x\leq q(\omega,t,x_{0})\), by (7.14) and (7.16), we have \(v_{1}(\omega,t,x)\leq v_{1x}(\omega,t,x)\); when \(x\geq q(\omega,t,x_{0})\), by (7.13) and (7.16), we have \(v_{1x}(\omega,t,x)\geq-v_{1}(\omega,t,x)\). Hence, it follows from (6.2) that for any \(\omega\in A_{m}\),
\[-v_{1x}(\omega,t,x)\leq|v_{1}(\omega,t,x)|\leq\|v_{1}(\omega,t,x) \|_{L^{\infty}}\leq\frac{\sqrt{2}}{2}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1} }),\ \forall(t,x)\in[0,\tau^{*})\times\mathbb{R} \tag{7.17}\]
Then for any \(\omega\in A_{m}\), \(u_{x}(\omega,t)\geq-\frac{\sqrt{2}}{2}\beta(\omega,t)(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1}})\). This together with Lemma 7.2 and \(\sup_{t>0}\beta(\omega,t)<\infty\) implies that \(z\) exists globally.
For any \(\omega\in A_{p}\cup A_{q}\), it follows from (7.15) that \(|v_{1x}(\omega,t,x)|\leq|v_{1}(\omega,t,x)|\). In view of the Sobolev inequality and (6.2), we arrive at
\[\|v_{1x}(\omega,t,x)\|_{L^{\infty}}\leq\|v_{1}(\omega,t,x)\|_{L^ {\infty}}\leq \frac{\sqrt{2}}{2}(\|u_{0}\|_{H^{1}}+\|\gamma_{0}\|_{H^{1}}),\ \forall(t,x)\in[0,\tau^{*})\times\mathbb{R},\omega\in A_{p}\cup A_{q}. \tag{7.18}\]
Combining \(A_{p}\cap A_{q}\cap A_{m}=\emptyset\), (7.17) and (7.18), we derive (7.12). \(\Box\)
**Proof of Theorem 3.8.** Note that \(\sup_{t>0}\mathbb{E}\beta(\omega,t)=1\), so Doob's \(L^{1}\)-inequality implies that \(\sup_{t>0}\beta(\omega,t)<\infty\). Then we can infer from (7.4), (7.8) and (7.12) that \(\mathbb{P}\{\tau^{*}=\infty\}\geq p+q+m\). This completes the proof.
### Proof of Theorem 3.9
The proof of Theorem 3.9 relies on certain properties of the solution \(v_{1},v_{2}\) to the equations (6.3) and (6.4). We first prove the following lemma.
**Lemma 7.4**: _Let \(s>5/2\) and \(b(t)\) satisfy Assumption 3.3. Assume \((u_{0},\gamma_{0})\) is an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Let \(K=\frac{\sqrt{2}}{2}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})^{\frac{1 }{2}}\). Then for \(v_{1},v_{2}\) defined by (6.3), (6.4) and any \(x_{0}\in\mathbb{R}\),_
\[g(\omega,t):=v_{1x}(\omega,t,q(\omega,t,x_{0}))\]
_satisfies \(\mathbb{P}-a.s.\)_
\[\frac{d}{dt}g(t)\leq\beta K^{2}-\frac{\beta}{2}g^{2}(t),\ \ t<\tau^{*}. \tag{7.19}\]
_Moreover, if there exists some \(x_{0}\in\mathbb{R}\) such that \(\mathbb{P}-a.s.\)\(g(0)<-\sqrt{2}K\), then \(\mathbb{P}-a.s.\)\(g(t)\) is non-increasing on \([0,\tau^{*})\) and_
\[g(t)<-\sqrt{2}K,\ \ t\in[0,\tau^{*}). \tag{7.20}\]
**Proof:** For any \(v_{1},v_{2}\in H^{1}\), by the representation of \(G*f=(1-\partial_{x}^{2})^{-1}f\), we have
\[G*\bigg{(}v_{1}^{2}+\frac{1}{2}v_{1x}^{2}\Big{)}(x)=\frac{1}{2} \int_{-\infty}^{x}\ e^{-x+y}\bigg{(}v_{1}^{2}+\frac{1}{2}v_{1x}^{2}\bigg{)}(y) dy+\frac{1}{2}\int_{x}^{\infty}e^{x-y}\bigg{(}v_{1}^{2}+\frac{1}{2}v_{1x}^{2} \bigg{)}(y)dy. \tag{7.21}\]
The following inequality
\[\int_{-\infty}^{x}e^{y}\bigg{(}v_{1}^{2}+v_{1x}^{2}\bigg{)}(y)dy \geq 2\int_{-\infty}^{x}\ e^{y}v_{1}v_{1x}(y)dy=e^{x}v_{1}^{2}(x)-\int_{- \infty}^{x}e^{y}v_{1}^{2}dy\]
implies that
\[\frac{1}{2}\int_{-\infty}^{x}e^{-x+y}\bigg{(}v_{1}^{2}+\frac{1}{2}v_{1x}^{2} \bigg{)}(y)dy\geq\frac{1}{4}v_{1}^{2}(x). \tag{7.22}\]
Similarly, we get the estimate of the second term in (7.21) as
\[\frac{1}{2}\int_{x}^{\infty}e^{x-y}\bigg{(}v_{1}^{2}+\frac{1}{2}v_{1x}^{2}\bigg{)}(y)dy\geq\frac{1}{4}v_{1}^{2}(x). \tag{7.23}\]
Combining (7.21), (7.22) and (7.23), we deduce \(G*(v_{1}^{2}+\frac{1}{2}v_{1x}^{2})(x)\geq\frac{1}{2}v_{1}^{2}(x)\). In addition,
\[\|G*v_{2x}^{2}\|_{L^{\infty}}\leq\|G\|_{L^{\infty}}\|v_{2x}^{2}\|_{L^{1}}= \frac{1}{2}\|v_{2x}^{2}\|_{L^{1}}. \tag{7.24}\]
Differentiating the first equation of (6.1) with respect to \(x\), and using (6.2) and (7.24), we have
\[\frac{d}{dt}v_{1x}(\omega,t,q(\omega,t,x)) = v_{1xt}+v_{1xx}\beta(\omega,t)v_{1}(\omega,t,q(\omega,t,x))\] \[= -\beta v_{1x}^{2}-\beta\partial_{x}^{2}(1-\partial_{x}^{2})^{-1}\left(v_{1}^{2}+\frac{1}{2}v_{1x}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2}\right)\] \[\leq -\frac{1}{2}\beta v_{1x}^{2}+\frac{1}{2}\beta v_{1}^{2}+\frac{1}{4}\beta v_{2}^{2}+\frac{3}{4}\beta G*(v_{2x}^{2})\] \[\leq -\frac{1}{2}\beta v_{1x}^{2}+\frac{\beta}{2}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2}).\]
In view of the assumptions of Lemma 7.4, we have \(\mathbb{P}-a.s.\)
\[\frac{d}{dt}g(t)\leq-\frac{\beta}{2}g^{2}(t)+\beta K^{2},\ \ t<\tau^{*},\]
which is (7.19). In order to prove (7.20), define
\[\zeta(\omega):=\inf\bigg{\{}t\in[0,\tau^{*}):g(\omega,t)>-\sqrt{2}K\bigg{\}}.\]
If \(g(0)<-\sqrt{2}K\), then \(\mathbb{P}\{\zeta>0\}=1\). From the definition of \(\zeta(\omega)\), we find that \(\zeta(\omega)\leq\tau^{*}\) for \(\mathbb{P}-a.s.\) \(\omega\in\Omega\). From (7.19), we have that \(g(\omega,t)\) is nonincreasing for \(t\in[0,\zeta(\omega))\). Hence by the continuity of the path of \(g(\omega,t)\), we obtain that \(g(\omega,t)\leq g(0)<-\sqrt{2}K,\ \ t\in[0,\zeta(\omega))\). In view of the time continuity of \(g(\omega,t)\) again, we find that \(\mathbb{P}\{\zeta=\tau^{*}\}=1.\) Hence (7.20) is true.
\(\Box\)
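Before turning to the proof of Theorem 3.9, the role of the threshold \(-\sqrt{2}K\) in (7.20) can be illustrated numerically. The following minimal sketch (not from the paper; it assumes the deterministic case \(\beta\equiv 1\) with \(K=1\)) integrates the Riccati bound \(g^{\prime}=\beta K^{2}-\frac{\beta}{2}g^{2}\) by forward Euler:

```python
# Minimal sketch (assumptions: beta == 1, K == 1): forward-Euler integration of
# the Riccati bound g' = K^2 - g^2/2 from Lemma 7.4. Initial data above
# -sqrt(2)K relax to the stable equilibrium sqrt(2)K, while initial data below
# -sqrt(2)K decrease monotonically and blow down to -infinity in finite time.
K, dt = 1.0, 1e-4
for g0 in (-1.0, -1.5):            # -sqrt(2)*K is approximately -1.414
    g, t = g0, 0.0
    while t < 10.0 and g > -1e6:   # stop once g is very large and negative
        g += dt * (K**2 - 0.5 * g**2)
        t += dt
    print(f"g(0) = {g0}: stopped at t = {t:.3f} with g = {g:.3e}")
```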
**Proof of Theorem 3.9.** From Lemma 7.4 and (3.5), we rewrite (7.19) as
\[\frac{d}{dt}g(t) \leq -\frac{\beta(t)}{2}\bigg{(}1-\frac{2K^{2}}{g^{2}(0)}\bigg{)}g^{2 }(t)-\left(\frac{g^{2}(t)}{g^{2}(0)}-1\right)\beta(t)K^{2}\] \[\leq -\frac{\beta(t)}{2}\bigg{(}1-\frac{2K^{2}}{g^{2}(0)}\bigg{)}g^{2 }(t),\ \ t\in[0,\tau^{*}).\]
Integrating on both sides leads to \(\mathbb{P}-a.s.\)
\[\frac{1}{g(t)}-\frac{1}{g(0)}\geq\bigg{(}1-\frac{2K^{2}}{g^{2}(0)}\bigg{)} \int_{0}^{t}\frac{\beta(t^{{}^{\prime}})}{2}dt^{{}^{\prime}},\ \ t\in[0,\tau^{*}).\]
Assuming \(\Omega^{{}^{\prime}}=\{\omega:\beta(t,\omega)\geq ce^{-\frac{b^{*}}{2}t}\ \mbox{for all}\ t\},\) the bound \(g(t)\leq-\sqrt{2}K\) means that for \(\mathbb{P}-a.s.\) \(\omega\in\Omega^{{}^{\prime}}\)
\[-\frac{1}{g(0)}\geq\bigg{(}\frac{1}{2}-\frac{K^{2}}{g^{2}(0)}\bigg{)}\int_{0} ^{\tau^{*}}\beta(t^{{}^{\prime}})dt^{{}^{\prime}}\geq\bigg{(}\frac{1}{2}-\frac {K^{2}}{g^{2}(0)}\bigg{)}\bigg{(}\frac{2c}{b^{*}}-\frac{2c}{b^{*}}e^{-\frac{b ^{*}}{2}\tau^{*}}\bigg{)}.\]
If \(g(0)<-\frac{1}{2}\sqrt{\frac{(b^{*})^{2}}{c^{2}}+8K^{2}}-\frac{b^{*}}{2c}\), we obtain on \(\Omega^{\prime}\)
\[\bigg{(}\frac{1}{2}-\frac{K^{2}}{g^{2}(0)}\bigg{)}\frac{2c}{b^{*}}e^{-\frac{b^{* }}{2}\tau^{*}}\geq\frac{2c}{b^{*}}\bigg{(}\frac{1}{2}-\frac{K^{2}}{g^{2}(0)} \bigg{)}+\frac{1}{g(0)}>0.\]
Therefore we have \(\tau^{*}<\infty\)\(\mathbb{P}-a.s.\) on \(\Omega^{{}^{\prime}}\), which implies that
\[\mathbb{P}\{\tau^{*}<\infty\}\geq\mathbb{P}\{\beta(t)\geq ce^{-\frac{b^{*}}{2} t}\ for\ all\ t\}=\mathbb{P}\left\{e^{\int_{0}^{t}b(t^{{}^{\prime}})dW_{t^{{}^{ \prime}}}+\int_{0}^{t}\frac{b^{*}-b^{2}(t^{{}^{\prime}})}{2}dt^{{}^{\prime}}} \geq c\ for\ all\ t\right\}>0.\]
We finish the proof.
### Proof of Theorem 3.10
The proof of Theorem 3.10 is similar to that of Theorem 3.9. We first prove the following lemma.
**Lemma 7.5**: _Let \(s>5/2\) and \(b(t)\) satisfy Assumption 3.3. Assume \((u_{0},\gamma_{0})\) is an \(H^{s}\times H^{s}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Let \(K=\frac{\sqrt{2}}{2}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})^{\frac{1 }{2}}\). Then for \(v_{1},v_{2}\) defined by (6.3), (6.4),_
\[N(\omega,t):=\int_{\mathbb{R}}v_{1x}^{3}(\omega,t,q(\omega,t,x))dx\]
_satisfies \(\mathbb{P}-a.s.\)_
\[\frac{d}{dt}N(t)\leq\frac{15\beta}{4}K^{4}-\frac{\beta}{4K^{2}}N^{2}(t),\ \ t<\tau^{*}. \tag{7.25}\]
_Moreover, if \(\mathbb{P}-a.s.\)\(N(0)<-\sqrt{15}K^{3}\), then \(\mathbb{P}-a.s.\)\(N(t)\) is non-increasing on \([0,\tau^{*})\) and_
\[N(t)<-\sqrt{15}K^{3},\ \ t\in[0,\tau^{*}). \tag{7.26}\]
**Proof:** _Differentiating the first equation of (6.1) with respect to \(x\), and using the identity \(\partial_{x}^{2}(1-\partial_{x}^{2})^{-1}f=\partial_{x}^{2}G*f=G*f-f\), we have_
\[v_{1xt}+\frac{\beta}{2}v_{1x}^{2}+\beta v_{1}v_{1xx}+\beta G*\left(v_{1}^{2}+ \frac{1}{2}v_{1x}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2}\right)-\beta \left(v_{1}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2}\right)=0. \tag{7.27}\]
_Let \(N(t):=\int_{\mathbb{R}}v_{1x}^{3}(\omega,t,x)dx,t\geq 0\). Multiplying (7.27) with \(v_{1x}^{2}\) and integrating by parts subsequently, by \(G*(v_{1}^{2}+\frac{1}{2}v_{1x}^{2})(x)\geq\frac{1}{2}v_{1}^{2}(x)\), we get_
\[\frac{1}{3}\frac{dN(t)}{dt}= -\frac{\beta}{6}\int_{\mathbb{R}}v_{1x}^{4}dx-\beta\int_{\mathbb{R}}v_{1x}^{2}G*(v_{1}^{2}+\frac{1}{2}v_{1x}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2})dx+\beta\int_{\mathbb{R}}v_{1x}^{2}(v_{1}^{2}+\frac{1}{2}v_{2}^{2}-\frac{1}{2}v_{2x}^{2})dx\] \[\leq -\frac{\beta}{6}\int_{\mathbb{R}}v_{1x}^{4}dx+\frac{\beta}{2}\int_{\mathbb{R}}v_{1x}^{2}v_{1}^{2}dx+\frac{\beta}{2}\int_{\mathbb{R}}v_{1x}^{2}G*v_{2x}^{2}dx+\frac{\beta}{2}\int_{\mathbb{R}}v_{1x}^{2}v_{2}^{2}dx\] \[\leq -\frac{\beta}{6}\int_{\mathbb{R}}v_{1x}^{4}dx+\frac{\beta}{2}\int_{\mathbb{R}}v_{1x}^{2}(v_{1}^{2}+v_{2}^{2})dx+\frac{\beta}{4}\|v_{2x}^{2}\|_{L^{1}}\int_{\mathbb{R}}v_{1x}^{2}dx.\]
_In view of Sobolev's embedding and the invariant property of \(\|v_{1}(t)\|_{H^{1}}^{2}+\|v_{2}(t)\|_{H^{1}}^{2}=\|u_{0}\|_{H^{1}}^{2}+\| \gamma_{0}\|_{H^{1}}^{2}\), we find that_
\[\frac{3}{2}\int_{\mathbb{R}}v_{1x}^{2}(v_{1}^{2}+v_{2}^{2})dx+\frac{3}{4}\|v_{ 2x}^{2}\|_{L^{1}}\int_{\mathbb{R}}v_{1x}^{2}dx\leq\frac{3}{4}(\|u_{0}\|_{H^{1}}^ {2}+\|\gamma_{0}\|_{H^{1}}^{2})^{2}+\frac{3}{16}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{ 0}\|_{H^{1}}^{2})^{2}.\]
_On the other hand, the Cauchy-Schwarz inequality implies that_
\[\bigg{|}\int_{\mathbb{R}}v_{1x}^{3}dx\bigg{|}\leq\bigg{(}\int_{ \mathbb{R}}v_{1x}^{4}dx\bigg{)}^{\frac{1}{2}}\bigg{(}\int_{\mathbb{R}}v_{1x}^{2} dx\bigg{)}^{\frac{1}{2}},\]
_hence,_
\[\int_{\mathbb{R}}v_{1x}^{4}dx\geq\frac{1}{\|u_{0}\|_{H^{1}}^{2}+ \|\gamma_{0}\|_{H^{1}}^{2}}\bigg{(}\int_{\mathbb{R}}v_{1x}^{3}dx\bigg{)}^{2}.\]
_With \(K=\frac{\sqrt{2}}{2}(\|u_{0}\|_{H^{1}}^{2}+\|\gamma_{0}\|_{H^{1}}^{2})^{\frac{1}{2}}\) as defined in the statement of the lemma, we have the similar Riccati type equation_
\[\frac{dN(t)}{dt}\leq-\frac{\beta}{4K^{2}}N^{2}(t)+\frac{15\beta}{ 4}K^{4},\]
_which is (7.25). In order to prove (7.26), define stopping time_
\[\chi(\omega):=\inf\bigg{\{}t\in[0,\tau^{*}):N(\omega,t)>-\sqrt{15}K^{3}\bigg{\}}.\]
_If \(N(0)<-\sqrt{15}K^{3}\), then \(\mathbb{P}\{\chi(\omega)>0\}=1\). From the definition of \(\chi(\omega)\), we find that for \(\mathbb{P}-a.s.\) \(\omega\in\Omega\), \(\chi(\omega)\leq\tau^{*}\). From (7.25), we conclude that \(N(\omega,t)\) is nonincreasing for \(t\in[0,\chi(\omega))\). Hence by the continuity of the path of \(N(\omega,t)\), we obtain that \(N(\omega,t)\leq N(0)<-\sqrt{15}K^{3}\) for \(t\in[0,\chi(\omega))\). In view of the time continuity of \(N(\omega,t)\) again, we find that \(\mathbb{P}\{\chi=\tau^{*}\}=1\). Therefore, (7.26) is true. \(\Box\)_
**Proof of Theorem 3.10.** From Lemma 7.5 and (3.6), we rewrite (7.25) as
\[\frac{d}{dt}N(t) \leq -\frac{\beta(t)}{4K^{2}}\bigg{(}1-\frac{15K^{6}}{N^{2}(0)}\bigg{)} N^{2}(t)-\left(\frac{N^{2}(t)}{N^{2}(0)}-1\right)\frac{15\beta(t)}{4}K^{4}\] \[\leq -\frac{\beta(t)}{4K^{2}}\bigg{(}1-\frac{15K^{6}}{N^{2}(0)}\bigg{)} N^{2}(t),\ \ t\in[0,\tau^{*}).\]
Integrating on both sides leads to \(\mathbb{P}-a.s.\)
\[\frac{1}{N(t)}-\frac{1}{N(0)}\geq\bigg{(}1-\frac{15K^{6}}{N^{2}( 0)}\bigg{)}\int_{0}^{t}\frac{\beta(t^{{}^{\prime}})}{4K^{2}}dt^{{}^{\prime}}, \ \ t\in[0,\tau^{*}).\]
Assuming \(\Omega^{{}^{\prime}}=\{\omega:\beta(t,\omega)\geq ce^{-\frac{b^{*}}{2}t}\ for\ all\ t\}\), and using \(N(t)<-\sqrt{15}K^{3}\), we get for \(\mathbb{P}-a.s.\) \(\omega\in\Omega^{{}^{\prime}}\)
\[-\frac{1}{N(0)}\geq\bigg{(}\frac{1}{4K^{2}}-\frac{15K^{4}}{4N^{2 }(0)}\bigg{)}\int_{0}^{\tau^{*}}\beta(t^{{}^{\prime}})dt^{{}^{\prime}}\geq \bigg{(}\frac{1}{4K^{2}}-\frac{15K^{4}}{4N^{2}(0)}\bigg{)}\bigg{(}\frac{2c}{b^ {*}}-\frac{2c}{b^{*}}e^{-\frac{b^{*}}{2}\tau^{*}}\bigg{)}.\]
If \(N(0)<-\sqrt{\frac{(b^{*})^{2}K^{4}}{c^{2}}+15K^{6}}-\frac{b^{*}K^{2}}{c}\), we obtain on \(\Omega^{\prime}\)
\[\bigg{(}\frac{1}{4K^{2}}-\frac{15K^{4}}{4N^{2}(0)}\bigg{)}\frac{ 2c}{b^{*}}e^{-\frac{b^{*}}{2}\tau^{*}}\geq\frac{2c}{b^{*}}\bigg{(}\frac{1}{4K ^{2}}-\frac{15K^{4}}{4N^{2}(0)}\bigg{)}+\frac{1}{N(0)}>0.\]
Therefore we obtain \(\tau^{*}<\infty\)\(\mathbb{P}-a.s.\) on \(\Omega^{{}^{\prime}}\), which means that
\[\mathbb{P}\{\tau^{*}<\infty\}\geq\mathbb{P}\{\beta(t)\geq ce^{- \frac{b^{*}}{2}t}\ for\ all\ t\}=\mathbb{P}\left\{e^{\int_{0}^{t}b(t^{{}^{ \prime}})dW_{t^{{}^{\prime}}}+\int_{0}^{t}\frac{b^{*}-b^{2}(t^{{}^{\prime}})}{ 2}dt^{{}^{\prime}}}\geq c\ for\ all\ t\right\}>0.\]
So, the proof is finished.
## Acknowledgments
This paper is supported by Fundamental Research Funds for the Central Universities (No. 22D110913).
|
2308.08305 | Warped geometric information on the optimisation of Euclidean functions | We consider the fundamental task of optimising a real-valued function defined
in a potentially high-dimensional Euclidean space, such as the loss function in
many machine-learning tasks or the logarithm of the probability distribution in
statistical inference. We use Riemannian geometry notions to redefine the
optimisation problem of a function on the Euclidean space to a Riemannian
manifold with a warped metric, and then find the function's optimum along this
manifold. The warped metric chosen for the search domain induces a
computational friendly metric-tensor for which optimal search directions
associated with geodesic curves on the manifold becomes easier to compute.
Performing optimization along geodesics is known to be generally infeasible,
yet we show that in this specific manifold we can analytically derive Taylor
approximations up to third-order. In general these approximations to the
geodesic curve will not lie on the manifold, however we construct suitable
retraction maps to pull them back onto the manifold. Therefore, we can
efficiently optimize along the approximate geodesic curves. We cover the
related theory, describe a practical optimization algorithm and empirically
evaluate it on a collection of challenging optimisation benchmarks. Our
proposed algorithm, using 3rd-order approximation of geodesics, tends to
outperform standard Euclidean gradient-based counterparts in term of number of
iterations until convergence. | Marcelo Hartmann, Bernardo Williams, Hanlin Yu, Mark Girolami, Alessandro Barp, Arto Klami | 2023-08-16T12:08:50Z | http://arxiv.org/abs/2308.08305v2 | # Warped geometric information on the optimisation of Euclidean functions
###### Abstract
We consider the fundamental task of optimizing a real-valued function defined in a potentially high-dimensional Euclidean space, such as the loss function in many machine-learning tasks or the logarithm of the probability distribution in statistical inference. We use warped Riemannian geometry notions to redefine the optimisation problem of a function on a Euclidean space to a Riemannian manifold with a warped metric, and then find the function's optimum along this manifold. The warped metric chosen for the search domain induces a computationally friendly metric-tensor for which the optimal search directions associated with geodesic curves on the manifold become easier to compute. Performing optimization along geodesics is known to be generally infeasible, yet we show that in this specific manifold we can analytically derive Taylor approximations up to \(3^{\mathrm{rd}}\)-order. In general these approximations to the geodesic curve will not lie on the manifold; however, we construct suitable retraction maps to pull them back onto the manifold. Therefore, we can efficiently optimize along the approximate geodesic curves. We cover the related theory, describe a practical optimization algorithm and empirically evaluate it on a collection of challenging optimisation benchmarks. Our proposed algorithm, using \(3^{\mathrm{rd}}\)-order approximation of geodesics, outperforms standard Euclidean gradient-based counterparts in terms of the number of iterations until convergence, as well as an alternative method for Hessian-based optimisation routines.
## 1 Introduction
A central task in computational statistics and machine-learning (ML) is defined in terms of optimization. Usually termed _learning_, the goal is to find a parameter \(\boldsymbol{\theta}\in\Theta\subseteq\mathbb{R}^{D}\) that maximises (or, equivalently, minimises) some objective function \(\ell(\boldsymbol{\theta})\). For instance, maximum a posteriori (MAP) estimation falls into this category, with \(\ell(\boldsymbol{\theta})=\log\pi_{\mathrm{post}}(\boldsymbol{\theta})\) corresponding to the logarithm of the posterior distribution for a collection of real data (typically independent). Such optimization problems are routinely solved using gradient-based methods (Hestenes et al., 1952; Nocedal and Wright, 2006), with stochastic versions (Kingma and Ba, 2015) dominating the field for large-scale models such as deep neural networks and approximate second-order methods like BFGS (Nocedal, 1989) used for faster convergence in problems of smaller scale.
Typical optimization methods assume the objective function domain \(\Theta\) to be Euclidean, and they vary primarily in how the search directions are specified, from direct use of gradients to various conjugate gradient variants (see, for example, Nesterov (1983), Bhaya and Kaszkurewicz (2004) or Shanno (1978)), and in how updates of those directions in combination with gradients are specified (Shanno, 1978). The scientific literature covers such optimization methods in great detail, with several theoretical results and their practical efficiency covered in Shanno (1978) and Polak (1997).
We approach the problem from the Riemannian geometry viewpoint. Rather than directly optimizing the target function \(\ell\) whose domain (search space) is Euclidean, we define a new function \(f\) on the target function's graph and endow the space in which the graph is immersed with a warped geometry. The domain of \(f\) can now be seen as an embedded Riemannian manifold with a warped metric; this is formally called a warped product space, see for example O'Neill (1983), Zhang (2014) and, more recently, Barreto et al. (2023). For the sake of introduction, let us denote this manifold as \(\mathcal{M}\) and its elements as \(\mathbf{x}\); both will be made precise later on. Each point \(\mathbf{x}\) on the manifold encodes both \(\mathbf{\theta}\) and the function value \(\ell(\mathbf{\theta})\) in a bijective manner with \(\Theta\); thus the optima of \(f\) on \(\mathcal{M}\) preserve the optima of \(\ell\) on \(\Theta\). Because the set \(\mathcal{M}\) is a Riemannian manifold, we can harness the geometric information contained in the domain of \(f\) and endow the optimisation routine with Riemannian tools. In the recent literature, Duruisseaux and Leok (2022a), Duruisseaux and Leok (2022b) and references therein have shown that optimisation on manifolds can achieve accelerated convergence rates.
For arbitrary manifolds the computational burden would increase due to the need to account for extra Riemannian notions. For example, the notion of straight lines is replaced with geodesic paths on \(\mathcal{M}\) and the generalization of parallelism relies on the parallel transport operation (see Do Carmo, 1992, Chapter 2 and Chapter 3). Those more general concepts commonly bring extra difficulty and higher computational cost, as no closed-form arithmetics are usually known. However, for this particular embedding and suitably chosen metrics it turns out that we can perform all the necessary computations for individual updates within the algorithm fast, in linear time with respect to the problem dimensionality, matching the (asymptotic) cost of standard first-order gradient-based methods operating in the Euclidean space. This paper introduces such a manifold with a particular metric, providing a practical optimization algorithm that is demonstrated to perform well especially in optimisation tasks on highly curved surfaces that are difficult for standard methods.
Our proposed algorithm follows closely the work by Zhu (2020), where the search directions in Riemannian conjugate gradient (RCG) methods (see Sato, 2021; Sakai and Iiduka, 2021; Sato, 2022; Oviedo, 2022) and parallel transport operations are respectively replaced by a \(1^{\text{st}}\)-order geodesic approximation (retraction map) and vector transport, the latter using the idea of an inverse backward retraction mapping via orthogonal projection (Luenberger, 1972). As also presented in Zhu (2020), these operations are easy to compute and have provided convergence speed comparable to closed-form parallel transport on specific matrix manifolds (Absil et al., 2008; Sato, 2021; Boumal, 2023).
Our proposed approach builds on two key elements. First, we recast the optimisation task of a Euclidean function as the optimisation of a new function on the embedded manifold given by the function's graph, associated with a specific warped Riemannian geometry. This allows us to harness the intrinsic geometric properties of the problem to design a new optimisation algorithm. A related approach was recently used by Hartmann et al. (2022) for constructing a geometric Markov Chain Monte Carlo sampler and was shown to induce a natural Riemannian metric-tensor that has highly desirable computational properties. For instance, we can compute its inverse metric-tensor and the Christoffel symbols in closed-form to bring down the computational costs considerably. See also Tosi et al. (2014) for a similar
construction as a pull-back metric-tensor in latent variable models.
The second key contribution is the use of a \(3^{\mathrm{rd}}\)-order approximation of the geodesic path as optimization search direction. While we cannot perform efficient computation along the exact geodesics, because it would require numerical solution of a system of differential equations and, within it, calling the metric-tensor itself several times, we show that we can construct a computationally efficient \(3^{\mathrm{rd}}\)-order Taylor series of geodesics at any point on \(\mathcal{M}\). Monera et al. (2014) noted that the tangential component of the geodesic only depends on the \(2^{\mathrm{nd}}\)-order geometry of \(\mathcal{M}\), suggesting that both \(2^{\mathrm{nd}}\)- and \(3^{\mathrm{rd}}\)-order approximations are practically feasible. As we will show, the \(3^{\mathrm{rd}}\)-order approximation can be rewritten using only the \(2^{\mathrm{nd}}\)-order geometry of \(\mathcal{M}\) (see Song et al., 2018, Appendix, Section C, for a similar approach) and it is not necessary to form the Hessian explicitly. Instead we directly implement its multiplication by a vector of suitable dimension. This brings down the cost to linear in the problem dimensionality (Pearlmutter, 1994). Because the approximate geodesics usually will not map back to a point in \(\mathcal{M}\), we need to perform a retraction step to pull the updated result back onto the manifold. For our case we can define a valid retraction map based on the embedding with no significant additional computational cost.
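To make the Hessian-vector product step concrete, the following minimal sketch approximates \(H(\boldsymbol{\theta})\boldsymbol{v}\) by a central finite difference of gradients; this is a simple numerical stand-in for the exact linear-cost \(\mathcal{R}\)-operator of Pearlmutter (1994), and the quadratic test function below is an arbitrary illustration rather than one of the paper's benchmarks:

```python
import numpy as np

# Minimal sketch: Hessian-vector product H(theta) v without forming H,
# approximated by a central finite difference of the gradient. For the
# quadratic ell(theta) = 0.5 * theta^T A theta the exact product is A v.
def hvp(grad_fn, theta, v, eps=1e-5):
    return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)

A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_fn = lambda th: A @ th                     # gradient of the quadratic
theta, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(hvp(grad_fn, theta, v), A @ v)            # both approximately equal A v
```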
The rest of the paper is organized as follows. In Sections 2, 3 and 4 we introduce the necessary background required for understanding the technical contributions and explain the problem formulation, that is, the warped Riemannian space, the Riemannian metric it induces on the tangent space of the embedding and the Riemannian conjugate gradient optimisation approach.
In Sections 5 and 6 we present the choice of retraction based on the aforementioned approximation and, in Section 7, the particular form of the backward retraction map. In Section 8, we present the resulting Riemannian conjugate gradient (RCG) algorithm, which has linear complexity in terms of the input dimensionality. In Section 9 we evaluate the algorithm on three challenging benchmark examples: the multidimensional squiggle function, the multidimensional Rosenbrock function, and a subset of models in the CUTE library. In all these cases we compare against the state-of-the-art conjugate gradient method with a Hager-Zhang type inexact line-search (Hager and Zhang, 2006), considering both gradient and Newton search directions.
## 2 Preliminaries and notation
A set \(\mathcal{M}\) is called a _manifold_ of dimension \(D\) if, together with bijective mappings (at times called parametrisations) \(\xi_{i}:\Theta_{i}\subseteq\mathbb{R}^{D}\to\mathcal{M}\), it satisfies (a) \(\cup_{i}\xi_{i}(\Theta_{i})=\mathcal{M}\) and (b) for each \(i\), \(j\) with \(\xi_{i}(\Theta_{i})\cap\xi_{j}(\Theta_{j})\neq\emptyset\) the corresponding transition maps are smooth. A manifold is called a Riemannian manifold when it is characterized by a pair \((\mathcal{M},g)\) where for each \(\boldsymbol{x}\in\mathcal{M}\) the function (called metric) \(g:T_{\boldsymbol{x}}\,\mathcal{M}\times T_{\boldsymbol{x}}\,\mathcal{M}\to \mathbb{R}\) associates the usual dot product of vectors in the tangent space at \(\boldsymbol{x}\) (denoted as \(T_{\boldsymbol{x}}\,\mathcal{M}\)), that is \((\boldsymbol{V},\boldsymbol{U})\xrightarrow{g}\langle\boldsymbol{V}, \boldsymbol{U}\rangle_{\boldsymbol{x}}\). If \(g\) is a positive function we call it a _Riemannian metric_.
Let \((\mathcal{M}^{m},\langle\cdot,\cdot\rangle_{M})\) and \((\mathcal{N}^{n},\langle\cdot,\cdot\rangle_{N})\) be Riemannian manifolds of dimensions \(m\) and \(n\) respectively. Also let \(\psi:\mathcal{N}\to(0,\infty)\) be a positive and smooth function called the _warp function_. The product \(\mathcal{M}\times\mathcal{N}\) endowed with the Riemannian metric
\[g=\langle\cdot,\cdot\rangle_{\psi}=\psi^{2}\langle\cdot,\cdot\rangle_{ \mathcal{M}}+\langle\cdot,\cdot\rangle_{\mathcal{N}}, \tag{1}\]
is called a _warped product space_ and denoted as \(\mathcal{N}\times\mathcal{M}_{\psi}\). Let \(\mathcal{M}=\mathbb{I}\subset\mathbb{R}\) and \(\mathcal{N}=\Theta\), where \(\Theta\) is a \(D\)-dimensional open subset of \(\mathbb{R}^{D}\) with the usual Euclidean metric. Denote as
\(\ell:\Theta\to\mathbb{I}\subseteq\mathbb{R}\) an arbitrary function whose graph is defined as \(\Gamma_{\ell}=\{(\boldsymbol{\theta},\ell(\boldsymbol{\theta})):\boldsymbol{ \theta}\in\Theta\}\). The _canonical parametrisation_ of \(\Gamma_{\ell}\) in \(\mathcal{N}\times\mathcal{M}_{\psi}\) is set as \(\xi:\Theta\to\Gamma_{\ell}\subset\mathcal{N}\times\mathcal{M}_{\psi}\) where \(\xi(\boldsymbol{\theta})=(\boldsymbol{\theta},\ell(\boldsymbol{\theta}))\). Let us denote tangent vectors at \(\boldsymbol{x}\in\Gamma_{\ell}\) as \(\mathrm{d}\xi_{\boldsymbol{x}}(\boldsymbol{v})=\boldsymbol{M}_{\partial} \boldsymbol{v}\) and \(\mathrm{d}\xi_{\boldsymbol{x}}(\boldsymbol{u})=\boldsymbol{M}_{\partial} \boldsymbol{u}\), where \(\boldsymbol{M}_{\partial}=[\partial_{1}\xi\;\dots\;\partial_{D}\xi]\) stacks the tangent basis vectors associated with the canonical parametrisation and \(\boldsymbol{u},\boldsymbol{v}\in\Theta\). Then the induced metric on \(T_{\boldsymbol{x}}\Gamma_{\ell}\), using (1), is given by
\[\langle\mathrm{d}\xi_{\boldsymbol{x}}(\boldsymbol{v}),\mathrm{d} \xi_{\boldsymbol{x}}(\boldsymbol{u})\rangle_{\psi} =\langle\boldsymbol{M}_{\partial}\boldsymbol{v},\boldsymbol{M}_{ \partial}\boldsymbol{u}\rangle_{\psi}\] \[=\psi^{2}\langle\boldsymbol{u}^{\top}\nabla\ell,\boldsymbol{v}^{ \top}\nabla\ell\rangle+\langle\boldsymbol{u},\boldsymbol{v}\rangle\] \[=\boldsymbol{v}^{\top}\big{(}I_{D}+\psi^{2}\nabla\ell\nabla\ell^ {\top}\big{)}\boldsymbol{u}=:\langle\boldsymbol{v},\boldsymbol{u} \rangle_{G(\boldsymbol{x})} \tag{2}\]
where \(G(\boldsymbol{x})=I_{D}+\psi^{2}\nabla\ell\nabla\ell^{\top}\) is the _warped metric-tensor_. From now on we will omit the argument of functions that depend on either \(\boldsymbol{\theta}\in\Theta\) or \(\boldsymbol{x}\in\Gamma_{\ell}\); since \(\xi\) is a bijection, we will write the variable \(\boldsymbol{\theta}\) or \(\boldsymbol{x}\) explicitly only when the passage calls for it.
## 3 Problem formulation and method overview
Beyond the study of topological properties and invariances of smooth sets, differential geometry aims to extend the notions of differential calculus to spaces more general than the Euclidean one, so that rates of change can be characterized and derivatives computed on \(\mathcal{M}\) intrinsically, without referring to any external coordinate space. The tangent space above does exactly this; if we were to choose a different global atlas \(\bar{\xi}(\bar{\Theta})=\mathcal{M}\) representing the manifold, a tangent vector would merely be expressed in a different basis but would remain the same vector. This is a compelling reason to perform optimisation using notions of Riemannian geometry, since it frees us from the task of choosing a coordinate system in which the optimisation procedure behaves best. Amari (1998), Honkela et al. (2010), Hartmann (2018) and Duruisseaux and Leok (2022a) observed that geometric notions can make algorithms less prone to stability issues and therefore more reliable and computationally efficient, and can lead to faster convergence rates (Ganea and Becigneul, 2018; Duruisseaux and Leok, 2022b). In the following, we introduce the problem and formulate it from the Riemannian viewpoint.
Consider that \(\ell\) is now an objective function for which we aim to solve the maximization task
\[\boldsymbol{\theta}_{*}=\arg\max_{\boldsymbol{\theta}\in\Theta}\ell( \boldsymbol{\theta}). \tag{3}\]
We rephrase the optimisation of the function \(\ell\) as the problem of maximizing a function \(f:\Gamma_{\ell}\to\mathbb{R}\), where \(\Gamma_{\ell}\) is an embedded manifold with the metric given in (2). First specify the mapping \(\Gamma_{\ell}\ni\boldsymbol{x}\xrightarrow{f}x_{D+1}\); since \(\xi\) is a bijection between \(\Gamma_{\ell}\) and \(\Theta\) we have,
\[\boldsymbol{x}_{*}=\arg\max_{\boldsymbol{x}\in\mathcal{M}}f(\boldsymbol{x}) \text{ where }\boldsymbol{x}_{*}=(\boldsymbol{\theta}_{*},\ell(\boldsymbol{\theta}_{*})) \text{ and }\boldsymbol{\theta}_{*}=\arg\max_{\boldsymbol{\theta}\in\Theta}\ell( \boldsymbol{\theta}). \tag{4}\]
This means that the first \(D\) components of \(\boldsymbol{x}_{*}\in\Gamma_{\ell}\) are the same as \(\boldsymbol{\theta}_{*}\in\Theta\). As \(\Gamma_{\ell}\) is now the search space endowed with a geometry that is Riemannian (see Do Carmo, 1992, 2017, for example) we can harness its intrinsic geometric information and design an optimisation algorithm based on Riemannian concepts. Observe that the metric-tensor \(G\) above has the same structural properties as the metric proposed by Hartmann et al. (2022) where the
function \(\psi\) plays the role of the parameter \(\alpha\) (see Hartmann et al., 2022), and its inverse is fast to compute since
\[G^{-1}(\mathbf{x})=I_{D}-\tfrac{\psi^{2}}{W^{2}}\nabla\ell\nabla\ell^{\top}, \tag{5}\]
where \(W=\sqrt{\psi^{2}\|\nabla\ell\|^{2}+1}\).
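As a quick numerical illustration, here is a minimal sketch (our own, not code from the paper) that forms \(G\) for a hypothetical gradient and warp value and verifies the closed-form inverse (5); the names `gl` and `psi` stand in for \(\nabla\ell\) and \(\psi\).

```julia
using LinearAlgebra

D   = 4
gl  = randn(D)                      # placeholder for ∇ℓ(θ)
psi = 0.7                           # placeholder for ψ(θ)
W2  = psi^2 * dot(gl, gl) + 1       # W² = ψ²‖∇ℓ‖² + 1

G    = I + psi^2 * gl * gl'         # warped metric-tensor, Equation (2)
Ginv = I - (psi^2 / W2) * gl * gl'  # closed-form inverse, Equation (5)

# the rank-one (Sherman-Morrison) structure makes the inverse exact:
@assert norm(G * Ginv - I) < 1e-10
```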
## 4 Riemannian conjugate gradient (RCG) with backward retraction
The manifold \((\Gamma_{\ell},g)\) characterizes all the geometric information of the domain of \(f\), which can now be used to optimize \(\ell\) through \(f\) with an iterative procedure following the general template of the RCG method presented by Zhu (2020). At each step we (a) identify the current point \(\mathbf{x}\), its Riemannian gradient and a search direction, (b) obtain a new point \(\mathbf{x}^{\prime}\) by optimizing the objective along a given curve passing through \(\mathbf{x}\) in the search direction from (a), and (c) transport the current search direction from \(\mathbf{x}\) to \(\mathbf{x}^{\prime}\) and repeat the above steps. We present the main steps of the algorithm and our main contributions in what follows, leaving the extra technical details of the algorithm itself to the original paper of Zhu (2020).
Let \(f:\Gamma_{\ell}\to\mathbb{R}\) be a smooth function. RCG methods ideally rely on the exponential map and parallel transport (see Do Carmo, 1992, Sections 2 and 3 for technical details). That is, for a point \(\mathbf{x}_{k}\in\Gamma_{\ell}\) and a tangent vector at \(\mathbf{x}_{k}\), \(\mathbf{V}_{k}=\mathbf{M}_{\partial}\,\mathbf{v}\in T_{\mathbf{x}_{k}}\Gamma_{\ell}\), the general form of the
Figure 1: Visual display of the domains of the functions \(\ell\) and \(f\). On the left panel, the plane region defined by \((\theta_{1},\theta_{2})\in\Theta\) is to be understood as Euclidean. The coloured surface depicts where the function \(f\) is defined, on the graph of \(\ell\), that is on \(\mathcal{M}\). In this example the function is \(\ell(\mathbf{\theta})=\log\mathcal{N}\big{(}[\theta_{1},\theta_{2}+\sin(\theta_{ 1})]|\,\mathbf{\mu},\Sigma\big{)}\), where \(\mathcal{N}\) denotes the Gaussian density with \(\mathbf{\mu}=\mathbf{0}\) and \(\Sigma=\mathrm{diag}(1,0.01)\). The set \(\mathcal{M}\) has elements \(\mathbf{x}=(\mathbf{\theta},\ell(\mathbf{\theta}))\) and is shown on the "height" axis. This set can be understood as an embedded Riemannian manifold in the higher-dimensional space \(\mathbb{R}^{3}\). On the right panel we show the behaviour of the domain of \(f\) as a function of \(\alpha\): the closer \(\alpha\) is to zero, the closer the set \(\mathcal{M}\) is to Euclidean.
iterative updates is given by,
\[\mathbf{x}_{k+1} =\exp_{\mathbf{x}_{k}}(t_{k}\mathbf{V}_{k})\] \[\mathbf{V}_{k+1} =\mathrm{grad}f(\mathbf{x}_{k+1})-\beta\mathcal{P}_{\mathbf{x}_{k},\mathbf{x}_ {k+1}}(t_{k}\mathbf{V}_{k}),\]
where \(\mathcal{P}_{\mathbf{x}_{k},\mathbf{x}_{k+1}}:T_{\mathbf{x}_{k}}\Gamma_{\ell}\to T_{\mathbf{x} _{k+1}}\Gamma_{\ell}\) is the parallel transport of \(\mathbf{V}_{k}\) along the geodesic from \(\mathbf{x}_{k}\) in the direction of \(\mathbf{V}_{k}\) to \(\exp_{\mathbf{x}_{k}}(t_{k}\mathbf{V}_{k})\), and \(\mathrm{grad}f\) denotes the Riemannian gradient (the natural gradient, see Appendix B). Note that the choice of the scalar \(t_{k}\) must satisfy the Wolfe conditions in the Riemannian setting (see Absil et al., 2008, for example). For the scalar parameter \(\beta\) many choices are also possible, each of which will impact the speed of convergence of RCG (see Sato, 2021, for an empirical evaluation). In practice exact RCG methods are difficult to implement: both geodesics and parallel transport require solving systems of differential equations, whose solutions are usually computed with numerical solvers. That is why, only in the last decades, these methodologies have been used, mostly for a few matrix manifolds (Absil et al., 2008; Byrne, 2013) where the exponential map and parallel transport have closed-form expressions.
Usually, in practice, the exponential map is replaced by the _retraction map_\(\mathcal{R}_{\mathbf{x}_{k}}(t_{k}\mathbf{V}_{k})\) and the _parallel transport_ by the vector transport \(\mathcal{T}_{\mathbf{x}_{k},\mathbf{x}_{k+1}}(t_{k}\mathbf{V}_{k})\). In this way the iterative updates take the form,
\[\mathbf{x}_{k+1} =\mathcal{R}_{\mathbf{x}_{k}}(t_{k}\mathbf{V}_{k})\] \[\mathbf{V}_{k+1} =\mathrm{grad}f(\mathbf{x}_{k+1})-\beta\mathcal{T}_{\mathbf{x}_{k},\mathbf{x} _{k+1}}(t_{k}\mathbf{V}_{k}).\]
From the numerical viewpoint, neither of these operations, when suitably defined, requires solving a system of differential equations for geodesics or parallel transport. Moreover, they can alleviate the computational cost considerably while preserving the convergence guarantees of RCG methods (Absil et al., 2008; Boumal, 2023). These operations are defined as follows.
**Definition 1**: _Retraction. A retraction at \(\mathbf{x}\in\Gamma_{\ell}\) is a smooth map \(\mathcal{R}_{\mathbf{x}}:T_{\mathbf{x}}\Gamma_{\ell}\to\Gamma_{\ell}\) with the following properties:_
1. \(\mathcal{R}_{\mathbf{x}}(\mathbf{0})=\mathbf{x}\)__
2. \(D\mathcal{R}_{\mathbf{x}}(\mathbf{0}):T_{\mathbf{x}}\Gamma_{\ell}\to T_{\mathbf{x}}\Gamma_{ \ell}=\mathrm{id}\) _is the identity map._
_Equivalently for any given curve defined as \(c(t)=\mathcal{R}_{\mathbf{x}}(t\mathbf{V})\), the retraction map satisfies \(c(0)=\mathbf{x}\) and \(\dot{c}(0)=\mathbf{V}\)._
**Definition 2**: _Vector transport. A vector transport between two tangent spaces \(T_{\mathbf{x}}\Gamma_{\ell}\) and \(T_{\mathbf{y}}\Gamma_{\ell}\) is a map \(\mathcal{T}_{\mathbf{x},\mathbf{y}}:T_{\mathbf{x}}\Gamma_{\ell}\to T_{\mathbf{y}}\Gamma_{\ell}\) satisfying the following properties,_
1. _There exists an associated retraction_ \(\mathcal{R}\) _such that_ \(\mathcal{T}_{\mathbf{x},\mathcal{R}(\mathbf{U})}(\mathbf{V})\in T_{\mathcal{R}(\mathbf{U})} \Gamma_{\ell}\) _for all_ \(\mathbf{V},\mathbf{U}\in T_{\mathbf{x}}\Gamma_{\ell}\)__
2. \(\mathcal{T}_{\mathbf{x},\mathbf{x}}(\mathbf{V})=\mathbf{V}\)__
3. _for any_ \(a,b\in\mathbb{R}\)_,_ \(\mathcal{T}_{\mathbf{x},\mathbf{y}}(a\,\mathbf{V}+b\,\mathbf{U})=a\mathcal{T}_{\mathbf{x},\mathbf{y}}( \mathbf{V})+b\mathcal{T}_{\mathbf{x},\mathbf{y}}(\mathbf{U})\)__
Recently, Zhu (2020) proposed an RCG method in which the vector transport is defined via a _backward retraction map_, a way of measuring the displacement between two points on a manifold using tangent vectors. For general submanifolds of the Euclidean space, such as the manifold \(\Gamma_{\ell}\) we are working with, this is computationally feasible and fast to evaluate. They also show that by doing so, their algorithms are able to reduce the wall-clock time to reach convergence (see Table 2, Section 6 in their paper).
This work follows the general RCG with the inverse backward retraction method proposed by Zhu (2020) (see Section 3, Section 5 and Equation (46) in their paper) and generalized by Sato (2022) (see Section 4). On top of those formulations we propose a retraction map given by the Taylor approximation of the geodesic path up to \(3^{\rm rd}\) order, following Definition 1, and a vector transport given by the inverse backward retraction map, following Definition 2. As we will show, for this embedded manifold both the retraction map and the vector transport incur linear cost in memory (\(\mathcal{O}(D)\)) and quadratic cost in the number of arithmetic operations (\(\mathcal{O}(D^{2})\)). In the next sections we present the Taylor approximation, the choice of retraction based on it and the particular form of the backward retraction map. After that, we finally present the RCG optimisation algorithm using these particular tools.
## 5 Third-order geodesic approximation
A geodesic on \(\Gamma_{\ell}\) is a curve \(\gamma:I\subseteq\mathbb{R}\to\mathcal{M}\) that (locally) minimizes the distance (arc-length) between two points on \(\mathcal{M}\). It generalizes the notion of a straight path on flat spaces. Equivalently, the classical \(2^{\rm nd}\)-order derivative of such a curve has only a normal component at each point \(\gamma(t)\). That is, \(\ddot{\gamma}(t)\in T_{\gamma(t)}\,\mathcal{M}^{\perp}\), where the superscript \({}^{\perp}\) denotes the orthogonal complement.
Following Monera et al. (2014) we compute a \(3^{\rm rd}\)-order approximation of a geodesic by explicitly considering the parametrisation \(\xi\) of \(\mathcal{M}\). Let \(\gamma(t)=\xi(\boldsymbol{\theta}(t))\) where \(\boldsymbol{\theta}:I\subseteq\mathbb{R}\to\Theta\) is a curve on the chart. Recall that the exponential map is also a retraction map, \(\exp_{\boldsymbol{x}}:T_{\boldsymbol{x}}\Gamma_{\ell}\to\Gamma_{\ell}\), which can be expressed as \(\exp_{\boldsymbol{x}}(t\boldsymbol{V})=\gamma_{\boldsymbol{x},\boldsymbol{V} }(t)\) where \(\boldsymbol{V}\in T_{\boldsymbol{x}}\Gamma_{\ell}\). Take a unit vector \(\boldsymbol{V}\in\mathcal{S}^{D}(T_{\boldsymbol{x}}\Gamma_{\ell})\), where \(\mathcal{S}^{D}\) is the \(D\)-dimensional unit sphere. Then the \(3^{\rm rd}\)-order approximation of the geodesic at \(\boldsymbol{x}=\gamma(0)\), in the direction of the tangent vector \(\boldsymbol{V}\), is given by,
\[\tilde{\gamma}_{\boldsymbol{x},\boldsymbol{V}}(t_{*})=\boldsymbol{x}+t_{*} \boldsymbol{V}+\frac{t_{*}^{2}}{2}\ddot{\gamma}(0)+\frac{t_{*}^{3}}{6}\dddot{ \gamma}(0), \tag{6}\]
where \(t_{*}\in\mathbb{R}\). From the fact that \(\gamma\) is a geodesic, the quadratic component of the Taylor series satisfies \(Q_{\boldsymbol{x}}(\boldsymbol{V}):=\ddot{\gamma}(0)\in T_{\gamma(0)}\Gamma_{ \ell}^{\perp}\), so the \(2^{\rm nd}\)-order geometry of \(\mathcal{M}\) around \(\boldsymbol{x}\) only depends on the second fundamental form (see Do Carmo, 2017, for example). Monera et al. (2014) also observed that the tangential component of \(\dddot{\gamma}(0)=K_{\boldsymbol{x}}(\boldsymbol{V})\) only depends on the \(2^{\rm nd}\)-order geometry of \(\Gamma_{\ell}\). We will exploit these properties to compute a \(3^{\rm rd}\)-order approximation of the geodesic.
We start by noting that, in general, the second derivative of a curve on the embedded manifold \(\Gamma_{\ell}\) can be written as,
\[\ddot{\gamma}(t)=\nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)+\mathbb{I}\mathbb{I }_{\gamma(t)}(\dot{\gamma}(t))\,\boldsymbol{N}_{\gamma(t)}, \tag{7}\]
where \(\nabla_{\boldsymbol{V}}\boldsymbol{X}\) denotes the covariant derivative of a tangent \(\boldsymbol{X}\) in the direction of \(\boldsymbol{V}\) (see Do Carmo, 1992; Tenenblat, 2008; Do Carmo, 2017, for example). Denote \(\boldsymbol{N}_{\gamma(t)}\) as the
normal component at \(\gamma(t)\) and \(\mathbb{II}\) the second-fundamental form of \(\mathcal{M}\) at \(\gamma(t)\) in the direction of \(\dot{\gamma}(t)\)(Do Carmo, 1992, Chapter 6). Also, express \(\dot{\gamma}(t)=\boldsymbol{M}_{\partial}\,\boldsymbol{v}\), where \(\frac{\mathrm{d}}{\mathrm{d}t}\xi^{-1}(\gamma(t))=\boldsymbol{v}\). Then the covariant derivative above can be expressed in matrix form as
\[\nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)=\boldsymbol{M}_{\partial}\left(\begin{bmatrix} \|\boldsymbol{v}\|_{\Gamma^{1}(\gamma(t))}^{2}\\ \vdots\\ \|\boldsymbol{v}\|_{\Gamma^{D}(\gamma(t))}^{2}\end{bmatrix}+\dot{\boldsymbol {v}}\right), \tag{8}\]
where \(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\xi^{-1}(\gamma(t))=\dot{\boldsymbol{v}}\). The Christoffel symbols \(\Gamma^{m}(\gamma(t))\) above were arranged in \(D\times D\) matrices for ease of notation. In particular they are given by,
\[\Gamma^{m}_{i,j}(\gamma(t))=\tfrac{1}{2}\sum_{k=1}^{D}G_{k,m}^{-1}(\gamma(t) )\big{(}\partial_{i}G_{j,k}(\gamma(t))+\partial_{j}G_{k,i}(\gamma(t))- \partial_{k}G_{i,j}(\gamma(t))\big{)}. \tag{9}\]
Here \(G_{i,j}^{-1}\) is the \((i,j)\) entry of the inverse of the metric-tensor \(G\) and \(\partial_{k}G_{i,j}\) is the derivative of \(G_{i,j}\) with respect to the \(k^{th}\) component of \(\xi^{-1}(\boldsymbol{x})\)(see Hartmann et al., 2022; Do Carmo, 1992, for more details). All the indexes have the same range \(i,j,k=1,\ldots,D\). Since geodesics have only normal component, the coordinates of the tangent component must be the zero vector, thus for the quadratic component of the Taylor approximation it holds that \(Q_{\boldsymbol{x}}(\boldsymbol{V})=\mathbb{II}_{\boldsymbol{x}}(\boldsymbol{ V})\,\boldsymbol{N}_{\boldsymbol{x}}\), where \(\dot{\gamma}(0)=\boldsymbol{V}\) at \(\boldsymbol{x}=\gamma(0)\).
Because \(\Gamma_{\ell}\) is an embedding there is a unique normal vector at \(\boldsymbol{x}\) which we have denoted as \(\boldsymbol{N}_{\boldsymbol{x}}\), such that \(\boldsymbol{N}_{\boldsymbol{x}}\) is of length one (under the warp metric) and it is orthogonal to any vector in \(T_{\boldsymbol{x}}\Gamma_{\ell}\). In our case this reads (see Appendix C)
\[\boldsymbol{N}_{\boldsymbol{x}}=\left(-\frac{\psi\nabla\ell}{W},\frac{1}{\psi W }\right). \tag{10}\]
The second-fundamental form \(\mathbb{II}\) is a bilinear form defined as \(\mathbb{II}_{\boldsymbol{x}}(\boldsymbol{V})=-\langle\bar{\nabla}_{ \boldsymbol{V}}\,\boldsymbol{N}_{\boldsymbol{x}},\boldsymbol{V}\rangle_{\psi}\) where \(\bar{\nabla}\) is the connection associated with the warped metric of the ambient space \(\mathcal{N}\times\mathcal{M}_{\psi}\). Specifically, after a long computation we obtain (see Appendix F)
\[\mathbb{II}_{\boldsymbol{x}}(\boldsymbol{V})=\boldsymbol{v}^{\top}\left( \tfrac{2}{W}\nabla\psi\nabla\ell^{\top}+\tfrac{\psi}{W}\nabla^{2}\ell+\tfrac {\psi}{2W}\langle\nabla\psi^{2},\nabla\ell\rangle\nabla\ell\nabla\ell^{\top} \right)\boldsymbol{v}\,. \tag{11}\]
The computation of the cubic term is slightly more involved, as it depends on the time derivatives of the second fundamental form and of the normal vector, and consequently on the geodesic equations (Monera et al., 2014). In the following we present the general derivative, leaving the details to the appendix (see Appendix G).
\[\dddot{\gamma}(0) =\frac{\mathrm{d}}{\mathrm{d}t}Q_{\gamma(t)}(\dot{\gamma}(t)) \big{|}_{t=0} \tag{12}\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\tfrac{2}{W}\langle \boldsymbol{v},\nabla\psi\rangle\langle\boldsymbol{v},\nabla\ell\rangle+\tfrac {\psi}{W}\|\boldsymbol{v}\|_{\nabla^{2}\ell}^{2}+\tfrac{\psi}{2W}\langle \nabla\psi^{2},\nabla\ell\rangle\langle\nabla\ell,\boldsymbol{v}\rangle^{2} \bigg{)}\begin{bmatrix}-\frac{\psi\nabla\ell}{W}\\ \frac{1}{\psi W}\end{bmatrix}\bigg{|}_{t=0}\] \[=:K_{\boldsymbol{x}}(\boldsymbol{V}).\]
As seen in the above equation the acceleration vector \(\dot{\mathbf{v}}=\frac{\mathrm{d}}{\mathrm{d}t}\,\mathbf{v}\) appears, and once we are approximating geodesic curves the derivative \(\dot{\mathbf{v}}\) is given by the geodesic equations (see detail in Appendix E). Therefore it follows that,
\[\dot{\mathbf{v}}=-\mathcal{O}_{1}\nabla\ell+\mathcal{O}_{2}\nabla\psi^{2}, \tag{13}\]
where
\[\mathcal{O}_{1}=\tfrac{1}{2W^{2}}\Big{(}2\langle\mathbf{v},\nabla\psi^{2}\rangle \langle\mathbf{v},\nabla\ell\rangle+2\psi^{2}\|\mathbf{v}\|_{\nabla^{2}\ell}^{2}+\psi^ {2}\langle\nabla\psi^{2},\nabla\ell\rangle\langle\mathbf{v},\nabla\ell\rangle^{2} \Big{)} \tag{14}\]
and
\[\mathcal{O}_{2}=\tfrac{1}{2}\langle\mathbf{v},\nabla\ell\rangle^{2}. \tag{15}\]
Thus, the \(3^{\mathrm{rd}}\)-order Taylor approximation of a geodesic path on \(\mathcal{N}\times\mathcal{M}_{\psi}\) for a given \(\mathbf{x}\) and \(\mathbf{V}\) is
\[\tilde{\gamma}_{\mathbf{x},\mathbf{V}}(t_{*})=\mathbf{x}+t_{*}\,\mathbf{V}+\frac{t_{*}^{2}}{2 }Q_{\mathbf{x}}(\mathbf{V})+\frac{t_{*}^{3}}{6}K_{\mathbf{x}}(\mathbf{V}). \tag{16}\]
We can now see that this final expression does not involve the inverse of the Hessian, but only Hessian-vector products, both in \(Q_{\mathbf{x}}(\mathbf{V})\) and \(K_{\mathbf{x}}(\mathbf{V})\). Therefore the computational implementation has linear memory cost \(\mathcal{O}(D)\) and is quadratic in the number of computer operations \(\mathcal{O}(D^{2})\). In the next sections we provide the choice of retraction map based on this approximation and the choice of the vector transport.
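To make this computational pattern concrete, the following sketch (ours; the helper names are not from the paper) evaluates the chart-coordinate acceleration \(\dot{\boldsymbol{v}}\) of Equations (13)-(15) using only \(\nabla\ell\), a Hessian-vector product \(\nabla^{2}\ell\,\boldsymbol{v}\) and \(\nabla\psi^{2}\), so no \(D\times D\) matrix is ever formed.

```julia
using LinearAlgebra

# Geodesic acceleration in chart coordinates, Equations (13)-(15).
# v: velocity; gl: ∇ℓ(θ); hvp_v: ∇²ℓ(θ)*v (Hessian-vector product);
# psi: ψ(θ); gpsi2: ∇ψ²(θ). All arguments are plain vectors/scalars.
function geodesic_accel(v, gl, hvp_v, psi, gpsi2)
    W2 = psi^2 * dot(gl, gl) + 1
    O1 = (2 * dot(v, gpsi2) * dot(v, gl) + 2 * psi^2 * dot(v, hvp_v) +
          psi^2 * dot(gpsi2, gl) * dot(v, gl)^2) / (2 * W2)   # Eq. (14)
    O2 = dot(v, gl)^2 / 2                                     # Eq. (15)
    return -O1 * gl + O2 * gpsi2                              # Eq. (13)
end

# toy check with ℓ(θ) = -½‖θ‖² (so ∇ℓ = -θ, ∇²ℓ v = -v) and constant ψ
θ, v = randn(3), randn(3)
vdot = geodesic_accel(v, -θ, -v, 1.0, zeros(3))
```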
## 6 Retraction choice
In Section 4 we presented the retraction map, which for a given point \(\mathbf{x}\in\Gamma_{\ell}\) takes a vector \(\mathbf{V}\in T_{\mathbf{x}}\Gamma_{\ell}\) and maps it back onto \(\Gamma_{\ell}\). The point along the approximate geodesic path (16) will usually not lie on \(\Gamma_{\ell}\), so the approximation by itself does not satisfy the definition of a retraction. To define a valid retraction on \(\Gamma_{\ell}\), we proceed by applying the orthogonal projection of \(\tilde{\gamma}_{\mathbf{x},\mathbf{V}}\) onto \(\Theta\) and using the canonical parametrisation \(\xi\) to push it back to \(\Gamma_{\ell}\). That is,
\[\mathcal{R}_{\mathbf{x}}(t\mathbf{V})=\xi\big{(}\mathrm{Proj}_{T_{\xi^{-1}(\mathbf{x})} \Theta}\big{(}\tilde{\gamma}_{\mathbf{x},\mathbf{V}}(t)\big{)}\big{)} \tag{17}\]
where \(\mathbf{\theta}=\xi^{-1}(\mathbf{x})\) and
\[\mathrm{Proj}_{T_{\xi^{-1}(\mathbf{x})}\Theta}\big{(}\tilde{\gamma}_{\mathbf{x},\mathbf{V }}(t)\big{)}=\mathbf{\theta}+t\mathbf{v}+\frac{t^{2}}{2}\mathbf{q}+\frac{t^{3}}{6}\mathbf{k}. \tag{18}\]
The quadratic and cubic coefficients of the Taylor approximation are given by
\[\mathbf{q} =-\big{(}\tfrac{1}{W^{2}}\langle\mathbf{v},\nabla\psi^{2}\rangle \langle\mathbf{v},\nabla\ell\rangle+\tfrac{\psi^{2}}{W^{2}}\|\mathbf{v}\|_{\nabla^{2 }\ell}^{2}+\tfrac{\psi^{2}}{2W^{2}}\langle\nabla\psi^{2},\nabla\ell\rangle \langle\nabla\ell,\mathbf{v}\rangle^{2}\big{)}\nabla\ell\] \[\mathbf{k} =-\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\tfrac{1}{W^{2}}\langle\bm {v},\nabla\psi^{2}\rangle\langle\mathbf{v},\nabla\ell\rangle+\tfrac{\psi^{2}}{W^{ 2}}\|\mathbf{v}\|_{\nabla^{2}\ell}^{2}+\tfrac{\psi^{2}}{2W^{2}}\langle\nabla\psi^{ 2},\nabla\ell\rangle\langle\nabla\ell,\mathbf{v}\rangle^{2}\big{)}\nabla\ell\Big{|} _{t=0}. \tag{19}\]
See Appendix E for more details. In order to show that Equation (17) is indeed a retraction map, let us verify the properties of Definition 1. Define a curve \(c:[0,\infty)\to\mathcal{M}\) as,
\[c(t):=R_{\mathbf{x}}(t\mathbf{V})=\xi\left(\mathbf{\theta}+t\mathbf{v}+\frac{t^{2}}{2}\mathbf{q}+ \frac{t^{3}}{6}\mathbf{k}\right). \tag{20}\]
Evaluate this curve at \(t=0\), i.e., \(c(0)=\xi(\mathbf{\theta})=\xi(\xi^{-1}(\mathbf{x}))=\mathbf{x}\) and the first property holds. For the second property we need to show that we recover \(\mathbf{V}=\mathbf{M}_{\partial}\,\mathbf{v}\) in the derivative \(\dot{c}(0)\). The curve derivative is given by
\[\dot{c}(t) =\mathbf{M}_{\partial}\,\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{ \theta}+t\mathbf{v}+\frac{t^{2}}{2}\mathbf{q}+\frac{t^{3}}{6}\mathbf{k}\right)\] \[=\mathbf{M}_{\partial}\left(\mathbf{v}+t\mathbf{q}+\frac{t^{2}}{2}\mathbf{k}\right) \tag{21}\]
thus at \(t=0\) we have \(\dot{c}(0)=\mathbf{M}_{\partial}\,\mathbf{v}=\mathbf{V}\). Therefore we conclude that Equation (17) is a retraction map. It is also interesting to observe that the term \(\mathfrak{U}_{1}\) in Equation (26), see Appendix E, equals the coefficient \(\mathbf{q}\), and that it is obtained by projecting the normal component of the geodesic curve onto \(\Theta\). This shows that the acceleration on the chart \(\Theta\) is not null, and it is where the curved geometry of \(\mathcal{N}\times\mathcal{M}_{\psi}\) plays out in \(\Theta\).
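For concreteness, here is a sketch of the retraction (17)-(18) truncated at the quadratic coefficient \(\boldsymbol{q}\) of (19) (we omit the cubic coefficient \(\boldsymbol{k}\) for brevity); the argument names are our own and \(\ell\) is passed in as a callable.

```julia
using LinearAlgebra

# 2nd-order truncation of the retraction map, Equations (17)-(19):
# project the approximate geodesic onto Θ, then push back via ξ(θ) = (θ, ℓ(θ)).
function retract(θ, v, t, ℓ, gl, hvp_v, psi, gpsi2)
    W2 = psi^2 * dot(gl, gl) + 1
    O1 = (2 * dot(v, gpsi2) * dot(v, gl) + 2 * psi^2 * dot(v, hvp_v) +
          psi^2 * dot(gpsi2, gl) * dot(v, gl)^2) / (2 * W2)
    q  = -O1 * gl                        # quadratic coefficient, Eq. (19)
    θ2 = θ + t * v + t^2 / 2 * q         # projection onto Θ, Eq. (18)
    return (θ2, ℓ(θ2))                   # back onto Γ_ℓ via ξ
end
```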
## 7 Vector transport as inverse backward retraction
The last tool necessary to complete our proposed algorithm is a valid vector transport following Definition 2. We use the inverse backward retraction map, proposed by Luenberger (1972) and Zhu (2020) as the inverse orthographic projection. At first sight this map, a projection of the difference of two points on \(\mathcal{M}\) onto a tangent space, does not seem to characterize a vector transport. However, we can still express it as one as follows (see Sato, 2022, and Appendix D for orthogonal projection details).
Let \(\mathbf{x}\), \(\mathbf{z}\in\mathcal{M}\), \(\Delta=[\mathbf{x}_{1:D}-\mathbf{z}_{1:D}\;\Delta\ell]\) and \(\Delta\ell=\ell(\mathbf{x}_{1:D})-\ell(\mathbf{z}_{1:D})\). The inverse backward retraction map is defined as \(\mathcal{R}_{\mathbf{z}}^{\mathrm{bw}^{-1}}(\mathbf{x})=\mathrm{Proj}_{T_{\mathbf{z}} \,\mathcal{M}}(\mathbf{x}-\mathbf{z})\). Given \(\mathbf{x}\), \(\mathbf{V}\in T_{\mathbf{x}}\,\mathcal{M}\) and \(\mathbf{z}=R_{\mathbf{x}}(t\mathbf{V})\), define vector transport operation \(\mathcal{T}_{\mathbf{x},\mathbf{z}}:T_{\mathbf{x}}\,\mathcal{M}\to T_{\mathbf{z}}\,\mathcal{M}\) along \(R_{\mathbf{x}}(t\mathbf{V})\) as
\[\mathcal{T}_{\mathbf{x},\mathbf{z}}(\mathbf{V}) =\mathcal{T}_{\mathbf{x},R_{\mathbf{x}}(t\mathbf{V})}(\mathbf{V})\] \[=-\tfrac{1}{t}\,\mathcal{R}_{R_{\mathbf{x}}(t\mathbf{V})}^{\mathrm{bw}^{ -1}}(\mathbf{x})\] \[=-\tfrac{1}{t}\mathrm{Proj}_{T_{R_{\mathbf{x}}(t\mathbf{V})}\,\mathcal{M} }(\mathbf{x}-R_{\mathbf{x}}(t\mathbf{V}))\] \[=-\tfrac{1}{t}\mathbf{M}_{\partial}\Big{(}\mathbf{M}_{\partial}^{\top}G_{ \psi}\mathbf{M}_{\partial}\Big{)}^{-1}\mathbf{M}_{\partial}^{\top}G_{\psi}\Delta\ \ \text{orthogonal projection of $\Delta$ onto $T_{\mathbf{z}}\,\mathcal{M}$}\] \[=-\tfrac{1}{t}\,\mathbf{M}_{\partial}\left(I-\tfrac{\psi^{2}(\mathbf{z}_{1 :D})}{W^{2}(\mathbf{z}_{1:D})}\nabla\ell(\mathbf{z}_{1:D})\nabla\ell(\mathbf{z}_{1:D})^{ \top}\right)\!\big{[}I_{D}\ \ \psi^{2}(\mathbf{z}_{1:D})\nabla\ell(\mathbf{z}_{1:D})\big{]}(\mathbf{x}-\mathbf{z})\] \[=-\tfrac{1}{t}\,\mathbf{M}_{\partial}\left[I-\tfrac{\psi^{2}(\mathbf{z}_ {1:D})}{W^{2}(\mathbf{z}_{1:D})}\nabla\ell(\mathbf{z}_{1:D})\nabla\ell(\mathbf{z}_{1:D})^{ \top}\ \ \tfrac{\psi^{2}(\mathbf{z}_{1:D})}{W^{2}(\mathbf{z}_{1:D})}\nabla\ell(\mathbf{z}_{1:D}) \right]\!(\mathbf{x}-\mathbf{z})\] \[=-\tfrac{1}{t}\,\mathbf{M}_{\partial}\left(\Delta_{1:D}-\tfrac{\psi^ {2}(\mathbf{z}_{1:D})}{W^{2}(\mathbf{z}_{1:D})}\left\langle\Delta_{1:D},\nabla\ell( \mathbf{z}_{1:D})\right\rangle\nabla\ell(\mathbf{z}_{1:D})+\tfrac{\psi^{2}(\mathbf{z}_{1 :D})\Delta\ell}{W^{2}(\mathbf{z}_{1:D})}\nabla\ell(\mathbf{z}_{1:D})\right)\] \[=-\tfrac{1}{t}\,\mathbf{M}_{\partial}\left(\Delta_{1:D}-\big{(} \left\langle\Delta_{1:D},\nabla\ell(\mathbf{z}_{1:D})\right\rangle-\Delta\ell\big{)} \tfrac{\psi^{2}(\mathbf{z}_{1:D})}{W^{2}(\mathbf{z}_{1:D})}\nabla\ell(\mathbf{z}_{1:D}) \right)\!. \tag{22}\]
Observe that when \(\psi\to 0^{+}\) we have the retraction \(R_{\mathbf{x}}(t\mathbf{V})=[\mathbf{x}_{1:D}+t\mathbf{v}\ \ 0]\), so the vector transport becomes \(\mathcal{T}_{\mathbf{x},R_{\mathbf{x}}(t\mathbf{V})}(\mathbf{V})=\tfrac{1}{t}[\mathbf{x}_{1:D}+t\mathbf{ v}-\mathbf{x}_{1:D}\ \ 0]=[\mathbf{v}\ \ 0]\); that is, we recover Euclidean parallelism on \(\mathcal{M}\).
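A small sketch of the last line of (22) (our own code, not the authors'): given the chart parts of \(\boldsymbol{x}\) and \(\boldsymbol{z}=R_{\boldsymbol{x}}(t\boldsymbol{V})\), the gradient and warp value at \(\boldsymbol{z}\), it returns the chart coordinates of the transported vector, i.e. the vector that \(\boldsymbol{M}_{\partial}\) multiplies.

```julia
using LinearAlgebra

# Chart coordinates of the vector transport, final line of Equation (22).
# θx, lx and θz, lz are the chart part and height of x and z = R_x(tV);
# glz = ∇ℓ(z_{1:D}), psiz = ψ(z_{1:D}).
function transport_coords(θx, lx, θz, lz, glz, psiz, t)
    Δ  = θx - θz                        # Δ_{1:D}
    Δl = lx - lz                        # Δℓ
    W2 = psiz^2 * dot(glz, glz) + 1
    return -(Δ - (dot(Δ, glz) - Δl) * (psiz^2 / W2) * glz) / t
end
```

In the limit \(\psi\to 0^{+}\) the correction term vanishes and the function returns \(-\Delta_{1:D}/t=\boldsymbol{v}\), matching the Euclidean parallelism noted above.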
## 8 The novel RCG algorithm
After having obtained Equations (17) and (22) as the retraction map and the vector transport, we now propose a new RCG algorithm that optimises the function \(f\) and therefore the function \(\ell\). This is presented in Algorithm 1. In Step 1, we set the initial tangent vector \(\mathbf{V}\) as the Riemannian gradient; this can be recalled as the natural gradient (Amari, 1998), see Section B in the Appendix for details. Step 3 consists in the optimisation of a univariate function \(g(t)\), which does not increase the computational cost, because the gradient \(\nabla\ell\) and the Hessian-vector product \(\nabla^{2}\ell\,\mathbf{v}\) do not need to be recomputed: they can be retrieved from cache memory at every iteration of this inner optimisation. After the optimum \(t_{k}\) of \(g(t)\) has been found, Steps 5-8 compute \(\mathbf{x}_{k+1}\), \(s_{k}\), \(\beta_{k}^{\text{DY}}\) (see Equation (23)) and \(\mathbf{V}_{k+1}\). In particular, in Steps 6-7, the computation of \(s_{k}\) and \(\beta_{k}^{\text{DY}}\) involves dot-products in tangent space, but these can be further simplified, see Appendix A. In Step 8, we need to compute the Riemannian gradient at the new point \(\mathbf{x}_{k+1}\) and the vector transport of \(\mathbf{V}_{k}\) from \(\mathbf{x}_{k}\) to \(\mathbf{x}_{k+1}\) along \(\mathcal{R}_{\mathbf{x}_{k}}(t_{k}\mathbf{V}_{k})\) to set the update \(\mathbf{V}_{k+1}\). All of these computations add no more than \(\mathcal{O}(D)\) memory load and \(\mathcal{O}(D^{2})\) arithmetic operations.
\[\beta_{k}^{\text{DY}}=\frac{\|\text{grad}f(\mathbf{x}_{k+1})\|_{\mathbf{x}_{k+1}}^{2}} {s_{k}\langle\text{grad}f(\mathbf{x}_{k+1}),\mathcal{T}_{\mathbf{x}_{k},\mathbf{x}_{k+1}} (\mathbf{V}_{k})\rangle_{\mathbf{x}_{k+1}}-\langle\text{grad}f(\mathbf{x}_{k}),\mathbf{V}_{k} \rangle_{\mathbf{x}_{k}}} \tag{23}\]
The convergence of Algorithm 1 is guaranteed by the fact that we have a valid retraction map and that the value \(t_{k}\) at Step 4 satisfies the Wolfe conditions; see for example Sakai and Iiduka (2021), Zhu (2020) (Section 4) and the generalization of the methodological proofs in Sato (2022). An important question for the proposed algorithm is whether we can use variants of the scalar value \(\beta_{k}^{\text{DY}}\) analogous to the Euclidean cases. For example, Sato (2022) (Equations 4.10-4.12 in that paper) requires the vector transport of the gradient to the new point on the manifold. Unfortunately, we cannot apply the vector transport defined here in that setting: there is no guarantee that plugging \(\text{grad}f(\mathbf{x}_{k})\) into Equation (17) instead of \(\mathbf{V}_{k}\) ends up at the same point on the manifold as using \(\mathbf{V}_{k}\), which derails the use of Equation (22). Solving differential equations to perform exact parallel transport is possible, but then the low-cost computation feature of Algorithm 1 would be lost, making it unviable in practice.
## 9 Experiments
In this section we conduct experiments using three sets of example functions on Euclidean spaces with varying dimensionality \(D\). In the first two examples, for each function and dimension \(D\), we perform optimisation using Algorithm 1 with the retraction map based on the \(3^{\text{rd}}\)-order approximation of the geodesic. The choice of warp function \(\psi\) is given in Appendix H; as it involves an extra parameter \(\sigma\), we perform some preliminary runs in lower dimensions to check the performance of the algorithm. We then compare our methodology to two classical CG methods, whose conjugate search directions use the gradient and Newton's direction respectively, and which both use the Hager-Zhang type of inexact line search. We denote these as "gradient CG" and "Newton CG" respectively. To implement the classical CG optimisers we use the package Optim.jl in the Julia programming language (Bezanson et al., 2017). In all runs we set the maximum number of iterations to \(8000\), the stopping criteria to \(\Delta f=|f(\mathbf{x}_{k+1})-f(\mathbf{x}_{k})|<10^{-5}\) or \(\|\text{grad}f(\mathbf{x}_{k})\|<10^{-6}\), and the parameter of the limit approximation of the time derivatives to \(r=1\times 10^{-20}\).
### The D-dimensional squiggle probability model
The squiggle probability distribution has expression \(\ell(\mathbf{\theta})=\log\mathcal{N}\big{(}[\theta_{1},\theta_{2}+\sin(a\theta_{ 1}),\ \dots,\ \theta_{D}+\sin(a\theta_{1})]|\,\mathbf{0},\Sigma\big{)}\) with parameters \(a>0\) and \(\Sigma\) a positive-definite (PD) matrix. The squiggle function can take the shape of a thin sine function that concentrates its probability density around a zig-zag region, producing a narrow uphill curved region towards its unique global maximiser \(\mathbf{\theta}_{*}=\mathbf{0}\) with \(\ell(\mathbf{\theta}_{*})=-\frac{D}{2}\log(2\pi)-\frac{1}{2}\log\det(\Sigma)\). The PD matrix controls the orientation and how thin the sine-shaped region can be; this effect is more pronounced for large values of \(a\). We set these parameters to \(a=1\) and \(\Sigma=\text{diag}(30,0.5,\dots,0.5)\). As the initial value for this function we set \(\mathbf{\theta}_{0}=(-10,10,\dots,(-1)^{D}10)\), far away from the maximum; this mimics real practice where we do not know the maximizer beforehand. We set \(\sigma=1.0\).
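For reference, a direct transcription of this log-density (our own sketch with diagonal \(\Sigma\), not the authors' implementation):

```julia
# Squiggle log-density ℓ(θ) = log N([θ₁, θ₂+sin(aθ₁), …, θ_D+sin(aθ₁)] | 0, Σ)
# with Σ = diag(s2); the defaults match the experiment setup above.
function squiggle_logpdf(θ; a = 1.0, s2 = vcat(30.0, fill(0.5, length(θ) - 1)))
    y = vcat(θ[1], θ[2:end] .+ sin(a * θ[1]))
    return -0.5 * (length(θ) * log(2π) + sum(log, s2) + sum(abs2.(y) ./ s2))
end

squiggle_logpdf([-10.0, 10.0, -10.0])   # evaluate at an initial point of the form θ₀
```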
In Figure 2, we display the traces of the three different optimisation routines for varying dimensionality of the squiggle function. This experiment shows that the RCG method (depicted in purple) improves the number of iterations until convergence when compared to gradient CG (in red). Newton's CG (in green) remains faster than all the others.
### The generalized Rosenbrock function
The Rosenbrock function, \(\ell(\mathbf{\theta})=\sum_{i=2}^{D}-b(\theta_{i}-\theta_{i-1}^{2})^{2}-(a- \theta_{i-1})^{2}\), has been widely used as a benchmark for numerical optimisation problems (Rosenbrock, 1960; Kok, 2009). For \(a=1\) and \(b=100\) its surface landscape is quite difficult. There is one global maximum at \(\mathbf{\theta}_{*}=(1,\dots,1)\) with \(\ell(\mathbf{\theta}_{*})=0\) and one local maximum at \(\mathbf{\theta}_{*}=(-1,\dots,1)\) with \(\ell(\mathbf{\theta}_{*})\approx-3.99\). The global maximiser lies in a very narrow uphill region, which makes the optimisation harder.
The starting point for the optimisation routines is set to \(\boldsymbol{\theta}_{0}=(-5,5,\ldots,(-1)^{D}5)\) to make the task harder, as this function has typically been studied in the range \(-2.048\leq\theta_{i}\leq 2.048\) (Franca et al., 2020). Here we set \(\sigma=3\times 10^{2}\).
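The objective itself is a one-liner; a sketch in the maximisation form stated above:

```julia
# Generalized Rosenbrock objective (maximisation form), a = 1, b = 100.
rosenbrock(θ; a = 1.0, b = 100.0) =
    sum(-b * (θ[i] - θ[i-1]^2)^2 - (a - θ[i-1])^2 for i in 2:length(θ))

rosenbrock(ones(10))   # = 0.0 at the global maximiser (1, …, 1)
```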
Figure 3 displays the performance of the RCG (in purple), gradient CG (in red) and Newton's CG (in green) for a variety of dimensions of the Rosenbrock model. In this case we observe that after dimension \(D\approx 600\) the RCG starts to be faster in the number of iterations until convergence when compared to gradient CG and Newton's CG. We recall that this function is not log-concave in its entire domain \(\Theta\).
### A test-set of CUTE library
The last set of examples comprises some models from the CUTE library implemented in the Julia programming language1. This subset of models can be found at www.cuter.rl.ac.uk/Problems/classification.shtml under the classification SUR2-AN-V-0 (unconstrained). The models chosen under this classification have IDs and dimensions given by WATSON (D = 12), ERRINROS (D = 50), ERRINRSM (D = 50), CHNROSNB (D = 50), CHNRSNBM (D = 50), MANCINO (D = 100), INTEQNELS (D = 500), EXTROSNB (D = 1000). In this case we set \(\sigma=3\times 10^{2}\) with no preliminary runs.
Footnote 1: www.cuter.rl.ac.uk/mastsif.html
In Figure 4 we display the performance of the Riemannian CG, gradient CG and Newton's CG on the aforementioned subset of models from the CUTE library. Of the 8 target functions presented, the RCG method is faster in 5. The Newton's CG method is faster in all
Figure 2: Number of iterations until convergence for a variety of dimensions using the squiggle model. The RCG method of Algorithm 1, depicted in purple, presents faster convergence than the gradient CG (shown in red), but slower than Newton CG (in green), which tends to converge very fast across all dimensions. In this experiment the dimension is taken up to \(D=600\), since beyond that the computational wall-clock time to perform RCG starts to be large (see the appendix for extra information).
of them, but does not reach the global maximum in two cases. The x-axis is depicted in log-scale, since on the original scale the convergence curves of the three methods would in some cases lie too far apart to show in the same display.
## 10 Concluding remarks and future direction
We presented an alternative way to optimise a function originally defined on a Euclidean space by harnessing the warped geometry of the function's graph embedded in the warped product space. Despite the fact that Riemannian geometry requires extra concepts, and therefore adds further difficulties and computational burden in practice, we were able to present a Riemannian version of the conjugate gradient optimiser that has memory costs similar to its
Figure 3: Number of iterations until convergence for the Rosenbrock function with varying dimensions \(D\). The parameters of the function were set to \(a=1\) and \(b=100\), as is usual in benchmark settings. The RCG method (in purple), Algorithm 1, tends to converge faster than its gradient CG counterpart, and from around \(D\approx 600\) up to \(D=1500\) it also converges faster than Newton’s CG in our experiments.
Euclidean counterpart, and can compete with the conjugate Newton direction. This was possible due to the approximation of geodesics by a Taylor expansion up to \(3^{\text{rd}}\) order on the function's graph \(\Gamma_{\ell}\) presented here. The computational cost is memory-wise similar to forming a gradient, and convergence is fast when a good value of the warp-function parameter is used. Although the approach seems very attractive, selecting an optimal value \(\sigma\) may not be straightforward; still, the experiments indicate that the \(3^{\text{rd}}\)-order approximation of the geodesics can achieve good convergence speed, making the method a potential competitor among gradient-based state-of-the-art optimizers.
Future work clearly aims at the choice of the warp function \(\psi\) and of the vector transport \(\mathcal{T}\), with the goal of improving the convergence speed of Algorithm 1. The metric-tensor \(G\) presented at the beginning of the paper can also be seen as a pull-back metric tensor, and we can plug it into geometric sampling algorithms, used predominantly for Bayesian statistical inference problems, such as Riemann manifold Hamiltonian Monte Carlo (RMHMC) (Schervish, 2011), Lagrangian Monte Carlo (LMC) (Lan et al., 2015) and manifold Metropolis-adjusted Langevin dynamics (Xifara et al., 2014), to cite a few.
**Acknowledgements**. This research is supported by the Academy of Finland grants 348952 (CORE), 345811 (ERI) and the Flagship programme Finnish Center for Artificial Intelli
Figure 4: Convergence of the RCG method, the gradient CG and the Newton’s CG on the test set of problems from the CUTE library. The RCG method of Algorithm 1, depicted in purple, converges faster in 5 out of the 8 models tested. The Newton’s CG method converges faster in all cases but seems not to reach the global maximum in two of them. The gradient CG converges in all cases. The x-axis is shown in log-scale, as the numbers of iterations may otherwise be too far apart from each other.
gence (FCAI). Mark Girolami is supported by EPSRC grants EP/T000414/1, EP/R018413/2, EP/P020720/2, EP/R034710/1, EP/R004889/1, and a Royal Academy of Engineering Research Chair.
## Appendix A Simplification of dot-products on tangent spaces
Recall that for a point \(\mathbf{x}\in\Gamma_{\ell}\) and tangents expressed as \(\mathbf{V}=\mathbf{M}_{\partial}\,\mathbf{v},\mathbf{U}=\mathbf{M}_{\partial}\,\mathbf{u}\in T_{\mathbf{x }}\Gamma_{\ell}\) in the parameterization \(\xi\), the inner product at \(\mathbf{x}\) is given by \(\left\langle\mathbf{V},\mathbf{U}\right\rangle_{\mathbf{x}}=\left\langle\mathbf{v},\mathbf{u} \right\rangle_{G(\mathbf{x})}\). We use the fact that \(\Gamma_{\ell}\) is an embedded manifold to simplify the computations. The relevant inner products are given by
\[\left\langle\mathbf{U},\mathbf{V}\right\rangle_{\mathbf{x}} = \left\langle\mathbf{u},\mathbf{v}\right\rangle_{G(\mathbf{x})}=\mathbf{u}^{\top} \left(I_{D}+\psi^{2}\nabla\ell\nabla\ell^{\top}\right)\mathbf{v}\] \[= \left\langle\mathbf{u},\mathbf{v}\right\rangle+\psi^{2}\left\langle \nabla\ell,\mathbf{u}\right\rangle\left\langle\nabla\ell,\mathbf{v}\right\rangle\]
and
\[\left\langle\mathrm{grad}f(\mathbf{x}),\mathbf{V}\right\rangle_{\mathbf{x}}= \left\langle\frac{\nabla\ell}{W^{2}},\mathbf{v}\right\rangle_{G(\mathbf{ x})}=\frac{\nabla\ell^{\top}}{W^{2}}\left(I_{D}+\psi^{2}\nabla\ell\nabla \ell^{\top}\right)\mathbf{v}\] \[= \left\langle\frac{\nabla\ell}{W^{2}},\mathbf{v}\right\rangle+\frac{ \psi^{2}}{W^{2}}\left\|\nabla\ell\right\|^{2}\left\langle\nabla\ell,\mathbf{v}\right\rangle\] \[= \left\langle\frac{\nabla\ell}{W^{2}},\mathbf{v}\right\rangle+\frac{W ^{2}-1}{W^{2}}\left\langle\nabla\ell,\mathbf{v}\right\rangle\] \[= \left\langle\nabla\ell,\mathbf{v}\right\rangle.\]
If \(\mathbf{V}=\mathrm{grad}f(\mathbf{x})\) then \(\left\|\mathrm{grad}f(\mathbf{x})\right\|_{\mathbf{x}}^{2}=\left\|\nabla\ell\right\|^{ 2}/W^{2}\).
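These identities are easy to verify numerically; a short sketch (ours) with random placeholder values for \(\nabla\ell\) and \(\boldsymbol{v}\):

```julia
using LinearAlgebra

D, psi = 5, 1.3
gl, v  = randn(D), randn(D)          # placeholders for ∇ℓ and v
W2 = psi^2 * dot(gl, gl) + 1
G  = I + psi^2 * gl * gl'            # induced metric-tensor
g  = gl / W2                         # chart coordinates of grad f, i.e. G⁻¹∇ℓ

@assert isapprox(dot(g, G * v), dot(gl, v))        # ⟨grad f, V⟩ₓ = ⟨∇ℓ, v⟩
@assert isapprox(dot(g, G * g), dot(gl, gl) / W2)  # ‖grad f‖²ₓ = ‖∇ℓ‖²/W²
```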
## Appendix B Derivation of the Riemannian gradient as the Natural gradient
We present an alternative derivation of the fact that the Riemannian gradient is an element of the tangent space whose components are given by the natural gradient (see Lee, 2003, page 342). Let \(\mathbf{x}\in\Gamma_{\ell}\) be such that \(\xi(\mathbf{\theta})=\mathbf{x}\) with \(\mathbf{\theta}\in\Theta\subseteq\mathbb{R}^{D}\) (the chart). Moreover, let \(f\in C^{\infty}(\Gamma_{\ell})\); by definition the Riemannian gradient is the vector in the tangent space \(\mathrm{grad}f\in T_{\mathbf{x}}\Gamma_{\ell}\), \(\mathrm{grad}f(\mathbf{x})=(\mathrm{grad}f)^{i}e_{i}\), such that for any given \(\mathbf{V}\in T_{\mathbf{x}}\Gamma_{\ell}\) it holds that \(df(\mathbf{V})=\left\langle\mathbf{V},\mathrm{grad}f\right\rangle\). Recall that the basis of the tangent space is \(e_{i}=\frac{\partial}{\partial\mathbf{\theta}_{i}}\). Then the differential is,
\[df(\mathbf{V}) = \left\langle\mathbf{V},\mathrm{grad}f\right\rangle\quad\text{by definition }df(\mathbf{V})=\mathbf{V}(f)\] \[\mathbf{V}(f) = \left\langle\mathbf{M}_{\partial}\,\mathbf{v},\mathbf{M}_{\partial}(\mathrm{ grad}f)\right\rangle\] \[\sum_{i}\mathbf{v}^{i}\,\frac{\partial}{\partial\mathbf{\theta}_{i}}f = \mathbf{v}^{\top}\mathbf{M}_{\partial}^{\top}\mathbf{M}_{\partial}(\mathrm{ grad}f)\] \[\mathbf{v}^{\top}\,\nabla_{\mathbf{\theta}}f = \mathbf{v}^{\top}\,G\,(\mathrm{grad}f).\]
From this we see that \(\nabla_{\mathbf{\theta}}f=G\,(\mathrm{grad}f)\) and \((\mathrm{grad}f)=G^{-1}\,\nabla_{\mathbf{\theta}}f\). Note that \(f(\mathbf{x})=x_{D+1}\), so that \(f(\xi)=\ell\), which yields
\[\mathrm{grad}\,f(\mathbf{x})= \mathbf{M}_{\partial}\,G^{-1}\nabla\ell. \tag{24}\]
From this we identify the expression \(G^{-1}\nabla\ell\) as the natural gradient: the components of the gradient vector of \(f\) at \(\mathbf{x}=\xi(\mathbf{\theta})\). We also note that Boumal (2023) provides an easier way to compute the Riemannian gradient, by defining it in the following way. Denote the extension of \(f\) to the Euclidean space \(\mathbb{R}^{D+1}\) as \(\tilde{f}\), such that \(f=\tilde{f}|_{\Gamma_{\ell}}\). Applying the orthogonal projection of its classical gradient \(\nabla\tilde{f}\) onto the tangent spaces of the embedded manifold \(\Gamma_{\ell}\subset\mathcal{N}\times\mathcal{M}_{\psi}\) gives \(\text{grad}f(\mathbf{x})\). In our case we would do it in two steps: first obtain the Riemannian gradient on the tangent space of \(\mathcal{N}\times\mathcal{M}_{\psi}\), and then project this gradient onto the tangent space of \(\Gamma_{\ell}\).
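For this metric the natural gradient has a particularly simple closed form; substituting the inverse (5) gives

\[G^{-1}\nabla\ell=\Big{(}I_{D}-\tfrac{\psi^{2}}{W^{2}}\nabla\ell\nabla\ell^{\top}\Big{)}\nabla\ell=\Big{(}1-\tfrac{\psi^{2}\|\nabla\ell\|^{2}}{W^{2}}\Big{)}\nabla\ell=\frac{\nabla\ell}{W^{2}},\]

which is exactly the expression \(\nabla\ell/W^{2}\) used in Appendix A.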
## Appendix C Normal vector on \(\Gamma_{\ell}\)
This section computes the normal vector at a point \(\mathbf{x}\in\Gamma_{\ell}\) considering the warped metric \(\langle\cdot,\cdot\rangle_{\psi}\). The notation \(\mathbf{N}_{\mathbf{x}}\) or \(\mathbf{N}\) will be used interchangeably. Denote \(\mathcal{N}\times\mathcal{M}_{\psi}\ni\mathbf{Z}=\mathbf{Z}^{T}+\mathbf{N}\) where \(\mathbf{Z}^{T}\in T_{\mathbf{x}}\Gamma_{\ell}\) and \(\mathbf{N}\) is the normal vector to \(T_{\mathbf{x}}\Gamma_{\ell}\). Considering the canonical parametrisation \(\xi\), the orthogonality under the metric \(\langle\cdot,\cdot\rangle_{\psi}\) gives \(\langle\partial_{i}\xi,\mathbf{N}\rangle_{\psi}=0\) for \(i=1,\ldots,D\). This implies the system of equations \(\mathbf{N}_{i}=-\psi^{2}\,\mathbf{N}_{D+1}\,\partial_{i}\ell\). Assuming that the normal vector has unit length we have \(\|\mathbf{N}\|_{\psi}^{2}=1\). Using the system of equations to solve for the coordinate \(\mathbf{N}_{D+1}\) we get
\[\sum_{i=1}^{D}(\mathbf{N}_{i})^{2}+\psi^{2}(\mathbf{N}_{D+1})^{2}=1,\]
and solving for the last coordinate we obtain,
\[\Big{(}\psi^{4}\sum_{i=1}^{D}(\partial_{i}\ell)^{2}+\psi^{2}\Big{)}(\mathbf{N}_{D +1})^{2}=1.\]
This leads to
\[\mathbf{N}_{D+1}=\frac{1}{\psi\sqrt{\psi^{2}\|\nabla\ell\|^{2}+1}}.\]
Therefore the normal vector at \(\mathbf{x}\in\Gamma_{\ell}\) is
\[\mathbf{N}_{\mathbf{x}} =\left(-\frac{\psi\partial_{1}\ell}{\sqrt{\psi^{2}\|\nabla\ell\| ^{2}+1}},\ldots,-\frac{\psi\partial_{D}\ell}{\sqrt{\psi^{2}\|\nabla\ell\|^{2 }+1}},\frac{1}{\psi\sqrt{\psi^{2}\|\nabla\ell\|^{2}+1}}\right)\] \[=\left(-\frac{\psi\nabla\ell}{W},\frac{1}{\psi W}\right) \tag{25}\]
where \(W=\sqrt{\psi^{2}\|\nabla\ell\|^{2}+1}\) and this vector has unit norm under the metric \(\langle\cdot,\cdot\rangle_{\psi}\).
## Appendix D Orthogonal projection on \(T_{\mathbf{x}}\Gamma_{\ell}\)
Again denote \(\mathcal{N}\times\mathcal{M}_{\psi}\ni\mathbf{Z}=\mathbf{Z}^{T}+\mathbf{N}\), where \(\mathbf{Z}^{T}\in T_{\mathbf{x}}\Gamma_{\ell}\) and \(\mathbf{N}\) is the normal vector to \(T_{\mathbf{x}}\Gamma_{\ell}\). We know that \(\langle\mathbf{Z}^{T},\mathbf{N}\rangle_{\psi}=0\) and we want to find the orthogonal projection of \(\mathbf{Z}\) onto \(T_{\mathbf{x}}\Gamma_{\ell}\). Since \(\mathbf{Z}^{T}\in T_{\mathbf{x}}\Gamma_{\ell}\), we need to find the coordinate components \(\mathbf{v}\) of \(\mathbf{Z}^{T}=\mathbf{M}_{\partial}\,\mathbf{v}\). To do
so we need to solve \(\langle\mathbf{M}_{\partial}\,\mathbf{v},\mathbf{Z}-\mathbf{M}_{\partial}\,\mathbf{v}\rangle_{\psi}=0\) for \(\mathbf{v}\). This is given by the weighted least-square solution. That is,
\[\mathbf{v}=\left(\mathbf{M}_{\partial}^{\top}G_{\psi}\mathbf{M}_{\partial}\right)^{-1}\mathbf{M }_{\partial}^{\top}G_{\psi}\,\mathbf{Z}\]
where \(G_{\psi}=\operatorname{diag}(I,\psi^{2})\) is the metric-tensor of the ambient space \(\mathcal{N}\times\mathcal{M}_{\psi}\). Therefore the orthogonal projection of a vector \(\mathbf{Z}\) onto \(T_{\mathbf{x}}\Gamma_{\ell}\), denoted as \(\operatorname{Proj}_{T_{\mathbf{x}}\Gamma_{\ell}}\mathbf{Z}\) has the form,
\[\operatorname{Proj}_{T_{\mathbf{x}}\Gamma_{\ell}}\mathbf{Z}=\mathbf{M}_{\partial}\left( \mathbf{M}_{\partial}^{\top}\begin{bmatrix}I&\mathbf{0}\\ \mathbf{0}^{\top}&\psi^{2}\end{bmatrix}\mathbf{M}_{\partial}\right)^{-1}\mathbf{M}_{ \partial}^{\top}\begin{bmatrix}I&\mathbf{0}\\ \mathbf{0}^{\top}&\psi^{2}\end{bmatrix}\mathbf{Z}\,.\]
Observe that \(\mathbf{M}_{\partial}^{\top}\operatorname{diag}(I,\psi^{2})\mathbf{M}_{\partial}=I_{ D}+\psi^{2}\nabla\ell\nabla\ell^{\top}\) is the metric-tensor induced on \(\Gamma_{\ell}\).
## Appendix E Christoffel symbols and geodesic equations on \(\Gamma_{\ell}\)
Recall that \(\psi\) is a function of \(\mathbf{x}\), the metric-tensor induced on \(\Gamma_{\ell}\) is \(G=I_{D}+\psi^{2}\nabla\ell\nabla\ell^{\top}\), and its inverse is \(G^{-1}=I_{D}-\frac{\psi^{2}}{W^{2}}\nabla\ell\nabla\ell^{\top}\) where \(W=\sqrt{\psi^{2}\|\nabla\ell\|^{2}+1}\). The Christoffel symbols \(\Gamma_{i,j}^{m}\) for \(i,j,m=1,\dots,D\) are computed from their general formulation (see Do Carmo, 1992, for example). After some algebraic manipulation we can organize the Christoffel symbols into matrices. The development is as follows.
\[\Gamma_{i,j}^{m}= \tfrac{1}{2}\sum_{k=1}^{D}G_{k,m}^{-1}\big{(}\partial_{i}G_{j,k} +\partial_{j}G_{k,i}-\partial_{k}G_{i,j}\big{)}\] \[= \tfrac{1}{2}\sum_{k=1}^{D}\Big{(}\delta_{k,m}-\tfrac{\psi^{2}}{W ^{2}}\partial_{k}\ell\partial_{m}\ell\Big{)}\left(\partial_{i}\psi^{2} \partial_{j}\ell\partial_{k}\ell+\psi^{2}\partial_{i,j}^{2}\ell\partial_{k} \ell+\psi^{2}\partial_{j}\ell\partial_{i,k}^{2}\ell+\partial_{j}\psi^{2} \partial_{k}\ell\partial_{i}\ell\right.\] \[+\psi^{2}\partial_{j,k}^{2}\ell\partial_{i}\ell+\psi^{2}\partial_{ k}\ell\partial_{j,i}^{2}\ell-\partial_{k}\psi^{2}\partial_{i}\ell\partial_{j} \ell-\psi^{2}\partial_{k,i}^{2}\ell\partial_{j}\ell-\psi^{2}\partial_{i}\ell \partial_{k,j}^{2}\ell\Big{)}\] \[= \tfrac{1}{2}\sum_{k=1}^{D}\Big{(}\delta_{k,m}-\tfrac{\psi^{2}}{W ^{2}}\partial_{k}\ell\partial_{m}\ell\Big{)}\left(\partial_{i}\psi^{2} \partial_{j}\ell\partial_{k}\ell+2\psi^{2}\partial_{i,j}^{2}\ell\partial_{k} \ell+\partial_{j}\psi^{2}\partial_{k}\ell\partial_{i}\ell-\partial_{k}\psi^{2} \partial_{i}\ell\partial_{j}\ell\right)\!.\]
Because the Hessian \(\nabla^{2}\ell\) is symmetric, some of the second-derivative terms combine while the remaining ones cancel out. Expanding the summation over the terms in the rightmost parenthesis, we get
\[\Gamma_{i,j}^{m} =\tfrac{1}{2}\bigg{(}\partial_{i}\psi^{2}\partial_{j}\ell \partial_{m}\ell-\tfrac{\psi^{2}}{W^{2}}\partial_{i}\psi^{2}\partial_{j}\ell \partial_{m}\ell\sum_{k=1}^{D}\partial_{k}\ell^{2}+\dots\bigg{)}\] \[=\tfrac{1}{2}\bigg{[}\bigg{(}1-\frac{\psi^{2}}{W^{2}}\sum_{k=1}^{ D}\partial_{k}\ell^{2}\bigg{)}\bigg{(}\partial_{i}\psi^{2}\partial_{j}\ell \partial_{m}\ell+2\psi^{2}\partial_{i,j}^{2}\ell\partial_{m}\ell+\partial_{j} \psi^{2}\partial_{i}\ell\partial_{m}\ell\bigg{)}+\dots\bigg{]}\] \[=\tfrac{1}{2}\bigg{(}\frac{1}{W^{2}}\partial_{i}\psi^{2}\partial_{j }\ell\partial_{m}\ell+\frac{2\psi^{2}}{W^{2}}\partial_{m}\ell\partial_{i,j}^{2} \ell+\frac{1}{W^{2}}\partial_{j}\psi^{2}\partial_{i}\ell\partial_{m}\ell- \partial_{m}\psi^{2}\partial_{i}\ell\partial_{j}\ell\] \[+\frac{\psi^{2}}{W^{2}}\sum_{k=1}^{D}\partial_{k}\psi^{2}\partial_{ k}\ell\partial_{m}\ell\partial_{i}\ell\partial_{j}\ell\bigg{)}.\]
Note that, except for the last term, all the terms in the first line are computed similarly; that is why the equation is abbreviated. In the last line we explicitly show the complete form of all the terms composing \(\Gamma^{m}_{i,j}\). The Christoffel symbols, when arranged in full matrices, are denoted \(\Gamma^{m}\), \(m=1,\ldots,D\), and are generally written as,
\[\Gamma^{m}= \tfrac{1}{2}\biggl{(}\frac{1}{W^{2}}\nabla\psi^{2}\nabla\ell^{ \top}\partial_{m}\ell+\frac{1}{W^{2}}\nabla\ell(\nabla\psi^{2})^{\top}\partial _{m}\ell+\frac{2\psi^{2}}{W^{2}}\nabla^{2}\ell\partial_{m}\ell\] \[+\frac{\psi^{2}\langle\nabla\psi^{2},\nabla\ell\rangle}{W^{2}} \nabla\ell\nabla\ell^{\top}\partial_{m}\ell-\nabla\ell\nabla\ell^{\top} \partial_{m}\psi^{2}\biggr{)}.\]
To further simplify the notation, let
\[\Lambda=\nabla\psi^{2}\nabla\ell^{\top}+\nabla\ell(\nabla\psi^{2})^{\top}+2 \psi^{2}\nabla^{2}\ell+\psi^{2}\langle\nabla\psi^{2},\nabla\ell\rangle\nabla \ell\nabla\ell^{\top}.\]
Thus,
\[\Gamma^{m}=\tfrac{\Lambda}{2W^{2}}\partial_{m}\ell-\tfrac{1}{2}\nabla\ell \nabla\ell^{\top}\partial_{m}\psi^{2}.\]
The computation of the geodesic equations associated with \(\Gamma_{\ell}\) also follows the general formalism. We use the results above to make the equations more compact, with computational implementation in mind. In general, geodesic equations can be written as,
\[\dot{\boldsymbol{v}}=-\begin{bmatrix}\|\boldsymbol{v}\|_{\Gamma^{1}(\gamma(t)) }^{2}\\ \vdots\\ \|\boldsymbol{v}\|_{\Gamma^{D}(\gamma(t))}^{2}\end{bmatrix}.\]
Expanding one element of the right-hand side of the equation above we get,
\[\|\boldsymbol{v}\|_{\Gamma^{m}(\gamma(t))}^{2}=\boldsymbol{v}^{\top}\bigl{(} \tfrac{\Lambda}{2W^{2}}\partial_{m}\ell-\tfrac{1}{2}\nabla\ell\nabla\ell^{ \top}\partial_{m}\psi^{2}\bigr{)}\,\boldsymbol{v}\,.\]
Observe that the quadratic form \(\|\boldsymbol{v}\|_{\Lambda}^{2}\) can be expanded into a computationally friendly expression, so that we never need to form the \(D\times D\) matrices explicitly. It follows,
\[\mathcal{O}_{1}=\tfrac{1}{2W^{2}}\boldsymbol{v}^{\top}\Lambda\,\boldsymbol{v} =\tfrac{1}{2W^{2}}\Bigl{(}2\langle\boldsymbol{v},\nabla\psi^{2}\rangle \langle\boldsymbol{v},\nabla\ell\rangle+2\psi^{2}\|\boldsymbol{v}\|_{\nabla^ {2}\ell}^{2}+\psi^{2}\langle\nabla\psi^{2},\nabla\ell\rangle\langle \boldsymbol{v},\nabla\ell\rangle^{2}\Bigr{)}\]
and
\[\mathcal{O}_{2}=\tfrac{1}{2}\boldsymbol{v}^{\top}\nabla\ell\nabla\ell^{\top}\,\boldsymbol{v}=\tfrac{1}{2}\langle\boldsymbol{v},\nabla\ell\rangle^{2}.\]
Only two gradients are thus needed, each multiplied by the scalar quantities \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\), respectively. Therefore the geodesic equations above become,
\[\dot{\boldsymbol{v}}=-\mathcal{O}_{1}(\boldsymbol{x},\boldsymbol{v})\nabla \ell(\boldsymbol{x})+\mathcal{O}_{2}(\boldsymbol{x},\boldsymbol{v})\nabla \psi^{2}(\boldsymbol{x}) \tag{26}\]
where all elements of the above equation depend on \(t\).
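Equation (26) lends itself to direct numerical integration. As a hedged illustration (not part of the paper), the sketch below performs one explicit-Euler step; `grad_l`, `hess_l`, `psi`, and `grad_psi2` are assumed user-supplied callables with illustrative names.

```python
import numpy as np

def geodesic_euler_step(x, v, grad_l, hess_l, psi, grad_psi2, dt):
    """One explicit-Euler step of Eq. (26): v' = -O1 grad_l + O2 grad_psi2."""
    g, H, gp2 = grad_l(x), hess_l(x), grad_psi2(x)
    p2 = psi(x) ** 2
    W2 = p2 * (g @ g) + 1.0                 # W^2 = psi^2 ||grad l||^2 + 1
    vg, vp = v @ g, v @ gp2
    # O1 = v^T Lambda v / (2 W^2), with Lambda as defined above
    O1 = (2.0 * vp * vg + 2.0 * p2 * (v @ H @ v)
          + p2 * (gp2 @ g) * vg ** 2) / (2.0 * W2)
    O2 = 0.5 * vg ** 2                      # O2 = (1/2) <v, grad l>^2
    return x + dt * v, v + dt * (-O1 * g + O2 * gp2)
```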
### Second-fundamental form on \(T_{\mathbf{x}}\Gamma_{\ell}\)
The second-fundamental form on the tangent space of \(\Gamma_{\ell}\) is computed as follows. In short notation the normal vector will be denoted \(\mathbf{N}=\Big{(}-\frac{\psi\nabla\ell}{W},\frac{1}{\psi W}\Big{)}\) and its Euclidean partial derivative in the \(i^{th}\) direction is \(\partial_{i}\,\mathbf{N}\), that is
\[\partial_{i}\,\mathbf{N}=\Big{[}-\big{(}\partial_{i}\psi\tfrac{1}{W}+\psi\partial_ {i}\tfrac{1}{W}\big{)}\nabla\ell-\tfrac{\psi}{W}\nabla^{2}\ell_{i},\partial_{i }\tfrac{1}{\psi}\tfrac{1}{W}+\tfrac{1}{\psi}\partial_{i}\tfrac{1}{W}\Big{]}.\]
The second-fundamental form acting on the tangent space of \(\Gamma_{\ell}\) at \(\mathbf{x}\) in the direction of \(\mathbf{V}=(\mathbf{v},\langle\mathbf{v},\nabla\ell\rangle)\)§ is defined as,
Footnote §: Observe that the extension \(\bar{\mathbf{V}}\) on \(\mathcal{N}\times\mathcal{M}_{\psi}\) is the same as \(\mathbf{V}\)
\[\Pi_{\mathbf{x}}(\mathbf{V})=-\langle\bar{\nabla}_{\mathbf{V}}\,\mathbf{N},\mathbf{V}\rangle_{\psi}\]
where \(\mathbf{x}=(x_{1},\dots,x_{D+1})\in\mathcal{N}\times\mathcal{M}_{\psi}\) and \(\bar{\nabla}\) is the connection associated with the warped metric in the ambient space \(\mathcal{N}\times\mathcal{M}_{\psi}\). In the ambient space, the Christoffel symbols \(\bar{\Gamma}^{m}\) (in matrices forms) associated with the connection \(\bar{\nabla}\) are given by
\[\bar{\Gamma}^{m}=\tfrac{1}{2}\operatorname{diag}(\mathbf{0},-\partial_{m}\psi^{2})\]
for \(m=1,\dots,D\) and
\[\bar{\Gamma}^{D+1}=\tfrac{1}{2\psi^{2}}\begin{bmatrix}\mathbf{0}&\nabla\psi^{2}\\ (\nabla\psi^{2})^{\top}&\mathbf{0}\end{bmatrix}\]
since \(\psi\) does not depend on \(x_{D+1}\). The covariant derivative \(\bar{\nabla}_{\mathbf{V}}\,\mathbf{N}\) can be computed using the general definition in Do Carmo (1992). It follows,
\[\bar{\nabla}_{\mathbf{V}}\,\mathbf{N}= \begin{bmatrix}\mathbf{V}(\mathbf{N}_{1})+\langle\mathbf{V},\mathbf{N}\rangle_{ \bar{\Gamma}_{1}}\\ \vdots\\ \mathbf{V}(\mathbf{N}_{D+1})+\langle\mathbf{V},\mathbf{N}\rangle_{\bar{\Gamma}_{D+1}}\end{bmatrix}\] \[= \sum_{i=1}^{D}\mathbf{v}_{i}\partial_{i}\,\mathbf{N}+\Big{(}{-}\tfrac{1} {2}\tfrac{1}{\psi W}\langle\mathbf{v},\nabla\ell\rangle\nabla\psi^{2},\tfrac{1}{2 \psi^{2}}\big{(}-\tfrac{\psi}{W}\langle\mathbf{v},\nabla\ell\rangle\langle\nabla \psi^{2},\nabla\ell\rangle+\tfrac{1}{\psi W}\langle\mathbf{v},\nabla\psi^{2} \rangle\big{)}\Big{)}\,.\]
Plugging the covariant derivative \(\bar{\nabla}_{\mathbf{V}}\,\mathbf{N}\) and the tangent vector \(\mathbf{V}\) into the definition of the second-fundamental form yields
\[\langle\bar{\nabla}_{\mathbf{V}}\,\mathbf{N},\mathbf{V}\rangle_{\psi}= \left\langle\sum_{i=1}^{D}\mathbf{v}_{i}\partial_{i}\,\mathbf{N},(\mathbf{v}, \langle\mathbf{v},\nabla\ell\rangle)\right\rangle_{\psi}\] \[+ \left\langle\big{(}-\tfrac{1}{2\psi W}\langle\mathbf{v},\nabla\ell \rangle\nabla\psi^{2},-\tfrac{1}{2\psi W}\langle\mathbf{v},\nabla\ell\rangle\langle \nabla\psi^{2},\nabla\ell\rangle+\tfrac{1}{2\psi^{3}W}\langle\mathbf{v},\nabla \psi^{2}\rangle\big{)},\big{(}\,\mathbf{v},\langle\mathbf{v},\nabla\ell\rangle\big{)} \right\rangle_{\psi}.\]
After some algebraic manipulation the first term of the above sum becomes
\[\left\langle\sum_{i=1}^{D}\mathbf{v}_{i}\partial_{i}\,\mathbf{N},(\mathbf{v}, \langle\mathbf{v},\nabla\ell\rangle)\right\rangle_{\psi}= \,\mathbf{v}^{\top}\Big{[}-\Big{(}\nabla\psi\tfrac{1}{W}+\psi\nabla \tfrac{1}{W}\Big{)}\nabla\ell^{\top}-\tfrac{\psi}{W}\nabla^{2}\ell+\psi^{2} \Big{(}\nabla\psi\tfrac{1}{W}+\tfrac{1}{\psi}\nabla\tfrac{1}{W}\Big{)}\nabla \ell^{\top}\Big{]}\,\mathbf{v}\] \[= \,\mathbf{v}^{\top}\Big{(}\tfrac{2}{W}\nabla\psi\nabla\ell^{\top}+\tfrac{\psi }{W}\nabla^{2}\ell\Big{)}\,\mathbf{v}\,.\]
Therefore, considering the negative sign, the second-fundamental form acting on the tangent space of \(\Gamma_{\ell}\) is given by
\[\Pi_{\mathbf{x}}(\mathbf{V}) =-\langle\bar{\nabla}_{\mathbf{V}}\,\mathbf{N},\mathbf{V}\rangle_{\psi}\] \[=\mathbf{v}^{\top}\left(\tfrac{2}{W}\nabla\psi\nabla\ell^{\top}+ \tfrac{\psi}{W}\nabla^{2}\ell+\tfrac{\psi}{2W}\langle\nabla\psi^{2},\nabla\ell \rangle\nabla\ell\nabla\ell^{\top}\right)\mathbf{v}\,.\]
### Third-order Taylor expansion of the geodesic curve
The third-order Taylor approximation of the geodesic curve at a point \(\mathbf{x}\in\Gamma_{\ell}\) in the direction of \(\mathbf{V}\in T_{\mathbf{x}}\Gamma_{\ell}\) is given by
\[\tilde{\gamma}_{\mathbf{x},\mathbf{V}}(t_{*})=\mathbf{x}+t_{*}\mathbf{V}+\frac{t_{*}^{2}}{2} \ddot{\gamma}(0)+\frac{t_{*}^{3}}{6}\dddot{\gamma}(0). \tag{27}\]
where \(\ddot{\gamma}(0)=Q_{\mathbf{x}}(\mathbf{V})\) is the normal component of the geodesic acceleration on \(\mathcal{N}\times\mathcal{M}_{\psi}\). This normal component is given by the second-fundamental form multiplied by the normal vector, since geodesics have null tangential acceleration. That is \(\ddot{\gamma}(t)=Q_{\mathbf{x}(t)}(\dot{\gamma}(t))=\Pi_{\mathbf{x}(t)}(\dot{\gamma}(t))\,\mathbf{N}_{\mathbf{x}(t)}\). Evaluating this expression at \(t=0\) we get
\[Q_{\mathbf{x}}(\mathbf{V})=\left(\tfrac{2}{W}\langle\mathbf{v},\nabla\psi\rangle\langle\mathbf{v},\nabla\ell\rangle+\tfrac{\psi}{W}\|\mathbf{v}\|_{\nabla^{2}\ell}^{2}+\tfrac{\psi}{2W}\langle\nabla\psi^{2},\nabla\ell\rangle\langle\nabla\ell,\mathbf{v}\rangle^{2}\right)\begin{bmatrix}-\frac{\psi\nabla\ell}{W}\\ \frac{1}{\psi W}\end{bmatrix} \tag{28}\]
where for a given \(\mathbf{V}\) the coordinate components \(\mathbf{v}\) can be recovered using the orthogonal projection above, that is, \(\mathbf{v}=\left(\mathbf{M}_{\partial}^{\top}G_{\psi}\mathbf{M}_{\partial}\right)^{-1}\!\mathbf{M}_{\partial}^{\top}G_{\psi}\,\mathbf{V}\). The third-order component of the approximate geodesic is \(\dddot{\gamma}(0)=K_{\mathbf{x}}(\mathbf{V})\), obtained by taking the time derivative of \(\ddot{\gamma}(t)\) at \(t=0\).
\[\dddot{\gamma}(t) =\frac{\mathrm{d}}{\mathrm{d}t}Q_{\gamma(t)}(\dot{\gamma}(t)) \tag{29}\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\tfrac{2}{W}\langle\mathbf{v},\nabla\psi\rangle\langle\mathbf{v},\nabla\ell\rangle+\tfrac{\psi}{W}\|\mathbf{v}\|_{\nabla^{2}\ell}^{2}+\tfrac{\psi}{2W}\langle\nabla\psi^{2},\nabla\ell\rangle\langle\nabla\ell,\mathbf{v}\rangle^{2}\bigg{)}\begin{bmatrix}-\frac{\psi\nabla\ell}{W}\\ \frac{1}{\psi W}\end{bmatrix}.\]
Here we are interested in the first \(D\) components of this approximation since, in general, \(\tilde{\gamma}_{\mathbf{x},\mathbf{V}}(t_{*})\notin\Gamma_{\ell}\). Following the retraction-map choice, we apply the orthogonal projection of \(\dddot{\gamma}\) onto \(\mathcal{N}=\Theta\) to obtain its first \(D\) components. Then we can write
\[\dddot{\gamma}(t)_{1:D} =\frac{\mathrm{d}}{\mathrm{d}t}Q_{\gamma(t)}(\dot{\gamma}(t))_{1 :D}\] \[=-\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\tfrac{1}{W^{2}}\langle\mathbf{v },\nabla\psi^{2}\rangle\langle\mathbf{v},\nabla\ell\rangle+\tfrac{\psi^{2}}{W^{2} }\|\mathbf{v}\|_{\nabla^{2}\ell}^{2}+\tfrac{\psi^{2}}{2W^{2}}\langle\nabla\psi^{2 },\nabla\ell\rangle\langle\nabla\ell,\mathbf{v}\rangle^{2}\big{)}\nabla\ell \tag{30}\]
where we have used that \(\nabla\psi^{2}=2\psi\nabla\psi\). We now write down the particular derivatives used to build the complete time derivative of the above equation, without yet specifying the particular form of \(\psi\).
\[\frac{\mathrm{d}}{\mathrm{d}t}\frac{1}{W^{2}} =\langle\mathbf{v},\nabla\tfrac{1}{W^{2}}\rangle\] \[\text{with }\nabla\tfrac{1}{W^{2}}=-\tfrac{2}{W^{3}}\nabla W\text{ and }\nabla W=\tfrac{1}{2W}\big{(}\nabla\psi^{2}\|\nabla\ell\|^{2}+2\psi^{2}\nabla^{2}\ell\nabla\ell\big{)},\] \[\frac{\mathrm{d}}{\mathrm{d}t}\psi^{2} =\langle\nabla\psi^{2},\mathbf{v}\rangle\] \[\frac{\mathrm{d}}{\mathrm{d}t}\langle\mathbf{v},\nabla\psi^{2}\rangle =\langle\dot{\mathbf{v}},\nabla\psi^{2}\rangle+\left\langle\mathbf{v},\frac{\mathrm{d}}{\mathrm{d}t}\nabla\psi^{2}\right\rangle,\] \[\frac{\mathrm{d}}{\mathrm{d}t}\langle\mathbf{v},\nabla\ell\rangle =\langle\dot{\mathbf{v}},\nabla\ell\rangle+\|\mathbf{v}\|_{\nabla^{2}\ell}^{2},\] \[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{v}\|_{\nabla^{2}\ell}^{2} =\left\langle\dot{\mathbf{v}},\nabla^{2}\ell\,\mathbf{v}\right\rangle+\left\langle\mathbf{v},\frac{\mathrm{d}}{\mathrm{d}t}\nabla^{2}\ell\,\mathbf{v}\right\rangle\] \[=\left\langle\dot{\mathbf{v}},\nabla^{2}\ell\,\mathbf{v}\right\rangle+\left\langle\mathbf{v},\lim_{r\to 0}\frac{\big{(}\nabla^{2}\ell(\boldsymbol{\theta}+r\,\mathbf{v})-\nabla^{2}\ell(\boldsymbol{\theta}-r\,\mathbf{v})\big{)}\,\mathbf{v}}{2r}+\nabla^{2}\ell\dot{\mathbf{v}}\right\rangle,\] \[\frac{\mathrm{d}}{\mathrm{d}t}\langle\nabla\psi^{2},\nabla\ell\rangle =\left\langle\frac{\mathrm{d}}{\mathrm{d}t}\nabla\psi^{2},\nabla\ell\right\rangle+\langle\nabla\psi^{2},\nabla^{2}\ell\,\mathbf{v}\rangle.\]
The time derivative of the gradient of the warp function will be given accordingly to the choice of the warp function.
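As a numerical companion to Eq. (28) (our own illustrative sketch, with assumed variable names), the normal component \(Q_{\mathbf{x}}(\mathbf{V})\) can be assembled directly from the gradient, the Hessian, and \(\nabla\psi^{2}\), using \(\nabla\psi=\nabla\psi^{2}/(2\psi)\) from above:

```python
import numpy as np

def normal_component(v, grad_l, hess_l, psi_val, grad_psi2):
    """Q_x(V) of Eq. (28): second-fundamental form (a scalar) times the
    unit normal N = (-psi grad_l / W, 1/(psi W))."""
    g = grad_l
    W = np.sqrt(psi_val ** 2 * (g @ g) + 1.0)
    grad_psi = grad_psi2 / (2.0 * psi_val)       # since grad psi^2 = 2 psi grad psi
    II = (2.0 / W * (v @ grad_psi) * (v @ g)
          + psi_val / W * (v @ hess_l @ v)
          + psi_val / (2.0 * W) * (grad_psi2 @ g) * (v @ g) ** 2)
    N = np.concatenate([-psi_val * g / W, [1.0 / (psi_val * W)]])
    return II * N                                # vector in R^(D+1)
```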
## Appendix H The choice of warp function
Suppose that we have a natural embedding of \(\Gamma_{\ell}\) on \(\mathbb{R}^{D+1}\) with the canonical parametrisation \(\xi\) introduced above. Then the normal vector at \(\mathbf{x}\in\Gamma_{\ell}\subset\mathbb{R}^{D+1}\) is given by \(\mathbf{N}_{*}=\big{(}-\nabla\ell/\sqrt{\|\nabla\ell\|^{2}+1},1/\sqrt{\|\nabla\ell\|^{2}+1}\big{)}.\) We first define the warp function to be the norm of the orthogonal projection of \(\mathbf{N}_{*}\) onto \(T_{\xi^{-1}(\mathbf{x})}\Theta\). This implies that
\[\psi_{*}=\|\text{Proj}_{T_{\xi^{-1}(\mathbf{x})}\Theta}\,\mathbf{N}_{*}\,\|=\frac{\| \nabla\ell\|}{\sqrt{1+\|\nabla\ell\|^{2}}}.\]
From here we can see that \(\psi_{*}\in(0,1)\), and in regions far away from the optima of \(\ell\) the function \(\psi_{*}\to 1^{-}\) as the components of the gradient \(\nabla\ell\) have high magnitude. Note that close to the optima \(\psi_{*}\to 0^{+}\) as the gradient components tend to zero and the metric becomes \(G=I\) (identity), so that at the optima we recover the Euclidean metric. However, we believe that this function may be too restrictive for functions \(\ell\) that can induce strong "bending" of the approximate geodesic path, which may be undesirable for some practical purposes. For this reason we propose a more flexible warp function, defining
\[\psi=\frac{\|\nabla\ell\|}{\sqrt{\sigma^{2}+\|\nabla\ell\|^{2}}},\]
where \(\sigma^{2}>0\) is a scalar controlling the flattening of the function \(\psi\in(0,1)\). The larger the value of \(\sigma^{2}\), the smaller the function \(\psi\) will be. If \(\sigma^{2}\to 0^{+}\) then \(\psi\to 1\), in which case \(\mathcal{N}\times\mathcal{M}_{\psi}=\mathbb{R}^{D+1}\). In order to visualize the behaviour of the \(3^{\text{rd}}\)-order Taylor series in the approximation of geodesics with varying \(\sigma\), see Figure 5. The example in this figure considers the function \(\ell(\mathbf{\theta})=\log\mathcal{N}\big{(}[\theta_{1},\theta_{2}+\sin(1.3\theta_{1})]\big{|}\,\mathbf{0},\Sigma\big{)}\) where \(\mathcal{N}\) denotes the Gaussian density function in two dimensions with \(\mathbf{\mu}=\mathbf{0}\) and \(\Sigma=\text{diag}(20,0.1)\). The approximations are made at \(\xi^{-1}(\mathbf{x})=[3.0\ 1.4]^{\top}\) and \(\mathbf{v}=-[1.2\ 1.0]^{\top}\), both depicted in blue colour. The panel displays the \(3^{\text{rd}}\)-order Taylor series approximation for a series of increasing \(\sigma\) values. See also Equation (20) for the general approximate geodesic expression.
In the calculations throughout the paper, the gradient of the squared warp function \(\psi^{2}\) was required. For this particular choice of warp function we obtain its gradient as follows. Denote \(W_{\sigma}=\sqrt{\sigma^{2}+\|\nabla\ell\|^{2}}\). The gradient is then
\[\nabla\psi^{2} =\left(\nabla\|\nabla\ell\|^{2}\frac{1}{W_{\sigma}^{2}}+\|\nabla \ell\|^{2}\nabla\frac{1}{W_{\sigma}^{2}}\right)\] \[=\left(2\nabla^{2}\ell\nabla\ell\frac{1}{W_{\sigma}^{2}}-2\| \nabla\ell\|^{2}\frac{1}{W_{\sigma}^{4}}\nabla^{2}\ell\nabla\ell\right)\] \[=\left(\frac{2}{W_{\sigma}^{2}}-\frac{2\|\nabla\ell\|^{2}}{W_{ \sigma}^{4}}\right)\nabla^{2}\ell\nabla\ell\] \[=\left(\frac{2}{W_{\sigma}^{2}}-\frac{2(W_{\sigma}^{2}-\sigma^{2 })}{W_{\sigma}^{4}}\right)\nabla^{2}\ell\nabla\ell\] \[=\frac{2\sigma^{2}}{W_{\sigma}^{4}}\nabla^{2}\ell\nabla\ell,\]
and the time derivative of \(\nabla\psi^{2}\) is given by
\[\frac{\mathrm{d}}{\mathrm{d}t}\nabla\psi^{2}=\frac{2\sigma^{2}}{W_{\sigma}^{4}} \left(-\frac{4}{W_{\sigma}^{2}}\langle\boldsymbol{v},\nabla^{2}\ell\nabla\ell \rangle\nabla^{2}\ell\nabla\ell+\frac{\mathrm{d}}{\mathrm{d}t}\nabla^{2}\ell \nabla\ell\right).\]
As before,
\[\frac{\mathrm{d}}{\mathrm{d}t}\nabla^{2}\ell\nabla\ell\approx\frac{\left( \nabla^{2}\ell(\boldsymbol{\theta}+r\nabla\ell)-\nabla^{2}\ell(\boldsymbol{ \theta}-r\nabla\ell)\right)\boldsymbol{v}}{2r}+\nabla^{2}\ell\,\nabla^{2}\ell \,\boldsymbol{v}\]
for small \(r\).
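For this choice of warp function, both \(\nabla\psi^{2}\) and the finite-difference approximation above are straightforward to code. A minimal sketch follows (the names are ours; `hess_l_at` is an assumed callable returning the Hessian at a point):

```python
import numpy as np

def grad_psi2(grad_l, hess_l, sigma2):
    """grad psi^2 = (2 sigma^2 / W_sigma^4) Hess(l) grad(l)."""
    W2 = sigma2 + grad_l @ grad_l                # W_sigma^2
    return (2.0 * sigma2 / W2 ** 2) * (hess_l @ grad_l)

def d_dt_hess_grad(theta, v, grad_l, hess_l_at, r=1e-5):
    """Central-difference approximation of d/dt [Hess(l) grad(l)] along a
    curve with velocity v, following the display above."""
    H = hess_l_at(theta)
    third = (hess_l_at(theta + r * grad_l)
             - hess_l_at(theta - r * grad_l)) @ v / (2.0 * r)
    return third + H @ (H @ v)
```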
## Appendix I Computational costs experiments
Here we show extra experiments on the performance of the RCG method proposed in this paper compared to the other benchmarks. In Figure 6, we add the wall-clock time and memory consumption for all experiments shown in the main paper. Panel (a) shows the squiggle model. In this case Newton's method dominates in speed of convergence, time, and memory consumption. Our methodology improves the number of iterations until convergence compared to the conjugate gradient, but has higher computational costs compared to both Newton's CG and gradient CG. For the Rosenbrock model, in panel (b), we see that our method, after \(D\approx 600\), is faster in terms of number of iterations until convergence and wall-clock time when compared to Newton's CG and gradient CG, but our RCG still loses in terms of memory consumption. In panel (c) we depict the models from the CUTE library. Of the 8 models in the experiment, the RCG method is faster in 5 of them, but we clearly lose in wall-clock time until convergence and memory consumption. From these experiments we notice that the proposed RCG generally improves the number of iterations until convergence. However, an ideal computational implementation of the method is necessary to make a broader and fairer comparison. |
2310.12117 | Effects of intra-layer correlations on electron-hole double-layer
superfluidity | We investigate the correlations acting within the layers in a superfluid
system of electron-hole spatially separated layers. In this system of
quasi-dipoles, the dominant correlations are Hartree--Fock. We find in the BEC
regime of the superfluid where screening is negligible, that the effect of the
correlations on superfluid properties is also negligible. However, in the
BCS-BEC crossover regime, where the screening plays a crucial role, we find
that the superfluid gap is significantly weakened because the correlations
significantly boost the number of low-energy particle-hole excitations
participating in the screening process. Finally, the intralayer correlations
are found in this system to suppress a predicted phenomenon in which the
average pair size passes through a minimum as the crossover regime is
traversed. In the presence of intralayer correlations, the minimum is either
extremely weak or completely absent. | Filippo Pascucci, Sara Conti, Andrea Perali, Jacques Tempere, David Neilson | 2023-10-18T17:14:24Z | http://arxiv.org/abs/2310.12117v1 | # Effects of intra-layer correlations on electron-hole double-layer superfluidity
###### Abstract
We investigate the correlations acting within the layers in a superfluid system of electron-hole spatially separated layers. In this system of quasi-dipoles, the dominant correlations are Hartree-Fock. We find that in the BEC regime of the superfluid, where screening is negligible, the effect of the correlations on superfluid properties is also negligible. However, in the BCS-BEC crossover regime, where the screening plays a crucial role, we find that the superfluid gap is significantly weakened because the correlations significantly boost the number of low-energy particle-hole excitations participating in the screening process. Finally, the intralayer correlations are found in this system to suppress a predicted phenomenon in which the average pair size passes through a minimum as the crossover regime is traversed. In the presence of intralayer correlations, the minimum is either extremely weak or completely absent.
Recent reports of the likely observation of superfluidity with electron-hole pairs in spatially separated electron and hole conducting layers in zero magnetic fields [1; 2; 3; 4; 5] are currently attracting a lot of interest. The spatial separation opens a way to stable superfluids in equilibrium because it suppresses electron-hole recombination [6].
Theoretical investigations of these two-layer systems have focused on the electron-hole correlations needed to generate the electron-hole pairs. However, on account of very significant screening effects [7], the superfluidity is restricted to low carrier densities, and so correlations between electrons in one layer and correlations between holes in the other layer can be expected to play a significant role. It is the purpose of this paper to investigate the effect of the correlations acting within each layer on the superfluid properties.
For superfluidity of spatially indirect excitons, the average separation between the excitons is generally much greater than the layer spacing separating the electrons and holes. The excitons are then well approximated by particles with dipole moments perpendicular to the layers and mutually interacting through repulsive dipole-dipole interactions acting parallel to the layers [8; 9]. At the relatively low densities where superfluidity is found [2; 10], kinetic energy effects tend to dominate over the intralayer correlations caused by the dipolar interactions. In this case, an expansion of the corrections due to the intralayer correlations will be dominated by the Hartree-Fock contribution. This is in striking contrast to Wigner crystallization in double-layer coulombic systems, where at low densities the intralayer correlations from the Coulomb interactions are dominant over kinetic energy effects [11].
In this paper, we investigate the effect of intralayer correlations on superfluidity using the Hartree-Fock approximation. The coupled mean-field equations for the superfluid gap \(\Delta_{k}\) and layer density \(n\) at zero temperature are [12; 7; 13],
\[\Delta_{k} = -\frac{1}{S}\sum_{\mathbf{k}^{\prime}}V_{eh}^{sc}(k-k^{\prime}) \frac{\Delta_{k^{\prime}}}{2E_{k^{\prime}}}\, \tag{1}\] \[n = g_{s}g_{v}\sum_{\mathbf{k}}\frac{1}{2}\left(1-\frac{\varepsilon_{ k}-\mu_{s}}{E_{k}}\right). \tag{2}\]
\(E_{k}=\sqrt{\xi_{k}^{2}+\Delta_{k}^{2}}\) is the excitation energy, \(\xi_{k}=\varepsilon_{k}-\mu_{s}\), with \(\varepsilon_{k}\) the single-particle energy band and \(\mu_{s}\) the single-particle chemical potential. \(g_{s}\) and \(g_{v}\) are the spin and valley degeneracies and \(S\) is the area of the system.
\[V_{eh}^{sc}(\mathbf{q})=\frac{V_{eh}(\mathbf{q})-\Pi_{a}(\mathbf{q})(V_{ee}^{2 }(\mathbf{q})-V_{eh}^{2}(\mathbf{q}))}{1-2(V_{ee}(\mathbf{q})\Pi_{n}(\mathbf{ q})-V_{eh}(\mathbf{q})\Pi_{a}(\mathbf{q}))+\mathcal{A}_{\mathbf{q}}\mathcal{B}_{ \mathbf{q}}}\, \tag{3}\]
where \(V_{ee}(\mathbf{q})=1/q\) is the bare electron-electron (and hole-hole) interaction acting within each layer and \(V_{eh}(\mathbf{q})=-e^{-qd}/q\) is the bare electron-hole interaction between layers, with \(d\) the interlayer distance. \(\Pi_{n}(\mathbf{q})\) and \(\Pi_{a}(\mathbf{q})\) are the normal and anomalous polarizabilities in the superfluid phase [7; 14]. For brevity, we write \(\mathcal{A}_{\mathbf{q}}=V_{ee}^{2}(\mathbf{q})-V_{eh}^{2}(\mathbf{q})\) and \(\mathcal{B}_{\mathbf{q}}=\Pi_{n}^{2}(\mathbf{q})-\Pi_{a}^{2}(\mathbf{q})\).
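To make the structure of Eqs. (1)-(3) concrete, the following is a deliberately simplified self-consistency sketch: screening is switched off (\(\Pi_{n}=\Pi_{a}=0\), so \(V_{eh}^{sc}\) reduces to the bare \(V_{eh}\)), the chemical potential is held fixed rather than solved together with Eq. (2), and all grid sizes, units, and prefactors are illustrative rather than taken from the paper.

```python
import numpy as np

# radial k-grid and parabolic band (schematic units: hbar^2 / 2m* := 1)
d, mu, nk = 0.2, -0.5, 200
k = np.linspace(1e-3, 20.0, nk)
dk = k[1] - k[0]
eps = k ** 2

# angular average of the bare interlayer attraction V_eh(q) = -exp(-q d)/q
phi = (np.arange(64) + 0.5) * (2.0 * np.pi / 64)          # offset avoids q = 0
q = np.sqrt(k[:, None, None] ** 2 + k[None, :, None] ** 2
            - 2.0 * k[:, None, None] * k[None, :, None] * np.cos(phi))
Vbar = (-np.exp(-q * d) / q).mean(axis=2)                 # (nk, nk)

Delta = np.ones(nk)                                       # initial gap guess
for _ in range(200):
    E = np.sqrt((eps - mu) ** 2 + Delta ** 2)
    # Eq. (1) in continuum form: Delta_k = -(1/2pi) int k' dk' Vbar Delta/(2E)
    new = -(Vbar * (k * Delta / (2.0 * E))[None, :]).sum(axis=1) * dk / (2.0 * np.pi)
    if np.max(np.abs(new - Delta)) < 1e-8:
        break
    Delta = 0.5 * Delta + 0.5 * new                       # damped iteration
```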
In the Hartree-Fock approximation, the single-particle energy is given by [15; 16]:
\[\xi_{\mathbf{k}}^{HF}=\frac{\hbar^{2}\mathbf{k}^{2}}{2m}-\mu_{s}-\Sigma( \mathbf{k})\,, \tag{4}\]
where
\[\Sigma(\mathbf{k})=\frac{1}{S}\sum_{\mathbf{p}}V_{ee}^{sc}(\mathbf{p}-\mathbf{ k})v_{\mathbf{p}}^{2}\,, \tag{5}\]
with the Bogoliubov amplitude (density of states)
\[v_{\mathbf{k}}^{2}=1-u_{\mathbf{k}}^{2}=\frac{1}{2}\Bigg{(}1-\frac{\xi_{ \mathbf{k}}^{HF}}{E_{\mathbf{k}}^{HF}}\Bigg{)}. \tag{6}\]
The self-consistent static screened electron-electron (hole-hole) interaction within each layer is [16]
\[V_{ee}^{sc}(\mathbf{q})=\frac{V_{ee}(\mathbf{q})-\Pi_{n}(\mathbf{q})(V_{ ee}^{2}(\mathbf{q})-V_{eh}^{2}(\mathbf{q}))}{1-2(V_{ee}(\mathbf{q})\Pi_{n}( \mathbf{q})-V_{eh}(\mathbf{q})\Pi_{a}(\mathbf{q}))+\mathcal{A}_{\mathbf{q}} \mathcal{B}_{\mathbf{q}}} \tag{7}\]
To determine the effect of the Hartree-Fock corrections within the layers, we solve the gap and number equations Eqs. (1) and (2) using \(\xi_{\mathbf{k}}^{HF}\) for the single-particle energy. The screened interactions, Eqs. (3) and (7), are modified similarly.
We take single-particle parabolic bands \(\varepsilon_{k}=\hbar^{2}k^{2}/2m^{*}\), with equal effective masses \(m^{*}=m_{e}^{*}=m_{h}^{*}=0.04\) (in units of the bare electron mass). For the dielectric constant, we use \(\epsilon=2\) for double bilayer graphene. We express lengths in units of the effective Bohr radius, \(a_{B}^{*}=5.3\) nm, and energies in effective Rydbergs, \(Ry^{*}=35\) meV. We consider equal electron and hole layer densities, \(n=n_{e}=n_{h}\), corresponding to an average interparticle spacing in the layers of \(r_{0}=(\pi n)^{-1/2}\).
Figure 1(a) shows the resulting superfluid energy gap \(\Delta_{k}\) for a layer separation \(d=0.2\). The intralayer distance \(r_{0}\) shown spans the full range for superfluidity. Because of strong screening, there is a maximum threshold density for the superfluidity that corresponds to \(r_{0}\simeq 1\). As the threshold density is approached, we see that the Hartree-Fock correlations have a strong effect on the superfluidity, reducing the gap \(\Delta_{k}\) by as much as a factor of 2. However, the effect of the correlations on the superfluidity weakens with decreasing density, and for \(r_{0}\gtrsim 3\), the correlations have negligible effect.
Figure 1(b) demonstrates that the suppression of \(\Delta_{k}\) seen at higher densities comes from the effect of the Hartree-Fock correlations weakening the self-consistent electron-hole screened interaction, \(V_{eh}^{sc}(\mathbf{q})\). This weakening as the density increases is due to Hartree-Fock boosting the number of the low-lying energy states that contribute to the screening (see Fig. 1(c)).
Figure 2 compares our results with Diffusion Quantum Monte Carlo (DQMC) numerical simulations [17]. We see in Fig. 2(a) that including the Hartree-Fock correlations significantly improves the agreement with DQMC for both the height and position of the maximum of the superfluid peak \(\Delta_{max}\). The correlations using static screening push down the threshold density somewhat, but corrections from dynamical screening will act to compensate this [18].
Figure 2(b) compares the single-particle chemical potential \(\mu_{s}\). We see that the Hartree-Fock corrections are significant and move \(\mu_{s}\) closer to the benchmark DQMC results.
An important feature of superfluidity in these electron-hole double-layer systems is that, by tuning the carrier density in the layers \(n\) using gate voltages, it is possible experimentally to sweep the superfluidity from a strong-coupled Bose-Einstein condensate (BEC) at the lowest densities, to the intermediate-coupled BCS-BEC crossover regime, through towards the weak-coupled BCS regime [7; 19]. Figure 3 maps out the superfluidity and its regimes at very low temperatures in the \(r_{0}\)-\(d\) phase space. We set the boundary between the BEC and the BCS-BEC crossover regimes as the line at which the chemical potential \(\mu_{s}\) changes sign from negative to positive (Fig. 2(b)) [20; 21].
Indicated for reference on the vertical axis of Fig. 3 are the smallest separations experimentally attained to date in Gallium Arsenide (GaAs) double quantum wells [22; 23; 24], double layers of bilayer graphene (DBG) [1], and double layers of Transition Metal Dichalcogenide (TMD) [2; 3].
We compare in Fig. 4(a), for a fixed value of the layer interparticle spacing \(r_{0}=3\), the evolution of the superfluid gap energy \(\Delta_{k}\) when the Hartree-Fock correlations within the layers are either included or neglected, for different layer separations \(d\). The corresponding (\(r_{0}\)-\(d\)) points are marked on the phase diagram, Fig. 3. Figure 4(b) compares the corresponding ratios of screened electron-hole attraction \(V_{eh}^{sc}(k)\) to the bare attraction \(V_{eh}(k)\).

Figure 1: (a) Superfluid gap \(\Delta_{k}\) at densities characterized by \(r_{0}\), the average interparticle spacing within each layer. Layer separation \(d=0.2\). Solid red: within the mean field including intralayer correlations. Dashed blue: within the mean field but neglecting intralayer correlations. (b) Ratio of self-consistent screened electron-hole attraction \(V_{eh}^{sc}(k)\) to the bare attraction \(V_{eh}(k)\) for the same densities. (c) Corresponding density of states \(n_{k}=v_{\mathbf{k}}^{2}\). Lengths are in units of the effective Bohr radius and energies are in units of the effective Rydberg (see text).
The layer spacing \(d=0.2\) lies deep in the BEC regime and Fig. 4(b) confirms that screening is indeed negligible there. Since the Hartree-Fock corrections primarily affect the screening, the correlations have almost no effect on \(\Delta_{k}\) for \(d=0.2\). However, \(d=0.4\) lies on the BCS-BEC crossover boundary, and we see at that point that screening is no longer negligible, and as a consequence, \(\Delta_{k}\) starts to develop a sensitivity to the Hartree-Fock corrections. As \(d\) is further increased and the crossover regime is traversed, both the screening and \(\Delta_{k}\) become increasingly sensitive to the Hartree-Fock corrections. By \(d=0.7\), the correlations boost the low-lying density of states so much that the screening is strongly enhanced. This in turn strongly suppresses \(\Delta_{k}\). \(d=0.7\) is close to the superfluid threshold where the screening kills the superfluidity.
Figure 5 compares, with intralayer correlations included or neglected, the spatial size of the electron-hole pairs [21; 25],
\[\xi_{pair}=\left[\frac{\sum_{\mathbf{k}}|\nabla_{\mathbf{k}}u_{\mathbf{k}}v_{ \mathbf{k}}|^{2}}{\sum_{\mathbf{k}}u_{\mathbf{k}}^{2}v_{\mathbf{k}}^{2}} \right]^{1/2}\,, \tag{8}\]
as a function of \(r_{0}\) for layer separation \(d=0.2\).
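Given \(\Delta_{k}\) and \(\xi_{k}\) on a radial grid, Eq. (8) reduces, for an isotropic 2D solution, to one-dimensional integrals with measure \(k\,dk\) and \(|\nabla_{\mathbf{k}}u_{\mathbf{k}}v_{\mathbf{k}}|=|d(u_{k}v_{k})/dk|\). The following small sketch (our names, using \(u_{k}v_{k}=\Delta_{k}/2E_{k}\)) illustrates the evaluation:

```python
import numpy as np

def pair_size(k, Delta, xi):
    """Evaluate Eq. (8) on a radial grid for an isotropic 2D solution;
    xi_k = eps_k - mu_s."""
    E = np.sqrt(xi ** 2 + Delta ** 2)
    uv = Delta / (2.0 * E)            # u_k v_k = Delta_k / (2 E_k)
    duv = np.gradient(uv, k)          # radial derivative of u_k v_k
    dk = k[1] - k[0]
    num = np.sum(k * duv ** 2) * dk   # sum over k -> int k dk in 2D
    den = np.sum(k * uv ** 2) * dk
    return np.sqrt(num / den)
```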
Without the intralayer correlations, starting from the low-density BEC regime, \(\xi_{pair}\) initially decreases as the density increases. In the BEC regime the pairs act
Figure 4: (a) Superfluid gap \(\Delta(k)\) for a fixed density corresponding to \(r_{0}=3\), at different \(d\) points in the BEC and BCS-BEC crossover regimes (refer Fig. 3). Solid red: within mean-field with intralayer correlations included. Dashed blue: within mean-field but neglecting intralayer correlations. (b) Ratio of self-consistent screened electron-hole attraction \(V_{eh}^{sc}(k)\) to the bare attraction \(V_{eh}(k)\) for the same \(r_{0}\)-\(d\) points spanning the BEC and BCS-BEC crossover regimes.
Figure 3: Dependence of BEC and BCS-BEC crossover regimes on layer separation \(d\) and average interparticle spacing within each layer \(r_{0}\). The BCS regime is preempted by strong screening that suppresses superfluidity at small \(r_{0}\) and large \(d\). The smallest separations experimentally attained to date in different material systems are indicated by the arrows on the vertical axis. Also marked are the points in \(r_{0}\)-\(d\) phase space used in Fig. 4.
Figure 2: (a) Maximum superfluid gap \(\Delta_{max}\) as a function of \(r_{0}\), the average interparticle distance within a layer. Solid red: within mean field with intralayer correlations included. Dashed blue: within the mean field but neglecting intralayer correlations. Shown for comparison (dash-dot green), the \(\Delta_{max}\) from Diffusion Quantum Monte Carlo numerical simulations [17]. (b) The corresponding single-particle chemical potential \(\mu_{s}\). |
2303.15740 | Concentration of Contractive Stochastic Approximation: Additive and
Multiplicative Noise | In this paper, we establish maximal concentration bounds for the iterates
generated by a stochastic approximation (SA) algorithm under a contractive
operator with respect to some arbitrary norm (for example, the
$\ell_\infty$-norm). We consider two settings where the iterates are
potentially unbounded: SA with bounded multiplicative noise and SA with
sub-Gaussian additive noise. Our maximal concentration inequalities state that
the convergence error has a sub-Gaussian tail in the additive noise setting and
a Weibull tail (which is faster than polynomial decay but could be slower than
exponential decay) in the multiplicative noise setting. In addition, we provide
an impossibility result showing that it is generally impossible to have
sub-exponential tails under multiplicative noise. To establish the maximal
concentration bounds, we develop a novel bootstrapping argument that involves
bounding the moment-generating function of a modified version of the
generalized Moreau envelope of the convergence error and constructing an
exponential supermartingale to enable using Ville's maximal inequality. We
demonstrate the applicability of our theoretical results in the context of
linear SA and reinforcement learning. | Zaiwei Chen, Siva Theja Maguluri, Martin Zubeldia | 2023-03-28T05:32:30Z | http://arxiv.org/abs/2303.15740v2 | # Concentration of Contractive Stochastic Approximation:
###### Abstract
In this work, we study the concentration behavior of a stochastic approximation (SA) algorithm under a contractive operator with respect to an arbitrary norm. We consider two settings where the iterates are potentially unbounded: (1) bounded multiplicative noise, and (2) additive sub-Gaussian noise. We obtain maximal concentration inequalities on the convergence errors, and show that these errors have sub-Gaussian tails in the additive noise setting, and super-polynomial tails (faster than polynomial decay) in the multiplicative noise setting. In addition, we provide an impossibility result showing that it is in general not possible to achieve sub-exponential tails for SA with multiplicative noise. To establish these results, we develop a novel _bootstrapping argument_ that involves bounding the moment generating function of the generalized Moreau envelope of the error and the construction of an exponential supermartingale to enable using Ville's maximal inequality.
To demonstrate the applicability of our theoretical results, we use them to provide maximal concentration bounds for a large class of reinforcement learning algorithms, including but not limited to on-policy TD-learning with linear function approximation, off-policy TD-learning with generalized importance sampling factors, and \(Q\)-learning. To the best of our knowledge, super-polynomial concentration bounds for off-policy TD-learning have not been established in the literature due to the challenge of handling the combination of unbounded iterates and multiplicative noise.
+
Footnote †: Equal contribution.
## 1 Introduction
Stochastic approximation (SA) (Robbins and Monro, 1951) is the underlying workhorse for modern large-scale optimization and machine learning, which have achieved great successes in solving many practical problems (Kober et al., 2013; Silver et al., 2017; Jumper et al., 2021). In this work, we consider an SA algorithm of the form
\[x_{k+1}=x_{k}+\alpha_{k}(F(x_{k},Y_{k})-x_{k}), \tag{1}\]
where \(Y_{k}\in\mathcal{Y}\) is a random variable representing the noise, \(F:\mathbb{R}^{d}\times\mathcal{Y}\mapsto\mathbb{R}^{d}\) is a (possibly nonlinear) operator, and \(\alpha_{k}>0\) is the stepsize. We assume that the expectation of the operator \(F(\cdot,Y_{k})\) with respect to the noise, denoted by \(\bar{F}(\cdot)\), is a contraction mapping with respect to some arbitrary norm \(\|\cdot\|_{c}\). See Section 2 for the formal description of the SA model.
Algorithm (1) covers many existing popular algorithms as its special cases. For example, when \(F(x_{k},Y_{k})=-\nabla J(x_{k})+x_{k}+Y_{k}\) for some objective function \(J(\cdot)\), Algorithm (1) is the stochastic gradient descent (SGD) algorithm used to minimize \(J(\cdot)\)(Lan, 2020). In the context of reinforcement learning (RL), popular algorithms such as \(Q\)-learning and variants of TD-learning can all be modeled in the form of Algorithm (1) (Bertsekas and Tsitsiklis, 1996), where the expected operator \(\bar{F}(\cdot)\) is closely related to the Bellman operator. Due to wide applications of Algorithm (1), theoretically understanding the evolution of \(\{x_{k}\}\) is of fundamental interest.
Early literature on SA focused on the asymptotic convergence, i.e., the behavior of \(x_{k}\) as \(k\) goes to infinity (Robbins and Monro, 1951; Borkar, 2009; Tsitsiklis, 1994; Kushner and Clark, 2012). In recent years, finite-sample analysis has received considerable attention (Bhandari et al., 2018; Srikant and Ying, 2019; Chen et al., 2020). In finite-sample analysis, the goal is to bound the error between the stochastic iterate \(x_{k}\) and its limit \(x^{*}\) (provided that the asymptotic convergence was already established) as a function of the number of iterations \(k\), and to study its decay rate. Finite-sample analysis not only provides theoretical understanding of the evolution of SA algorithms, but can also be used as a guideline in implementation.
Due to the stochastic nature of the iterates, there are multiple ways of measuring the distance between the iterates and the limit point. One natural way is to use the mean-square distance \(\mathbb{E}[\|x_{k}-x^{*}\|_{c}^{2}]\), which has been extensively studied in the literature (Srikant and Ying, 2019; Bhandari et al., 2018; Chen et al., 2022, 2020; Wainwright, 2019). Another popular way is to use the probability that \(\|x_{k}-x^{*}\|_{c}\leq\epsilon\) for some \(\epsilon>0\). A bound on this probability is called a "high probability bound", and is sometimes preferable over a mean-square bound as it not only provides the convergence rate, but also the confidence level. However, high probability bounds are in general more challenging to establish. For example, consider the convergence rate of the law of large numbers1. The establishment of the \(\mathcal{O}(1/k)\) mean-square bound is significantly easier than establishing exponential tail bounds such as Hoeffding's inequality, Chernoff bound, and Bernstein's inequality, etc.
Footnote 1: The average of a sequence of random variables \(\frac{1}{k}\sum_{i=0}^{k-1}Y_{k}\) can be computed in an iterative manner as \(x_{k+1}=x_{k}+\frac{1}{k+1}(-x_{k}+Y_{k})\) with \(x_{0}=\mathbf{0}\), which is clearly a special case of Algorithm (1).
To establish high probability bounds of Algorithm (1), the operator \(F(\cdot,\cdot)\) and noise sequence \(\{Y_{k}\}\) play important roles in the analysis. Most of the existing literature focuses on the setting where the noise in the SA algorithm appears in an additive manner, and is a.s. bounded. To illustrate, consider linear SA of the form \(x_{k+1}=x_{k}+\alpha_{k}(A(Y_{k})x_{k}-b(Y_{k}))\), where \(A:\mathcal{Y}\mapsto\mathbb{R}^{d\times d}\) and \(b:\mathcal{Y}\mapsto\mathbb{R}^{d}\) are deterministic functions. When \(A(Y_{k})\) is not random (or equivalently \(A(Y_{k})=\mathbb{E}[A(Y_{k})]\) a.s.), the noise is purely additive. For the multiplicative noise setting, which corresponds to \(A(Y_{k})\) being random, the analysis is much more challenging. Existing results either do not have super-polynomial tail bounds or require strong assumptions, such as \(A(Y_{k})\) being Hurwitz a.s. See Section 1.3 for a more detailed literature review. In this work, we develop maximal super-polynomial concentration bounds for SA algorithms involving additive and/or multiplicative noise. In addition, we go beyond linear SA and consider more general contractive SA algorithms of the form (1), which covers linear SA as a special case, as will be illustrated in Section 2.4.
### Our Contributions
The main contributions of this work are summarized in the following.
Multiplicative Noise Setting. We establish a super-polynomial high probability bound under diminishing stepsizes of the form \(\alpha_{k}=\alpha/(k+h)\). Importantly, our high probability bound provides a bound on the entire tail of the iterates, as our stepsizes do not depend on either the desired accuracy level \(\epsilon\) or the confidence level \(\delta\). Moreover, our bound is "maximal" in the sense that it is a bound on the concentration behavior of the entire trajectory of the iterates \(\{x_{k}\}\). As a complement to the concentration bounds, we provide an impossibility result showing that concentration bounds with sub-exponential tails are in general not achievable. To our knowledge, this is the first maximal super-polynomial high probability bound for SA with multiplicative noise. Even for the simple setting of linear SA (with a random \(A(Y_{k})\) that is not a.s. Hurwitz), such a concentration result is unknown in the literature.
Additive Noise Setting. We also consider the case of purely additive noise. We allow the noise to be unbounded, albeit sub-Gaussian. In this case, we establish maximal high probability bounds (with exponentially small tails) for the SA algorithm when using either linearly diminishing stepsizes \(\alpha_{k}=\alpha/(k+h)\) or polynomially diminishing stepsizes \(\alpha_{k}=\alpha/(k+h)^{z}\) with \(z\in(0,1)\).
To our knowledge, except for the special case of SGD, such concentration results in the case of additive but unbounded noise are unknown in the literature.
Applications in RL. The generality of our SA results enables us to study the concentration behavior of a large class of RL algorithms, a typical example of which is off-policy TD-learning. Note that off-policy TD involves multiplicative noise, and does not have uniformly bounded iterates. This makes establishing high probability bounds for off-policy TD-learning very challenging, and to the best of our knowledge, there are no such results in the literature. In addition to off-policy TD, we also establish maximal high probability bounds for on-policy TD-learning with linear function approximation and \(Q\)-learning.
Methodological Contributions. To handle the multiplicative noise in the SA algorithm, we develop a novel proof technique involving (1) the establishment of a bound on a properly modified moment generating function (MGF) of the generalized Moreau envelope of the error, which serves as a potential/Lyapunov function in our approach, (2) the proper construction of an exponential supermartingale and the use of Ville's maximal inequality, and (3) a novel bootstrapping method used to overcome the potential unboundedness of the iterates. More details about the challenges and our technical contributions are presented in the next subsection.
### Challenges & Our Techniques
We use SA under multiplicative noise as an example to present the challenges and our techniques. The analysis of SA with additive noise follows a similar approach. The main challenge of obtaining super-polynomial high probability bounds is due to the combination of _unbounded iterates_ and _multiplicative noise_. While having unbounded iterates and multiplicative noise are not too problematic in isolation, the combination of both creates a setting where the variance of the noise is unbounded. In this case, since we allow the multiplicative noise to be large enough so that the "noisy" operator can be expansive with positive probability, the error can grow extremely fast with a significant probability. This creates a challenge that no approach in the literature can deal with in general. To overcome this challenge, we develop a novel bootstrapping argument. The high level ideas are presented in the following.
Initialization: Time-Varying Worst-Case Bounds. While the iterates of SA with multiplicative noise are not uniformly bounded by a constant, we show that they do admit a time-varying a.s. bound. The behavior of such a time-varying bound depends on the contraction effect in the expected operator and the expansive effect in the multiplicative noise. In general, the bound can be polynomially _increasing_ with time.
Bootstrapping: An Iterative Framework to Improve the Bound. The key in the bootstrapping argument is to start with a non-decreasing sequence \(\{T_{k}(\delta)\}_{k\geq 0}\) such that
\[\mathbb{P}(\|x_{k}-x^{*}\|_{c}^{2}\leq T_{k}(\delta),\forall\ k\geq 0)\geq 1-\delta,\]
and obtain a sequence \(\{T_{k}(\delta,\delta^{\prime})\}_{k\geq 0}\), with \(T_{k}(\delta,\delta^{\prime})=\tilde{\mathcal{O}}(T_{k}(\delta)/k)\), such that
\[\mathbb{P}(\|x_{k}-x^{*}\|_{c}^{2}\leq T_{k}(\delta,\delta^{\prime}),\forall \ k\geq 0)\geq 1-\delta-\delta^{\prime}.\]
This blueprint enables us to start with the time-varying worst-case bound for the error (which can be polynomially increasing) and iteratively improve it to obtain our super-polynomial concentration bound with the desired convergence rate. It will become clear from the proof that, for this bootstrapping argument to work, the fact that the bounds hold for all \(k\geq 0\) is fundamental. To establish this blueprint, we develop a two-step Lyapunov argument.
* **Step 1: Recursive Bound on the Log-MGFs:** The first step is to obtain a recursive upper bound relating the log-MGF of \(\|x_{k+1}-x^{*}\|_{c}^{2}\) to that of \(\|x_{k}-x^{*}\|_{c}^{2}\). Unrolling this recursion, we also obtain an outright bound on \(\|x_{k}-x^{*}\|_{c}^{2}\) that only depends on \(\|x_{0}-x^{*}\|_{c}^{2}\) and other model parameters. These bounds are valid for all \(k\geq 0\), and give us a tight grasp on the effect of the noise on the error.
* **Step 2: Exponential Supermartingale:** We construct a supermartingale \(\{\overline{M}_{k}\}_{k\geq 0}\) of the form \(\overline{M}_{k}=\exp(\|x_{k}-x^{*}\|_{c}^{2}\alpha_{k}^{-1}T_{k}(\delta)^{-1}-C\sum_{i=0}^{k-1}\alpha_{i})\) and use Ville's maximal inequality to obtain a maximal bound on the iterates. In particular, this maximal bound states that \(\|x_{k}-x^{*}\|_{c}^{2}\) is \(\tilde{\mathcal{O}}(\alpha_{k}T_{k}(\delta))\) for all \(k\geq 0\) with high probability. Since \(\alpha_{k}=\mathcal{O}(1/k)\), the bootstrapping blueprint is established. As an aside, if the initial high probability bound \(T_{k}(\delta)\) is a constant (which implies that the SA algorithm has bounded iterates, such as \(Q\)-learning), we only need to apply the bootstrapping argument once to get a maximal high probability bound of order \(\mathcal{O}(1/k)\).
### Related Literature
Due to the wide range of applications, there is an extensive body of related literature on establishing concentration bounds of SA algorithms in the form of SGD, linear SA, and RL algorithms.
#### 1.3.1 Stochastic Gradient Descent
There is a large body of work about exponential high probability bounds for SGD and its variants. In Rakhlin et al. (2012); Hazan and Kale (2014), the authors obtain exponential high probability bounds for non-smooth strongly convex functions, when the noise is conditionally unbiased and the iterates are in a compact set. This was later generalized for the case of sub-Gaussian noise and unbounded iterates in Harvey et al. (2019), making it one of the rare cases where exponential high probability bounds are obtained with unbounded noise. Exponential high probability bounds were also obtained for the ergodic mirror descent (under Markovian, conditionally biased noise with uniformly bounded variance) in Duchi et al. (2012), under the additional assumption that the iterates are in a compact set. More recently, polynomial high probability bounds have been obtained in Lou et al. (2022) for SGD on linear models when the noise is heavy-tailed. Finally, in Telgarsky (2022), the authors analyze mirror descent with constant stepsize, and i.i.d. noise with uniformly bounded variance that is a.s. bounded or sub-Gaussian. By choosing the constant stepsize appropriately, they obtain exponential high probability bounds in this setting.
#### 1.3.2 Linear Stochastic Approximation
For linear SA, the first moment bounds for the \(\ell_{2}\)-norm of the error with constant stepsize were given in Lakshminarayanan and Szepesvari (2018); Srikant and Ying (2019). Based on these, one could obtain high probability bounds, albeit with polynomial tails instead of exponential ones.
To our knowledge, the strongest result on exponential high probability bounds for linear SA is given in Dalal et al. (2018). There, the authors analyze a two-timescale linear SA with decreasing stepsizes, and with multiplicative, a.s. bounded, martingale-difference noise. In this setting, they obtain maximal exponential high probability bounds for all iterates large enough by choosing stepsizes that depend on both the runtime and the probability tolerance level. On the other hand, there is a line of work that focuses primarily on the product of random matrices, and then applies these results to linear SA. In Durmus et al. (2021), the authors consider a linear SA with constant stepsize, where the noise is Markovian and a.s. bounded. In this setting, they develop high probability bounds on the product of random matrices to obtain sub-exponential high probability bounds when the random matrices are almost surely Hurwitz, and polynomial high probability bounds when the random matrices are only Hurwitz in expectation. This was later extended to the case of Polyak-Ruppert averaged iterates in Durmus et al. (2022); Mou et al. (2020).
#### 1.3.3 Reinforcement Learning
In one of the earliest works about exponential high probability bounds in RL (Even-Dar and Mansour, 2003), the authors analyze the synchronous \(Q\)-learning algorithm when the stepsizes are \(\mathcal{O}(k^{-z})\), for \(z\in(1/2,1)\). In this setting, they obtain exponential high probability bounds for all iterates large enough. In A. et al. (2021), the authors consider the LSTD algorithm (which includes a projection step onto a compact set), and obtain exponential high probability bounds for the \(\ell_{2}\)-norm of the error when the stepsizes are \(\mathcal{O}(k^{-1})\). In Dalal et al. (2018), the authors analyze TD(0) with linear function approximation, where the noise is assumed to be i.i.d. instead of Markovian. In that setting, they obtain maximal exponential high-probability bounds for the \(\ell_{2}\)-norm of the error in the last iterate, for iterates beyond some point that is of order \(\log(1/\delta)\). Recently in Li et al. (2021, 2021), the authors analyze the popular \(Q\)-learning algorithm with constant stepsize and uniformly bounded, Markovian, possibly conditionally biased noise. In this setting, given a runtime and a performance guarantee, they obtain exponential high probability bounds at the end of the runtime, provided that it is large enough.
#### 1.3.4 General Stochastic Approximation
For general nonlinear SA under arbitrary norms and decreasing stepsizes, the authors of Chen et al. (2020, 2022) obtain bounds on the second moment of the error. These moment bounds can be used to obtain high probability bounds, albeit without exponential tails.
In Thoppe and Borkar (2019), the authors consider an SA with decreasing stepsizes, and martingale-difference sub-exponential noise. In this setting, they obtain maximal exponential high probability bounds conditioned on the event that the iterates are close enough to the fixed point after some time. In the follow-up work Chandak et al. (2022), they assume that their noise is multiplicative, a.s. bounded, and Markovian. In this setting, they obtain maximal exponential high probability bounds without conditioning on an unknown event. However, their high probability bounds only hold after some time, and both the bound and the probabilities depend on the unknown norm of the iterate after some time (which is random, with unknown distribution).
In a separate line of work, in Qu and Wierman (2020) the authors consider a general SA under the infinity norm, where the noise has an a.s. uniformly bounded martingale-difference part, and a Markovian part that only determines which coordinate of the iterate gets updated. Due to this structured noise, the random operator is a conditionally biased estimator of the original operator. In this setting, assuming that the iterates always remain in a compact set, they obtain exponential high probability bounds. Finally, in Mou et al. (2022), the authors consider a variance-reduced version of the general SA in arbitrary Banach spaces, with constant stepsize and i.i.d., multiplicative, a.s. bounded noise. By appropriately choosing the stepsize and the averaging used to reduce the variance, they obtain exponential high probability bounds for the error.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Paper & Operator & Unbounded iterates & No tuned stepsize & Mult. noise & For any \(k\geq 0\) \\ \hline \hline Li et al. (2021) & \(Q\)-learning & & & & \\ \hline Even-Dar and Mansour (2003) & \(Q\)-learning & & & & \\ \hline Qu and Wierman (2020) & \(Q\)-learning & & & & \\ \hline Duchi et al. (2012) & EMD & & & & \\ \hline A. et al. (2021) & LSTD & & & & \\ \hline Thoppe and Borkar (2019) & Any & & & \(\checkmark\) & \\ \hline Chandak et al. (2022) & Any & & & \(\checkmark\) & \\ \hline Telgarsky (2022) & MD & & & & \\ \hline Mou et al. (2022) & Any & & & & \\ \hline Dalal et al. (2018) & Linear & & & & \\ \hline Rakhlin et al. (2012) & SGD & & & & \\ \hline Dalal et al. (2018) & TD(0) & & & & \\ \hline Durmus et al. (2021) & Linear & & & \(\checkmark\) & \\ \hline Theorem 2.1 & Any & & & & \\ Theorem 2.3 & Any & & & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of previous work on high probability bounds for SA and its special cases. An asterisk indicates that the assumption only holds for constants small enough.
**In summary**, all of the previous high probability bounds for SA in the literature have one or more of these limitations: (1) they force the iterates to belong to a compact set via a projection, or they introduce stringent assumptions on their noise so that their iterates belong a.s. to some compact set; (2) they tune the parameters of the algorithm according to the probability guarantee and total runtime; (3) they do not allow for multiplicative noise; (4) they are only valid for a particular iterate, or for a limited range of iterates, which can depend on the probability guarantee itself. These limitations (and other features) are summarized in Table 1 for previous work.
Organization of the Paper. The rest of the paper is organized as follows. In Section 2, we present our main results on maximal concentration bounds of SA under bounded multiplicative noise and sub-Gaussian additive noise. In Section 3, we present applications of our main results to various RL algorithms. Finally, we conclude this work in Section 4.
## 2 Stochastic Approximation
In this section we present our main results on high probability bounds of contractive SA algorithms. We begin by formally presenting our problem formulation.
Given a deterministic \(x_{0}\in\mathbb{R}^{d}\), we consider the \(d\)-dimensional stochastic iterative algorithm
\[x_{k+1}=x_{k}+\alpha_{k}(F(x_{k},Y_{k})-x_{k}), \tag{2}\]
where \(\{\alpha_{k}\}\) is a sequence of positive stepsizes, \(\{Y_{k}\}\) is a sequence of random variables over an arbitrary set \(\mathcal{Y}\), and \(F:\mathbb{R}^{d}\times\mathcal{Y}\to\mathbb{R}^{d}\) is an operator that is possibly nonlinear.
**Assumption 2.1** (Contraction Mapping).: There exist a constant \(\gamma_{c}\in[0,1)\), a norm \(\|\cdot\|_{c}\), and a random variable \(Y\) over \(\mathcal{Y}\), such that the operator \(\bar{F}:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\) defined as \(\bar{F}(\cdot)=\mathbb{E}[F(\cdot,Y)]\) satisfies
\[\|\bar{F}(x_{1})-\bar{F}(x_{2})\|_{c}\leq\gamma_{c}\|x_{1}-x_{2}\|_{c},\quad \forall\;x_{1},x_{2}\in\mathbb{R}^{d}.\]
By using Banach fixed-point theorem (Banach, 1922), Assumption 2.1 implies that \(\bar{F}(x)=x\) has a unique solution, which we denote by \(x^{*}\). Our results also hold when \(\bar{F}(\cdot)\) is a pseudo-contractive operator, i.e., \(\|\bar{F}(x)-x^{*}\|_{c}\leq\gamma_{c}\|x-x^{*}\|_{c}\) for all \(x\in\mathbb{R}^{d}\)(Bertsekas and Tsitsiklis, 1996). However, in this case, the existence of \(x^{*}\) needs to be assumed. Note that a contraction mapping is always a pseudo-contraction mapping.
**Assumption 2.2** (Unbiased Perturbation).: It holds that \(\mathbb{E}[F(x_{k},Y_{k})\mid\mathcal{F}_{k}]=\bar{F}(x_{k})\) a.s. for all \(k\geq 0\), where \(\mathcal{F}_{k}\) is the \(\sigma\)-algebra generated by \(\{Y_{0},Y_{1},\cdots,Y_{k-1}\}\).
A special case where Assumption 2.2 is satisfied is when \(\{Y_{k}\}\) is a sequence of i.i.d. random variables. In fact, Assumption 2.2 can be relaxed to \(\|\mathbb{E}[F(x_{k},Y_{k})\mid\mathcal{F}_{k}]-\bar{F}(x_{k})\|_{c}\leq L_{1}\left\|x_{k}-x^{*}\right\|_{c}\) a.s. for all \(k\geq 0\), for some small enough constant \(L_{1}\). For SA with general biased perturbation (a popular example of which is when \(\{Y_{k}\}\) is an ergodic Markov chain), establishing high probability bounds is a future direction. That being said, existing results that allow biased perturbation all require \(\{x_{k}\}\) to be bounded a.s. by a deterministic constant, such as \(Q\)-learning and ergodic mirror descent. See Table 1 for more details.
### Stochastic Approximation with Multiplicative Noise
We consider \(\{x_{k}\}\) generated by Algorithm (2). The following assumption explains what we mean by multiplicative noise.
**Assumption 2.3** (Multiplicative Noise).: There exists \(\sigma>0\) such that \(\|F(x_{k},Y_{k})-\bar{F}(x_{k})\|_{c}\leq\sigma(1+\|x_{k}\|_{c})\) a.s. for all \(k\geq 0\).
One special case where Assumption 2.3 is satisfied is when the operator \(F(x,y)\) is Lipschitz continuous in \(x\), which is formally stated in the following.
**Assumption 2.3\({}^{\prime}\)** (Lipschitz Continuity).: There exists \(L_{c}>0\) such that \(\|F(x_{1},y)-F(x_{2},y)\|_{c}\leq L_{c}\|x_{1}-x_{2}\|_{c}\) for all \(x_{1},x_{2}\in\mathbb{R}^{d}\) and \(y\in\mathcal{Y}\), and \(\sup_{y\in\mathcal{Y}}\|F(\mathbf{0},y)\|_{c}<\infty\).
To see the implication, under Assumption 2.3\({}^{\prime}\), we have by triangle inequality that
\[\|F(x_{k},Y_{k})\|_{c} \leq\|F(x_{k},Y_{k})-F(\mathbf{0},Y_{k})\|_{c}+\|F(\mathbf{0},Y_{ k})\|_{c}\] \[\leq L_{c}\|x_{k}\|_{c}+\sup_{y\in\mathcal{Y}}\|F(\mathbf{0},y)\| _{c}\] \[\leq\sigma(1+\|x_{k}\|_{c}), \tag{3}\]
where \(\sigma:=\max(L_{c},\sup_{y\in\mathcal{Y}}\|F(\mathbf{0},y)\|_{c})<\infty\). In addition, Jensen's inequality implies that
\[\|\bar{F}(x_{k})\|_{c}=\|\mathbb{E}[F(x_{k},Y)]\|_{c}\leq\mathbb{E}[\|F(x_{k},Y)\|_{c}]\leq\sigma(1+\|x_{k}\|_{c}). \tag{4}\]
Assumption 2.3 then follows from combining Eqs. (3) and (4) with triangle inequality.
Note that Assumption 2.3\({}^{\prime}\) is automatically satisfied when the SA update equation is linear, i.e., \(x_{k+1}=x_{k}+\alpha_{k}(A(Y_{k})x_{k}-b(Y_{k}))\), where \(A(\cdot)\) and \(b(\cdot)\) are bounded functions. The terminology "multiplicative noise" is in fact inspired by linear SA. In addition, variants of nonlinear SA algorithms also satisfy Assumption 2.3\({}^{\prime}\). A typical example is the celebrated \(Q\)-learning algorithm for solving the RL problem, which will be studied in Section 3.3.
For SA with multiplicative noise, while the iterates \(\{x_{k}\}\) may not be uniformly bounded by a constant, which is the major challenge in the analysis, they in fact admit time-varying almost sure bounds. This is presented in the following proposition, and serves as the first step in establishing our maximal concentration bounds. Let \(D=\sigma+\gamma_{c}-1\), where \(\sigma\) is from Assumption 2.3 and \(\gamma_{c}\) is the contraction factor defined in Assumption 2.1. We consider using linearly diminishing stepsizes of the form \(\alpha_{k}=\alpha/(k+h)\), where \(\alpha,h>0\).
**Proposition 2.1** (Proof in Appendix A.5.1).: _Consider \(\{x_{k}\}\) generated by Algorithm (2). Suppose that Assumption 2.1 and Assumption 2.3 are satisfied, and \(\alpha_{k}=\alpha/(k+h)\) for all \(k\geq 0\), where \(\alpha,h>0\) are constants. Then we have \(\|x_{k}-x^{*}\|_{c}\leq B_{k}(D)\) a.s. for all \(k\geq 0\), where_
\[B_{k}(D)=\begin{cases}\left(\frac{k-1+h}{h-1}\right)^{\alpha D} \left(\|x_{0}-x^{*}\|_{c}+\frac{\sigma(1+\|x^{*}\|_{c})}{D}\right)-\frac{ \sigma(1+\|x^{*}\|_{c})}{D},&D>0,\\ \|x_{0}-x^{*}\|_{c}+\sigma(1+\|x^{*}\|_{c})\alpha\log\left(\frac{k-1+h}{h-1} \right),&D=0,\\ \|x_{0}-x^{*}\|_{c}-\frac{\sigma(1+\|x^{*}\|_{c})}{D},&D<0.\end{cases}\]
Intuitively, the parameter \(\gamma_{c}\) captures the contraction effect of the expected operator and the parameter \(\sigma\) captures the expansive effect of the noise. The combined effect is captured by the parameter \(D=\sigma+\gamma_{c}-1\). Proposition 2.1 states that \(\|x_{k}\|_{c}\) is uniformly bounded by a deterministic constant when \(D<0\), grows at most logarithmically when \(D=0\), and can grow at a polynomial rate \(\mathcal{O}(k^{\alpha D})\) when \(D>0\). Note that in the case of \(D<0\), while it appears that we have multiplicative noise, since Proposition 2.1 together with Assumption 2.3 implies that \(\|F(x_{k},Y_{k})-\bar{F}(x_{k})\|_{c}\) is uniformly bounded by a deterministic constant, we in fact only have bounded additive noise. In the rest of this section, we will just focus on the case where \(D\geq 0\).
For \(D<0\), the result is a special case of SA with sub-Gaussian additive noise, which will be studied in Section 2.3.
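As an illustration (our own helper function, not from the paper), the bound \(B_{k}(D)\) of Proposition 2.1 can be evaluated directly from its three-case definition; here `x0_err` stands for \(\|x_{0}-x^{*}\|_{c}\) and `xstar_norm` for \(\|x^{*}\|_{c}\):

```python
import math

def B_k(k, D, sigma, alpha, h, x0_err, xstar_norm):
    """Almost-sure bound of Proposition 2.1 (requires h > 1)."""
    c = sigma * (1.0 + xstar_norm)
    ratio = (k - 1 + h) / (h - 1)
    if D > 0:
        return ratio ** (alpha * D) * (x0_err + c / D) - c / D
    elif D == 0:
        return x0_err + c * alpha * math.log(ratio)
    else:  # D < 0: a constant (k-independent) bound
        return x0_err - c / D

# With D > 0 the bound grows polynomially, roughly like k ** (alpha * D):
print(B_k(1000, 0.5, sigma=1.0, alpha=2.0, h=2.0, x0_err=1.0, xstar_norm=0.0))
```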
We next state our main result. Recall that we use \(\alpha_{k}=\alpha/(k+h)\) as our stepsize. For ease of exposition, we assume that \(2\alpha D\) is an integer, which is in fact without loss of generality because \(D=\sigma+\gamma_{c}-1\) and if Assumption 2.3 holds with some \(\sigma>0\) it also holds for any \(\sigma^{\prime}>\sigma\). Let \(m=2\alpha D+1\). The parameters \(\{c_{i}\}_{1\leq i\leq 4}\), \(c_{1}^{\prime}\), and \(D_{0}\in(0,1)\) used to present the following theorem are constants (explicit expressions in Appendix A).
**Theorem 2.1** (Proof in Appendix A).: _Consider \(\{x_{k}\}\) generated by Algorithm (2). Suppose that Assumptions 2.1 and 2.2 are satisfied. Then we have the following results._
1. _When_ \(D=0\)_,_ \(\alpha>2/D_{0}\)_, and_ \(h\) _is appropriately chosen, for any_ \(\delta>0\) _and_ \(K\geq 0\)_, with probability at least_ \(1-\delta\)_, we have for all_ \(k\geq K\) _that_ \[\|x_{k}-x^{*}\|_{c}^{2} \leq\frac{c_{1}^{\prime}\alpha\|x_{0}-x^{*}\|_{c}^{2}}{k+h}\left[ \log\left(\frac{k-1+h}{h-1}\right)\right]^{2}\] \[\quad\times\left[\log\left(\frac{1}{\delta}\right)+c_{2}\left( \frac{h}{K+h}\right)^{\alpha D_{0}/2-1}+c_{3}+c_{4}\log\left(\frac{k-1+h}{K-1 +h}\right)\right].\]
2. _When_ \(D>0\)_,_ \(\alpha>2/D_{0}\)_, and_ \(h\) _is appropriately chosen (explicit requirement in Appendix_ A_), for any_ \(\delta>0\) _and_ \(K\geq 0\)_, with probability at least_ \(1-\delta\)_, we have for all_ \(k\geq K\) _that_ \[\|x_{k}-x^{*}\|_{c}^{2} \leq\frac{c_{1}\alpha\|x_{0}-x^{*}\|_{c}^{2}}{k+h}\left[\log \left(\frac{m}{\delta}\right)+c_{2}+c_{3}+c_{4}\log\left(\frac{k-1+h}{h-1} \right)\right]^{m-1}\] \[\quad\times\left[\log\left(\frac{m}{\delta}\right)+c_{2}\left( \frac{h}{K+h}\right)^{\alpha D_{0}/2-1}+c_{3}+c_{4}\log\left(\frac{k-1+h}{K-1 +h}\right)\right].\]
Theorem 2.1 (1) states that when \(D=0\), the squared error \(\|x_{k}-x^{*}\|_{c}^{2}\) enjoys an \(\tilde{\mathcal{O}}(1/k)\) rate of convergence with sub-Gaussian tail. The case where \(D>0\) (cf. Theorem 2.1 (2)) is more complicated. In this case, the tail depends on the parameter \(m\). Since \(m=2\alpha D+1\) and \(D>0\), we in general only have super-polynomial tail. The fact that \(m\) is affine in \(D\) makes intuitive sense as larger \(D\) implies noisier updates, which in turn implies heavier tail.
In terms of the dependency on \(k\) and \(K\), Theorem 2.1 (2) states that, with probability at least \(1-\delta\), all the iterates lie in a cone that starts with radius \(\tilde{\Theta}((1+\log^{m/2}(1/\delta))K^{-1/2})\) when \(k=K\geq 0\), and then (for all \(k>K\)) its radius is of order \(\tilde{\Theta}((\log^{m/2}(1/\delta)+\log^{1/2}(k/K))k^{-1/2})\). Moreover, for small values of \(k\), this bound can be tightened by an a.s. bound that is polynomial in \(k\) (cf. Proposition 2.1). This is depicted in Figure 1. Note that, as a function of \(k\), the initial radius is always of order at most \(K^{-1}\), matching the rate obtained for the mean-square error in Chen et al. (2021). On the other hand, the radius decays at only a slightly slower rate than the initial radius as a function of \(k\).
Maximal concentration bounds immediately imply concentration bounds for a fixed iteration number (cf. Corollary 2.1.1), which in turn gives the full tail bound (cf. Corollary 2.1.2) and the sample complexity result (cf. Corollary 2.1.3). For ease of exposition, we here only present the results when \(D>0\). The case where \(D=0\) can be derived in a straightforward manner.
**Corollary 2.1.1** (Fixed-Time Concentration).: _Suppose that \(D>0\). Under the same assumptions in Theorem 2.1 (2), for any \(\delta>0\) and \(k\geq 0\), we have with probability at least \(1-\delta\) that_
\[\|x_{k}-x^{*}\|_{c}^{2}\leq\frac{c_{1}\alpha\|x_{0}-x^{*}\|_{c}^{2}}{k+h} \left[\log\left(\frac{m}{\delta}\right)+c_{2}+c_{3}+c_{4}\log\left(\frac{k-1+h}{ h-1}\right)\right]^{m}.\]
Corollary 2.1.1 follows by setting \(K=k\) in Theorem 2.1 (2).
**Corollary 2.1.2** (Full Tail Bound).: _Suppose that \(D>0\). Under the same assumptions in Theorem 2.1 (2), there exists \(C_{1}>0\) such that the following inequality holds for all \(\epsilon>0\) and \(k\geq 0\):_
\[\mathbb{P}\left(\frac{\sqrt{k+h}\;\|x_{k}-x^{*}\|_{c}}{(\log(k))^{m/2}}> \epsilon\right)<m\exp\left(-C_{1}\epsilon^{2/m}\right).\]
Corollary 2.1.2 is a direct implication of Corollary 2.1.1, and provides an upper bound for the whole complementary cumulative distribution function (CDF) of the error \(\|x_{k}-x^{*}\|_{c}\) for any iterate \(k\geq 0\), which can be integrated to obtain bounds for any moment of the error at any point in time.
**Corollary 2.1.3**.: _Given \(\epsilon>0\), to achieve \(\|x_{k}-x^{*}\|_{c}\leq\epsilon\) with probability at least \(1-\delta\), the sample complexity is \(\tilde{\mathcal{O}}((1+\log^{m}(1/\delta))\epsilon^{-2})\)._
As we see from Corollary 2.1.3, the sample complexity dependency on \(\epsilon\) is \(\tilde{\mathcal{O}}(\epsilon^{-2})\), which is known to be optimal (up to a logarithmic factor). In addition, we have super-polynomial tail as \(\delta\) appears as \(\log^{m}(1/\delta)\) in the bound.
**Removing the Logarithmic Factors.** In view of Theorem 2.1 (2), the bound involves a product of logarithmic terms (i.e., \(\mathcal{O}([\log(k)]^{m-1})\)). It is possible to remove them at the cost of slightly compromising the tail. The result is presented in the following, where \(c_{1}^{\prime\prime}\) is a constant (explicit expressions in Appendix A.4).
**Theorem 2.1\({}^{\prime}\)** (Proof in Appendix A.4).: _Under the same conditions in Theorem 2.1 (2), for any \(\delta\in(0,1)\) and \(K\geq 0\), with probability at least \(1-\delta\), we have for all \(k\geq K\) that_
\[\|x_{k}-x^{*}\|_{c}^{2} \leq c_{1}^{\prime\prime}\alpha_{k}\|x_{0}-x^{*}\|_{c}^{2}\left[ \left(\log\left(\frac{m+1}{\delta}\right)\right)^{m}+1\right]\] \[\quad\times\left[\log\left(\frac{m+1}{\delta}\right)+c_{2}\left( \frac{h}{K+h}\right)^{\alpha D_{0}/2-1}+c_{3}+c_{4}\log\left(\frac{k-1+h}{K-1+ h}\right)\right].\]
Note that Theorem 2.1\({}^{\prime}\) implies that there exists \(C_{1}^{\prime}>0\) such that
\[\mathbb{P}(\sqrt{k+h}\;\|x_{k}-x^{*}\|_{c}>\epsilon)<(m+1)\exp\left(-C_{1}^{ \prime}\epsilon^{2/(m+1)}\right),\quad\forall\;k\geq 0,\epsilon>0.\]
Compared with Corollary 2.1.2 of Theorem 2.1, we see that the rate is improved by a logarithmic factor but the tail is heavier.
Figure 1: For \(D>0\), all the iterates lie in the blue shaded area with probability at least \(1-\delta\).
### An Impossibility Result on the Tail Decay Rate
Theorem 2.1 shows that SA with multiplicative noise is able to achieve an \(\tilde{\mathcal{O}}(1/k)\) rate of convergence with a super-polynomial tail. One may ask if sub-Gaussian (or sub-exponential) tail is achievable. In this section, we show that it is impossible to obtain a general sub-exponential tail bound whenever we only obtain a super-polynomial one.
**Example Setup.** Let \(a\in(0,1)\) and \(N\geq 1\), and let \(\{Y_{k}\}\) be an i.i.d. sequence of real-valued random variables such that \(\mathbb{P}\left(Y_{k}=a+N\right)=1/(N+1)\) and \(\mathbb{P}\left(Y_{k}=a-1\right)=N/(N+1)\). Let \(F:\mathbb{R}^{2}\mapsto\mathbb{R}\) be an operator defined as
\[F(x,y)=yx,\quad\forall\ x,y\in\mathbb{R}.\]
Consider the \(1\)-dimensional stochastic iterative algorithm
\[x_{0}>0,\quad x_{k+1}=x_{k}+\alpha_{k}(F(x_{k},Y_{k})-x_{k}), \tag{5}\]
where \(\alpha_{k}>0\) is the stepsize. It can be easily verified that (1) \(\bar{F}(x):=\mathbb{E}[F(x,Y_{0})]\) is a contraction mapping with contraction factor \(\gamma_{c}=a\in(0,1)\), (2) \(\mathbb{E}[F(x_{k},Y_{k})\mid\mathcal{F}_{k}]=\bar{F}(x_{k})\) for all \(k\geq 0\), and (3) \(|F(x_{k},Y_{k})-\bar{F}(x_{k})|\leq N(|x_{k}|+1)\) for all \(k\geq 0\). Note that \(D=a+N-1\) in this example. Since all assumptions needed to apply Theorem 2.1 (and also Theorem 2.1\({}^{\prime}\)) are satisfied, we have the following result.
**Proposition 2.2**.: _Consider \(\{x_{k}\}\) generated by Algorithm (5). Suppose that \(\alpha_{k}=\alpha/(k+h)\), where \(\alpha>1/(1-a)\) and \(h\) is large enough so that \(\alpha_{0}<1/2\). Then, there exist \(K_{1},K_{2},\bar{K}_{1},\bar{K}_{2}>0\) such that the following inequalities hold for all \(\epsilon>0\) and \(k\geq 0\):_
\[\mathbb{P}\left(\frac{\sqrt{k+h}\ x_{k}}{(\log(k))^{m_{e}/2}}> \epsilon\right) <K_{1}\exp\left(-K_{2}\epsilon^{\frac{2}{m_{e}}}\right), \tag{6}\] \[\mathbb{P}\left(\sqrt{k+h}\ x_{k}>\epsilon\right) <\bar{K}_{1}\exp\left(-\bar{K}_{2}\epsilon^{\frac{2}{m_{e}+1}} \right). \tag{7}\]
_where \(m_{e}=\lceil 2\alpha D\rceil+1\)._
_Remark_.: Since Algorithm (5) starts with a positive \(x_{0}\), by ensuring \(\alpha_{0}<1/2\) we have \(x_{k}>0\) for all \(k\geq 0\). Moreover, it is easy to see that \(x^{*}=0\) in this example.
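The following sketch (ours, purely illustrative) simulates many paths of Algorithm (5) with the two-point noise above, which makes it easy to inspect the empirical distribution of \(\sqrt{k+h}\,x_{k}\) and its heavy upper tail numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
a, N = 0.5, 2.0                    # gamma_c = a, noise magnitude N
alpha, h = 3.0, 10.0               # alpha > 1/(1 - a) and alpha_0 = 0.3 < 1/2
n_paths, n_iters = 100_000, 500

x = np.ones(n_paths)               # x_0 > 0 for every path
for k in range(n_iters):
    # Y = a + N w.p. 1/(N+1), Y = a - 1 w.p. N/(N+1); E[Y] = a
    y = np.where(rng.random(n_paths) < 1.0 / (N + 1), a + N, a - 1.0)
    x = x + (alpha / (k + h)) * (y * x - x)      # update (5)

scaled = np.sqrt(n_iters + h) * x
print(np.quantile(scaled, [0.5, 0.99, 0.9999]))  # tail quantiles of sqrt(k+h) x_k
```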
We next investigate the lower bound of Algorithm (5) through the following theorem.
**Theorem 2.2** (Proof in Appendix B.1).: _Consider \(\{x_{k}\}\) generated by Algorithm (5). Suppose that \(\alpha_{k}=\alpha/(k+h)^{z}\), where \(z\in(0,1]\), \(\alpha>0\), and \(h\) is chosen such that \(\alpha_{0}<1/2\)._
1. _When_ \(z=1\)_, for any_ \(\tilde{\beta}>2/(1+2\alpha D)\)_, we have_ \[\liminf_{k\to\infty}\mathbb{E}\left[\exp\left(\lambda\left[(k+h)^{1/2}x_{k} \right]^{\tilde{\beta}}\right)\right]=\infty,\quad\forall\ \lambda>0.\] _As a result, there do not exist_ \(K_{1}^{\prime},K_{2}^{\prime}>0\) _such that_ \(\mathbb{P}\left((k+h)^{1/2}\ x_{k}\geq\epsilon\right)\ \leq K_{1}^{\prime}\exp\left(-K_{2} ^{\prime}\epsilon^{\tilde{\beta}}\right)\) _for any_ \(\epsilon>0\) _and_ \(k\geq 0\)_._
2. _When_ \(z\in(0,1)\)_, for any_ \(\tilde{\beta},\tilde{\beta}^{\prime}>0\)_, we have_ \[\liminf_{k\to\infty}\mathbb{E}\left[\exp\left(\lambda(k+h)^{\tilde{\beta}^{ \prime}}x_{k}^{\tilde{\beta}}\right)\right]=\infty,\quad\forall\ \lambda>0.\] _As a result, there do not exist_ \(\bar{K}_{1}^{\prime},\bar{K}_{2}^{\prime}>0\) _such that_ \(\mathbb{P}\left((k+h)^{\tilde{\beta}^{\prime}/\tilde{\beta}}\ x_{k}\geq\epsilon\right) \ \leq\bar{K}_{1}^{\prime}\exp\left(-\bar{K}_{2}^{\prime}\epsilon^{\tilde{\beta}}\right)\) _for any_ \(\epsilon>0\) _and_ \(k\geq 0\)_._
Since \(\tilde{\beta}>2/(1+2\alpha D)\geq 2/(1+\lceil 2\alpha D\rceil)=2/m_{e}\), Theorem 2.2 (1) implies that our concentration bound is almost tight in the sense that it has either the best tail decay rate (at least when \(2\alpha D\) is an integer, cf. Figure 2) but with a slightly worse decay rate in \(k\) (cf. Eq. (6)), or it has the right decay rate in \(k\) but with a slightly compromised tail decay rate (cf. Eq. (7)). In particular, this means that we obtain a sub-exponential tail upper bound whenever such bound is achievable (that is, when \(D\leq 1/2\alpha\)).
Note that, as an aside, Theorem 2.2 (2) implies that not even super-polynomial tail bounds are possible when \(\alpha_{k}=\alpha/(k+h)^{z}\) (with \(z\in(0,1)\)), for any polynomial rate of convergence.
### Stochastic Approximation with Sub-Gaussian Additive Noise
In this section, we also consider \(\{x_{k}\}\) generated by Algorithm (2), but with additive sub-Gaussian noise. We begin by stating our assumption.
**Assumption 2.4**.: There exist \(\bar{\sigma}>0\) and a (possibly) dimension-dependent constant \(c_{d}>0\) such that the following two inequalities hold for any \(\mathcal{F}_{k}\)-measurable random vector \(v\) and \(k\geq 0\):
\[\mathbb{E}\left[\exp\left(\lambda\langle F(x_{k},Y_{k})-\mathbb{E }\left[F(x_{k},Y_{k})|\mathcal{F}_{k}\right],v\rangle\right)\right|\mathcal{F} _{k}\right] \leq\exp\left(\lambda^{2}\bar{\sigma}^{2}\|v\|_{c,*}^{2}/2\right),\ \forall\ \lambda>0, \tag{8}\] \[\mathbb{E}\left[\exp\left(\lambda\left\|F(x_{k},Y_{k})-\mathbb{E }\left[F(x_{k},Y_{k})|\mathcal{F}_{k}\right]\right\|_{c}^{2}\right)\right| \mathcal{F}_{k}\right] \leq\left(1-2\lambda\bar{\sigma}^{2}\right)^{-c_{d}/2},\ \forall\ \lambda\in\left(0,1/2\bar{\sigma}^{2}\right), \tag{9}\]
where \(\|\cdot\|_{c,*}\) is the dual norm of the contraction norm \(\|\cdot\|_{c}\).
Assumption 2.4 can be viewed as a generalization of the standard definition of a random vector being norm sub-Gaussian (Jin et al., 2019) to the case where we use an arbitrary norm \(\|\cdot\|_{c}\) instead of \(\|\cdot\|_{2}\). In fact, when \(\|\cdot\|_{c}=\|\cdot\|_{2}\) and \(c_{d}=d\), Eqs. (8) and (9) are exactly the equivalent definitions of sub-Gaussian random vectors (Jin et al., 2019; Wainwright, 2019). Since we use an arbitrary norm, we allow for a (possibly) different dimension-dependent constant \(c_{d}\). One special case where Assumptions 2.1, 2.2, and 2.4 are satisfied is when the noise \(Y_{k}\) is purely additive, and is either a martingale-difference sequence or an i.i.d. mean zero sequence with sub-Gaussian tail.
We now state the main result of this section. Unlike SA with multiplicative noise, we allow for polynomially decaying stepsizes \(\alpha_{k}=\alpha/(k+h)^{z}\), where \(z\in(0,1]\) and \(\alpha,h>0\) are appropriately chosen constants. The parameters \(\{\bar{c}_{i}\}_{1\leq i\leq 5}\) and \(\bar{D}_{1}\in(0,1)\) used in stating the following theorem are also constants. The explicit requirement on \(\alpha\) and \(h\), and the expressions of \(\{\bar{c}_{i}\}_{1\leq i\leq 5}\) and \(\bar{D}_{1}\) are presented in Appendix C.
**Theorem 2.3** (Proof in Appendix C).: _Consider \(\{x_{k}\}\) generated by Algorithm (2). Suppose that Assumptions 2.1, 2.2, and 2.4 are satisfied. Then we have the following results._
Figure 2: Best tail exponent in Proposition 2.2 (black) vs. upper bound on best possible tail exponent given by Theorem 2.2 (1)
1. _When_ \(z=1\) _and_ \(\alpha>2/\bar{D}_{1}\)_, for any_ \(\delta>0\) _and_ \(K\geq 0\)_, we have with probability at least_ \(1-\delta\) _that the following inequality holds for all_ \(k\geq K\)_:_ \[\|x_{k}-x^{*}\|_{c}^{2}\leq\frac{\bar{c}_{1}\log(1/\delta)}{k+h}+\bar{c}_{2}\|x _{0}-x^{*}\|_{c}^{2}\left(\frac{h}{k+h}\right)^{\bar{D}_{1}\alpha/2}+\frac{ \bar{c}_{3}+\bar{c}_{4}\log((k+1)/K^{1/2})}{k+h}.\]
2. _When_ \(z\in(0,1)\)_,_ \(\alpha>0\)_, and_ \(h\geq(\frac{4z}{\bar{D}_{1}\alpha})^{1/(1-z)}\)_, for any_ \(\delta>0\) _and_ \(K\geq 0\)_, we have with probability at least_ \(1-\delta\) _that the following inequality holds for all_ \(k\geq K\)_:_ \[\|x_{k}-x^{*}\|_{c}^{2} \leq\frac{\bar{c}_{1}\log(1/\delta)}{(k+h)^{z}}+\bar{c}_{2}\|x_{0}-x^{*}\|_{c}^{2}\exp\left(-\frac{\bar{D}_{1}\alpha}{2(1-z)}((k+h)^{1-z}-h^{1-z})\right)\] \[\quad+\frac{\bar{c}_{5}+\bar{c}_{4}\log((k+1)/K^{1/2})}{(k+h)^{z}}.\]
We will discuss the implications of Theorem 2.3 in terms of its dependence on \(\delta\), \(K\), and \(k\). First of all, since the probability tolerance level \(\delta\) appears as \(\log(1/\delta)\) in the norm-square bound, the norm error \(\|x_{k}-x^{*}\|_{c}\) has a sub-Gaussian tail. As for the dependence on \(K\) and \(k\), Theorem 2.3 implies that, with probability at least \(1-\delta\), all the iterates lie in a cone with a radius of order
\[\tilde{\Theta}\left(\sqrt{(\log(1/\delta)+\log(k/K^{1/2}))k^{-z}}\right)\]
for all \(k\geq K\).
As a side note, observe that when \(z=1\), the conditions on the stepsizes in Theorem 2.3 imply that \(\alpha\) must be bounded away from zero (\(\alpha>2/\bar{D}_{1}\) to be precise), with this bound being independent from \(h\). However, when \(z<1\), then \(\alpha\) only needs to be positive. This coincides with what was observed in the literature studying the mean-square error (Chen et al., 2020; Bhandari et al., 2018).
As a byproduct of the proof of Theorem 2.3, we obtain a fixed-time concentration bound of order \(\Theta(\sqrt{k^{-z}})\), matching the rate obtained for fixed-time mean-square error bounds in existing literature (Chen et al., 2020; Srikant and Ying, 2019).
**Proposition 2.3** (Fixed-Time Concentration).: _Under the same assumptions in Theorem 2.3, we have the following results._
1. _When_ \(\alpha_{k}=\alpha/(k+h)\) _with_ \(\alpha>2/\bar{D}_{1}\)_, for any_ \(\delta\in(0,1]\)_, we have_ \[\mathbb{P}\left(\|x_{k}-x^{*}\|_{c}^{2}\leq\frac{\bar{c}_{1}\log(1/\delta)}{ k+h}+\frac{\bar{c}_{2}\|x_{0}-x^{*}\|_{c}^{2}h^{\bar{D}_{1}\alpha/2}}{(k+h)^{ \bar{D}_{1}\alpha/2}}+\frac{\bar{c}_{3}+\bar{c}_{4}}{k+h}\right)\geq 1-\delta.\]
2. _When_ \(\alpha_{k}=\alpha/(k+h)^{z}\) _with_ \(z\in(0,1)\)_, for any_ \(\delta\in(0,1]\)_, we have_ \[\mathbb{P}\left(\|x_{k}-x^{*}\|_{c}^{2}\leq\frac{\bar{c}_{1}\log(1/\delta)}{(k+h)^{z}}+\bar{c}_{2}\|x_{0}-x^{*}\|_{c}^{2}e^{-\frac{\bar{D}_{1}\alpha}{2(1-z)}((k+h)^{1-z}-h^{1-z})}+\frac{\bar{c}_{4}+\bar{c}_{5}}{(k+h)^{z}}\right)\geq 1-\delta.\]
The fixed-time concentration bound is equivalent to the full tail bound presented below.
**Corollary 2.3.1** (Full Tail Bound).: _Under the same assumptions in Theorem 2.3, we have the following results._
_(1) When_ \(\alpha_{k}=\alpha/(k+h)\) _with_ \(\alpha>2/\bar{D}_{1}\)_, for any_ \(k\geq 0\)_, we have for any_ \(\epsilon>0\) _that_
\[\mathbb{P}\left(\|x_{k}-x^{*}\|_{c}>\epsilon\right)\leq\exp\left(-\frac{k+h}{ \bar{c}_{1}}\left(\epsilon^{2}-\bar{c}_{2}\|x_{0}-x^{*}\|_{c}^{2}\left(\frac{h} {k+h}\right)^{\bar{D}_{1}\alpha/2}-\frac{\bar{c}_{3}+\bar{c}_{4}}{k+h}\right) \right).\]
_(2) When \(\alpha_{k}=\alpha/(k+h)^{z}\) with \(z\in(0,1)\) and \(h\geq(\frac{4z}{\bar{D}_{1}\alpha})^{\frac{1}{1-z}}\), for any \(k\geq 0\), we have for any \(\epsilon>0\) that_

\[\mathbb{P}\left(\|x_{k}-x^{*}\|_{c}>\epsilon\right)\leq\exp\left(-\frac{(k+h)^{z}}{\bar{c}_{1}}\left(\epsilon^{2}-\bar{c}_{2}\|x_{0}-x^{*}\|_{c}^{2}e^{-\frac{\bar{D}_{1}\alpha}{2(1-z)}((k+h)^{1-z}-h^{1-z})}-\frac{\bar{c}_{4}+\bar{c}_{5}}{(k+h)^{z}}\right)\right).\]
The proof of Corollary 2.3.1 follows by representing the probability tolerance level \(\delta\) from Proposition 2.3 as a function of the accuracy tolerance level \(\epsilon\). Observe that Corollary 2.3.1 provides a sub-Gaussian upper bound for the whole complementary CDF of the error \(\|x_{k}-x^{*}\|_{c}\), for any \(k\geq 0\). Therefore, we can use the formula \(\mathbb{E}[\|x_{k}-x^{*}\|_{c}^{r}]=\int_{0}^{\infty}\mathbb{P}(\|x_{k}-x^{*} \|_{c}^{r}>x)dx\) (where \(r\) is a positive integer) to integrate this bound to obtain bounds for any moment of the error at any point in time.
### Linear Stochastic Approximation
One special case of SA is linear SA presented below:
\[x_{k+1}=x_{k}+\alpha_{k}(A(Y_{k})x_{k}-b(Y_{k})), \tag{10}\]
where \(\{Y_{k}\}\) (taking values in \(\mathcal{Y}\)) is a sequence of i.i.d. random variables with distribution \(\nu\), and \(A:\mathcal{Y}\mapsto\mathbb{R}^{d\times d}\) and \(b:\mathcal{Y}\mapsto\mathbb{R}^{d}\) are deterministic functions. Linear SA has wide applications, such as solving least-squares problems and TD-learning in RL (Bertsekas and Tsitsiklis, 1996). In this section, we formally show that linear SA can be equivalently remodeled as a contractive SA in the form of Algorithm (2) with multiplicative noise. As a result, Theorem 2.1 is applicable for us to establish super-polynomial high probability bounds of linear SA.
To study Algorithm (10), we impose the following two assumptions.
**Assumption 2.5**.: \(A_{\max}:=\sup_{y\in\mathcal{Y}}\|A(y)\|_{2}<\infty\) and \(b_{\max}:=\sup_{y\in\mathcal{Y}}\|b(y)\|_{2}<\infty\).
Assumption 2.5 is widely used in studying the asymptotic convergence (Bertsekas and Tsitsiklis, 1996) or finite-sample mean-square bounds (Srikant and Ying, 2019) of linear SA.
**Assumption 2.6**.: The matrix \(\bar{A}=\mathbb{E}_{Y\sim\nu}[A(Y)]\) is Hurwitz, i.e., the eigenvalues of \(\bar{A}\) have strictly negative real parts.
Assumption 2.6 is usually imposed to ensure the stability of Algorithm (10) (Srikant and Ying, 2019). In fact, consider the ODE associated with Algorithm (10):
\[\dot{x}(t)=\bar{A}x(t)-\bar{b}, \tag{11}\]
where \(\bar{b}:=\mathbb{E}_{Y\sim\nu}[b(Y)]\)(Borkar, 2009). When \(\bar{A}\) is Hurwitz, the following Lyapunov equation
\[\bar{A}^{\top}P+P\bar{A}+I_{d}=0\]
has a unique positive definite solution (Khalil and Grizzle, 2002), denoted by \(\bar{P}\). It follows that the unique equilibrium point \(x^{*}=\bar{A}^{-1}\bar{b}\) of ODE (11) is exponentially stable (Haddad and Chellaboina, 2011), which in turn guarantees the asymptotic convergence of Algorithm (10) via the ODE method (Borkar, 2009).
We next reformulate Algorithm (10) in the form of Algorithm (2). Let \(F_{\beta}:\mathcal{Y}\times\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\) be defined as
\[F_{\beta}(x,y)=\beta A(y)x-\beta b(y)+x,\quad\forall\ x\in\mathbb{R}^{d},y\in \mathcal{Y},\]
where \(\beta=\frac{1}{2}\lambda_{\max}^{-1}(\bar{A}^{\top}\bar{P}\bar{A})\). In this work, \(\lambda_{\max}(\cdot)\) (respectively, \(\lambda_{\min}(\cdot)\)) returns the largest (respectively, smallest) eigenvalue of a symmetric matrix. Then Algorithm (10) can be equivalently written as
\[x_{k+1}=x_{k}+\frac{\alpha_{k}}{\beta}(F_{\beta}(x_{k},Y_{k})-x_{k}),\]
which is in the same form of Algorithm (2) because we can absorb \(\beta\) into the stepsize. We next show that Assumptions 2.1, 2.2, and 2.3 are satisfied in the context of linear SA. Let \(\|\cdot\|_{\bar{P}}\) be a norm defined as \(\|x\|_{\bar{P}}=(x^{\top}\bar{P}x)^{1/2}\) for all \(x\in\mathbb{R}^{d}\).
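To make the remodeling concrete, here is a small numerical sketch (our own; `solve_continuous_lyapunov` from SciPy solves \(AX+XA^{\top}=Q\) for real matrices) that computes \(\bar{P}\), \(\beta\), and the norm \(\|\cdot\|_{\bar{P}}\) for a given Hurwitz \(\bar{A}\):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

d = 3
A_bar = -np.eye(d) + 0.1 * np.random.default_rng(2).standard_normal((d, d))
assert np.all(np.linalg.eigvals(A_bar).real < 0)   # Hurwitz check

# Solve A_bar^T P + P A_bar + I = 0: unique positive definite solution P_bar
P_bar = solve_continuous_lyapunov(A_bar.T, -np.eye(d))

# beta = (1/2) * lambda_max(A_bar^T P_bar A_bar)^{-1}
beta = 0.5 / np.max(np.linalg.eigvalsh(A_bar.T @ P_bar @ A_bar))

def norm_P(x):
    # ||x||_{P_bar} = sqrt(x^T P_bar x), the norm used in Proposition 2.4
    return float(np.sqrt(x @ P_bar @ x))

print(beta, norm_P(np.ones(d)))
```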
**Proposition 2.4** (Proof in Appendix D).: _Suppose that Assumptions 2.5 and 2.6 are satisfied. Then the following results hold._
1. _There exists_ \(\bar{\gamma}\in(0,1)\) _such that the operator_ \(\bar{F}_{\beta}(\cdot):=\mathbb{E}_{Y\sim\nu}[F_{\beta}(\cdot,Y)]\) _is a_ \(\bar{\gamma}\)_-contraction mapping with respect to_ \(\|\cdot\|_{\bar{P}}\)_._
2. _It holds for all_ \(k\geq 0\) _that_ \(\mathbb{E}[F_{\beta}(x_{k},Y_{k})\mid\mathcal{F}_{k}]=\bar{F}_{\beta}(x_{k})\)_, where_ \(\mathcal{F}_{k}\) _is the_ \(\sigma\)_-algebra generated by_ \(\{Y_{0},Y_{1},\cdots,Y_{k-1}\}\)_._
3. _There exists_ \(\hat{\sigma}>0\) _such that_ \(\|F_{\beta}(x_{k},Y_{k})-\bar{F}_{\beta}(x_{k})\|_{\bar{P}}\leq\hat{\sigma}(\|x_{k}\|_{\bar{P}}+1)\) _for all_ \(k\geq 0\)_._
Proposition 2.4 enables us to apply Theorem 2.1 to establish the finite-sample high probability bound of Algorithm (10). The result is presented in the following.
**Theorem 2.4**.: _Suppose that Assumptions 2.5 and 2.6 are satisfied, and \(\alpha_{k}=\alpha\beta/(k+h)\) with appropriately chosen \(\alpha\) and \(h\). Then, the same bound in Theorem 2.1 (and Theorem 2.1\({}^{\prime}\)) holds here. In addition, there exists an integer \(m_{\ell}>0\) such that for any \(\epsilon>0\) and \(\delta\in(0,1)\), to achieve \(\|x_{k}-x^{*}\|_{\bar{P}}\leq\epsilon\) with probability at least \(1-\delta\), the sample complexity is \(\tilde{\mathcal{O}}((1+\log^{m_{\ell}}(1/\delta))\epsilon^{-2})\)._
Theorem 2.4 is qualitatively similar to Theorem 2.1 in that we achieve \(\tilde{\mathcal{O}}(\epsilon^{-2})\) sample complexity in terms of the accuracy level \(\epsilon\) and the bound has super-polynomial tail.
### Proof Sketch of Theorem 2.1
We here only present the proof sketch for Theorem 2.1. The proof of Theorem 2.3 follows from a qualitatively similar approach. Our high-level idea is a novel bootstrapping argument. The first step is to establish a time-varying worst-case bound as an initialization, which is done in Proposition 2.1, and the second step is to establish the iterative refinement of bounds presented in Section 1.2, which is the focus of this section.
Suppose that there exists \(\delta>0\) and a _non-decreasing_ sequence \(\{T_{k}(\delta)\}_{k\geq 0}\) such that
\[\mathbb{P}(\|x_{k}-x^{*}\|_{c}^{2}\leq T_{k}(\delta),\forall\ k\geq 0)\geq 1-\delta.\]
Our goal is to show that for any \(\delta^{\prime}>0\), there exists a sequence \(T_{k}(\delta,\delta^{\prime})=\mathcal{O}(T_{k}(\delta)\alpha_{k})\) such that
\[\mathbb{P}(\|x_{k}-x^{*}\|_{c}^{2}\leq T_{k}(\delta,\delta^{\prime}),\forall\ k \geq 0)\geq 1-\delta-\delta^{\prime}, \tag{12}\]
thereby establishing the bootstrapping blueprint.
**Step 1: Bounding the log-MGF of the Generalized Moreau Envelope.** To establish Eq. (12), we develop a Lyapunov argument with a modified version of the log-MGF as the Lyapunov function. Denote \(E_{k}(\delta)=\{\|x_{t}-x^{*}\|_{c}^{2}\leq T_{t}(\delta),\forall\,t=0,1,\cdots,k\}\). Note that we have by definition that \(\{E_{k}(\delta)\}\) is a sequence of decreasing events, i.e., \(E_{k+1}(\delta)\subseteq E_{k}(\delta)\), and satisfies \(\mathbb{P}(E_{k}(\delta))\geq 1-\delta\) for any \(k\geq 0\). Let \(\lambda_{k}=\theta\alpha_{k}^{-1}T_{k}(\delta)^{-1}\), where \(\theta\) is a tunable constant and \(\alpha_{k}\) is the stepsize. Using a time-varying \(\lambda_{k}\), as opposed to a constant \(\lambda\) as in the proof of classical concentration inequalities (such as the Hoeffding inequality), is crucially important in our approach. For any \(k\geq 0\), let
\[Z_{k}=\log\left(\mathbb{E}\left[\exp\left(\lambda_{k}\mathds{1}_{E_{k}(\delta) }M(x_{k}-x^{*})\right)\right]\right)\]
be a modified version of the log-MGF. Here \(M(\cdot)\) is the generalized Moreau envelope introduced in Chen et al. (2020) as a _smooth approximation_ of the norm-square function \(\frac{1}{2}\|\cdot\|_{c}^{2}\). The explicit definition of \(M(\cdot)\) and its properties are summarized in Lemma A.1. We view \(Z_{k}\) as our Lyapunov function, and the key step in establishing Eq. (12) is to derive a bound for \(Z_{k}\).
Working with \(\mathbb{E}[\exp\left(\lambda_{k}\mathds{1}_{E_{k}(\delta)}M(x_{k}-x^{*})\right)]\) presents new challenges, as the exponential nature of the MGF prevents us from exploiting the linearity of the expectation, which was used extensively in deriving mean-square bounds (Srikant and Ying, 2019; Chen et al., 2020). Instead, after representing \(Z_{k+1}\) using \(Z_{k}\), we have the expectation of a product of random variables. To overcome this challenge, we use a conditioning argument along with the Cauchy-Schwarz inequality and, most importantly, the time-varying \((1-\delta)\)-confidence bound \(T_{k}(\delta)\). Eventually, we obtain the following inequality:
\[Z_{k}\leq W_{1}\left(\frac{h}{k+h}\right)^{\alpha D_{0}/2-1}+W_{2},\quad \forall\ k\geq 0, \tag{13}\]
where \(W_{1}\) and \(W_{2}\) are (problem-dependent) constants.
**Step 2: An Exponential Supermartingale and Ville's Maximal Inequality.** Let \(\{\overline{M}_{k}\}\) be a sequence of random variables defined as
\[\overline{M}_{k}=\exp\left(\lambda_{k}\mathds{1}_{E_{k}(\delta)}M(x_{k}-x^{*} )-W_{3}\sum_{i=0}^{k-1}\alpha_{i}\right),\quad\forall\ k\geq 0,\]
where \(W_{3}\) is a constant. We show that \(\{\overline{M}_{k}\}\) is a supermartingale with respect to the filtration \(\{\mathcal{F}_{k}\}_{k\geq 0}\), which enables us to use Ville's maximal inequality together with Eq. (13) to establish a maximal concentration inequality. Specifically, we have for any \(K\geq 0\) that
\[\mathbb{P}\left(\sup_{k\geq K}\left\{\lambda_{k}\mathds{1}_{E_{k} (\delta)}M(x_{k}-x^{*})-W_{3}\sum_{i=0}^{k-1}\alpha_{i}\right\}>\epsilon\right)\] \[=\mathbb{P}\left(\sup_{k\geq K}\left\{\exp\left(\lambda_{k} \mathds{1}_{E_{k}(\delta)}M(x_{k}-x^{*})-W_{3}\sum_{i=0}^{k-1}\alpha_{i}\right) \right\}>e^{\epsilon}\right)\] \[\leq\exp\left(W_{1}\left(\frac{h}{K+h}\right)^{\alpha D_{0}/2-1}+ W_{2}-W_{3}\sum_{i=0}^{K-1}\alpha_{i}-\epsilon\right),\]
where the last line follows from Ville's maximal inequality and Eq. (13). By using the fact that the generalized Moreau envelope \(M(\cdot)\) is an approximation of the norm-square function \(\frac{1}{2}\|\cdot\|_{c}^{2}\), setting \(K=0\), and most importantly, dividing by \(\lambda_{k}=\mathcal{O}(\alpha_{k}^{-1}T_{k}^{-1}(\delta))\), we have for any \(\delta^{\prime}\in(0,1)\) that
\[\mathds{1}_{E_{k}(\delta)}\|x_{k}-x^{*}\|_{c}^{2}\leq W_{4}\alpha_{k}T_{k}( \delta)(\log(1/\delta^{\prime})+1):=T_{k}(\delta,\delta^{\prime}),\quad \forall\ k\geq 0\]
with probability at least \(1-\delta^{\prime}\), where \(W_{4}>0\) is a constant. After using union bound to remove the indicator function, the previous inequality reads
\[\mathbb{P}(\|x_{k}-x^{*}\|_{c}^{2}\leq T_{k}(\delta,\delta^{\prime}),\ \forall\ k\geq 0)\geq 1-\delta-\delta^{\prime}, \tag{14}\]
which establishes Eq. (12) we desire. Theorem 2.1 then follows by using Proposition 2.1 as an initialization and repeatedly using Eq. (14) to improve the bound.
Looking back, using a time-varying \(\lambda_{k}=\mathcal{O}(\alpha_{k}^{-1}T_{k}^{-1}(\delta))\) is the key to make sure that the new bound \(T_{k}(\delta,\delta^{\prime})\) is improved by a factor of \(\alpha_{k}=\mathcal{O}(1/k)\) compared to the old bound \(T_{k}(\delta)\).
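Schematically (this is our own summary of the argument, suppressing constants and logarithmic factors), the bootstrapping starts from the squared worst-case bound of Proposition 2.1 and applies Eq. (14) repeatedly:

\[T_{k}^{(0)}=B_{k}(D)^{2}=\mathcal{O}\big(k^{2\alpha D}\big),\qquad T_{k}^{(j+1)}=\mathcal{O}\big(\alpha_{k}T_{k}^{(j)}\big)=\mathcal{O}\big(k^{-1}T_{k}^{(j)}\big),\]

so after \(m=2\alpha D+1\) rounds, \(T_{k}^{(m)}=\mathcal{O}(k^{2\alpha D-m})=\mathcal{O}(1/k)\). Each application of Eq. (14) spends one union bound and contributes one factor of \(\log(1/\delta^{\prime})\), which is where the \(\log^{m}(1/\delta)\) dependence in Corollary 2.1.3 originates.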
## 3 Applications in Reinforcement Learning
Consider an infinite horizon discounted Markov decision process (MDP) defined by a finite state-space \(\mathcal{S}\), a finite action-space \(\mathcal{A}\), a set of transition probability matrices \(\{P_{a}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\ |\ a\in\mathcal{A}\}\), a reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\mapsto[0,1]\), and a discount factor \(\gamma\in(0,1)\). Note that in RL, the
transition probabilities and the reward function are unknown to the agent. Given a stationary policy \(\pi:\mathcal{S}\mapsto\Delta^{|\mathcal{A}|}\), where \(\Delta^{|\mathcal{A}|}\) stands for the \(|\mathcal{A}|\) - dimensional probability simplex, its value function \(V^{\pi}:\mathcal{S}\mapsto\mathbb{R}\) and \(Q\)-function \(Q^{\pi}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\) are defined as
\[V^{\pi}(s) =\mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S _{k},A_{k})\ \middle|\ S_{0}=s\right],\quad\forall\ s\in\mathcal{S},\] \[Q^{\pi}(s,a) =\mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S _{k},A_{k})\ \middle|\ S_{0}=s,A_{0}=a\right],\quad\forall\ (s,a)\in\mathcal{S}\times \mathcal{A},\]
where we use the notation \(\mathbb{E}_{\pi}[\,\cdot\,]\) to mean that the actions are selected based on the policy \(\pi\). The goal is to find an optimal policy \(\pi^{*}\) so that its value function \(V^{*}\), or equivalently its \(Q\)-function \(Q^{*}\), is uniformly maximized.
Since many RL algorithms can be modeled as a contractive SA algorithm for solving some variant of the Bellman equation (Bertsekas and Tsitsiklis, 1996), our SA results provide a unified framework for establishing the concentration bounds. We next use our SA results to establish high probability bounds of popular RL algorithms such as off-policy TD-learning with generalized importance sampling factors, on-policy TD-learning with linear function approximation, and \(Q\)-learning. To the best of our knowledge, concentration bounds (with super-polynomial tails) for off-policy TD have never been established in the literature before due to the combination of the potential unboundedness of the iterates and the multiplicative noise.
### Off-Policy TD-Learning with Generalized Importance Sampling Factors
TD-learning is a common approach for solving the policy evaluation problem (i.e., estimating the value function of a policy \(\pi\)), which, when combined with policy gradient, forms the celebrated actor-critic framework (Konda and Tsitsiklis, 2000) for finding an optimal policy, thereby solving the RL problem. In TD-learning, there are two policies that play important roles in the algorithm. One is the policy \(\pi\) we want to evaluate, called the target policy, and the other is the policy \(\pi_{b}\) used to collect samples, called the behavior policy. When \(\pi=\pi_{b}\), the corresponding algorithm is called on-policy TD, otherwise it is called off-policy TD. Compared with on-policy TD, off-policy TD has many advantages both practically and theoretically; see Levine et al. (2020) for more details.
Due to the popularity of off-policy learning, there are many variants of off-policy TD-learning proposed in the literature, such as \(Q^{\pi}(\lambda)\)(Harutyunyan et al., 2016), \(\text{TB}(\lambda)\)(Precup, 2000), \(\text{Retrace}(\lambda)\)(Munos et al., 2016), and \(Q\)-trace (Chen et al., 2021), etc. A unified approach for establishing finite-sample mean-square bounds of these algorithms was presented in Chen et al. (2021). To establish the concentration bounds, we next present the generic framework of importance sampling based multi-step off-policy TD-learning presented in Chen et al. (2021).
Recall that we use \(\pi\) as the target policy and \(\pi_{b}\) as the behavior policy. Let \(c,\rho:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}_{+}\) be the generalized importance sampling factors. We impose the following assumption on the behavior policy.
**Assumption 3.1**.: It holds that \(\{a\in\mathcal{A}\mid\pi(a|s)>0\}\subseteq\{a\in\mathcal{A}\mid\pi_{b}(a|s)>0\}\) for all \(s\in\mathcal{S}\). In addition, the Markov chain \(\{S_{k}\}\) induced by \(\pi_{b}\) has a unique stationary distribution \(\kappa_{S,b}\in\Delta^{|\mathcal{S}|}\), which satisfies \(\kappa_{S,b}(s)>0\) for all \(s\in\mathcal{S}\).
The first part of Assumption 3.1 is usually called the coverage assumption in the literature, which states that the support of the behavior policy should cover the support of the target policy.
Let \(\{(S^{0}_{k},A^{0}_{k},S^{1}_{k},A^{1}_{k},\cdots,S^{n}_{k},A^{n}_{k})\}_{k\geq 0}\) be a sequence of i.i.d. samples such that \(S^{0}_{k}\sim\kappa_{S,b}(\cdot)\), \(A^{i}_{k}\sim\pi_{b}(\cdot|S^{i}_{k})\) for all \(i\in\{0,1,\cdots,n\}\), and \(S^{i+1}_{k}\sim P_{A^{i}_{k}}(S^{i}_{k},\cdot)\) for all \(i\in\{0,1,\cdots,n-1\}\), where \(n\) is a non-negative integer. In this paper, we assume i.i.d. sampling for RL algorithms, which is satisfied when there is a generative model. In practice, sometimes an RL algorithm is also implemented with a single trajectory of Markovian samples (generated by applying some suitable behavior policy to the underlying MDP). We want to point out that our current SA results do not allow for such Markovian sampling because we require the noise to be unbiased; see Assumption 2.2.
Studying SA with Markovian noise is an immediate future direction. That being said, existing concentration results studying SA with Markovian noise all require the iterates to be bounded by a deterministic constant (Qu and Wierman, 2020; Li et al., 2021), and hence are not applicable to either off-policy TD-learning or on-policy TD-learning with linear function approximation.
We consider evaluating the \(Q\)-functions. The algorithm and results can easily be extended to TD-learning for evaluating the \(V\)-functions. The importance sampling based \(n\)-step off-policy TD-learning algorithm updates the estimate \(Q_{k}\) of the target value function \(Q^{\pi}\) according to
\[Q_{k+1}(s,a)=Q_{k}(s,a)+\alpha_{k}\sum_{i=0}^{n-1}\gamma^{i}\prod_{j=1}^{i}c(S_{k}^{j},A_{k}^{j})\big{(}\mathcal{R}(S_{k}^{i},A_{k}^{i})+\gamma\rho(S_{k}^{i+1},A_{k}^{i+1})Q_{k}(S_{k}^{i+1},A_{k}^{i+1})-Q_{k}(S_{k}^{i},A_{k}^{i})\big{)} \tag{15}\]
when \((s,a)=(S_{k}^{0},A_{k}^{0})\), and \(Q_{k+1}(s,a)=Q_{k}(s,a)\) otherwise. To understand Algorithm (15), consider the special case of choosing the generalized importance sampling factors as \(c(s,a)=\rho(s,a)=\pi(a|s)/\pi_{b}(a|s)\) for all \((s,a)\). Then Algorithm (15) reduces to the classical on-policy \(n\)-step TD-learning (Sutton and Barto, 2018) when \(\pi=\pi_{b}\), and vanilla off-policy TD when \(\pi\neq\pi_{b}\). More generally, the factors \(c(\cdot,\cdot)\) and \(\rho(\cdot,\cdot)\) can be modified to trade off the bias and the variance in importance sampling based TD-learning algorithms. As a result of this trade-off, the limit of Algorithm (15) is biased from the target value function \(Q^{\pi}\), and is denoted by \(Q_{\pi,\rho}\). See Chen et al. (2021) for more details.
We next remodel Algorithm (15) in the form of Algorithm (2), and verify that Assumptions 2.1, 2.2, and 2.3 are satisfied. Let \(\{Y_{k}\}_{k\geq 0}\) be an i.i.d. sequence defined as \(Y_{k}=(S_{k}^{0},A_{k}^{0},S_{k}^{1},A_{k}^{1},\cdots,S_{k}^{n},A_{k}^{n})\) for all \(k\geq 0\). Denote the distribution of \(Y_{k}\) by \(\kappa_{Y,b}\). Let \(F:\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\times\mathcal{Y}\mapsto\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\) be an operator defined as
\[[F(Q,y)](s,a)=\ [F(Q,s_{0},a_{0},...,s_{n},a_{n})](s,a)\] \[=\mathds{1}_{\{(s_{0},a_{0})=(s,a)\}}\sum_{i=0}^{n-1}\gamma^{i} \prod_{j=1}^{i}c(s_{j},a_{j})(\mathcal{R}(s_{i},a_{i})+\gamma\rho(s_{i+1},a_{ i+1})Q(s_{i+1},a_{i+1})-Q(s_{i},a_{i}))\] \[\quad+Q(s,a)\]
for all \(Q\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\) and \(y\in\mathcal{Y}\). Then, Algorithm (15) can be equivalently written as
\[Q_{k+1}=Q_{k}+\alpha_{k}(F(Q_{k},Y_{k})-Q_{k}).\]
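The operator \(F\) above translates directly into code; the following sketch (our own, with hypothetical names, assuming integer-indexed states and actions and tabular \(Q\), \(\mathcal{R}\), \(c\), \(\rho\)) implements one evaluation of \(F(Q,y)\):

```python
import numpy as np

def F_offpolicy(Q, traj, R, c, rho, gamma):
    """Evaluate F(Q, y) for n-step off-policy TD (see the display above).

    Q:         |S| x |A| table;  traj: [(s_0, a_0), ..., (s_n, a_n)]
    R, c, rho: |S| x |A| tables (rewards / importance-sampling factors).
    """
    out = Q.copy()
    s0, a0 = traj[0]
    n = len(traj) - 1
    correction, weight = 0.0, 1.0        # weight = prod_{j=1}^{i} c(s_j, a_j)
    for i in range(n):
        s, a = traj[i]
        s1, a1 = traj[i + 1]
        td = R[s, a] + gamma * rho[s1, a1] * Q[s1, a1] - Q[s, a]
        correction += (gamma ** i) * weight * td
        weight *= c[s1, a1]              # extend the product for the next term
    out[s0, a0] += correction            # F(Q, y) differs from Q only at (s_0, a_0)
    return out
```

The SA update (15) then reads `Q = Q + alpha_k * (F_offpolicy(Q, traj, R, c, rho, gamma) - Q)`.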
The next proposition shows that Assumptions 2.1, 2.2, and 2.3 are satisfied in the context of off-policy TD. The result was previous established in Chen et al. (2021) for mean-square analysis, and was restated here for completeness.
**Proposition 3.1**.: _Suppose that Assumption 3.1 is satisfied, and the generalized importance sampling factors satisfy (1) \(\rho(s,a)\geq c(s,a)\) for all \((s,a)\) and (2) \(\max_{s}\sum_{a}\pi_{b}(a|s)\rho(s,a)\leq 1/\gamma\). Then we have the following results._
1. _There exists_ \(\gamma_{o}\in(0,1)\) _such that the operator_ \(\bar{F}(\cdot):=\mathbb{E}_{Y\sim\kappa_{Y,b}}[F(\cdot,Y)]\) _is a_ \(\gamma_{o}\) _- contraction mapping with respect to_ \(\|\cdot\|_{\infty}\)_._
2. _It holds that_ \(\mathbb{E}[F(Q_{k},Y_{k})\ |\ \mathcal{F}_{k}]=\bar{F}(Q_{k})\) _a.s. for all_ \(k\geq 0\)_, where_ \(\mathcal{F}_{k}\) _is the_ \(\sigma\)_-algebra generated by_ \(\{Y_{0},Y_{1},\cdots,Y_{k-1}\}\)_._
3. _There exists_ \(L_{o}>0\) _such that_ \(\|F(Q_{k},Y_{k})-\bar{F}(Q_{k})\|_{\infty}\leq L_{o}(1+\|Q_{k}\|_{\infty})\) _for all_ \(k\geq 0\)_._
We next apply Theorem 2.1 to establish the concentration bounds of off-policy TD-learning with generalized importance sampling factors. The parameter \(n_{o}\) used to present the following theorem is a positive integer, and \(\{c_{o,i}\}_{1\leq i\leq 5}\) and \(D_{o}\in(0,1)\) are positive constants.
**Theorem 3.1**.: _Consider \(\{Q_{k}\}\) generated by Algorithm (15). Suppose that Assumption 3.1 is satisfied, the generalized importance sampling factors \(c(\cdot,\cdot)\) and \(\rho(\cdot,\cdot)\) satisfy (1) \(\rho(s,a)\geq c(s,a)\) for all \((s,a)\), and (2) \(\max_{s}\sum_{a}\pi_{b}(a|s)\rho(s,a)\leq 1/\gamma\), and \(\alpha_{k}=\alpha/(k+h)\) with \(\alpha>2/D_{o}\) and \(h\) large enough. Then, for any \(\delta>0\) and \(K\geq 0\), with probability at least \(1-\delta\), we have for all \(k\geq K\) that_
\[\|Q_{k}-Q_{\pi,\rho}\|_{\infty}^{2} \leq\frac{\alpha c_{o,1}\|Q_{0}-Q_{\pi,\rho}\|_{\infty}^{2}}{k+h}\left[\left(\log\left(\frac{n_{o}+1}{\delta}\right)\right)^{n_{o}-1}+c_{o,2}\right]\] \[\quad\times\left[\log\left(\frac{n_{o}+1}{\delta}\right)+c_{o,3}\left(\frac{h}{K+h}\right)^{\alpha D_{o}/2-1}+c_{o,4}+c_{o,5}\log\left(\frac{k-1+h}{K-1+h}\right)\right].\]
Theorem 3.1 implies that off-policy TD-learning enjoys an \(\tilde{\mathcal{O}}(1/k)\) rate of convergence with super-polynomial tail, and appears to be the first result in the literature that establishes concentration bounds of off-policy TD-learning.
### On-Policy TD-Learning with Linear Function Approximation
In this section, we consider on-policy TD-learning with linear function approximation, where function approximation is introduced to overcome the curse of dimensionality in RL.
Given a target policy \(\pi\), consider approximating \(V^{\pi}\) from a linear sub-space of \(\mathbb{R}^{|\mathcal{S}|}\) spanned by \(d\) basis vectors \(\phi_{i}\in\mathbb{R}^{|\mathcal{S}|}\), \(1\leq i\leq d\). That is, we approximate \(V^{\pi}\) using \(V_{\theta}=\sum_{i=1}^{d}\phi_{i}\theta_{i}\), where \(\theta\in\mathbb{R}^{d}\) is the weight vector. We assume without loss of generality that \(\{\phi_{i}\}_{1\leq i\leq d}\) are linearly independent and are uniformly bounded such that \(\max_{s\in\mathcal{S}}\|\phi(s)\|_{2}\leq 1\). We also denote \(\Phi=[\phi_{1},\phi_{2}\cdots,\phi_{d}]\in\mathbb{R}^{|\mathcal{S}|\times d}\) as the feature matrix and \(\phi(s)=(\phi_{1}(s),\phi_{2}(s),\cdots,\phi_{d}(s))\in\mathbb{R}^{d}\) as the feature vector associated with state \(s\). In addition, we impose the following assumption on the target policy \(\pi\).
**Assumption 3.2**.: The Markov chain \(\{S_{k}\}\) induced by \(\pi\) admits a unique stationary distribution \(\kappa_{S}\in\Delta^{|\mathcal{S}|}\), which satisfies \(\kappa_{S}(s)>0\) for all \(s\).
Assumption 3.2 essentially states that the target policy \(\pi\) should enable the agent to sufficiently explore the state-space, which is, to some extent, a necessary requirement for learning its value function. Let \(\{(S_{k},A_{k},S_{k}^{\prime})\}_{k\geq 0}\) be a sequence of i.i.d. samples such that \(S_{k}\sim\kappa_{S}(\cdot)\), \(A_{k}\sim\pi(\cdot|S_{k})\), and \(S_{k}^{\prime}\sim P_{A_{k}}(S_{k},\cdot)\). Then, with a deterministic initialization \(\theta_{0}\in\mathbb{R}^{d}\), TD-learning with linear function approximation iteratively updates the weight vector \(\theta_{k}\) according to
\[\theta_{k+1}=\theta_{k}+\alpha_{k}\phi(S_{k})(\mathcal{R}(S_{k},A_{k})+\gamma \phi(S_{k}^{\prime})^{\top}\theta_{k}-\phi(S_{k})^{\top}\theta_{k}), \tag{16}\]
where \(\{\alpha_{k}\}\) is a positive sequence of stepsizes (Sutton and Barto, 2018). Algorithm (16) can be interpreted as an SA algorithm for solving a projected Bellman equation; see Tsitsiklis and Van Roy (1997) for more details.
In view of Eq. (16), TD-learning with linear function approximation can be modeled as a linear SA algorithm in the form of Algorithm (10). Formally, let \(Y_{k}=(S_{k},A_{k},S_{k}^{\prime})\in\mathcal{Y}:=\mathcal{S}\times\mathcal{A}\times\mathcal{S}\) for all \(k\geq 0\). It is clear that \(\{Y_{k}\}\) is also an i.i.d. sequence. Denote the distribution of \(Y_{k}\) by \(\kappa_{Y}\), which satisfies \(\kappa_{Y}(s,a,s^{\prime})=\kappa_{S}(s)\pi(a|s)P_{a}(s,s^{\prime})\) for all \((s,a,s^{\prime})\). Let \(A_{o}:\mathcal{Y}\mapsto\mathbb{R}^{d\times d}\) and \(b_{o}:\mathcal{Y}\mapsto\mathbb{R}^{d}\) be defined as
\[A_{o}(y) =A_{o}(s,a,s^{\prime})=\phi(s)(\gamma\phi(s^{\prime})-\phi(s))^{ \top},\quad\forall\ y=(s,a,s^{\prime})\in\mathcal{Y},\] \[b_{o}(y) =b_{o}(s,a,s^{\prime})=-\phi(s)\mathcal{R}(s,a),\quad\forall\ y=(s,a,s^{\prime})\in\mathcal{Y}.\]
Then Algorithm (16) can be equivalently written as
\[\theta_{k+1}=\theta_{k}+\alpha_{k}(A_{o}(Y_{k})\theta_{k}-b_{o}(Y_{k})),\quad \forall\ k\geq 0. \tag{17}\]
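A minimal sketch of update (16), assuming a user-supplied i.i.d. sampler and feature map (both hypothetical names we introduce here), looks as follows:

```python
import numpy as np

def td0_linear(sample_fn, phi, d, gamma, alpha, h, n_iters):
    """TD-learning with linear function approximation, update (16).

    sample_fn() -> (s, a, r, s_next), drawn i.i.d. as described in the text;
    phi(s)      -> feature vector in R^d with ||phi(s)||_2 <= 1.
    """
    theta = np.zeros(d)                              # deterministic theta_0
    for k in range(n_iters):
        s, a, r, s_next = sample_fn()
        td_error = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        theta = theta + (alpha / (k + h)) * phi(s) * td_error
    return theta
```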
We next verify that Assumptions 2.5 and 2.6 are satisfied in the context of TD-learning with linear function approximation. Let \(\mathcal{K}_{S}=\text{diag}(\kappa_{S})\in\mathbb{R}^{|\mathcal{S}|\times| \mathcal{S}|}\).
**Proposition 3.2** (Proof in Appendix E.1).: _Suppose that Assumption 3.2 is satisfied. Then the following results hold._
1. _It holds for all_ \(y\in\mathcal{Y}\) _that_ \(\|A_{o}(y)\|_{2}\leq 2\) _and_ \(\|b_{o}(y)\|_{2}\leq 1\)_._
2. _The matrix_ \(\bar{A}_{o}:=\mathbb{E}_{Y\sim\kappa_{Y}}[A_{o}(Y)]\) _is Hurwitz._

Proposition 3.2 enables us to apply Theorem 2.4 to establish the sample complexity bound of TD-learning with linear function approximation. Let \(\theta^{*}\) be the unique solution to the equation \(\bar{A}_{o}\theta=\bar{b}_{o}\), where \(\bar{b}_{o}=\mathbb{E}_{Y\sim\kappa_{Y}}[b_{o}(Y)]\).
**Theorem 3.2**.: _Consider \(\{\theta_{k}\}\) generated by Algorithm (16). Suppose that Assumption 3.2 is satisfied, and \(\alpha_{k}=\alpha/(k+h)\) with appropriately chosen \(\alpha\) and \(h\). Then, there exists an integer \(m_{o}>0\) such that for any \(\epsilon>0\) and \(\delta\in(0,1)\), to achieve \(\|\theta_{k}-\theta^{*}\|_{2}\leq\epsilon\) with probability at least \(1-\delta\), the sample complexity is \(\tilde{\mathcal{O}}((1+\log^{m_{o}}(1/\delta))\epsilon^{-2})\)._
### \(Q\)-Learning
So far we have been studying variants of TD-learning, which are usually used in an actor-critic framework to find an optimal policy. Another popular method for solving RL problems is the celebrated \(Q\)-learning algorithm (Watkins and Dayan, 1992), which is the focus of this section. The \(Q\)-learning algorithm finds an optimal policy \(\pi^{*}\) by finding the optimal \(Q\)-function \(Q^{*}=Q^{\pi^{*}}\), the motivation of which is that \(\pi^{*}(\cdot|s)\) is supported on the set \(\arg\max_{a\in\mathcal{A}}Q^{*}(s,a)\) for all \(s\in\mathcal{S}\). See Sutton and Barto (2018); Bertsekas and Tsitsiklis (1996) for more details about \(Q\)-learning.
Let \(\pi_{b}\) be the behavior policy, which satisfies the following assumption.
**Assumption 3.3**.: The Markov chain \(\{S_{k}\}\) induced by \(\pi_{b}\) admits a unique stationary distribution \(\kappa_{b}\in\Delta^{|\mathcal{S}|}\), which satisfies \(\kappa_{b}(s)>0\) for all \(s\).
With a sequence of i.i.d. samples \(\{(S_{k},A_{k},S_{k}^{\prime})\}\) generated as \(S_{k}\sim\kappa_{b}(\cdot)\), \(A_{k}\sim\pi_{b}(\cdot|S_{k})\), and \(S_{k}^{\prime}\sim P_{A_{k}}(S_{k},\cdot)\) for all \(k\geq 0\), the \(Q\)-learning algorithm iteratively updates an estimate \(Q_{k}\) of \(Q^{*}\) according to
\[Q_{k+1}(s,a)=Q_{k}(s,a)+\alpha_{k}\mathds{1}_{\{(S_{k},A_{k})=(s,a)\}}( \mathcal{R}(S_{k},A_{k})+\gamma\max_{a^{\prime}\in\mathcal{A}}Q_{k}(S_{k}^{ \prime},a^{\prime})-Q_{k}(S_{k},A_{k}))\]
for all \((s,a)\) and \(k\geq 0\), where \(Q_{0}\) is initialized arbitrarily but satisfies \(\|Q_{0}\|_{\infty}\leq 1/(1-\gamma)\).
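In code, a single (asynchronous) \(Q\)-learning step only touches the visited entry; the sketch below (our own illustration, assuming integer states/actions and a tabular reward) mirrors the update above:

```python
import numpy as np

def q_learning_step(Q, s, a, s_next, R, gamma, alpha_k):
    """One Q-learning update: only the visited entry (s, a) changes."""
    td = R[s, a] + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha_k * td
    return Q

# With stepsize alpha_k = alpha / (k + h) and ||Q_0||_inf <= 1/(1 - gamma):
#   for k, (s, a, s_next) in enumerate(samples):
#       Q = q_learning_step(Q, s, a, s_next, R, gamma, alpha / (k + h))
```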
We next remodel \(Q\)-learning in the form of Algorithm (2). Let \(F:\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\times\mathcal{S}\times\mathcal{A} \times\mathcal{S}\mapsto\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\) be an operator defined as
\[[F(Q,s_{0},a_{0},s_{1})](s,a)=\mathds{1}_{\{(s_{0},a_{0})=(s,a)\}}(\mathcal{R }(s_{0},a_{0})+\gamma\max_{a^{\prime}\in\mathcal{A}}Q(s_{1},a^{\prime})-Q(s_{ 0},a_{0}))+Q(s,a)\]
for all \((s,a)\) and \((Q,s_{0},a_{0},s_{1})\). Then the update of \(Q\)-learning can be equivalently written as
\[Q_{k+1}=Q_{k}+\alpha_{k}(F(Q_{k},S_{k},A_{k},S_{k}^{\prime})-Q_{k}),\]
which is in the same form of SA algorithm (2) with \(x_{k}\) being \(Q_{k}\) and \(Y_{k}\) being the triple \((S_{k},A_{k},S_{k}^{\prime})\). We next show that \(Q\)-learning is a contractive SA with sub-Gaussian additive noise. Before that, we need to introduce some notation. Let \(\mathcal{H}:\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\mapsto\mathbb{R}^{| \mathcal{S}||\mathcal{A}|}\) be the Bellman optimality operator defined as
\[[\mathcal{H}(Q)](s,a)=\mathcal{R}(s,a)+\gamma\mathbb{E}\left[\max_{a^{\prime} \in\mathcal{A}}Q(S_{1},a^{\prime})\ \bigg{|}\ S_{0}=s,A_{0}=a\right],\ \forall\ (s,a)\ \text{and}\ Q\in\mathbb{R}^{|\mathcal{S}|| \mathcal{A}|}.\]
Let \(D_{b}\) be an \(|\mathcal{S}||\mathcal{A}|\) by \(|\mathcal{S}||\mathcal{A}|\) diagonal matrix with diagonal components \(\{\kappa_{b}(s)\pi_{b}(a|s)\}\). Denote the minimum diagonal entry of \(D_{b}\) by \(D_{b,\min}\). Let \(\mathcal{F}_{k}\) be the \(\sigma\)-algebra generated by \(\{S_{i},A_{i},S_{i}^{\prime}\}_{0\leq i\leq k-1}\). Note that \(Q_{k}\) is measurable with respect to \(\mathcal{F}_{k}\).
**Proposition 3.3** (Proof in Appendix E.2).: _The operator \(\bar{F}(\cdot):=\mathbb{E}[F(\cdot,S_{k},A_{k},S^{\prime}_{k})]\) is explicitly given as_
\[\bar{F}(Q)=D_{b}\mathcal{H}(Q)+(I_{|\mathcal{S}||\mathcal{A}|}-D_{ b})Q,\quad\forall\;Q\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}.\]
_In addition, we have the following results._
1. \(\bar{F}(\cdot)\) _is a_ \(\hat{\gamma}_{c}\)_-contraction mapping with respect to_ \(\|\cdot\|_{\infty}\)_, where_ \(\hat{\gamma}_{c}=1-D_{b,\min}(1-\gamma)\)_;_
2. \(\bar{F}(Q_{k})=\mathbb{E}[F(Q_{k},S_{k},A_{k},S^{\prime}_{k})\mid\mathcal{F}_ {k}]\) _for all_ \(k\geq 0\)_;_
3. _Assumption_ 2.4 _holds with_ \(\bar{\sigma}=4/(1-\gamma)\) _and_ \(c_{d}=1\)_._
Proposition 3.3 enables us to apply Theorem 2.3 to get maximal concentration bound of \(Q\)-learning, which is presented below.
**Theorem 3.3** (Proof in Appendix E.3).: _Suppose that Assumption 3.3 is satisfied, and \(\alpha_{k}=\alpha/(k+h)\) with \(\alpha>2/(1-\hat{\gamma}_{c})\) and appropriately chosen \(h\). Then, for any \(\delta\in(0,1)\) and \(K\geq 0\), we have with probability at least \(1-\delta\) that the following inequality holds for all \(k\geq K\):_
\[\|Q_{k}-Q^{*}\|_{\infty}^{2}\leq c_{q}\left[\frac{\log(1/\delta)}{ k+h}+\left(\frac{h}{k+h}\right)^{(1-\hat{\gamma}_{c})\alpha/2}+\frac{1+\log((k+1)/ K^{1/2})}{k+h}\right],\]
_where \(c_{q}=\frac{\log(|\mathcal{S}||\mathcal{A}|)}{D_{b,\min}^{3}(1-\gamma)^{5}}\)._
Since \(D_{b,\min}\) is the minimum entry of the stationary distribution of the Markov chain \(\{(S_{k},A_{k})\}\) induced by the behavior policy \(\pi_{b}\), we have \(D_{b,\min}\leq 1/(|\mathcal{S}||\mathcal{A}|)\). In the ideal case where we have uniform sampling, i.e., \(D_{b,\min}=1/(|\mathcal{S}||\mathcal{A}|)\), Theorem 3.3 implies that the sample complexity to achieve \(\|Q_{k}-Q^{*}\|_{\infty}\leq\epsilon\) is \(\tilde{\mathcal{O}}(|\mathcal{S}|^{3}|\mathcal{A}|^{3}(1-\gamma)^{-5}\epsilon^ {-2})\).
## 4 Conclusion
In this paper we establish maximal concentration bounds for general contractive SA with additive and multiplicative noise. Specifically, we show that the sample paths remain in a cone (with decaying radius) with high probability. Moreover, we showcase how these general bounds can be applied to many RL algorithms, obtaining performance guarantees that were not available in the previous literature. In order to overcome the challenge of having unbounded iterates with multiplicative noise, we develop a novel bootstrapping argument that enables us to iteratively improve a potentially loose bound to a tighter one. The key steps involve bounding a modified version of the log-MGF of the error and carefully constructing supermartingales to obtain maximal bounds.
We recognize two avenues of future work. On the theoretical side, one avenue would be to extend our results to allow the operator to have Markovian noise with a large conditional bias, or to extend them to the more challenging two-time scale SA case. On the applications side, the other avenue would be to apply our results to other algorithms beyond RL.
## Acknowledgements
We would like to thank Prof. R. Srikant from the University of Illinois at Urbana-Champaign for the insightful comments about using the telescoping technique to establish maximal concentration bounds. |
2309.00470 | Deep Joint Source-Channel Coding for Adaptive Image Transmission over
MIMO Channels | This paper introduces a vision transformer (ViT)-based deep joint source and
channel coding (DeepJSCC) scheme for wireless image transmission over
multiple-input multiple-output (MIMO) channels, denoted as DeepJSCC-MIMO. We
consider DeepJSCC-MIMO for adaptive image transmission in both open-loop and
closed-loop MIMO systems. The novel DeepJSCC-MIMO architecture surpasses the
classical separation-based benchmarks with robustness to channel estimation
errors and showcases remarkable flexibility in adapting to diverse channel
conditions and antenna numbers without requiring retraining. Specifically, by
harnessing the self-attention mechanism of ViT, DeepJSCC-MIMO intelligently
learns feature mapping and power allocation strategies tailored to the unique
characteristics of the source image and prevailing channel conditions.
Extensive numerical experiments validate the significant improvements in
transmission quality achieved by DeepJSCC-MIMO for both open-loop and
closed-loop MIMO systems across a wide range of scenarios. Moreover,
DeepJSCC-MIMO exhibits robustness to varying channel conditions, channel
estimation errors, and different antenna numbers, making it an appealing
solution for emerging semantic communication systems. | Haotian Wu, Yulin Shao, Chenghong Bian, Krystian Mikolajczyk, Deniz Gündüz | 2023-09-01T14:09:53Z | http://arxiv.org/abs/2309.00470v4 | # Deep Joint Source-Channel Coding for Adaptive Image Transmission over MIMO Channels
###### Abstract
This paper introduces a vision transformer (ViT)-based deep joint source and channel coding (DeepJSCC) scheme for wireless image transmission over multiple-input multiple-output (MIMO) channels, denoted as DeepJSCC-MIMO. We consider DeepJSCC-MIMO for adaptive image transmission in both open-loop and closed-loop MIMO systems. The novel DeepJSCC-MIMO architecture surpasses the classical separation-based benchmarks with robustness to channel estimation errors and showcases remarkable flexibility in adapting to diverse channel conditions and antenna numbers without requiring retraining. Specifically, by harnessing the self-attention mechanism of ViT, DeepJSCC-MIMO intelligently learns feature mapping and power allocation strategies tailored to the unique characteristics of the source image and prevailing channel conditions. Extensive numerical experiments validate the significant improvements in transmission quality achieved by DeepJSCC-MIMO for both open-loop and closed-loop MIMO systems across a wide range of scenarios. Moreover, DeepJSCC-MIMO exhibits robustness to varying channel conditions, channel estimation errors, and different antenna numbers, making it an appealing solution for emerging semantic communication systems.
Joint source-channel coding, semantic communication, vision transformer, MIMO, image transmission.
## I Introduction
The exponential growth of wireless multimedia applications, encompassing augmented reality, virtual reality, real-time streaming, and edge intelligence, has significantly heightened the demand for efficient wireless transmission of image/video signals, particularly under strict delay constraints [1, 2, 3]. Consequently, this has sparked a notable surge in research focused on designing optimized image communication systems tailored for wireless channels [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14].
The traditional solution from Shannon's separation theorem is to independently design source and channel coding, which is optimal in the asymptotic limit of infinite block length for ergodic source and channel distributions [15]. However, the separation-based approach is known to be sub-optimal in the practical finite block length regime, which becomes particularly limiting in applications that impose strict latency constraints [16, 17]. Despite the known theoretical benefits, designing practical joint source-channel coding (JSCC) schemes has been an ongoing challenge for many decades. Significant progress has been made in designing JSCC schemes over recent years thanks to the introduction of deep neural networks (DNNs) [4, 5, 6, 7]. The first deep learning-based JSCC (DeepJSCC) scheme for wireless image transmission was presented in [4], and it was shown to outperform the concatenation of the state-of-the-art image compression algorithm better portable graphics (BPG) with low-density parity-check (LDPC) codes. It was later extended to transmission with adaptive channel bandwidth in [8] and [9] and to transmission over multi-path fading channels in [5] and [18]. All of these prior works demonstrate the advantage of practical DeepJSCC schemes at low block lengths. However, these existing DeepJSCC schemes solely consider single-antenna transmitters and receivers, disregarding the potential of employing multiple-input multiple-output (MIMO) systems to improve capacity and reliability. While there is a growing literature successfully employing DNNs for various MIMO-related tasks, such as detection and channel estimation [19, 20, 21], with the exception of our initial results [22], no previous work has so far applied DeepJSCC to the more challenging MIMO channels. JSCC over MIMO channels was initially studied in [23] from a theoretical perspective. It is challenging to design a practical JSCC scheme for MIMO channels, where the model needs to retrieve coupled signals from different antennas experiencing different channel gains.
The advantages of DeepJSCC and end-to-end MIMO schemes in practical transmission scenarios have prompted the investigation of JSCC over MIMO systems by integrating deep learning (DL) technologies. Despite this, the number of studies focusing on end-to-end MIMO communication schemes based on DNNs remains limited. The first autoencoder (AE) based end-to-end MIMO communication method is introduced in [24]. In the subsequent work [25], the authors systematically establish the symbol error rate benchmarks for MIMO channels by evaluating several AE-based models with channel state information (CSI). A singular-value decomposition (SVD) based AE is proposed in [26] to achieve the state-of-the-art bit error rate. These works indicate the potential of end-to-end MIMO communication schemes to enhance transmission quality when coupled with advancing DL technologies. However, it is essential to note that these MIMO schemes solely consider the transmission of bits at a fixed signal-to-noise ratio (SNR) value, ignoring the source signal's semantics and the model's adaptability to varying channel conditions. Two recent studies that are more relevant to this work are [27] and [28]. The authors in [27] theoretically analyze the excess distortion exponent for semantic-aware MIMO systems in the context of JSCC. The authors in [28] propose a semantically coded
transmission scheme utilizing MIMO, focusing solely on the scenario where the CSI is available at the receiver.
One critical shortcoming of the aforementioned works is the lack of consideration for essential factors of the proposed DeepJSCC model, such as the generalizability, the ability to adapt to diverse channel conditions, and the robustness to channel estimation errors. Addressing these aspects is pivotal in ensuring the efficacy and practicality of the proposed solutions. As for channel adaptability, [6] and [18] incorporate the attention mechanism to adapt the DeepJSCC model to different channel SNRs. Regarding the robustness against channel estimation errors, [29] revealed that even with imperfect CSI and minimal noise, the performance of the MIMO system can still be enhanced. However, significant noise accompanying imperfect CSI can severely degrade the system's performance. [30] considered the channel estimation errors and explored DL methods for joint MIMO channel estimation and signal detection, which demonstrated the potential of DL approaches in improving the model robustness against channel estimation errors. To the best of our knowledge, prior research has not delved into the specific exploration of adaptive JSCC-MIMO systems designed for wireless image transmission, encompassing both open-loop MIMO with only receiver-side channel state information (CSIR) and closed-loop MIMO with channel state information at both the transmitter and receiver (CSIT). Such a system would be capable of leveraging the semantics of the source signal and channel conditions simultaneously, and efficiently managing both fluctuating channel conditions and potential channel estimation errors, all within a cohesive framework.
In light of this research gap, our paper introduces an innovative DeepJSCC scheme meticulously tailored for MIMO image transmission. Our approach introduces a unified vision transformer (ViT)-based DeepJSCC scheme, named DeepJSCC-MIMO, which accommodates both scenarios with CSIR and CSIT. Inspired by the success of the attention mechanism in the development of flexible communication schemes [6, 18, 31, 32], we leverage the self-attention mechanism inherent in the ViT for adaptive wireless image transmission. Specifically, we represent the channel conditions with a channel heatmap and adapt the JSCC encoding and decoding parameters according to this heatmap. Our approach can learn global attention between the source image and the channel conditions across all the intermediate layers of the DeepJSCC encoder and decoder. Intuitively, we expect this design to simultaneously learn channel symbol mapping and power allocation, considering different channel conditions, channel estimation errors, and different antenna number settings. Therefore, unlike most existing literature on DNN-aided MIMO communications, we eliminate the need for training separate DNN models for each different transceiver pair and different channel conditions. The proposed DeepJSCC-MIMO is a practical and unified scheme with robustness to channel estimation errors.
Our main contributions can be summarized as follows:
* This paper presents the first DeepJSCC scheme over the MIMO system, DeepJSCC-MIMO, for adaptive image transmission. The key innovation lies in the utilization of a ViT-based model, which adeptly exploits both the semantic features of the image and the CSI (subject to availability) through a self-attention mechanism.
* DeepJSCC-MIMO is a comprehensive and versatile solution that can be seamlessly applied to both open-loop and closed-loop MIMO systems. Extensive numerical evaluations validate the superiority of the proposed model, showcasing substantial enhancements in transmission quality across a wide range of channel conditions and bandwidth ratios when compared to traditional separate source and channel coding schemes. Furthermore, the DeepJSCC-MIMO scheme exhibits remarkable resilience to channel estimation errors.
* Importantly, DeepJSCC-MIMO offers remarkable flexibility and adaptability, as it can effectively accommodate varying channel conditions and configurations of antenna numbers without the need for retraining. By leveraging the channel heatmap and employing a self-attention mechanism, DeepJSCC-MIMO intelligently captures and adapts to the characteristics of the channel conditions and antenna configurations.
## II System Model
We consider an \(M\times M\) MIMO communication system, where an \(M\)-antenna transmitter aims to deliver an image \(\mathbf{S}\in\mathbb{R}^{h\times w\times 3}\) to an \(M\)-antenna receiver (\(h\) and \(w\) denote the height and width of the image, while \(3\) refers to the color channels R, G, and B). The transmitter encodes the image into a vector of channel symbols \(\mathbf{X}\in\mathbb{C}^{M\times k}\), where \(k\) is the number of channel uses. Following the previous standard definition [4], we denote the _bandwidth ratio_ (i.e., channel usage to the number of source symbols ratio) by \(R\triangleq k/n\), where \(n=3hw\) is the number of source symbols. Intuitively, \(R\) reflects the number of available channel symbols for each source symbol. The transmitted signal \(\mathbf{X}\) is subject to a power constraint \(P_{s}\) as:
\[\frac{1}{Mk}\|\mathbf{X}\|_{F}^{2}\leq P_{s}, \tag{1}\]
where \(\|\cdot\|_{\text{F}}\) denotes the Frobenius norm, and we set \(P_{s}=1\) without loss of generality.
The channel model can be written as:
\[\mathbf{Y}=\mathbf{H}\mathbf{X}+\mathbf{W}, \tag{2}\]
where \(\mathbf{X}\in\mathbb{C}^{M\times k}\) and \(\mathbf{Y}\in\mathbb{C}^{M\times k}\) denote the channel input and output matrices, respectively, while \(\mathbf{W}\in\mathbb{C}^{M\times k}\) is the Additive White Gaussian Noise (AWGN) term that follows independent and identically distributed (i.i.d.) complex Gaussian distribution with zero mean and variance \(\sigma_{w}^{2}\), i.e., \(\mathbf{W}[i,j]\sim\mathcal{CN}(0,\sigma_{w}^{2})\). The entries of the channel gain matrix \(\mathbf{H}\in\mathbb{C}^{M\times M}\) follow i.i.d. complex Gaussian distribution with zero mean and variance \(\sigma_{h}^{2}\), i.e., \(\mathbf{H}[i,j]\sim\mathcal{CN}(0,\sigma_{h}^{2})\). We consider a slow block-fading channel model, in which the channel matrix \(\mathbf{H}\) remains constant for \(k\) channel uses, corresponding to the transmission of one image, and takes an independent realization in the next block.
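As a concrete illustration of the channel model in Eqn. (2), the following Python sketch draws one block-fading realization with \(\sigma_{h}^{2}=1\); the mapping from a target SNR (in dB) to \(\sigma_{w}^{2}\) follows the convention \(\mu=10\log_{10}(M/\sigma_{w}^{2})\) defined later in Eqn. (31), and the helper name is ours.

```python
import numpy as np

def mimo_channel(X, snr_db, rng):
    """One block of Eqn. (2): Y = H X + W, with H fixed over the k channel uses."""
    M, k = X.shape
    sigma_w2 = M / 10 ** (snr_db / 10)           # from mu = 10 log10(M / sigma_w^2)
    H = (rng.standard_normal((M, M))
         + 1j * rng.standard_normal((M, M))) / np.sqrt(2)   # entries ~ CN(0, 1)
    W = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((M, k))
                                 + 1j * rng.standard_normal((M, k)))
    return H @ X + W, H, sigma_w2
```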
Given the channel output \(\mathbf{Y}\), the receiver reconstructs the source image as \(\mathbf{\hat{S}}\in\mathbb{R}^{h\times w\times 3}\). We use the peak signal-to-noise ratio (PSNR) as the distortion metric:
\[\text{PSNR}=10\log_{10}\frac{\|\mathbf{S}\|_{\infty}^{2}}{\text{MSE}(\mathbf{S},\mathbf{ \hat{S}})}\ (\text{dB}), \tag{3}\]
where \(\|\cdot\|_{\infty}\) is the infinity norm and \(\text{MSE}(\mathbf{S},\mathbf{\hat{S}})\triangleq\frac{1}{3hw}\|\mathbf{S}-\mathbf{\hat{S}}\|_ {2}^{2}\) is the mean squared error (MSE) between \(\mathbf{S}\) and \(\mathbf{\hat{S}}\).
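The distortion metric in Eqn. (3) can be computed directly; the short helper below is a straightforward transcription (the function name is ours).

```python
import numpy as np

def psnr(S, S_hat):
    """Eqn. (3): 10 log10(||S||_inf^2 / MSE(S, S_hat)) in dB."""
    mse = np.mean((S - S_hat) ** 2)              # (1 / 3hw) * ||S - S_hat||_2^2
    return 10 * np.log10(np.max(np.abs(S)) ** 2 / mse)
```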
As shown in Fig. 1, there can be two approaches to solve this problem, i.e., the traditional separate source and channel coding scheme and the JSCC scheme. This paper considers both the CSIR (open-loop MIMO) and CSIT (closed-loop MIMO) scenarios for each transmission scheme.
### _Open-loop MIMO with CSIR_
In an open-loop MIMO system, CSI is only available at the receiver, enabling it to equalize the transmitted signals and subsequently decode the image.
#### II-A1 Separate source and channel coding scheme
In an open-loop MIMO system with a separate source and channel coding transmission scheme, the transmitter follows a sequential process of source coding, channel coding, and modulation to generate the channel symbols \(\mathbf{X}\), which are then sent through the MIMO channel, as in Eqn. (2). We note that, due to the lack of CSI, the transmitter has to choose the compression and channel coding rates and the constellation size independently of the channel realization.
Given the received signal \(\mathbf{Y}\) and the CSI, the receiver decodes the source signal as \(\mathbf{\hat{S}}\) by sequentially performing the MIMO detection, demodulation, channel decoding, and source decoding operations.
#### II-A2 DeepJSCC scheme
Different from the traditional separation-based scheme, the DL-based JSCC scheme uses a joint source-channel encoder at the transmitter, denoted as \(f_{\mathbf{\theta}}\), to directly map the source signal \(\mathbf{S}\) into the channel input matrix \(\mathbf{X}\) for MIMO channel transmission. This process can be represented as:
\[\mathbf{X}=f_{\mathbf{\theta}}(\mathbf{S}). \tag{4}\]
At the receiver, a MIMO equalization algorithm, such as the zero-forcing (ZF) or minimum mean square error (MMSE) algorithm [33], is first employed to exploit the CSI and decouple the entangled signal \(\mathbf{Y}\) into \(\mathbf{X^{\prime}}\). Subsequently, a JSCC decoder, referred to as \(f_{\mathbf{\phi}}\), is employed to reconstruct the source signal based on the available CSI represented by (\(\mathbf{H}\), \(\sigma_{w}^{2}\)), along with \(\mathbf{X^{\prime}}\). This can be expressed as:
\[\mathbf{\hat{S}}=f_{\mathbf{\phi}}(\mathbf{X^{\prime}},\mathbf{H},\sigma_{w}^{2}). \tag{5}\]
It is important to note that an alternative approach is to directly learn the decoding process using the CSI without explicitly performing the channel equalization operation. This alternative approach involves training the model to minimize the reconstruction distortion in an end-to-end manner. However, in order to achieve better transmission quality, we adopt a model-driven approach that explicitly performs channel equalization prior to the decoding process.
### _Closed-loop MIMO with CSIT_
Within a closed-loop MIMO system, the CSI is accessible to both the transmitter and receiver, which allows them to apply pre-coding and power allocation at the transmitter, and MIMO equalization at the receiver, thereby improving the image transmission quality.
#### II-B1 Separate source and channel coding scheme
The transmitter sequentially performs source coding, channel coding, and modulation to generate the channel input matrix \(\mathbf{X}\), the elements of which are constellations with average power normalized to 1. Additional operations at the transmitter with CSIT are precoding and power allocation, which can boost the communication rate. Specifically, given the CSI, we first decompose the channel matrix by singular-value decomposition (SVD), yielding \(\mathbf{H}=\mathbf{U}\mathbf{\Sigma}\mathbf{V^{H}}\), where \(\mathbf{U}\in\mathbb{C}^{M\times M}\) and \(\mathbf{V}\in\mathbb{C}^{M\times M}\) are unitary matrices, and \(\mathbf{\Sigma}\) is a diagonal matrix whose singular values are in descending order. We denote \(\mathbf{\Sigma}\) by \(\text{diag}(s_{1},s_{2},\ldots,s_{M})\), where \(s_{1}\geq s_{2}\geq\cdots\geq s_{M}\).
Let us denote the power allocation matrix by \(\mathbf{\Lambda}\), where \(\mathbf{\Lambda}\) is diagonal, with its diagonal elements being the power allocation weights for the signal streams of separate antennas. \(\mathbf{\Lambda}\) can be derived using the standard water-filling algorithm. With power allocation and SVD precoding (precoding \(\mathbf{X}\) into \(\mathbf{V}\mathbf{X}\)), Eqn. (2) can be rewritten as:
\[\mathbf{Y}=\mathbf{H}\mathbf{V}\mathbf{\Lambda}\mathbf{X}+\mathbf{W}=\mathbf{U}\mathbf{\Sigma}\mathbf{\Lambda}\mathbf{ X}+\mathbf{W}. \tag{6}\]
Multiplying both sides of Eqn. (6) by \(\mathbf{U}^{H}\) (MIMO detection) gives us:
\[\mathbf{X^{\prime}}=\mathbf{\Sigma}\mathbf{\Lambda}\mathbf{X}+\mathbf{U}^{H}\mathbf{W}. \tag{7}\]
As can be seen, SVD-based precoding converts the MIMO channel into a set of parallel subchannels with different SNRs. In particular, the SNR of the \(i\)-th subchannel is determined by the \(i\)-th singular value \(s_{i}\) and the \(i\)-th power allocation coefficient of \(\mathbf{\Lambda}\).
Given \(\mathbf{X^{\prime}}\in\mathbb{C}^{M\times k}\), the receiver performs demodulation, channel decoding, and source decoding sequentially to reconstruct the source image as \(\mathbf{\hat{S}}\).
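The water-filling step referenced above admits a compact implementation; the sketch below uses bisection on the water level to allocate a total power budget across the SVD subchannels, returning the squared diagonal entries of \(\mathbf{\Lambda}\) (the powers \(p_{i}=\lambda_{i}^{2}\)). The bisection bracket, the iteration count, and the interface are our choices.

```python
import numpy as np

def water_filling(s, sigma_w2, p_total):
    """Allocate p_total over SVD subchannels with gains s_i^2 / sigma_w2.

    Returns the powers p_i maximizing sum_i log2(1 + p_i * s_i^2 / sigma_w2).
    """
    g = s ** 2 / sigma_w2                        # effective subchannel gains
    lo, hi = 0.0, p_total + np.max(1.0 / g)      # bracket for the water level
    for _ in range(100):                         # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)
        lo, hi = (mu, hi) if p.sum() <= p_total else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)
```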
Fig. 1: Block diagram of the MIMO image transmission system: (a) conventional separate source-channel coding scheme and (b) DeepJSCC scheme, where the gray blocks with dashed lines are the additional operations for the closed-loop MIMO system.
#### II-B2 DeepJSCC scheme
For the DeepJSCC scheme, we exploit DL technologies to parameterize the encoder and decoder functions, which are trained jointly on an image dataset and the channel model as in Eqn. (2). Let us denote the DeepJSCC encoder and decoder by \(f_{\mathbf{\theta}}\) and \(f_{\mathbf{\phi}}\), respectively, where \(\mathbf{\theta}\) and \(\mathbf{\phi}\) denote the network parameters. We have
\[\mathbf{X}=f_{\mathbf{\theta}}(\mathbf{S},\mathbf{H},\sigma_{w}^{2}). \tag{8}\]
Unlike in the separate source-channel coding scheme, the transmitter does not perform explicit power allocation. Instead, we leverage the DeepJSCC encoder to perform feature extraction, channel symbol mapping, and power allocation all at once. Intuitively, the DNN is expected to transmit critical features over subchannels with higher SNRs, thereby improving the transmission performance.
We note here that one option is to train the encoder/decoder networks directly, hoping they will learn to exploit the spatial degrees of freedom the MIMO channel provides. We will instead follow the model-driven approach, where we exploit the SVD, as is done above for the separation-based scheme, and convert the MIMO channel into parallel subchannels. The received signal can then be written as
\[\mathbf{Y}=\mathbf{HVX}+\mathbf{W}=\mathbf{U}\mathbf{\Sigma X}+\mathbf{W}. \tag{9}\]
To simplify the training, we apply MIMO equalization by left multiplying both sides of Eqn. (9) by \(\mathbf{\Sigma}^{\dagger}\mathbf{U}^{H}\) to obtain
\[\mathbf{X^{\prime}}=\mathbf{\Sigma}^{\dagger}\mathbf{U}^{H}\mathbf{Y}=\mathbf{X}+\mathbf{W^{\prime}}, \tag{10}\]
where \(\mathbf{W^{\prime}}\triangleq\mathbf{\Sigma}^{\dagger}\mathbf{U}^{H}\mathbf{W}\in\mathbb{C}^ {M\times k}\) is the equivalent noise term, \(\mathbf{\Sigma}^{\dagger}\) is the Moore-Penrose inverse of matrix \(\mathbf{\Sigma}\), and \(\mathbf{U}^{H}\) is the conjugate transpose of matrix \(\mathbf{U}\).
Finally, we feed both \(\mathbf{X^{\prime}}\) and the CSI into the DeepJSCC decoder to recover the image as
\[\mathbf{\hat{S}}=f_{\mathbf{\phi}}(\mathbf{X^{\prime}},\mathbf{H},\sigma_{w}^{2}). \tag{11}\]
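A minimal sketch of the precoding and equalization chain of Eqns. (9)-(10) is given below; it shows that, for a square full-rank \(\mathbf{H}\), multiplying the received block by \(\mathbf{\Sigma}^{\dagger}\mathbf{U}^{H}\) reduces the channel to \(\mathbf{X^{\prime}}=\mathbf{X}+\mathbf{W^{\prime}}\). The function name and noise-generation interface are ours.

```python
import numpy as np

def svd_precode_equalize(X, H, sigma_w2, rng):
    """Eqns. (9)-(10): transmit V X, then apply Sigma^+ U^H at the receiver."""
    U, s, Vh = np.linalg.svd(H)                  # H = U diag(s) V^H
    M, k = X.shape
    W = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((M, k))
                                 + 1j * rng.standard_normal((M, k)))
    Y = H @ (Vh.conj().T @ X) + W                # precoded transmission: H V X + W
    return (U.conj().T @ Y) / s[:, None]         # X' = X + Sigma^+ U^H W
```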
## III Proposed method
This section presents a novel DeepJSCC architecture called DeepJSCC-MIMO, which aims to enable efficient image transmission in MIMO systems. The DeepJSCC-MIMO scheme is mainly based on the vision transformer and incorporates self-attention mechanisms to exploit the CSI. Notably, our DeepJSCC-MIMO design exhibits versatility and can be employed in both open-loop and closed-loop MIMO systems.
The pipeline of our DeepJSCC-MIMO scheme is illustrated in Fig. 2, and presented in Algorithm 1, where DeepJSCC-MIMO utilizes a pair of ViT-based encoder \(f_{\mathbf{\theta}}\) and decoder \(f_{\mathbf{\phi}}\), respectively. Further elaborations on the inner structures of these ViTs are provided in Fig. 3. In the following, we detail the pipeline of DeepJSCC-MIMO in five main steps: image-to-sequence transformation, channel heatmap construction, ViT encoding, ViT decoding, and the loss function.
### _Image-to-sequence transformation_
To construct the input of DeepJSCC-MIMO, we first convert the three-dimensional input image \(\mathbf{S}\) into a sequence of vectors, denoted by \(\mathbf{S_{s}}=Seq(\mathbf{S})\). Specifically, given a source image \(\mathbf{S}\in\mathbb{R}^{h\times w\times 3}\), we divide \(\mathbf{S}\) into a grid of \(p\times p\) patches, and reshape each patch into a vector of dimension \(\mathbb{R}^{\frac{3hw}{p^{2}}}\). In this way, \(\mathbf{S}\) is converted to \(\mathbf{S_{s}}\in\mathbb{R}^{l\times c}\), where \(l=p^{2}\) is the sequence length and \(c\triangleq\frac{3hw}{p^{2}}\) is the dimension of each vector in the sequence.
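A direct NumPy transcription of this image-to-sequence step is given below (the function name is ours); it assumes \(h\) and \(w\) are divisible by \(p\).

```python
import numpy as np

def image_to_sequence(S, p):
    """Split S (h, w, 3) into a p x p grid of patches -> S_s of shape (l, c)."""
    h, w, _ = S.shape
    ph, pw = h // p, w // p                      # per-patch height and width
    # (p, ph, p, pw, 3) -> (p, p, ph, pw, 3) -> (p^2, 3hw/p^2)
    patches = S.reshape(p, ph, p, pw, 3).transpose(0, 2, 1, 3, 4)
    return patches.reshape(p * p, 3 * h * w // p ** 2)
```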
### _Channel heatmap construction_
To enable efficient training, we construct a channel heatmap from CSI, indicating the effective noise variance faced by each channel symbol generated by ViT. Let us define \(\mathbf{P_{n}}\in\mathbb{R}^{M\times k}\) as the average power of the additive noise term \(\mathbf{W^{\prime}}\). Specifically, for different scenarios, \(\mathbf{P_{n}}\) is defined as:
\[\mathbf{P_{n}}\triangleq\begin{cases}\sigma_{w}^{2}(\mathbf{H_{w}}\odot\mathbf{\bar{H}_{w} })\mathbf{J_{M\times k}},&\text{MIMO with CSIR}\\ \sigma_{w}^{2}\mathbf{\Sigma}^{\dagger}\mathbf{U}^{H}\mathbf{J_{M\times k}},&\text{MIMO with CSIT},\end{cases} \tag{12}\]
where \(\odot\) denotes the Hadamard product and \(\mathbf{\bar{H}_{w}}\) is the element-wise conjugate of matrix \(\mathbf{H_{w}}\), which is obtained from the zero-forcing operation as: \(\mathbf{H_{w}}\triangleq(\mathbf{H^{H}}\mathbf{H})^{\mathbf{-1}}\mathbf{H^{H}}\), and \(\mathbf{J}\) is the matrix of ones.
Then the heatmap \(\mathbf{M}\in\mathbb{R}^{l\times\frac{2Mk}{l}}\) is constructed as follows:
\[\mathbf{M}\triangleq\texttt{reshape}\left(\texttt{concat}\left(\frac{1}{2}\mathbf{P_ {n}},\frac{1}{2}\mathbf{P_{n}}\right)\right), \tag{13}\]
where \(\texttt{concat}(\cdot)\) and \(\texttt{reshape}(\cdot)\) denote the concatenation and reshape operations, respectively. We concatenate two matrices \(\frac{1}{2}\mathbf{P_{n}}\) to get the shape of \(\mathbb{R}^{M\times 2k}\), and reshape it into \(\mathbf{M}\in\mathbb{R}^{l\times\frac{2Mk}{l}}\), which represents the equivalent noise term faced by each real encoder output element.
We expect that feeding the CSI in the form of a heatmap will simplify the training process as the model only needs to focus on the 'additive' noise power faced by each channel symbol. As we will show later, this design enables our model to be used under various antenna numbers without retraining.
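The following sketch constructs the heatmap of Eqns. (12)-(13). For the open-loop (CSIR) case it evaluates \(\sigma_{w}^{2}(\mathbf{H_{w}}\odot\mathbf{\bar{H}_{w}})\mathbf{J}\) exactly; for the closed-loop (CSIT) case it uses the per-subchannel noise power \(\sigma_{w}^{2}/s_{i}^{2}\), which is the entrywise magnitude-squared reading of the corresponding expression. It assumes \(l\) divides \(2Mk\), and the function name is ours.

```python
import numpy as np

def noise_heatmap(H, sigma_w2, k, l, csit=False):
    """Per-symbol effective noise power (Eqn. (12)), reshaped as in Eqn. (13)."""
    M = H.shape[0]
    if csit:
        s = np.linalg.svd(H, compute_uv=False)
        row_power = sigma_w2 / s ** 2            # noise power on subchannel i
    else:
        Hw = np.linalg.pinv(H)                   # (H^H H)^{-1} H^H, the ZF matrix
        row_power = sigma_w2 * np.sum(np.abs(Hw) ** 2, axis=1)
    Pn = np.repeat(row_power[:, None], k, axis=1)          # (M, k)
    half = 0.5 * Pn                              # split between real and imaginary parts
    return np.concatenate([half, half], axis=1).reshape(l, 2 * M * k // l)
```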
Fig. 2: The pipeline of the DeepJSCC-MIMO scheme, where the source image \(\mathbf{S}\) is encoded by a ViT-encoder and reconstructed by a ViT-decoder as \(\mathbf{\hat{S}}\). The precoding operation shown with dashed lines is performed if the CSI is available at the transmitter, and the CSI \((\mathbf{H},\sigma_{w}^{2})\) is fed to the encoder (if available) and the decoder in the form of a “heatmap” \(\mathbf{M}\) to facilitate the JSCC encoding/decoding process.
### _ViT encoding_
For the two scenarios, the input sequences to our ViT encoder \(\mathbf{S_{in}}\in\mathbb{R}^{l\times c_{in}}\) are given by:
\[\mathbf{S_{in}}=\begin{cases}\mathbf{S_{s}},&\text{MIMO with CSIR}\\ \texttt{concat}(\mathbf{S_{s}},\mathbf{M}),&\text{MIMO with CSIT},\end{cases} \tag{14}\]
where the dimensions of the vectors in these sequences are \(c_{in}=c\) and \(c_{in}=c+\frac{2Mk}{l}\) for the MIMO with CSIR and MIMO with CSIT scenarios, respectively.
The architecture of our encoder is shown in Fig. 3, which mainly consists of linear projection layers, a positional embedding layer, and several transformer layers.
**Linear projection and positional embedding: \(\mathbf{S_{in}}\)** firstly goes through a linear projection operation with parameters \(\mathbf{W_{0}}\in\mathbb{R}^{c_{in}\times d}\) and a positional embedding operation \(P_{e}(\cdot)\) to get the initial input \(\mathbf{F_{0}}\in\mathbb{R}^{l\times d}\) for the following transformer layers:
\[\mathbf{F_{0}}=\mathbf{S_{in}}\mathbf{W_{0}}+P_{e}(\mathbf{p}), \tag{15}\]
where \(d\) is the output dimension of the hidden projection layer. The positional embedding \(P_{e}(\cdot)\) represents the spatial arrangement of the sequence for better performance; two typical methods are dense layer-based position embedding (DPE) [34] and conditional position embedding (CPE) [35]. Specifically, \(P_{e}(\cdot)\) is implemented with a DPE that employs a dense layer to embed the index vector \(\mathbf{p}\) of each patch into a \(d\)-dimensional vector.
**Transformer Layer:** As shown in Fig. 3, the intermediate feature map \(\mathbf{F_{i}}\) is generated by the \(i\)-\(th\) transformer layer by a multi-head self-attention (MSA) block and a multi-layer perceptron (MLP) layer as:
\[\mathbf{F_{i}}=MSA(\mathbf{F_{i-1}})+MLP(MSA(\mathbf{F_{i-1}})), \tag{16}\]
where \(\mathbf{F_{i}}\in\mathbb{R}^{l\times d}\) is the output sequence of the \(i\)-th transformer layer; GeLU activation and layer normalization operations are applied before each MSA and MLP block.
Each MSA block consists of \(N_{s}\) self-attention (SA) modules with a residual skip, which can be formulated as:
\[MSA(\mathbf{F_{i}})=\mathbf{F_{i}}+[SA_{1}(\mathbf{F_{i}}),\cdots,SA_{N_{s}}(\mathbf{F_{i}})] \mathbf{W_{i}}, \tag{17}\]
where the output of all SA modules \(SA(\mathbf{F_{i}})\in\mathbb{R}^{l\times d_{s}}\) are concatenated for a linear projection \(\mathbf{W_{i}}\in\mathbb{R}^{d_{s}N_{s}\times d}\), \(d_{s}=d/N_{s}\) is the output dimension of each SA operation.
For each SA module, the operations are formulated as:
\[SA(\mathbf{F_{i-1}})=softmax\left(\frac{\mathbf{q}\mathbf{k}^{T}}{\sqrt{d}}\right)\mathbf{v}, \tag{18}\]
where the \(\mathbf{q},\mathbf{k},\mathbf{v}\in\mathbb{R}^{l\times d_{s}}\) are the query, key, and value vectors generated through three linear projection layers \(\mathbf{W_{q}},\mathbf{W_{k}},\mathbf{W_{v}}\in\mathbb{R}^{d\times d_{s}}\) as:
\[\mathbf{q}=\mathbf{F_{i-1}}\mathbf{W_{q}},\ \ \mathbf{k}=\mathbf{F_{i-1}}\mathbf{W_{k}},\ \ \mathbf{v}=\mathbf{F_{i-1}}\mathbf{W_{v}}. \tag{19}\]
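A single SA module of Eqns. (18)-(19) is only a few lines in PyTorch; the sketch below follows the paper's scaling by \(\sqrt{d}\) (rather than the more common \(\sqrt{d_{s}}\)), and the class name is ours.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """One SA module of Eqns. (18)-(19), mapping (l, d) -> (l, d_s)."""
    def __init__(self, d, d_s):
        super().__init__()
        self.Wq = nn.Linear(d, d_s, bias=False)   # W_q
        self.Wk = nn.Linear(d, d_s, bias=False)   # W_k
        self.Wv = nn.Linear(d, d_s, bias=False)   # W_v
        self.scale = d ** 0.5                     # the paper scales by sqrt(d)

    def forward(self, F_in):                      # F_in: (l, d)
        q, k, v = self.Wq(F_in), self.Wk(F_in), self.Wv(F_in)
        attn = torch.softmax(q @ k.T / self.scale, dim=-1)
        return attn @ v                           # (l, d_s)
```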
**Linear projection and power normalization:** After \(L_{t}\) transformer layers, we apply a linear projection \(\mathbf{W_{c}}\in\mathbb{R}^{d\times\frac{2Mk}{l}}\) to map the output of the transformer layers \(\mathbf{F_{L_{t}}}\) into the channel symbols as:
\[\mathbf{Z_{c}}=\mathbf{F_{L_{t}}}\mathbf{W_{c}}, \tag{20}\]
where \(\mathbf{Z_{c}}\in\mathbb{R}^{l\times\frac{2Mk}{l}}\) is then reshaped and normalized to satisfy the power constraints to form the complex channel input symbols \(\mathbf{X}\in\mathbb{C}^{M\times k}\).
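The reshape-and-normalize step can be sketched as follows; the exact packing of the \(2Mk\) real outputs into real and imaginary parts is an implementation choice not specified above, so the split used here (first half real, second half imaginary) is an assumption. The normalization enforces Eqn. (1) with \(P_{s}=1\) with equality.

```python
import numpy as np

def to_channel_symbols(Zc, M, k):
    """Reshape real encoder outputs (l, 2Mk/l) into X in C^{M x k} with unit power."""
    z = Zc.reshape(2, M, k)                      # assumed packing: [real; imaginary]
    X = z[0] + 1j * z[1]
    return X * np.sqrt(M * k) / np.linalg.norm(X)   # ||X||_F^2 = M k, i.e. P_s = 1
```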
### _ViT decoding_
**Channel equalization:** To simplify the decoding process, we perform an equalization operation on the channel output \(\mathbf{Y}\) to decouple the transmitted data of each antenna as \(\mathbf{X^{\prime}}\in\mathbb{C}^{M\times k}\) before feeding the signal into the ViT decoder. We apply different equalization methods for the MIMO systems with CSIR and with CSIT:
\[\mathbf{X^{\prime}}=\begin{cases}\mathbf{H_{w}}\mathbf{Y}+f_{res}(\mathbf{Y},\mathbf{H}),&\text{MIMO with CSIR}\\ \mathbf{\Sigma^{\dagger}}\mathbf{U^{H}}\mathbf{Y},&\text{MIMO with CSIT}.\end{cases} \tag{21}\]
Fig. 3: The architecture of the ViT-based encoder and decoder, where both encoder and decoder comprise linear projection layers, a positional embedding layer, and multiple transformer layers.
To perform MIMO equalization in an open-loop MIMO system with CSIR, we designed a DL-aided equalization method (as detailed in Fig. 4) to retrieve the transmitted signal \(\mathbf{X^{\prime}}\) from \(\mathbf{Y}\). Specifically, we add a residual block, denoted as \(f_{res}(\cdot)\), to learn the compensation after the zero-forcing (ZF) channel equalization operation based on the received signals and CSI, given as:
\[\mathbf{X^{\prime}} =\mathbf{H_{w}}\mathbf{Y}+f_{res}(\mathbf{Y},\mathbf{H}) \tag{22}\] \[=\mathbf{X}+\mathbf{H_{w}}\mathbf{W}+f_{res}(\mathbf{Y},\mathbf{H}), \tag{23}\]
where \(\mathbf{W^{\prime}}\triangleq\mathbf{H_{w}}\mathbf{W}\in\mathbb{C}^{M\times k}\) is the equivalent additive noise term from the channel. The residual operation \(f_{res}(\cdot)\) encompasses two linear layers, parameterized with \(\mathbf{W_{r1}}\in\mathbb{R}^{(2M^{2}+2M)\times 128}\) and \(\mathbf{W_{r2}}\in\mathbb{R}^{128\times 2M}\), and a PReLU activation function. We expect this additional residual block to learn to compensate for the residual error of ZF equalization, yielding better transmission quality when faced with different channel conditions and channel estimation errors, as will be detailed later.
Note that we can also apply other MIMO equalization methods. However, we have observed that a simple ZF estimation method, together with a learned compensation, is sufficiently effective in the DeepJSCC-MIMO scheme. Moreover, the ZF method converts the MIMO channels into equivalent parallel sub-channels, which suits our approach of applying the self-attention mechanism over different parallel sub-channels and source signals.
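A PyTorch sketch of the DL-aided equalizer of Eqn. (22) is given below. The layer sizes \((2M^{2}+2M)\times 128\) and \(128\times 2M\) and the PReLU activation follow the description above, while the exact ordering of the real/imaginary features fed to the residual network is our assumption.

```python
import torch
import torch.nn as nn

class ResidualEqualizer(nn.Module):
    """DL-aided equalization of Eqn. (22): ZF estimate plus a learned correction."""
    def __init__(self, M):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * M * M + 2 * M, 128),    # W_r1
            nn.PReLU(),
            nn.Linear(128, 2 * M),                # W_r2
        )

    def forward(self, Y, H):                      # Y: (M, k), H: (M, M), complex
        M, k = Y.shape
        X_zf = torch.linalg.solve(H, Y)           # H_w Y for a square invertible H
        h_feat = torch.cat([H.real.reshape(-1), H.imag.reshape(-1)])  # (2M^2,)
        y_feat = torch.cat([Y.real, Y.imag], dim=0)                   # (2M, k)
        inp = torch.cat([h_feat[:, None].expand(-1, k), y_feat]).T    # (k, 2M^2+2M)
        r = self.net(inp).T                       # (2M, k) real-valued correction
        return X_zf + torch.complex(r[:M], r[M:])
```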
To perform MIMO equalization in a closed-loop MIMO system with CSIT, we perform SVD decomposition and precoding as in Eqn. (9). At the receiver, we perform equalization as in Eqn. (10) to get:
\[\mathbf{X^{\prime}} =\mathbf{\Sigma^{\dagger}U^{H}Y} \tag{24}\] \[=\mathbf{X}+\mathbf{\Sigma^{\dagger}U^{H}W}, \tag{25}\]
which converts the MIMO channel model into a set of parallel subchannels.
Given the equalized signal \(\mathbf{X^{\prime}}\), which is reshaped into \(\mathbf{X^{\prime}_{s}}\in\mathbb{R}^{l\times\frac{2Mk}{l}}\), and the noise heatmap \(\mathbf{M}\), a ViT-based decoder \(f_{\mathbf{\phi}}\) is designed to recover the source image as \(\hat{\mathbf{S}}=f_{\mathbf{\phi}}(\mathbf{X^{\prime}_{s}},\mathbf{M})\). Our ViT-based decoder consists of a Siamese layer, a positional embedding layer, transformer layers, and a linear projection layer, elaborated upon below:
**Siamese layer and positional embedding:** We design a weight-shared Siamese layer, denoted as \(\text{Siam}(\cdot)\), consisting of several linear projection layers and GeLU activation functions. To form the input of the Siamese layer \(\mathbf{S_{d}}\), we concatenate \(\mathbf{X^{\prime}_{s}}\) and \(\mathbf{M}\) as:
\[\mathbf{S_{d}}=\texttt{concat}(\mathbf{X^{\prime}_{s}},\mathbf{M})\in\mathbb{R}^{l\times \frac{4Mk}{l}}. \tag{26}\]
Copies of \(\mathbf{S_{d}}\) multiplied by \(1\) and \(-1\) are fed into several linear projection layers and GeLU functions, as illustrated in Fig. 3. In doing so, our networks are tasked with handling the positive and the negative noise realizations through two parallel branches, and subsequently, the resultant features are aggregated by a linear layer to obtain the final output. We expect that these GeLU functions and linear projection layers can learn to truncate excessive noise realizations to bootstrap the performance, where similar designs can be found in [31, 32]. To introduce the positional information in the decoding process, we use the same positional embedding layer as in Eqn. (15) to get the output \(\mathbf{D_{0}}\in\mathbb{R}^{l\times d}\):
\[\mathbf{D_{0}}=\text{Siam}(\mathbf{S_{d}})+P_{e}(\mathbf{p}). \tag{27}\]
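The Siamese layer can be sketched as follows; the paper specifies weight sharing across the \(+\mathbf{S_{d}}\) and \(-\mathbf{S_{d}}\) branches, GeLU activations, and a final aggregating linear layer, while the number of projection layers per branch used here is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SiameseEmbed(nn.Module):
    """Weight-shared Siamese layer feeding the decoder (Eqn. (27))."""
    def __init__(self, c_in, d):
        super().__init__()
        # the same branch processes +S_d and -S_d (shared weights)
        self.branch = nn.Sequential(
            nn.Linear(c_in, d), nn.GELU(),
            nn.Linear(d, d), nn.GELU(),
        )
        self.merge = nn.Linear(2 * d, d)          # aggregates the two branches

    def forward(self, Sd):                        # Sd: (l, c_in)
        pos, neg = self.branch(Sd), self.branch(-Sd)
        return self.merge(torch.cat([pos, neg], dim=-1))
```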
**Transformer layer:** After the Siamese layer and positional embedding, \(\mathbf{D_{0}}\) is then passed through \(L_{t}\) transformer layers, where
\[\mathbf{D_{i}}=MSA(\mathbf{D_{i-1}})+MLP(MSA(\mathbf{D_{i-1}})), \tag{28}\]
where \(\mathbf{D_{i}}\in\mathbb{R}^{l\times d}\) is the output of the \(i\)-th transformer layer at the decoder, and the \(MSA\) and \(MLP\) blocks share the same structure as those in Eqn. (16).
**Linear Projection:** Given the output of the \(L_{t}\)-th transformer layer \(\mathbf{D_{L_{t}}}\), we apply a linear projection \(\mathbf{W_{out}}\in\mathbb{R}^{d\times c}\)
Fig. 4: Structure of DL-aided channel equalization method, where the input at the \(i\)-th channel use is the concatenation of all elements within \(\mathbf{H}\) and the channel output
and then reshape the output into a matrix of size \(\mathbb{R}^{h\times\mathrm{w}\times 3}\) to reconstruct the input image as:
\[\boldsymbol{\hat{S}}=\texttt{reshape}(\boldsymbol{D_{L_{t}}}\boldsymbol{W_{out}}). \tag{29}\]
### _Loss function_
The encoder and decoder are optimized jointly to maximize the PSNR between \(\boldsymbol{S}\) and \(\boldsymbol{\hat{S}}\). Specifically, we adopt the mean squared error as the loss function:
\[\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi})=\text{MSE}(\boldsymbol{S}, \boldsymbol{\hat{S}}), \tag{30}\]
where we train the model to search for the optimal parameter pair \((\boldsymbol{\theta}^{*},\boldsymbol{\phi}^{*})\) with a minimal \(\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi})\) as: \((\boldsymbol{\theta}^{*},\boldsymbol{\phi}^{*})=\arg\min_{\boldsymbol{\theta},\boldsymbol{\phi}}\mathbb{E}\big{[}\mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\phi})\big{]}\), where the expectation is taken over the image and channel datasets.
## IV Training and evaluation
This section presents a set of numerical experiments to evaluate the performance of our DeepJSCC-MIMO in various bandwidth and SNR scenarios. Unless stated otherwise, we consider a \(2\times 2\) MIMO system and the CIFAR10 dataset [36], which has \(50000\) images with dimensions \(3\times 32\times 32\) (color, height, width) as the training dataset and \(10000\) images as the test dataset. We consider both the practical separation-based codes and the theoretical bounds (assuming capacity-achieving channel codes) as benchmarks.
### _Experimental setup_
For the practical separation-based coding scheme, we use the image compression algorithm BPG as the source coding method and low-density parity-check (LDPC) codes for channel coding, denoted as the BPG-LDPC scheme. We consider BPG as the compression method due to its status as one of the most widely employed benchmark codecs. Notably, it demonstrates competitive performance within our evaluation dataset, particularly at low bit-per-pixel (bpp) values, when compared to other DL-based compression methods that entail higher complexities. For channel coding, we consider LDPC codes at rates (\(1/2,2/3,3/4,5/6\)) and various constellation sizes. In particular, we adopt the WiFi (IEEE 802.11n) LDPC code construction, featuring block lengths of 648, 1296, and 1944 bits, along with constellations of QPSK, 4-QAM, 16-QAM, and 64-QAM.
For the theoretical bound, we assume capacity-achieving channel codes for transmission, denoted as the BPG-Capacity scheme. For the MIMO system with CSIT, we apply the water-filling algorithm for power allocation among the parallel channels formed after Eqn. (7).
Our model was implemented in PyTorch with two GTX 3090Ti GPUs. We use a learning rate of \(5\times 10^{-5}\) and a batch size of \(128\) with an Adam optimizer. Models were trained until the performance on a validation set stopped improving. Considering the model complexity and the performance, we set \(p=8\), \(l=64\), \(c=48\) for image vectorization, and \(L_{t}=8\), \(d=256\), and \(N_{s}=8\) for each transformer layer of the ViT.
Each element of \(\boldsymbol{H}\) is sampled from a complex Gaussian process as \(H[i,j]\sim\mathcal{CN}(0,1)\), where we set \(\sigma_{h}^{2}=1\). To measure the channel quality, we define the SNR value \(\mu\) as:
\[\mu\triangleq 10\log_{10}\frac{\mathbb{E}_{\boldsymbol{H},\boldsymbol{X}}[ \|\boldsymbol{H}\boldsymbol{X}\|_{F}^{2}]}{\mathbb{E}_{\boldsymbol{W}}[\| \boldsymbol{W}\|_{F}^{2}]}\ (\text{dB})=10\log_{10}\frac{M}{\sigma_{w}^{2}}. \tag{31}\]
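The SNR convention of Eqn. (31) can be checked numerically: for unit-power inputs and \(\sigma_{h}^{2}=1\), the empirical ratio \(\|\mathbf{H}\mathbf{X}\|_{F}^{2}/\|\mathbf{W}\|_{F}^{2}\) concentrates around \(M/\sigma_{w}^{2}\). The short script below is illustrative; a single draw of \(\mathbf{H}\) fluctuates around the nominal value.

```python
import numpy as np

rng = np.random.default_rng(0)
M, k, mu_db = 2, 100_000, 10
sigma_w2 = M / 10 ** (mu_db / 10)                # invert Eqn. (31)
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
X = (rng.standard_normal((M, k)) + 1j * rng.standard_normal((M, k))) / np.sqrt(2)
W = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((M, k)) + 1j * rng.standard_normal((M, k)))
print(10 * np.log10(np.linalg.norm(H @ X) ** 2 / np.linalg.norm(W) ** 2))  # ~ 10 dB
```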
### _Open-Loop MIMO system with CSIR_
We first evaluate our DeepJSCC-MIMO scheme for the open-loop MIMO system with CSIR. Specifically, we compare our DeepJSCC-MIMO, trained at specific SNRs, with the BPG-Capacity and BPG-LDPC schemes at different SNR values and bandwidth ratios.
For the BPG-Capacity benchmark here, we assume the transmitter knows the capacity of each channel realization and can select the best compression rate without power allocation, which serves as an upper bound. As for the BPG-LDPC scheme, we apply the sphere decoding algorithm [37] to decode the signals.
#### IV-B1 General performance
Fig. 5(a) and Fig. 5(b) show the performance of the DeepJSCC-MIMO, BPG-Capacity, and BPG-LDPC schemes over the SNR values from \(0\)dB to \(20\)dB
Fig. 5: Performance comparisons between the proposed DeepJSCC-MIMO and BPG-Capacity over different SNR values and bandwidth ratios for the MIMO systems with CSIR.
when the bandwidth ratios are \(R=1/24\) and \(R=1/12\), respectively. We can observe that our DeepJSCC-MIMO generally outperforms the BPG-Capacity and BPG-LDPC schemes at all SNR values. We emphasize that the BPG-Capacity performance shown in these figures is not achievable by practical channel coding schemes. Thus, it serves as an upper bound on any separation-based scheme that employs BPG for compression of the input image. We observe that DeepJSCC-MIMO provides significant improvements (at least \(1.3\)dB and \(5.05\)dB) for \(R=1/24\), compared with the BPG-Capacity scheme and the BPG-LDPC scheme, respectively. For \(R=1/12\), our DeepJSCC-MIMO can still outperform the BPG-Capacity scheme (at least \(0.93\)dB) and the BPG-LDPC scheme (at least \(5.93\)dB) at all SNRs. We observe that the gap between DeepJSCC-MIMO and BPG-Capacity is larger in the lower bandwidth ratio case.
#### IV-B2 SNR-adaptability
Note that each point on the DeepJSCC-MIMO curve is obtained by training a separate encoder-decoder pair to be used exclusively for that training SNR. We also consider a random SNR training strategy to evaluate the SNR adaptability of the DeepJSCC-MIMO scheme, denoted as DeepJSCC-MIMO-universal. We train the model with random SNR values uniformly sampled from \([0,22]\) dB and test the well-trained model at different SNRs. The comparisons in Fig. 5 show that there is a slight performance degradation compared to training a separate DeepJSCC-MIMO encoder-decoder pair for each channel SNR (up to \(0.99\)dB for \(R=1/24\) and \(2.02\)dB for \(R=1/12\)); however, the DeepJSCC-MIMO-universal brings significant advantage in terms of training complexity and storage requirements. We can conclude that the DeepJSCC-MIMO-universal scheme learns to adapt to different SNRs from the random training strategy at the expense of a slight loss in PSNR, which tends to increase with SNR.
#### IV-B3 Ablation experiments over the equalization methods
To evaluate the effectiveness of the DL-aided equalization method
Fig. 6: Visual comparisons of images from the Kodak dataset transmitted over the open-loop MIMO channel at SNR \(5\)dB and \(12\)dB with \(R=1/24\). The first and second columns are the original image and the original patches of the bounding boxes.
Fig. 7: Performance comparison of DeepJSCC-MIMO and BPG-LDPC on the Kodak dataset for different bandwidth ratios at SNR=\(5\)dB.
with the residual block, we repeat the experiments in Fig. 5(b) with an MMSE receiver. The experimental results in Table I show that the residual block improves the performance (up to \(0.62\)dB). More interestingly, we observe that ZF combined with the residual block achieves the best performance, which shows that the residual block helps to compensate for the limitations of ZF and outperforms MMSE.
#### IV-B4 Different bandwidth ratios and datasets
To further evaluate the effect of the bandwidth ratio and the data generalizability of DeepJSCC-MIMO, we compare the DeepJSCC-MIMO and BPG-LDPC schemes at various bandwidth ratios and SNRs on the Kodak dataset. We train the models with randomly cropped \(128\times 128\) patches from the ImageNet dataset, and evaluate the trained models on the Kodak dataset.
In Fig. 6, we present examples of images transmitted by the DeepJSCC-MIMO and BPG-LDPC schemes at \(5\)dB and \(12\)dB with bandwidth ratio \(R=1/24\), where the SNR values and the PSNR of the entire image are provided at the bottom of each visualized patch. It can be observed that the images reconstructed by DeepJSCC-MIMO are qualitatively better, with more detailed high-frequency features. The comparisons here clearly show the advantages of DeepJSCC-MIMO on high-resolution datasets. We also conclude that training on a sufficiently large dataset (ImageNet) can allow our DeepJSCC-MIMO scheme to perform well on a never-seen dataset (Kodak) for a wide range of bandwidth ratios.
We also evaluate the model at various bandwidth ratios when SNR=\(5\)dB, as shown in Fig. 7. We want to emphasize that the sphere decoding algorithm used in BPG-LDPC is a much better MIMO detection method, compared with ZF employed by DeepJSCC-MIMO. As shown in Fig. 7, DeepJSCC-MIMO outperforms the BPG-LDPC in all bandwidth ratios, where the gap is even larger at low bandwidth ratios (at least \(2.95\) dB), which shows that DeepJSCC-MIMO outperforms BPG-LDPC despite using a weaker detection algorithm.
#### IV-B5 Robustness to channel estimation errors
We take the same approach as in [29] to evaluate the robustness of MIMO systems to channel estimation errors, where \(\mathbf{H}\) is imperfectly estimated as \(\mathbf{\hat{H}}\) at the receiver. Similar to the work in [29, 38, 39], let \(\mathbf{E}_{n}\triangleq\mathbf{H}-\mathbf{\hat{H}}\), where the entries of \(\mathbf{E}_{n}\) are zero-mean complex Gaussian with variance \(\sigma_{e}^{2}\). Here \(\sigma_{e}^{2}\) is the 'noise variance' term to capture the channel estimation quality, which can be appropriately selected depending on the channel conditions and channel estimation method.
We evaluate the DeepJSCC-MIMO-universal and BPG-LDPC schemes with different \(\sigma_{e}^{2}\in\{0,0.2,1\}\) values over SNR\({}_{\text{test}}\in[1,12]\)dB on the Kodak dataset with \(R=1/24\). For the DeepJSCC-MIMO-universal scheme, we both train and test the model with the imperfect estimate \(\mathbf{\hat{H}}\). The experimental results are shown in Fig. 9. As expected, for both schemes, the performance degrades as the channel estimation error increases. Specifically, for \(\sigma_{e}^{2}=0.2\), the performance loss of our DeepJSCC-MIMO scheme is up to \(0.3\)dB, compared with a performance loss of up to \(5.9\)dB for the BPG-LDPC scheme. For \(\sigma_{e}^{2}=1\), the BPG-LDPC scheme breaks down. However, our DeepJSCC-MIMO scheme still works with a performance loss of less than \(0.85\)dB. To visualize the performance of DeepJSCC-MIMO-universal in the presence of channel estimation errors, we present the reconstructed
Fig. 8: Visual comparisons of images reconstructed by DeepJSCC-MIMO-universal in the presence of channel estimation errors, where the model is trained on the ImageNet dataset and validated on the Kodak dataset at SNR=12dB and \(R=1/24\). The first and second columns are the source image and the image patch from the red bounding box of the original image. The third to sixth columns are the reconstructions of the same patches by DeepJSCC-MIMO universal with different channel estimation errors, respectively. The average PSNR metrics of the entire image are provided at the bottom of each visualized patch.
Fig. 9: Comparisons of DeepJSCC-MIMO-universal and BPG-LDPC schemes with channel estimation errors on the Kodak dataset.
images with different channel estimation errors in Fig. 8. We can observe that the increasing channel estimation error is reflected in the gradual loss of high-frequency details during transmission. We want to emphasize that even with a significant channel estimation error (\(\sigma_{e}^{2}=1\)), DeepJSCC-MIMO continues to function with only limited performance loss, compared with the significant performance loss of the traditional separation benchmark. We can conclude that the proposed scheme is more robust to channel estimation errors, as it can learn to compensate for them during transmission.
### _Closed-Loop MIMO system with CSIT_
In this section, we consider the closed-loop MIMO system with CSI at the transmitter as well as at the receiver. We evaluate the DeepJSCC-MIMO trained at specific SNRs with \(\textit{SNR}_{\textit{test}}\in[0,20]\)dB in three bandwidth ratios \(R=1/24\), \(R=1/12\) and \(R=1/6\).
#### IV-C1 General performance
As before, we first evaluate the DeepJSCC-MIMO model under the setup where the training SNR, denoted by \(\textit{SNR}_{\textit{train}}\) matches the test SNR, \(\textit{SNR}_{\textit{test}}\). In particular, we set \(\textit{SNR}_{\textit{train}}=\textit{SNR}_{\textit{test}}\in\{1,5,10,15,19\}\)dB and the bandwidth ratio \(R\in\{1/24,1/12,1/6\}\).
The comparison between DeepJSCC-MIMO and the BPG-Capacity benchmark is shown in Fig. 10. As can be seen, DeepJSCC-MIMO outperforms the separation-based benchmark in all SNR and bandwidth-ratio scenarios. Specifically, from Fig. 10(a) and Fig. 10(b), we can see that DeepJSCC-MIMO significantly outperforms the benchmark by at least \(1.98\)dB and \(1.78\)dB when \(R=1/24\) and \(R=1/12\), respectively. When \(R=1/6\), improvements of up to \(3.5\)dB can be observed. These significant improvements demonstrate the superiority of the DeepJSCC-MIMO scheme and its ability to extract and map image features to the available channels in an adaptive fashion.
#### IV-C2 Different bandwidth ratios
To further evaluate the impact of the bandwidth ratio \(R\) on the system performance, we compare the DeepJSCC-MIMO and BPG-Capacity schemes over a wide range of bandwidth ratios in Fig. 11, where the test SNR is set to \(10\)dB. We observe that DeepJSCC-MIMO outperforms the BPG-Capacity benchmark at all bandwidth ratios at \(10\)dB, with a gain of up to \(2.2\)dB in PSNR. The gap between the two is more significant for low \(R\) values. We would like to emphasize that the BPG-Capacity performance shown in these figures serves as an upper bound for separate source and channel coding schemes employing BPG compression, and the actual gap can be even larger, especially in the short block length regime, i.e., low \(R\). Thus, the results here illustrate the clear superiority of DeepJSCC-MIMO for the CIFAR dataset compared to separation-based alternatives.
#### IV-C3 SNR-adaptability
We also consider a random SNR training strategy to evaluate the channel adaptability of the DeepJSCC-MIMO scheme. We train the DeepJSCC-MIMO model with random SNR values uniformly sampled from \([0,22]\) dB and test the well-trained model at different test SNRs, denoted by DeepJSCC-MIMO-universal. The comparisons in Fig. 10 show that there is a slight performance degradation compared to training a separate DeepJSCC-MIMO encoder-decoder pair for each channel SNR (up to \(0.65\)dB, \(0.83\)dB, and \(2.1\)dB for \(R=1/24\), \(R=1/12\) and \(R=1/6\)); however, the DeepJSCC-MIMO-universal brings significant advantage in terms of training complexity and storage requirements.
We can observe more performance loss in the high SNR
Fig. 11: Performance of DeepJSCC-MIMO and BPG-LDPC scheme with respect to the bandwidth ratio at SNR=10dB.
Fig. 10: Performance of the proposed DeepJSCC-MIMO model compared with the BPG-Capacity benchmark at different channel SNR and bandwidth ratio scenarios for the closed-loop MIMO.
regime. However, DeepJSCC-MIMO-universal still significantly outperforms the BPG-Capacity benchmark. We conclude that the DeepJSCC-MIMO-universal scheme can adapt to SNR variations with a slight loss in PSNR, especially in the high-SNR regime.
#### IV-C4 SVD ablation study
To evaluate the effectiveness of the SVD strategy in our scheme, we train the models of DeepJSCC-MIMO-universal without SVD decomposition-based precoding in different bandwidth ratios, denoted by DeepJSCC-MIMO-universal w/o SVD. Specifically, we directly feed the channel heatmap \(\mathbf{M}\) into both transceivers and the performance is shown in Fig. 10. Compared with the DeepJSCC-MIMO-universal, we can observe that the performance of DeepJSCC-MIMO-universal w/o SVD has a gap of \(0.97\)dB, \(1.6\)dB and \(2.9\)dB for \(R=1/24\), \(R=1/12\), and \(R=1/6\), respectively. Although the networks can still learn to communicate over the MIMO channel, the SVD-based model-driven strategy can significantly simplify the training process and improve the performance, which illustrates the importance of exploiting domain knowledge in the design of data-driven communication technologies.
#### IV-C5 High-resolution datasets
To evaluate the model generalizability, we validate DeepJSCC-MIMO on the Kodak and CelebA [40] datasets: we train the DeepJSCC-MIMO and DeepJSCC-MIMO-universal schemes with randomly cropped \(128\times 128\) patches from the ImageNet dataset, and evaluate the well-trained models on these two datasets. The detailed visualization and the PSNR performance of the transmission, compared with BPG-Capacity, are shown in Fig. 12 and Fig. 13, respectively.
Similarly to the CIFAR dataset, from Fig. 13, we can observe that our DeepJSCC-MIMO can outperform BPG-Capacity at all test SNRs and generally provide higher gains at lower SNRs (up to \(3.03\)dB) in the CelebA dataset. There is a slight performance degradation of the DeepJSCC-MIMO-universal scheme (up to \(0.61\)dB) compared with DeepJSCC-MIMO trained at specific SNRs; however, the DeepJSCC-MIMO-universal can still outperform the BPG-Capacity scheme at all SNRs.
Fig. 12: Visual comparison of different schemes under various SNRs on the Kodak dataset with \(R=1/24\). The first and second columns are the original image and the original patch of the red bounding box, respectively. The third, fourth, and fifth columns are the performance of the BPG-Capacity, DeepJSCC-MIMO-universal, and DeepJSCC-MIMO schemes, respectively.
Fig. 13: Performance of DeepJSCC-MIMO and BPG-LDPC scheme in the CelebA dataset when \(R=1/48\).
We also visualize the results of the DeepJSCC-MIMO scheme on the Kodak dataset and compare them with those of the BPG-Capacity scheme in Fig. 12. We can observe that DeepJSCC-MIMO performs better in the low SNR regime (\(1\) and \(5\) dB), with more detailed high-frequency features. Comparing the performance of the DeepJSCC-MIMO-universal and DeepJSCC-MIMO schemes, we can observe that DeepJSCC-MIMO-universal achieves comparable performance at the expense of a slight increase in distortion.
Considering that the BPG-Capacity performance shown in these figures is an upper bound on the performance of any separation-based scheme that employs the BPG codec, the comparison here clearly shows the benefits of the DeepJSCC-MIMO and DeepJSCC-MIMO-universal schemes in high-resolution datasets. We can also conclude that a sufficiently large dataset (ImageNet) can allow our DeepJSCC-MIMO scheme to perform well on other never-seen datasets (Kodak/CelebA) for a wide range of channel conditions (SNRs).
#### IV-C6 Visualization of the channel heatmap and power allocation
We visualize the channel heatmap and the power allocation of the channel input matrix for a closed-loop MIMO system in Fig. 14. Within the channel heatmap, we visualize the noise power experienced by each channel symbol, with the color red indicating higher noise power, and hence, a poorer channel condition. In the power allocation map, the color red denotes a higher allocation of power to the respective channel symbols. We can observe that for each SNR value, DeepJSCC-MIMO generally assigns more power to antennas with better channel conditions. Specifically, in the low SNR regime, DeepJSCC-MIMO concentrates most of the power on a few select antennas with exceptional quality. In certain extreme cases, DeepJSCC-MIMO refrains from allocating power to the sub-channels characterized by the worst channel conditions. As the SNR increases, we note that DeepJSCC-MIMO demonstrates a tendency to distribute power more evenly across a greater number of antennas, instead of concentrating on a limited set of antennas with the best channel conditions.
#### IV-C7 Adaptability to the antenna numbers
In all the above experiments, we use \(M=2\) antennas at both the transmitter and the receiver. In this subsection, we investigate the generalizability of the DeepJSCC-MIMO architecture to different antenna numbers. In particular, we repeat the experiment of DeepJSCC-MIMO in Fig. 10(b) for different \(M\) values. The performance of our DeepJSCC-MIMO models with different antenna numbers \(M\in\{2,3,4\}\) and channel SNRs \(SNR_{test}\in\{1,5,10\}\)dB is shown in Fig. 15. We can observe that DeepJSCC-MIMO generally achieves better performance as the number of antennas increases, thanks to its more flexible power allocation.
We also consider a random antenna number training strategy with a maximal antenna number setting \(M_{max}\), denoted as adaptive-M. We train the DeepJSCC-MIMO model with \(M\) uniformly sampled from \(\{2,\ldots,M_{max}\}\) and test the well-trained model with different \(M\) values. Specifically, our encoder generates the channel symbols \(\mathbf{X}\in\mathbb{C}^{M_{max}\times k}\), and only transmits the first \(M_{i}\times k\) symbols for the \(M=M_{i}\) antenna scenario, while the receiver zero-pads the received signal to \(\mathbb{C}^{M_{max}\times k}\) for decoding. From experiments with \(M_{max}=4\), we observe in Fig. 15 that the adaptive-M strategy achieves similar performance compared with the models trained for specific \(M\) values: the performance loss is at most \(0.6\)dB. We can conclude that DeepJSCC-MIMO is highly flexible and adaptive, and can be applied to different numbers of antennas without retraining.
## V Conclusion
We introduced the first DeepJSCC-enabled MIMO communication system designed for wireless image transmission. The proposed DeepJSCC-MIMO model utilizes a vision transformer-based architecture to exploit contextual semantic features and channel conditions through a self-attention mechanism. The results demonstrate significant improvements in transmission quality across a wide range of SNRs and bandwidth scenarios, with enhanced robustness to channel estimation errors. Moreover, DeepJSCC-MIMO is a unified and adaptable design capable of accommodating varying channel conditions and different antenna numbers without the need for retraining.
Moving forward, there are several potential directions for future exploration based on the findings of this paper:
Fig. 14: Visualization of the channel heatmap and power allocation across the channel symbols, where antenna number is set as \(M=32\) and \(R=1/6\)
Fig. 15: Performance of DeepJSCC-MIMO with different antenna numbers.
* JSCC with space-time coding. To harness the full potential of the spatial and temporal dimensions in MIMO, the integration of DeepJSCC with space-time codes presents a promising avenue for the open-loop scenario [41]. This synergy can allow for even greater flexibility and performance enhancements, especially in the low-SNR regime.
* Variable length JSCC. Building upon the adaptability of ViT-based models to varying antenna numbers, future investigations can extend this concept to accommodate variable code lengths or even variable numbers of antennas, where the channel resources are judiciously exploited depending on the channel state as well as the input signal.
|

2302.02115 | On the convergence of an inertial proximal algorithm with a Tikhonov regularization term | This paper deals with an inertial proximal algorithm that contains a Tikhonov regularization term, in connection to the minimization problem of a convex lower semicontinuous function $f$. We show that for appropriate Tikhonov regularization parameters the value of the objective function along the sequences generated by our algorithm converges fast (with arbitrary rate) to the global minimum of the objective function, and the generated sequences converge weakly to a minimizer of the objective function. We also obtain the fast convergence of the discrete velocities towards zero and some sum estimates. Nevertheless, our main goal is to obtain strong convergence results and also pointwise and sum estimates for the same constellation of the parameters involved. Our analysis reveals that the extrapolation coefficient and the Tikhonov regularization coefficient are strongly correlated and there is a critical setting of the parameters that separates the cases when strong, respectively weak, convergence results can be obtained. | Szilárd Csaba László | 2023-02-04T07:00:41Z | http://arxiv.org/abs/2302.02115v4 |

# On the convergence of an inertial proximal algorithm with a Tikhonov regularization term
###### Abstract.
This paper deals with an inertial proximal algorithm that contains a Tikhonov regularization term, in connection to the minimization problem of a convex lower semicontinuous function \(f\). We show that for appropriate Tikhonov regularization parameters the value of the objective function along the sequences generated by our algorithm converges fast (with arbitrary rate) to the global minimum of the objective function, and the generated sequences converge weakly to a minimizer of the objective function. We also obtain the fast convergence of the discrete velocities towards zero and some sum estimates. Nevertheless, our main goal is to obtain strong convergence results and also pointwise and sum estimates for the same constellation of the parameters involved. Our analysis reveals that the extrapolation coefficient and the Tikhonov regularization coefficient are strongly correlated and there is a critical setting of the parameters that separates the cases when strong, respectively weak, convergence results can be obtained.
**Key Words.** convex optimization, inertial proximal algorithm, Tikhonov regularization, strong convergence, convergence rate
**AMS subject classification.** 46N10, 65K05, 65K10, 90C25, 90C30
## 1. Introduction
Consider the minimization problem
\[\left(\text{P}\right)\ \inf_{x\in\mathcal{H}}f(x),\]
where \(\mathcal{H}\) is a Hilbert space endowed with the scalar product \(\left\langle\cdot,\cdot\right\rangle\) and norm \(\|\cdot\|\) and \(f:\mathcal{H}\longrightarrow\overline{\mathbb{R}}\) is a convex proper lower semicontinuous function whose solution set \(\operatorname{argmin}f\) is nonempty. We associate to (P) the following inertial proximal algorithm: for all \(k\geq 1\)
\[\begin{cases}x_{0},x_{1}\in\mathcal{H}\\ y_{k}=x_{k}+\left(1-\frac{\alpha}{k^{q}}\right)(x_{k}-x_{k-1})\\ x_{k+1}=\operatorname{prox}_{\lambda_{k}f}\left(y_{k}-\frac{c}{k^{p}}x_{k} \right),\end{cases} \tag{1}\]
where \(\alpha\), \(q\), \(c\), \(p>0\) and \((\lambda_{k})\) is a sequence of positive real numbers. Further, \(\operatorname{prox}_{sf}:\mathcal{H}\rightarrow\mathcal{H},\quad\operatorname {prox}_{sf}(x)=\operatorname{argmin}_{y\in\mathcal{H}}\left(f(y)+\frac{1}{2s} \|y-x\|^{2}\right),\) denotes the proximal point operator of the convex function \(sf\). By rewriting \(\operatorname{prox}_{sf}\) as the resolvent operator of the subdifferential of the convex function \(sf\), that is \(\operatorname{prox}_{sf}(x)=(I+s\partial f)^{-1}(x),\) algorithm (1) can be reformulated as
\[x_{k+1}\in\alpha_{k}(x_{k}-x_{k-1})-\lambda_{k}\partial f(x_{k+1})+\left(1-c_{ k}\right)x_{k},\text{ for all }k\geq 1, \tag{2}\]
where we denote \(\alpha_{k}=1-\frac{\alpha}{k^{q}}\) and \(c_{k}=\frac{c}{k^{p}}.\) For better insight into inertial proximal algorithms we refer to [13, 14, 16, 19, 24, 26, 27, 28].
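To make the iteration (1) concrete, the following is a minimal numerical sketch for the model objective \(f(x)=\|x\|_{1}\), whose proximal map is the componentwise soft-thresholding operator; the parameter values below are illustrative choices of ours, not values prescribed by the analysis.

```python
import numpy as np

def prox_l1(x, s):
    """Proximal map of s * ||.||_1, i.e., componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - s, 0.0)

def inertial_prox_tikhonov(x0, alpha, q, c, p, lam, delta, n_iter=200):
    """Run algorithm (1) for f = ||.||_1 with stepsize lambda_k = lam * k**delta."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, n_iter + 1):
        y = x + (1.0 - alpha / k**q) * (x - x_prev)                   # inertial extrapolation
        x_prev, x = x, prox_l1(y - (c / k**p) * x, lam * k**delta)    # proximal-Tikhonov step
    return x

# Illustrative parameters with 0 < q < 1 and q + 1 < p <= 2.
x_final = inertial_prox_tikhonov(x0=np.array([5.0, -3.0]),
                                 alpha=2.0, q=0.5, c=1.0, p=1.8,
                                 lam=1.0, delta=0.0)
print(x_final)  # approaches the unique minimizer x = 0 of ||.||_1
```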
Note that \(c_{k}x_{k}\) is a Tikhonov regularization term, which may assure the strong convergence of a generated sequence to the minimizer of minimal norm of the objective function \(f\), (see [1, 4, 6, 9, 10, 11, 15, 18, 20, 23, 25, 31, 32]).
We emphasize that in case \(q=1\) algorithm (1) is the Tikhonov regularized version of the inertial proximal algorithm (IPA) studied in [7] (see also [3, 5, 12]). Further, for constant \(\lambda_{k}\) algorithm (1) is an implicit discretization of the dynamical system studied in [25], that is
\[\begin{cases}\ddot{x}(t)+\frac{\alpha}{t^{q}}\dot{x}(t)+\nabla f\left(x(t) \right)+\frac{c}{t^{p}}x(t)=0\\ x(t_{0})=u_{0},\,\dot{x}(t_{0})=v_{0},\end{cases} \tag{3}\]
where \(t_{0}>0\), \((u_{0},v_{0})\in\mathcal{H}\times\mathcal{H}\). However, in our algorithm we do not assume that the objective function is smooth; moreover, throughout this paper we assume that the stepsize parameter has the form \(\lambda_{k}=\lambda k^{\delta}\), \(\lambda>0\), \(\delta\in\mathbb{R}\).
The strong convergence of the trajectories of second order continuous dynamical systems with a Tikhonov regularization term to a minimizer of minimum norm of a smooth convex function was the subject of many recent investigations (see [1, 2, 6, 8, 11, 15, 17, 25]). These dynamical systems, via explicit/implicit discretizations, lead to inertial algorithms with a Tikhonov regularization term. However, concerning the discrete case, there are only a few results in the literature (see [11]).
As expected, the most important features of a trajectory generated by the dynamical system (3) are inherited by the sequences generated by algorithm (1). This underlines again the importance of the study of the continuous case (see [7, 20]) when one ought to design an optimization algorithm with desirable properties.
Our analysis reveals that the inertial coefficient \(\alpha_{k}\) and the Tikhonov regularization coefficient \(c_{k}\) are strongly correlated. More precisely, if \(q+1<p\) (and \(\delta\geq 0\)), then the sequence \((x_{k})\) generated by algorithm (1) converges weakly to a minimizer of our objective function \(f\) (see Theorem 2.4). Further, fast convergence of the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\) and of the discrete velocity \(\|x_{k}-x_{k-1}\|\) to zero is assured (see Theorem 2.1). Our results for this setting of the parameters are in concordance with the results obtained by Guler in [22] (see also [21]), but our parameters have a much simpler form. Further, in case \(q=1\) we reobtain the results from [7]. However, we show that the best choice of \(q\) in the inertial parameter \(\alpha_{k}\) is not \(q=1\), but rather \(0<q<1.\) Indeed, in the case \(0<q<1\), according to Theorem 2.1, an arbitrarily large convergence rate for the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\) can be obtained for a fixed inertial parameter \(\alpha>0.\) Note that this result does not hold in case \(q=1\) (see [7] or Theorem 2.1), since in this case the inertial parameter \(\alpha_{k}\) and the stepsize parameter \(\lambda_{k}\) are correlated. In other words, in order to obtain a desired convergence rate, beside the stepsize one must also change the inertial parameter. Since the possibility of obtaining arbitrarily large convergence rates for a fixed inertial parameter is an important and new feature of our algorithm, let us give some details in what follows. On the one hand, in [7] and also in Theorem 2.1, for the constellation \(q=1\), \(\alpha>3\) and \(\lambda_{k}=\mathcal{O}(k^{\delta})\) as \(k\to+\infty,\) for \(\delta<\alpha-3\), the rate \(f(x_{k})-\min_{\mathcal{H}}f=o(k^{-2-\delta})\) as \(k\to+\infty\) is obtained. This means that for a fixed \(\alpha\) one can obtain at most \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{1-\alpha})\) as \(k\to+\infty.\) On the other hand, according to Theorem 2.1, for \(0<q<1\) and \(q+1<p\leq 2\) one has \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-q-\delta-1})\) as \(k\to+\infty,\) whose exponent can indeed be arbitrarily large. For instance, with \(q=\frac{1}{2}\) and \(\delta=10\) the rate \(\mathcal{O}(k^{-11.5})\) is obtained for any fixed \(\alpha>0\), whereas for \(q=1\) the same \(\delta\) would require \(\alpha>13\). Moreover, our proof also works when \(c=0,\) that is, when there is no Tikhonov regularization, and of course in that case the assumption \(q+1<p\leq 2\) can be dropped.
Further, if \(1<p<q+1\) (and \(\delta\leq 0\)), then the strong convergence result \(\liminf_{k\to+\infty}\|x_{k}-x^{*}\|=0\), where \(x^{*}\) is the minimum norm minimizer of the objective function \(f\), is obtained (see Theorem 3.2). According to Theorem 3.1, also in this case fast convergence of the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\) and discrete velocity \(\|x_{k}-x_{k-1}\|\) to zero is assured. To the best of our knowledge, similar results were obtained only in [11], for the case \(q=1\) and \(p=2\), which is not covered by our analysis. However, as we mentioned before, \(q=1\) is not an optimal choice for our algorithm, since in case \(q<1\) improved convergence rates can be obtained.
We emphasize that the greatest strength of our paper is that for the case \(0<q<1\), \(1<p<q+1\) and \(\lambda_{k}\equiv 1\) we are able to obtain full strong convergence to the minimal norm solution \(x^{*}\), that is, \(\lim_{k\to+\infty}\|x_{k}-x^{*}\|=0\). This result can be considered somewhat natural, since \(\lambda_{k}\equiv 1\) is the case when algorithm (1) can be obtained from the dynamical system (3) via natural implicit discretization; however, in order to obtain this result some new techniques had to be developed. Also in this case we were able to obtain fast convergence of the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\) and discrete velocity \(\|x_{k}-x_{k-1}\|\) to zero, and even some sum estimates, which makes this the first result of its type in the literature (see Theorem 3.3).
Concerning the case \(p=q+1\), we underline that this case is critical in the sense that neither weak nor strong convergence of the generated sequences can be obtained. Nevertheless, convergence rates for the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\) and discrete velocity \(\|x_{k}-x_{k-1}\|\) can be provided (see Theorem 2.1).
The main contributions of the paper to the state of the art can be summarized in the following result.
**Theorem 1.1**.: _Assume that \(0<q<1\) and let \((x_{k})\) be a sequence generated by (1). For every \(k\geq 2\) let us denote \(u_{k}\) the element from \(\partial f(x_{k})\) that satisfies (2) with equality, that is, \(x_{k}=\alpha_{k-1}(x_{k-1}-x_{k-2})-\lambda_{k-1}u_{k}+(1-c_{k-1})\,x_{k-1}.\)_
* _If_ \(q+1<p\leq 2,\,\delta\geq 0\) _and for_ \(p=2\) _one has_ \(c>q(1-q)\)_, then_ \((x_{n})\) _converges weakly to a minimizer of_ \(f\)_. Further,_ \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-q-\delta-1})\)_,_ \(\|x_{k}-x_{k-1}\|=\mathcal{O}(k^{-\frac{q+1}{2}})\) _and_ \(\|u_{k}\|=o(k^{-\frac{q+1}{2}-\delta})\) _as_ \(k\to+\infty\)_. Moreover,_ \(\sum_{k=1}^{+\infty}k^{q+\delta}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty,\)__\(\sum_{k=1}^{+\infty}k\|x_{k}-x_{k-1}\|^{2}<+\infty\) _and_ \(\sum_{k=2}^{+\infty}k^{q+2\delta+1}\|u_{k}\|^{2}<+\infty.\)__
* _If_ \(p=q+1\) _and_ \(\delta\geq 0\)_, then for all_ \(s\in\left]\frac{1}{2},\frac{q+1}{2}\right[\) _one has_ \(f(x_{k})-\min_{\mathcal{H}}f=o(k^{-2s-\delta})\)_,_ \(\|x_{k}-x_{k-1}\|=o(k^{-s})\) _and_ \(\|u_{k}\|=o(k^{-s-\delta})\) _as_ \(k\to+\infty\)_. Further,_ \(\sum_{k=1}^{+\infty}k^{2s+\delta-1}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty,\)__ \(\sum_{k=1}^{+\infty}k^{2s-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\) _and_ \(\sum_{k=2}^{+\infty}k^{2s+2\delta}\|u_{k}\|^{2}<+\infty.\)__
* _If_ \(1<p<q+1\) _and_ \(p-q-1<\delta<0\) _or_ \(\delta=0\) _and_ \(\lambda\in]0,1[\)_, then_ \(\liminf_{k\to+\infty}\|x_{k}-x^{*}\|=0\)_, where_ \(x^{*}\) _is the minimal norm element from_ \(\operatorname{argmin}f\)_. Further,_ \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-p-\delta})\)_,_ \(\|x_{k}-x_{k-1}\|=\mathcal{O}(k^{-\frac{p}{2}})\) _and_ \(\|u_{k}\|=\mathcal{O}(k^{-\frac{p}{2}-\delta})\) _as_ \(k\to+\infty.\) _Additionally, for all_ \(s\in\left]\frac{1}{2},\frac{p}{2}\right[\) _one has_ \(\sum_{k=1}^{+\infty}k^{2s+\delta-1}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty,\)__\(\sum_{k=1}^{+\infty}k^{2s-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\) _and_ \(\sum_{k=2}^{+\infty}k^{2s+2\delta}\|u_{k}\|^{2}<+\infty.\)__
* _If_ \(1<p<q+1\) _and_ \(\delta=0,\lambda=1\)_, then_ \(\lim_{k\to+\infty}\|x_{k}-x^{*}\|=0\)_, where_ \(x^{*}\) _is the minimal norm element from_ \(\operatorname{argmin}f\)_. Further, if_ \(p\leq\frac{3q+1}{2}\) _then_ \(\|x_{k}-x_{k-1}\|^{2},\)__\(\|u_{k}\|^{2}\in\mathcal{O}(k^{-q-1})\) _as_ \(k\to+\infty\) _and_ \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-p})\) _as_ \(k\to+\infty.\)__
_If \(\frac{3q+1}{2}<p<q+1\), then \(\|x_{k}-x_{k-1}\|^{2},\)\(\|u_{k}\|^{2}\in\mathcal{O}(k^{2p-4q-2})\) as \(k\to+\infty.\) Additionally, if \(2q<p<\frac{4q+2}{3}\), then \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-p})\) as \(k\to+\infty\) and if \(\frac{4q+2}{3}\leq p<q+1\), then \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{2p-4q-2})\) as \(k\to+\infty.\) Moreover, if \(2q\leq p\) the following estimates hold: \(\sum_{k=1}^{+\infty}k^{2q}\|u_{k}\|^{2}<+\infty\) and \(\sum_{k=1}^{+\infty}k^{q}\|x_{k+1}-x_{k}\|^{2}<+\infty.\)_
The paper is organized as follows. In the next section we treat the case \(q+1\leq p\) in order to obtain fast convergence rates for the function values along the sequence generated by algorithm (1), but also for the discrete velocity and subgradient. Further, if \(q+1<p\) then the weak convergence of the generated sequences to a minimizer of the objective function is also obtained. In Section 3 we deal with the case \(1<p\leq q+1\). We obtain fast convergence results concerning the potential energy, discrete velocity and subgradient. Moreover, if \(1<p<q+1\), strong convergence results for the sequence generated by (1) to the minimum norm minimizer of the objective function are shown. Further, in case the stepsize parameter \(\lambda_{k}\equiv 1\), we obtain full strong convergence of the sequences generated by algorithm (1) and improved convergence rates for the function values and velocity. Finally, we conclude our paper by outlining some possible directions for further research.
## 2. Convergence rates and weak convergence for the case \(q+1\leq p\)
In this section we analyze the weak convergence properties of the sequence generated by the algorithm (1). We obtain fast convergence to zero of the discrete velocity and subgradient. We also show that the function values along the generated sequences converge to the global minimum of the objective function \(f.\) Even more, the variable stepsize parameter \(\lambda_{k}=\lambda k^{\delta},\) \(\lambda,\delta>0\), allows us to obtain an estimate of order \(\mathcal{O}(k^{-q-\delta-1})\) for the decay of \(f(x_{n})-\min_{\mathcal{H}}f\), whose exponent can be arbitrarily large, depending on the parameter \(\delta.\)
### Convergence rates
Concerning fast convergence of the function values, discrete velocity and subgradient, we have the following result.
**Theorem 2.1**.: _Assume that \(0<q\leq 1\), \(q+1\leq p\), \(\lambda_{k}=\lambda k^{\delta},\)\(\lambda>0,\)\(\delta\geq 0\) and let \((x_{k})\) be a sequence generated by (1). For every \(k\geq 2\) let us denote \(u_{k}\) the element from \(\partial f(x_{k})\) that satisfies (2) with equality, i.e.,_
\[x_{k}=\alpha_{k-1}(x_{k-1}-x_{k-2})-\lambda_{k-1}u_{k}+\left(1-c_{k-1}\right)x _{k-1}.\]
_Then the following results are valid._
* _If_ \(\alpha>0,\) \(\delta\geq 0,\) \(0<q<1,\) \(q+1<p\leq 2\) _and for_ \(p=2\) _one has_ \(c>q(1-q)\)_, or_ \(\alpha>3,\) \(0\leq\delta<\alpha-3,\) \(q=1\) _and_ \(p>2\)_, then_ \(f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-q-\delta-1})\)_, \(\|x_{k}-x_{k-1}\|=\mathcal{O}(k^{-\frac{q+1}{2}})\) _and_ \(\|u_{k}\|=o(k^{-\frac{q+1}{2}-\delta})\) _as_ \(k\to+\infty.\)
_Further,_
\[\sum_{k=1}^{+\infty}k^{q+\delta}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty,\,\sum_{k =1}^{+\infty}k\|x_{k}-x_{k-1}\|^{2}<+\infty\text{ and }\sum_{k=2}^{+\infty}k^{q+2\delta+1}\|u_{k}\|^{2}<+\infty.\]
* _If_ \(\alpha>0,\) \(\delta\geq 0,\) \(0<q<1\) _and_ \(q+1\leq p\)_, or_ \(\alpha>3,\) \(0\leq\delta<\alpha-3,\) \(q=1\) _and_ \(p\geq 2\)_, then for all_ \(s\in\left]\frac{1}{2},\frac{q+1}{2}\right[\) _one has_ \(f(x_{k})-\min_{\mathcal{H}}f=o(k^{-2s-\delta})\)_, \(\|x_{k}-x_{k-1}\|=o(k^{-s})\) _and_ \(\|u_{k}\|=o(k^{-s-\delta})\) _as_ \(k\to+\infty.\)
_Further,_
\[\sum_{k=1}^{+\infty}k^{2s+\delta-1}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty,\,\sum_{ k=1}^{+\infty}k^{2s-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\text{ and }\sum_{k=2}^{+\infty}k^{2s+2\delta}\|u_{k}\|^{2}<+\infty.\]
Proof.: Given \(x^{*}\in\operatorname{argmin}f,\) set \(f^{*}=f(x^{*})=\min_{\mathcal{H}}f.\)
For \(k\geq 2,\) consider the discrete energy
\[E_{k}= \mu_{k-1}(f(x_{k-1})-f^{*})+\|a_{k-1}(x_{k-1}-x^{*})+b_{k-1}(x_{k} -x_{k-1}+\lambda_{k-1}u_{k})\|^{2}\] \[+\nu_{k-1}\|x_{k-1}-x^{*}\|^{2}+\sigma_{k-1}\|x_{k-1}\|^{2}, \tag{4}\]
where \(a_{k}=ak^{r-1},\)\(b_{k}=k^{r},\)\(r\in\left(\frac{1}{2},\frac{q+1}{2}\right],\)\(2r+\delta<a,\)\(\mu_{k}:=(2b_{k}^{2}-2a_{k}b_{k})\lambda_{k},\)\(\nu_{k}:=-\alpha_{k+1}a_{k+1}b_{k+1}-a_{k}^{2}+a_{k}b_{k}\) and \(\sigma_{k}=\alpha_{k+1}b_{k+1}^{2}c_{k+1},\) for all \(k\geq 1.\)
If \(q=1,\) hence \(\alpha>3,\) we also assume that \(a<\alpha-1,\) hence \(\delta<\alpha-1-2r.\)
Let us develop \(E_{k}\). We first show that there exists \(k_{0}\geq 1\) such that the coefficients \(\mu_{k},\nu_{k}\) and \(\sigma_{k}\) are nonnegative for all \(k\geq k_{0}.\)
According to the form of \((a_{k})\) and \((b_{k}),\) there exists \(k_{1}\geq 1\) such that \(b_{k}\geq a_{k}\) for all \(k\geq k_{1},\) hence
\[\mu_{k}=(2b_{k}^{2}-2a_{k}b_{k})\lambda_{k}\geq 0\text{ for all }k\geq k_{1}\text{ and }\mu_{k}=\mathcal{O}(k^{2r+\delta})\text{ as }k\to+\infty. \tag{5}\]
Obviously \(\nu_{k}=-\alpha_{k+1}a_{k+1}b_{k+1}-a_{k}^{2}+a_{k}b_{k}=-a(k+1)^{2r-1}+\alpha a (k+1)^{2r-1-q}-a^{2}k^{2r-2}+ak^{2r-1}\) and we show that \(\phi(x,r)=-a(x+1)^{2r-1}+\alpha a(x+1)^{2r-1-q}-a^{2}x^{2r-2}+ax^{2r-1}\geq 0\) for \(x\) big enough and that \(\phi(x,r)=\mathcal{O}(x^{2r-1-q})\) as \(x\to+\infty.\) Indeed, one has
\[\lim_{x\to+\infty}\frac{\phi(x,r)}{x^{2r-1-q}} =\lim_{x\to+\infty}\frac{ax^{2r-1}-a(x+1)^{2r-1}+\alpha a(x+1)^{2 r-1-q}-a^{2}x^{2r-2}}{x^{2r-1-q}}\] \[=\lim_{x\to+\infty}\left(\frac{a-a\left(1+\frac{1}{x}\right)^{2r -1}}{x^{-q}}+\alpha a\left(1+\frac{1}{x}\right)^{2r-1-q}-a^{2}x^{q-1}\right)\] \[=\lim_{x\to+\infty}\left(-\frac{a(2r-1)}{q}\frac{\left(1+\frac{ 1}{x}\right)^{2r-2}}{x^{1-q}}+\alpha a-a^{2}x^{q-1}\right)=L,\]
where \(L=\alpha a>0\) if \(q<1\) and \(L=-a(2r-1)+\alpha a-a^{2}\) if \(q=1.\) However, if \(q=1\) one has \(a<\alpha-1\) and consequently \(0<\alpha+1-a-2r\) hence \(L>0\) also in this case.
Hence, there exists \(k_{2}\geq 1\) such that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\nu_{k}=-\alpha_{k+1}a_{k+1}b_{k+1}-a_{k}^{2}+a_{k}b_{k}\geq 0,\text{ for all }k\geq k_{2}\text{ and }\nu_{k}=\mathcal{O}(k^{2r-1-q})\text{ as }k\to+\infty. \tag{6}\]
Finally, it is obvious that there exists \(k_{3}\geq 1\) such that \(\alpha_{k+1}b_{k+1}^{2}c_{k+1}=c\left(1-\frac{\alpha}{(k+1)^{q}}\right)(k+1)^ {2r-p}\geq 0\) for all \(k\geq k_{3},\) hence for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\sigma_{k}=\alpha_{k+1}b_{k+1}^{2}c_{k+1}\geq 0\text{ for all }k\geq k_{3}\text{ and }\sigma_{k}=\mathcal{O}(k^{2r-p})\text{ as }k\to+\infty. \tag{7}\]
Now, take \(k_{0}=\max(k_{1},k_{2},k_{3})\) and one has \(\mu_{k},\nu_{k},\sigma_{k}\geq 0\) for all \(k\geq k_{0}.\)
For simplicity let us denote \(v_{k}=\|a_{k-1}(x_{k-1}-x^{*})+b_{k-1}(x_{k}-x_{k-1}+\lambda_{k-1}u_{k})\|^{2}.\) Then,
\[v_{k}= a_{k-1}^{2}\|x_{k-1}-x^{*}\|^{2}+b_{k-1}^{2}\|x_{k}-x_{k-1}\|^{2}+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+2a_{k-1}b_{k-1}\langle x_{k}-x_{k-1},x_{k-1}-x^{*}\rangle\] \[+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},x_{k-1}-x^{*}\rangle+2b_{k-1}^{2}\lambda_{k-1}\langle u_{k},x_{k}-x_{k-1}\rangle. \tag{8}\]
Further
\[2a_{k-1}b_{k-1}\langle x_{k}-x_{k-1},x_{k-1}-x^{*}\rangle=a_{k-1}b_{k-1}(\|x_{k} -x^{*}\|^{2}-\|x_{k}-x_{k-1}\|^{2}-\|x_{k-1}-x^{*}\|^{2})\]
and
\[2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},x_{k-1}-x^{*}\rangle=2a_{k-1}b_{k-1} \lambda_{k-1}\langle u_{k},x_{k}-x^{*}\rangle-2a_{k-1}b_{k-1}\lambda_{k-1} \langle u_{k},x_{k}-x_{k-1}\rangle.\]
Consequently, (8) becomes
\[v_{k}= a_{k-1}b_{k-1}\|x_{k}-x^{*}\|^{2}+(a_{k-1}^{2}-a_{k-1}b_{k-1})\|x_{k -1}-x^{*}\|^{2}+(b_{k-1}^{2}-a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+2a_{k-1}b_{k-1}\lambda_{k -1}\langle u_{k},x_{k}-x^{*}\rangle+(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1} \langle u_{k},x_{k}-x_{k-1}\rangle. \tag{9}\]
Let us proceed similarly with \(v_{k+1}\). First notice that from (2) we have
\[v_{k+1}=\|a_{k}(x_{k}-x^{*})+b_{k}(\alpha_{k}(x_{k}-x_{k-1})-c_{k}x_{k})\|^{2}.\]
Therefore, after development we get
\[v_{k+1}= a_{k}^{2}\|x_{k}-x^{*}\|^{2}+\alpha_{k}^{2}b_{k}^{2}\|x_{k}-x_{k-1}\| ^{2}+b_{k}^{2}c_{k}^{2}\|x_{k}\|^{2}+2\alpha_{k}a_{k}b_{k}\langle x_{k}-x_{k-1},x_{k}-x^{*}\rangle\] \[-2\alpha_{k}b_{k}^{2}c_{k}\langle x_{k}-x_{k-1},x_{k}\rangle-2a_{ k}b_{k}c_{k}\langle x_{k},x_{k}-x^{*}\rangle. \tag{10}\]
Further,
\[2\alpha_{k}a_{k}b_{k}\langle x_{k}-x_{k-1},x_{k}-x^{*}\rangle=-\alpha_{k}a_{k}b_{k}(\|x_{k-1}-x^{*}\|^{2}-\|x_{k}-x_{k-1}\|^{2}-\|x_{k}-x^{*}\|^{2})\] \[-2\alpha_{k}b_{k}^{2}c_{k}\langle x_{k}-x_{k-1},x_{k}\rangle=\alpha_{k}b_{k}^{2}c_{k}(\|x_{k-1}\|^{2}-\|x_{k}-x_{k-1}\|^{2}-\|x_{k}\|^{2})\] \[-2a_{k}b_{k}c_{k}\langle x_{k},x_{k}-x^{*}\rangle=a_{k}b_{k}c_{k}(\|x^{*}\|^{2}-\|x_{k}-x^{*}\|^{2}-\|x_{k}\|^{2}).\]
Hence, (10) yields
\[v_{k+1}=(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k})\|x_{k}- x^{*}\|^{2}-\alpha_{k}a_{k}b_{k}\|x_{k-1}-x^{*}\|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2 }c_{k})\|x_{k}-x_{k-1}\|^{2}+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k }b_{k}c_{k})\|x_{k}\|^{2}\] \[+\alpha_{k}b_{k}^{2}c_{k}\|x_{k-1}\|^{2}+a_{k}b_{k}c_{k}\|x^{*}\|^ {2}. \tag{11}\]
Hence, (11) and (9) lead to
\[v_{k+1}-v_{k}= (a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1})\| x_{k}-x^{*}\|^{2}\] \[+(-\alpha_{k}a_{k}b_{k}-a_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k-1}-x^{*} \|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2 }c_{k}-b_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k})\| x_{k}\|^{2}+\alpha_{k}b_{k}^{2}c_{k}\|x_{k-1}\|^{2}-b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\| ^{2}\] \[+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},x^{*}-x_{k}\rangle+(2b _{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k-1}-x_{k}\rangle\] \[+a_{k}b_{k}c_{k}\|x^{*}\|^{2}. \tag{12}\]
From the subgradient inequality we have
\[\langle u_{k},x^{*}-x_{k}\rangle\leq f^{*}-f(x_{k})\text{ and }\langle u_{k},x_{k-1}-x_{k} \rangle\leq f(x_{k-1})-f(x_{k}).\]
Consequently, we get for all \(k>k_{0}\) that
\[2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},x^{*}-x_{k}\rangle+(2b _{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k-1}-x_{k}\rangle\] \[\leq(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}(f(x_{k-1})-f^{*}) -2b_{k-1}^{2}\lambda_{k-1}(f(x_{k})-f^{*})\] \[=\mu_{k-1}(f(x_{k-1})-f^{*})-(\mu_{k}+(2b_{k-1}^{2}\lambda_{k-1} -2b_{k}^{2}\lambda_{k}+2a_{k}b_{k}\lambda_{k}))(f(x_{k})-f^{*}). \tag{13}\]
Let us denote \(m_{k}:=2b_{k-1}^{2}\lambda_{k-1}-2b_{k}^{2}\lambda_{k}+2a_{k}b_{k}\lambda_{k}\) and let us show that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has \(2b_{k-1}^{2}\lambda_{k-1}-2b_{k}^{2}\lambda_{k}+2a_{k}b_{k}\lambda_{k}\geq 0\) for all \(k\geq 1\). This can be written equivalently as \(k^{2r+\delta}-ak^{2r+\delta-1}-(k-1)^{2r+\delta}\leq 0\) for all \(k\geq 1\). Since \(2r+\delta<a\), by the convexity of the function \(x\mapsto x^{2r+\delta}\), the gradient inequality gives
\[(x-1)^{2r+\delta}\geq x^{2r+\delta}-(2r+\delta)x^{2r+\delta-1}\geq x^{2r+ \delta}-ax^{2r+\delta-1}\]
and the claim follows. Hence,
\[m_{k}\geq 0\text{ for all }k\geq k_{0}\text{ and observe that }m_{k}=\mathcal{O}(k^{2r+\delta-1})\text{ as }k\to+\infty. \tag{14}\]
Combining (12) and (13) we get for all \(k\geq k_{0}\) that
\[v_{k+1}-v_{k}+\mu_{k}(f(x_{k})-f^{*})-\mu_{k-1}(f(x_{k-1})-f^{*}) +m_{k}(f(x_{k})-f^{*})\leq\] \[(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1})\|x _{k}-x^{*}\|^{2}\] \[+(-\alpha_{k}a_{k}b_{k}-a_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k-1}-x^{*} \|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2 }c_{k}-b_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k})\|x_{k }\|^{2}+\alpha_{k}b_{k}^{2}c_{k}\|x_{k-1}\|^{2}-b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\| ^{2}\] \[+a_{k}b_{k}c_{k}\|x^{*}\|^{2}. \tag{15}\]
Let us analyze now the sign of the coefficients of the right hand side of (15). We have,
\[a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1} =(\alpha_{k+1}a_{k+1}b_{k+1}+a_{k}^{2}-a_{k}b_{k})\] \[+(\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1}-\alpha_{k+1} a_{k+1}b_{k+1}+a_{k}b_{k})\] \[=-\nu_{k}-n_{k},\]
where \(n_{k}:=-(\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1}-\alpha_{k+1}a_{k+1 }b_{k+1}+a_{k}b_{k})\).
Now, one has
\[n_{k}=-(2ak^{2r-1}-\alpha ak^{2r-1-q}-ack^{2r-1-p}-a(k-1)^{2r-1}-a(k+1)^{2r-1}+ \alpha a(k+1)^{2r-1-q}).\]
We show that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\phi(x,r)=-2ax^{2r-1}+\alpha ax^{2r-1-q}+acx^{2r-1-p}+a(x-1)^{2r-1}+a(x+1)^{2r -1}-\alpha a(x+1)^{2r-1-q}\geq 0\]
for \(x\) big enough.
Indeed, if \(q=1\) then one can take \(r=1\) and we have \(\phi(x,1)=acx^{1-p}>0\). Otherwise, for \(\frac{1}{2}<r<1\) one has
\[\lim_{x\to+\infty}\frac{a(x-1)^{2r-1}+a(x+1)^{2r-1}-2ax^{2r-1}}{x ^{2r-3}}=\lim_{x\to+\infty}\frac{a\left(1-\frac{1}{x}\right)^{2r-1}+a\left(1+ \frac{1}{x}\right)^{2r-1}-2a}{x^{-2}}\] \[=\lim_{x\to+\infty}\frac{a(2r-1)}{x^{2}}\frac{\left(1-\frac{1}{x} \right)^{2r-2}-\left(1+\frac{1}{x}\right)^{2r-2}}{-2x^{-3}}=\lim_{x\to+\infty} \frac{a(2r-1)}{-2}\frac{\left(1-\frac{1}{x}\right)^{2r-2}-\left(1+\frac{1}{x} \right)^{2r-2}}{x^{-1}}\] \[=\lim_{x\to+\infty}\frac{a(2r-1)(2r-2)}{-2x^{2}}\frac{\left(1- \frac{1}{x}\right)^{2r-3}+\left(1+\frac{1}{x}\right)^{2r-3}}{-x^{-2}}=a(2r-1)( 2r-2)<0. \tag{16}\]
Consequently, there exists \(C_{1}>0\) such that
\[a(x-1)^{2r-1}+a(x+1)^{2r-1}-2ax^{2r-1}\geq-C_{1}x^{2r-3}\text{ for $x$ big enough.} \tag{17}\]
Further, if \(r=\frac{q+1}{2}\), then \(\alpha ax^{2r-1-q}-\alpha a(x+1)^{2r-1-q}=0\), otherwise
\[\lim_{x\to+\infty}\frac{\alpha ax^{2r-1-q}-\alpha a(x+1)^{2r-1-q} }{x^{2r-2-q}}=\lim_{x\to+\infty}\alpha a\frac{1-\left(1+\frac{1}{x}\right)^{2 r-1-q}}{x^{-1}}\] \[=-\alpha a\lim_{x\to+\infty}(2r-1-q)\left(1+\frac{1}{x}\right)^{2 r-2-q}=\alpha a(1+q-2r)>0. \tag{18}\]
Consequently, there exists \(C_{2}>0\) such that
\[\alpha ax^{2r-1-q}-\alpha a(x+1)^{2r-1-q}\geq C_{2}x^{2r-2-q}\text{ for $x$ big enough.} \tag{19}\]
From the above relations one can deduce the following:
* (N1) If \(q=1\) and \(r=1\) we have \(p>2\) and \(\phi(x,r)=acx^{1-p}>0\), hence \(\phi(x,r)=\mathcal{O}(x^{1-p})\) as \(x\to+\infty\).
* (N2) If \(q=1\) and \(\frac{1}{2}<r<1\) then \(p\geq 2\) and according to (16) and (18) and the fact that \(\alpha>3\) we have \[\lim_{x\to+\infty}\frac{(a(x-1)^{2r-1}+a(x+1)^{2r-1}-2ax^{2r-1})+(\alpha ax^{2r-1-q}-\alpha a(x+1)^{2r-1-q})}{x^{2r-3}}\] \[=a(2r-1)(2r-2)+\alpha a(2-2r)=a(2-2r)(\alpha+1-2r)>0.\] Hence, \(\phi(x,r)\geq Cx^{2r-3}+acx^{2r-1-p}\) for some \(C>0\) and for \(x\) big enough. Consequently, also in this case \(\phi(x,r)>0\) if \(x\) is big enough and since \(p\geq 2\) one has \(\phi(x,r)=\mathcal{O}(x^{2r-3})\) as \(x\to+\infty\).
* (N3) If \(0<q<1\), \(r=\frac{q+1}{2}\) then \(q+1<p\leq 2\) and according to (17) one has \[(a(x-1)^{2r-1}+a(x+1)^{2r-1}-2ax^{2r-1})+(\alpha ax^{2r-1-q}-\alpha a(x+1)^{2r-1-q})\geq-C_{1}x^{q-2}\text{ for $x$ big enough.}\] Hence, \(\phi(x,r)\geq acx^{q-p}-C_{1}x^{q-2}\) for \(x\) big enough. Obviously \(\phi(x,r)>0\) if \(p<2\) and \(x\) is big enough. Further, if \(p=2\) then (16) gives \(\lim_{x\to\infty}\frac{\phi(x,r)}{x^{q-2}}=a(c+q(q-1))>0\), hence one has \(\phi(x,r)>0\) if \(x\) is big enough.
Observe that in this case one has \(\phi(x,r)=\mathcal{O}(x^{q-p})\) as \(x\to+\infty\).
* (N4) If \(0<q<1\), \(\frac{1}{2}<r<\frac{q+1}{2}\) then \(q+1\leq p\) and according to (17) and (19) one has \[\phi(x,r)\geq-C_{1}x^{2r-3}+C_{2}x^{2r-2-q}+acx^{2r-1-p}\geq Cx^{2r-2-q}\text{ for some }C>0\text{ and for }x\text{ big enough}.\] Consequently, also in this case \(\phi(x,r)>0\) if \(x\) is big enough and observe that \(\phi(x,r)=\mathcal{O}(x^{2r-2-q})\) as \(x\rightarrow+\infty\).
We conclude that there exist \(K_{1}\geq k_{0}\) such that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[n_{k}\geq 0,\text{ for all }k\geq K_{1}\text{ and the appropriate estimates emphasized at (N1)-(N4) hold.} \tag{20}\]
For the coefficient of discrete velocity \(\|x_{k}-x_{k-1}\|^{2}\) we have
\[\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2} c_{k}-b_{k-1}^{2}+a_{k-1}b_{k-1} =k^{2r}-(k-1)^{2r}-2\alpha k^{2r-q}+ak^{2r-1}\] \[+a(k-1)^{2r-1}+\alpha^{2}k^{2r-2q}-\alpha ak^{2r-q-1}-ck^{2r-p}\] \[+\alpha ck^{2r-q-p}.\]
We show that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\phi(x,r)= (x-1)^{2r}-x^{2r}+2\alpha x^{2r-q}-ax^{2r-1}-a(x-1)^{2r-1}-\alpha ^{2}x^{2r-2q}+\alpha ax^{2r-q-1}+cx^{2r-p}\] \[-\alpha cx^{2r-q-p}\geq 0,\text{ if }x\text{ is big enough.}\]
Even more, \(\phi(x,r)=\mathcal{O}(x^{2r-q})\) as \(x\rightarrow+\infty\).
Indeed
\[\lim_{x\rightarrow+\infty}\frac{(x-1)^{2r}-x^{2r}+2\alpha x^{2r-q }-ax^{2r-1}-a(x-1)^{2r-1}-\alpha^{2}x^{2r-2q}+\alpha ax^{2r-q-1}-\alpha cx^{2 r-q-p}}{x^{2r-q}}\] \[=\lim_{x\rightarrow+\infty}\frac{\left(1-\frac{1}{x}\right)^{2r }-1-ax^{-1}-\frac{a}{x}\left(1-\frac{1}{x}\right)^{2r-1}}{x^{-q}}+2\alpha\] \[=\lim_{x\rightarrow+\infty}\frac{-\frac{2r}{q}\left(1-\frac{1}{ x}\right)^{2r-1}-\frac{a}{q}-\frac{a}{q}\left(1-\frac{1}{x}\right)^{2r-1}}{x^{1-q}}+2 \alpha=L.\]
Obviously, \(L=2\alpha>0\) if \(q<1\) and \(L=-2r-2a+2\alpha\) if \(q=1.\) But then \(\alpha>3\), \(a<\alpha-1\) and \(r\leq 1\), hence also in this case \(L=-2r-2a+2\alpha>0.\) Consequently, there exists \(C>0\) such that
\[\phi(x,r)\geq Cx^{2r-q}+cx^{2r-p}>0\text{ if }x\text{ is big enough}\]
and since \(p>1\) one has
\[\phi(x,r)=\mathcal{O}(x^{2r-q})\text{ as }x\rightarrow+\infty.\]
We conclude that there exist \(K_{2}\geq k_{0}\) such that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\eta_{k}\geq 0,\text{ for all }k\geq K_{2}\text{ and }\eta_{k}=\mathcal{O}(k^{2r-q})\text{ as }k \rightarrow+\infty, \tag{21}\]
where \(\eta_{k}:=-\alpha_{k}^{2}b_{k}^{2}-\alpha_{k}a_{k}b_{k}+\alpha_{k}b_{k}^{2}c_{ k}+b_{k-1}^{2}-a_{k-1}b_{k-1}.\)
The coefficient of \(\|x_{k-1}\|^{2}\) is \(\sigma_{k-1}=\alpha_{k}b_{k}^{2}c_{k}\), hence we write the coefficient of \(\|x_{k}\|^{2}\) as
\[b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k}=-\sigma_{k}+(b_{k}^ {2}c_{k}^{2}+\alpha_{k+1}b_{k+1}^{2}c_{k+1}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k} c_{k}).\]
We have
\[b_{k}^{2}c_{k}^{2}+\alpha_{k+1}b_{k+1}^{2}c_{k+1}-\alpha_{k}b_{k} ^{2}c_{k}-a_{k}b_{k}c_{k} =c^{2}k^{2r-2p}+c(k+1)^{2r-p}-\alpha c(k+1)^{2r-p-q}\] \[-ck^{2r-p}+\alpha ck^{2r-p-q}-ack^{2r-1-p}.\]
We show that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[\phi(x,r)=c(x+1)^{2r-p}-cx^{2r-p}-\alpha c(x+1)^{2r-p-q}+\alpha cx^{2r-p-q}- acx^{2r-1-p}+c^{2}x^{2r-2p}\leq 0\]
for \(x\) big enough. Even more, \(\phi(x,r)=\mathcal{O}(x^{2r-p-1})\) as \(x\rightarrow+\infty\).
Indeed, since \(1<2r\leq q+1\leq p\) we have,
\[\lim_{x\to+\infty}\frac{c(x+1)^{2r-p}-cx^{2r-p}-\alpha c(x+1)^{2r-p- q}+\alpha cx^{2r-p-q}-acx^{2r-1-p}}{x^{2r-p-1}}\] \[=\lim_{x\to+\infty}\left(\frac{c\left(1+\frac{1}{x}\right)^{2r-p} -c}{x^{-1}}+\alpha c\frac{-\left(1+\frac{1}{x}\right)^{2r-p-q}+1}{x^{q-1}} \right)-ac\] \[=\lim_{x\to+\infty}\frac{c(2r-p)x^{-2}\left(1+\frac{1}{x}\right)^ {2r-p-1}}{x^{-2}}-ac=c(2r-p-a)<0.\]
Obviously, there exists \(C>0\) such that
\[c(x+1)^{2r-p}-cx^{2r-p}-\alpha c(x+1)^{2r-p-q}+\alpha cx^{2r-p-q}-acx^{2r-1-p} \leq-Cx^{2r-p-1}\]
for \(x\) big enough, and from the fact that \(p>1\) we get that \(\phi(x,r)\leq 0\) for \(x\) big enough.
We conclude that there exist \(K_{3}\geq k_{0}\) such that for all \(\frac{1}{2}<r\leq\frac{q+1}{2}\) one has
\[s_{k}\geq 0\text{ for all }k\geq K_{3}\text{ and }s_{k}=\mathcal{O}(k^{2r-p-1}) \text{ as }k\to+\infty, \tag{22}\]
where \(s_{k}:=-(b_{k}^{2}c_{k}^{2}+\alpha_{k+1}b_{k+1}^{2}c_{k+1}-\alpha_{k}b_{k}^{2 }c_{k}-a_{k}b_{k}c_{k}).\) Let \(K_{0}=\max(K_{1},K_{2},K_{3}).\)
Combining (15), (20), (21) and (22) we obtain that for all \(k\geq K_{0}\) and \(r\in\left(\frac{1}{2},\frac{q+1}{2}\right]\) it holds
\[v_{k+1}-v_{k}\leq -(\mu_{k}(f(x_{k})-f^{*})-\mu_{k-1}(f(x_{k-1})-f^{*}))-m_{k}(f(x_ {k})-f^{*})\] \[-(\nu_{k}\|x_{k}-x^{*}\|^{2}-\nu_{k-1}\|x_{k-1}-x^{*}\|^{2})-n_{k }\|x_{k}-x^{*}\|^{2}\] \[-(\sigma_{k}\|x_{k}\|^{2}-\sigma_{k-1}\|x_{k-1}\|^{2})-s_{k}\|x_{ k}\|^{2}\] \[-\eta_{k}\|x_{k}-x_{k-1}\|^{2}-b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k }\|^{2}+a_{k}b_{k}c_{k}\|x^{*}\|^{2}. \tag{23}\]
Consequently
\[E_{k+1} -E_{k}+m_{k}(f(x_{k})-f^{*})+\eta_{k}\|x_{k}-x_{k-1}\|^{2}+b_{k-1 }^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+n_{k}\|x_{k}-x^{*}\|^{2}+s_{k}\|x_{k}\|^{2}\] \[\leq a_{k}b_{k}c_{k}\|x^{*}\|^{2}=ac\|x^{*}\|^{2}k^{2r-1-p}, \tag{24}\]
for all \(k\geq K_{0}.\)
Now, in accordance with the hypotheses of the theorem, we take \(r<\frac{q+1}{2}\) if \(p=q+1\); consequently one has \(2r-1-p<-1,\) hence
\[ac\|x^{*}\|^{2}\sum_{k\geq K_{0}}k^{2r-1-p}<+\infty.\]
By summing up (24) from \(k=K_{0}\) to \(k=n>K_{0},\) we obtain that there exists \(C_{1}>0\) such that
\[E_{n+1}\leq C_{1},\]
consequently
\[\mu_{n}(f(x_{n})-f^{*})\leq C_{1},\text{ hence }f(x_{n})-f^{*}=\mathcal{O}(n^{-2r- \delta})\text{ as }n\to+\infty,\]
\[\nu_{n}\|x_{n}-x^{*}\|^{2}\leq C_{1},\text{ hence }\|x_{n}-x^{*}\|^{2}=\mathcal{O}(n^{q+1-2r}) \text{ as }n\to+\infty,\]
\[\sigma_{n}\|x_{n}\|^{2}\leq C_{1},\text{ hence }\|x_{n}\|^{2}=\mathcal{O}(n^{p-2r}) \text{ as }n\to+\infty\]
and
\[\sup_{n\geq 1}\|an^{r-1}(x_{n}-x^{*})+n^{r}(x_{n+1}-x_{n}+\lambda n^{\delta}u_{n +1})\|<+\infty.\]
Further,
\[\sum_{k=K_{0}}^{n}m_{k}(f(x_{k})-f^{*})\leq C_{1},\text{ hence according to (14) one has }\sum_{k\geq 1}k^{2r+\delta-1}(f(x_{k})-f^{*})<+\infty,\] \[\sum_{k=K_{0}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}\leq C_{1},\text{ hence according to (21) one has }\sum_{k\geq 1}k^{2r-q}\|x_{k}-x_{k-1}\|^{2}<+\infty,\] \[\sum_{k=K_{0}}^{n}b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}\leq C_{1},\text{ hence one has }\sum_{k\geq 1}k^{2r+2\delta}\|u_{k}\|^{2}<+\infty,\] \[\sum_{k=K_{0}}^{n}s_{k}\|x_{k}\|^{2}\leq C_{1},\text{ hence according to (22) one has }\sum_{k\geq 1}k^{2r-p-1}\|x_{k}\|^{2}<+\infty.\]
Moreover, \(\sum_{k=K_{0}}^{n}n_{k}\|x_{k}-x^{*}\|^{2}\leq C_{1}\), hence according to (20) one has
\[\sum_{k\geq 1}k^{q-p}\|x_{k}-x^{*}\|^{2}<+\infty,\text{ if }r=\frac{q+1}{2}\]
and
\[\sum_{k\geq 1}k^{2r-2-q}\|x_{k}-x^{*}\|^{2}<+\infty,\text{ if }r<\frac{q+1}{2}.\]
Since \(\sum_{k\geq 1}k^{2r+2\delta}\|u_{k}\|^{2}<+\infty\) one has \(\|u_{n}\|=o(n^{-r-\delta})\) as \(n\to+\infty\) which yields
\[\sup_{n\geq 1}\|an^{r-1}(x_{n}-x^{*})+n^{r}(x_{n+1}-x_{n})\|<+\infty.\]
Combining the latter relation with the facts that \(\|x_{n}-x^{*}\|^{2}=\mathcal{O}(n^{q+1-2r})\) as \(n\to+\infty\) and \(n^{r-1}\leq n^{\frac{2r-q-1}{2}}\) we obtain
\[\|x_{n+1}-x_{n}\|=\mathcal{O}(n^{-r})\text{ as }n\to+\infty.\]
Let us now show that for \(\frac{1}{2}<r<\frac{q+1}{2}\) one has \(f(x_{n})-f^{*}=o(n^{-2r-\delta})\) and \(\|x_{n}-x_{n-1}\|=o(n^{-r})\).
From (24) we get
\[\sum_{k\geq 1}[E_{k+1}-E_{k}]_{+}<+\infty,\text{ where }[s]_{+}=\max(s,0).\]
Therefore, the following limit exists
\[\lim_{k\to+\infty}(\|ak^{r-1}(x_{k}-x^{*})+k^{r}(x_{k+1}-x_{k}+\lambda k^{ \delta}u_{k+1})\|^{2}+\sigma_{k}\|x_{k}\|^{2}+\mu_{k}(f(x_{k})-f^{*})+\nu_{k} \|x_{k}-x^{*}\|^{2}). \tag{25}\]
Note that according to (7), (5) and (6) one has \(\sigma_{k}=\mathcal{O}(k^{2r-p})\), \(\mu_{k}=\mathcal{O}(k^{2r+\delta})\) and \(\nu_{k}=\mathcal{O}(k^{2r-1-q})\), respectively.
Further, we have \(\sum_{k\geq 1}k^{2r-2-q}\|x_{k}-x^{*}\|^{2}<+\infty\), if \(r<\frac{q+1}{2},\sum_{k\geq 1}k^{2r-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\), \(\sum_{k\geq 1}k^{2r+2\delta}\|u_{k}\|^{2}<+\infty\), \(\sum_{k\geq 1}k^{2r+\delta-1}(f(x_{k})-f^{*})<+\infty\) and \(\sum_{k\geq 1}k^{2r-1-p}\|x_{k}\|^{2}<+\infty\), hence
\[\sum_{k\geq 1}\frac{1}{k}(\|ak^{r-1}(x_{k}-x^{*})+k^{r}(x_{k+1}-x_ {k}+\lambda k^{\delta}u_{k+1})\|^{2}+\sigma_{k}\|x_{k}\|^{2}+\mu_{k}(f(x_{k})-f ^{*})+\nu_{k}\|x_{k}-x^{*}\|^{2})\] \[\leq\sum_{k\geq 1}2a^{2}k^{2r-3}\|x_{k}-x^{*}\|^{2}+\sum_{k\geq 1}4k ^{2r-1}\|x_{k+1}-x_{k}\|^{2}+\sum_{k\geq 1}4\lambda^{2}(k+1)^{2r+2\delta-1}\|u_{k+1} \|^{2}\] \[+C\left(\sum_{k\geq 1}k^{2r-p-1}\|x_{k}\|^{2}+\sum_{k\geq 1}k^{2r+ \delta-1}(f(x_{k})-f^{*})+\sum_{k\geq 1}k^{2r-2-q}\|x_{k}-x^{*}\|^{2}\right)<+\infty, \tag{26}\]
for some constant \(C>0\).
Combining the facts that \(\sum_{k\geq 1}\frac{1}{k}=+\infty\) and \(\|u_{n}\|=o(n^{-r-\delta})\) as \(n\to+\infty\) with (26) and (25) we get
\[\lim_{k\to+\infty}(\|ak^{r-1}(x_{k}-x^{*})+k^{r}(x_{k+1}-x_{k}+\lambda k^{ \delta}u_{k+1})\|^{2}+\sigma_{k}\|x_{k}\|^{2}+\mu_{k}(f(x_{k})-f^{*})+\nu_{k} \|x_{k}-x^{*}\|^{2})=0\]
and the claim follows.
**Remark 2.2**.: Note that our analysis also works in case \(c=0\). In that case we do not have Tikhonov regularization, hence one does not have to impose any assumption on \(p\) in the hypotheses of Theorem 2.1 and the conclusion of the theorem remains valid.
### On weak convergence and boundedness of the generated sequences
In this section we provide sufficient conditions that assure that the sequence \((x_{k})\) generated by the algorithm (1) converges weakly to a minimizer of \(f.\) In order to continue our analysis we need the following lemma, which is an extension of Lemma 8.3 from [7].
**Lemma 2.3**.: _Assume that \((a_{k})_{k\geq 1},\)\((\omega_{k})_{k\geq 1}\) are nonnegative real sequences that after an index \(k_{0}\) satisfy_
\[a_{k+1}\leq\left(1-\frac{\alpha}{k^{q}}\right)a_{k}+\omega_{k},\text{ for all }k\geq k_{0},\]
_where \(q\in\left]0,1\right]\) and for \(q=1\) one has \(\alpha>1.\) Assume further, that \(\sum_{k\geq k_{0}}k^{q}\omega_{k}<+\infty.\) Then,_
\[\sum_{k\geq 1}a_{k}<+\infty.\]
Proof.: We have \(k^{q}a_{k+1}-k^{q}a_{k}+\alpha a_{k}\leq k^{q}\omega_{k},\text{ for all }k\geq k_{0}.\) If \(q=1\) then \(\alpha>1\) hence we have for all \(k\geq k_{0}\) that \(ka_{k+1}-ka_{k}+\alpha a_{k}=ka_{k+1}-(k-1)a_{k}+(\alpha-1)a_{k},\) consequently
\[ka_{k+1}-(k-1)a_{k}+(\alpha-1)a_{k}\leq k\omega_{k},\text{ for all }k\geq k_{0}.\]
By summing up the latter relation from \(k=k_{0}\) to \(k=n>k_{0}\) we get
\[na_{n+1}+(\alpha-1)\sum_{k=k_{0}}^{n}a_{k}\leq(k_{0}-1)a_{k_{0}}+\sum_{k=k_{0} }^{n}k\omega_{k}.\]
Now, we omit the term \(na_{n+1}\) and we take the limit \(n\to+\infty\) in order to show that
\[\sum_{k=k_{0}}^{+\infty}a_{k}\leq\frac{(k_{0}-1)a_{k_{0}}}{\alpha-1}+\frac{1} {\alpha-1}\sum_{k=k_{0}}^{+\infty}k\omega_{k}<+\infty.\]
If \(q<1\) then, since \(\lim_{k\to+\infty}\frac{k^{q}-(k-1)^{q}}{k^{q-1}}=q>0,\) we conclude that there exists \(C>0\) and \(k_{1}\geq k_{0}\) such that \(k^{q}-(k-1)^{q}\leq Ck^{q-1}\) for all \(k\geq k_{1}.\)
Hence, there exists \(k_{2}\geq k_{1}\) such that for all \(k\geq k_{2}\) one has
\[-k^{q}\geq-(k-1)^{q}-Ck^{q-1}\geq-(k-1)^{q}-\frac{\alpha}{2}.\]
Consequently \(k^{q}a_{k+1}-(k-1)^{q}a_{k}+\frac{\alpha}{2}a_{k}\leq k^{q}\omega_{k},\text{ for all }k\geq k_{2}.\) By summing up the latter relation from \(k=k_{2}\) to \(k=n>k_{2}\) we get
\[n^{q}a_{n+1}+\frac{\alpha}{2}\sum_{k=k_{2}}^{n}a_{k}\leq(k_{2}-1)^{q}a_{k_{2} }+\sum_{k=k_{2}}^{n}k^{q}\omega_{k}\]
and the conclusion follows.
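As a quick numerical sanity check of Lemma 2.3 (with sequences chosen by us purely for illustration), one can iterate the recursion with \(\omega_{k}=k^{-q-2}\), for which \(\sum_{k}k^{q}\omega_{k}=\sum_{k}k^{-2}<+\infty\), and observe that the partial sums \(\sum_{k\leq n}a_{k}\) stabilize:

```python
# A hypothetical instance of Lemma 2.3: alpha = 2, q = 1/2, omega_k = k**(-q-2).
# We start at k0 = 5 so that 1 - alpha/k**q > 0 along the whole recursion.
alpha, q = 2.0, 0.5
a, partial = 1.0, 0.0
for k in range(5, 200001):
    partial += a
    a = (1.0 - alpha / k**q) * a + k**(-q - 2.0)
    if k in (100, 1000, 10000, 100000):
        print(k, partial)  # the partial sums of a_k level off, i.e., sum a_k < +infinity
```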
Now we can prove the weak convergence of the sequences generated by algorithm (1) to a minimizer of the objective function \(f.\)
**Theorem 2.4**.: _Assume that \(\alpha>0,\)\(0<q<1,0\leq\delta,\)\(q+1<p\leq 2\) and for \(p=2\) one has \(c>q(1-q)\), or \(q=1\), \(p>2\), \(\alpha>3,\)\(0\leq\delta<\alpha-3.\) Then the sequence \((x_{n})\) generated by (1) converges weakly to a minimizer of \(f.\)_
Proof.: We use the Opial lemma (see [30]). To this purpose first we show that for all \(x^{*}\in\operatorname*{argmin}f\) the limit \(\lim_{k\to+\infty}\|x_{k}-x^{*}\|\) exists. Let \(x^{*}\in\operatorname*{argmin}f\) and for all \(k\geq 1\) consider the sequence \(h_{k}=\frac{1}{2}\|x_{k}-x^{*}\|^{2}.\) Then, by using (1) we have
\[h_{k+1}-h_{k} =\frac{1}{2}\|x_{k+1}-x_{k}\|^{2}+\langle x_{k+1}-x_{k},x_{k}-x^{*}\rangle\] \[=\frac{1}{2}\|x_{k+1}-x_{k}\|^{2}+\langle\alpha_{k}(x_{k}-x_{k-1 })-\lambda_{k}u_{k+1}-c_{k}x_{k},x_{k}-x^{*}\rangle. \tag{27}\]
Further, one has
\[\langle\alpha_{k}(x_{k}-x_{k-1}),x_{k}-x^{*}\rangle=\frac{\alpha_{k}}{2}(\|x_{k}-x _{k-1}\|^{2}+\|x_{k}-x^{*}\|^{2}-\|x_{k-1}-x^{*}\|^{2}),\]
and, using the subgradient inequality \(\langle u_{k+1},x^{*}-x_{k+1}\rangle\leq f^{*}-f(x_{k+1})\leq 0\) together with Young's inequality,

\[\langle-\lambda_{k}u_{k+1},x_{k}-x^{*}\rangle\leq\lambda_{k}\langle u_{k+1},x_{k+1}-x_{k}\rangle\leq\frac{\lambda}{2}(k^{1-q}\|x_{k+1}-x_{k}\|^{2}+k^{q+2\delta-1}\|u_{k+1}\|^{2})\]
and
\[\langle-c_{k}x_{k},x_{k}-x^{*}\rangle=\frac{c_{k}}{2}(\|x^{*}\|^{2}-\|x_{k}\|^ {2}-\|x_{k}-x^{*}\|^{2}).\]
Consequently, (27) leads to
\[h_{k+1}-h_{k} \leq\alpha_{k}(h_{k}-h_{k-1})+\frac{\alpha_{k}}{2}\|x_{k}-x_{k-1 }\|^{2}+\frac{1}{2}\|x_{k+1}-x_{k}\|^{2}+\frac{\lambda}{2}k^{1-q}\|x_{k+1}-x_ {k}\|^{2}\] \[+\frac{\lambda}{2}k^{q+2\delta-1}\|u_{k+1}\|^{2}+\frac{c_{k}}{2} \|x^{*}\|^{2}. \tag{28}\]
We use Lemma 2.3 with \(a_{k}=[h_{k}-h_{k-1}]_{+}\) and \(\omega_{k}=\frac{\alpha_{k}}{2}\|x_{k}-x_{k-1}\|^{2}+\frac{1}{2}\|x_{k+1}-x_{ k}\|^{2}+\frac{\lambda}{2}k^{1-q}\|x_{k+1}-x_{k}\|^{2}+\frac{\lambda}{2}k^{q+2 \delta-1}\|u_{k+1}\|^{2}+\frac{c_{k}}{2}\|x^{*}\|^{2}.\) Hence, we need to show that \(\sum_{k\geq 1}k^{q}\omega_{k}<+\infty.\)
According to Theorem 2.1 (i) and the fact that \(p>q+1\) we have
\[\sum_{k\geq 1}k\|x_{k}-x_{k-1}\|^{2}<+\infty,\,\sum_{k=1}^{+\infty}k^{q+2 \delta+1}\|u_{k}\|^{2}<+\infty\text{ and }\sum_{k\geq 1}k^{q}c_{k}=\sum_{k\geq 1} \frac{c}{k^{p-q}}<+\infty.\]
Now, it is obvious that \(\sum_{k=1}^{+\infty}k^{2q+2\delta-1}\|u_{k+1}\|^{2}<+\infty.\) Consequently, \(\sum_{k\geq 1}k^{q}\omega_{k}<+\infty\) and by Lemma 2.3 we get that
\[\sum_{k\geq 1}[h_{k}-h_{k-1}]_{+}<+\infty,\]
which shows that \(\lim_{k\to+\infty}\|x_{k}-x^{*}\|\) exists.
Next we show that every weak sequential cluster point of \((x_{k})\) belongs to \(\operatorname{argmin}f.\) Indeed, let \(x^{*}\) be a weak sequential cluster point of \((x_{k}).\) Then there exists an increasing sequence of natural numbers \((k_{n})\) with \(k_{n}\to+\infty\) as \(n\to+\infty,\) such that \(x_{k_{n}}\rightharpoonup x^{*}\) as \(n\to+\infty,\) where "\(\rightharpoonup\)" denotes convergence with respect to the weak topology of \(\mathcal{H}.\) Since \(f\) is convex and lower semicontinuous, it is also lower semicontinuous with respect to the weak topology of \(\mathcal{H}\). Further, according to Theorem 2.1 one has \(\lim_{n\to+\infty}f(x_{k_{n}})=\min_{\mathcal{H}}f,\) hence
\[f(x^{*})\leq\liminf_{n\to+\infty}f(x_{k_{n}})=\min_{\mathcal{H}}f,\]
which shows that \(x^{*}\in\operatorname{argmin}f.\)
Consequently, Opial's lemma yields that the sequence \((x_{n})\) converges weakly to a minimizer of our objective function \(f\).
**Remark 2.5**.: Also here our analysis remains valid in case \(c=0,\) hence in that case one may obtain the weak convergence of the sequences generated by Algorithm (1) without any restriction imposed on the parameter \(p.\)
According to Theorem 2.4 in case \(\alpha>0,\)\(0<q<1,0\leq\delta,\)\(q+1<p\leq 2\) the sequence \((x_{n})\) generated by (1) is bounded. We show next that this result also holds in case \(1<p<q+1.\)
**Theorem 2.6**.: _Assume that \(\alpha>0,\)\(0<q<1,0\leq\delta,\)\(1<p<q+1\). Then the sequence \((x_{n})\) generated by (1) is bounded._
Proof.: We use the energy functional and the notations from the proof of Theorem 2.1, but we assume that \(\frac{p}{2}<r<\frac{q+1}{2}.\) Note that all the estimates from the proof of Theorem 2.1 concerning the coefficients \(\mu_{k},\nu_{k},\sigma_{k},m_{k},\eta_{k}\) remain valid.
Let us compute the order of \(n_{k}.\) We have \(n_{k}=-(\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1}-\alpha_{k+1}a_{k+ 1}b_{k+1}+a_{k}b_{k}),\) hence
\[n_{k} =a((k+1)^{2r-1}+(k-1)^{2r-1}-2k^{2r-1})-a\alpha((k+1)^{2r-1-q}-k ^{2r-1-q})+ack^{2r-1-p}\] \[=\mathcal{O}(k^{2r-3})+\mathcal{O}(k^{2r-2-q})+ack^{2r-1-p}.\]
Consequently, \(n_{k}>0\) for \(k\) big enough and \(n_{k}=\mathcal{O}(k^{2r-1-p})\) as \(k\to+\infty.\)
Further, we have \(s_{k}=-(b_{k}^{2}c_{k}^{2}+\alpha_{k+1}b_{k+1}^{2}c_{k+1}-\alpha_{k}b_{k}^{2}c_{k }-a_{k}b_{k}c_{k})\), hence
\[s_{k} =ack^{2r-1-p}+c(k^{2r-p}-(k+1)^{2r-p})+\alpha c((k+1)^{2r-q-p}-k^{2r -q-p})+c^{2}k^{2r-2p}\] \[=ack^{2r-1-p}-c(2r-p)\mathcal{O}(k^{2r-1-p})-\alpha c(2r-q-p) \mathcal{O}(k^{2r-1-q-p})+\mathcal{O}(k^{2r-2p}).\]
Since \(a>2r+\delta>2r-p\) we conclude that \(s_{k}>0\) for \(k\) big enough and \(s_{k}=\mathcal{O}(k^{2r-1-p})\) as \(k\to+\infty\). Consequently (24) holds with these coefficients after an index \(K_{0}\) big enough. By neglecting the nonnegative term \(m_{k}(f(x_{k})-f^{*})+\eta_{k}\|x_{k}-x_{k-1}\|^{2}+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+n_{k}\|x_{k}-x^{*}\|^{2}+s_{k}\|x_{k}\|^{2}\) in (24) we get
\[E_{k+1}-E_{k}\leq ac\|x^{*}\|^{2}k^{2r-1-p},\text{ for all }k\geq K_{0}. \tag{29}\]
By summing up (29) from \(k=K_{0}\) to \(k=n>K_{0}\), we obtain that
\[E_{n+1}\leq ac\|x^{*}\|^{2}\sum_{k=K_{0}}^{n}k^{2r-1-p}+E_{K_{0}},\]
and since \(\sum_{k=K_{0}}^{n}k^{2r-1-p}=\mathcal{O}(n^{2r-p})\) as \(n\to+\infty\) we conclude that there exists \(C_{0}>0\) such that \(E_{n+1}\leq C_{0}n^{2r-p}.\) In particular we have \(\sigma_{n}\|x_{n}\|^{2}\leq C_{0}n^{2r-p}\), and since according to (7) \(\sigma_{n}\) is of exact order \(n^{2r-p}\), the sequence \((x_{n})\) is bounded.
## 3. Convergence rates and strong convergence results for the case \(p\leq q+1\)
We begin the present section by emphasizing the main idea behind the Tikhonov regularization, which will assure strong convergence results for the sequence generated by our algorithm (1) to a minimizer of the objective function of minimal norm. By \(\overline{x}_{k}\) we denote the unique solution of the strongly convex minimization problem
\[\min_{x\in\mathcal{H}}\left(f(x)+\frac{c}{2k^{p}}\|x\|^{2}\right).\]
We know, (see for instance [9]), that \(\lim_{k\to+\infty}\overline{x}_{k}=x^{*}\), where \(x^{*}=\operatorname*{argmin}_{x\in\operatorname*{argmin}f}\|x\|\) is the minimal norm element from the set \(\operatorname*{argmin}f.\) Obviously, \(\{x^{*}\}=\operatorname*{pr}_{\operatorname*{argmin}f}0\) and we have the inequality \(\|\overline{x}_{k}\|\leq\|x^{*}\|\) (see [15]).
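This selection property is easy to check on a toy problem (our own illustrative example): for \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) with a rank-deficient matrix \(A\), one has \(\overline{x}_{k}=(A^{\top}A+\frac{c}{k^{p}}I)^{-1}A^{\top}b\), and these regularized minimizers converge to the least squares solution of minimal norm \(x^{*}=A^{+}b\).

```python
import numpy as np

# Rank-deficient least squares: argmin f is a whole affine subspace, and
# x* = pinv(A) @ b is its element of minimal norm.
A = np.array([[1.0, 1.0], [2.0, 2.0]])   # rank 1, so the minimizer is non-unique
b = np.array([1.0, 2.0])
x_star = np.linalg.pinv(A) @ b           # minimal norm minimizer, here (0.5, 0.5)

c, p = 1.0, 1.5
for k in (1, 10, 100, 1000):
    eps = c / k**p
    x_bar = np.linalg.solve(A.T @ A + eps * np.eye(2), A.T @ b)
    print(k, np.linalg.norm(x_bar - x_star))  # distance to x* shrinks as k grows
```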
Since \(\overline{x}_{k}\) is the unique minimum of the strongly convex function \(f_{k}(x)=f(x)+\frac{c}{2k^{p}}\|x\|^{2}\), obviously one has
\[\partial f_{k}(\overline{x}_{k})=\partial f(\overline{x}_{k})+\frac{c}{k^{p}} \overline{x}_{k}\ni 0. \tag{30}\]
Further, Lemma A.1 c) leads to the following. For every \(p_{1}>p\) there exists \(k_{0}\geq 1\) such that
\[\|\overline{x}_{k+1}-\overline{x}_{k}\|\leq\min\left(\frac{p_{1}}{k}\| \overline{x}_{k}\|,\frac{p_{1}}{k+1}\|\overline{x}_{k+1}\|\right)\text{ for every }k\geq k_{0}. \tag{31}\]
Note that since \(f_{k}\) is strongly convex, from the subgradient inequality we have
\[f_{k}(y)-f_{k}(x)\geq\langle u_{k},y-x\rangle+\frac{c}{2k^{p}}\|x-y\|^{2},\text { for all }x,y\in\mathcal{H}\text{ and }u_{k}\in\partial f_{k}(x). \tag{32}\]
In particular
\[f_{k}(x)-f_{k}(\overline{x}_{k})\geq\frac{c}{2k^{p}}\|x-\overline{x}_{k}\|^{2},\text{ for all }x\in\mathcal{H}. \tag{33}\]
Finally, observe that for all \(x,y\in\mathcal{H}\), one has
\[f(x)-f(y)=(f_{k}(x)-f_{k}(\overline{x}_{k}))+(f_{k}(\overline{x}_{k})-f_{k}(y) )+\frac{c}{2k^{p}}(\|y\|^{2}-\|x\|^{2})\leq f_{k}(x)-f_{k}(\overline{x}_{k})+ \frac{c}{2k^{p}}\|y\|^{2}. \tag{34}\]
### Convergence rates
Concerning convergence rates for the function values, discrete velocity and subgradient even for this restrictive case we obtain some results that are comparable to the convergence rates obtained for the famous Nesterov algorithm [29].
The main result of the present section is the following.
**Theorem 3.1**.: _Assume that \(0<q<1\), \(1<p\leq q+1\), \(\lambda_{k}=\lambda k^{\delta},\)\(\lambda>0,\)\(\delta\leq 0\) and if \(\delta=0\) then \(\lambda\in]0,1[.\) Let \((x_{k})\) be a sequence generated by (1). For every \(k\geq 2\) let us denote by \(u_{k}\) the element from \(\partial f(x_{k})\) that satisfies (2) with equality, i.e.,_
\[x_{k}=\alpha_{k-1}(x_{k-1}-x_{k-2})-\lambda_{k-1}u_{k}+\left(1-c_{k-1}\right)x_{ k-1}.\]
_Then the following results are valid._
1. _If_ \(p<q+1\) _then_ \((x_{k})\) _is bounded and_ \[f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-p-\delta}),\,\|x_{k}-x_{k-1}\|= \mathcal{O}(k^{-\frac{p}{2}})\text{ and }\|u_{k}\|=\mathcal{O}(k^{-\frac{p}{2}-\delta})\text{ as }k\to+\infty.\] _Further, for all_ \(s\in\left]\frac{1}{2},\frac{p}{2}\right[\) _one has_ \[\sum_{k=1}^{+\infty}k^{2s+\delta-1}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty, \sum_{k=1}^{+\infty}k^{2s-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\text{ and }\sum_{k=2}^{+\infty}k^{2s+2\delta}\|u_{k}\|^{2}<+\infty.\] _Moreover, the following ergodic type convergence results hold._ \[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f_{k-1}(x_{k-1})-f_{k-1} (\overline{x}_{k-1}))}{n^{q+1-p}}<+\infty,\limsup_{n\to+\infty}\frac{\sum_{k =1}^{n}k\|x_{k}-x_{k-1}\|^{2}}{n^{q+1-p}}<+\infty\] \[\text{ and }\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+1+2 \delta}\|u_{k}\|^{2}}{n^{q+1-p}}<+\infty.\]
2. _If_ \(p=q+1\) _then_ \[f(x_{k})-\min_{\mathcal{H}}f=\mathcal{O}(k^{-p-\delta}\ln k),\,\|x_{k}-x_{k-1} \|=\mathcal{O}(k^{-\frac{p}{2}}\sqrt{\ln k})\text{ and }\|u_{k}\|=\mathcal{O}(k^{-\frac{p}{2}- \delta}\sqrt{\ln k})\text{ as }k\to+\infty.\] _Further, for all_ \(s\in\left]\frac{1}{2},\frac{p}{2}\right[\) _one has_ \[\sum_{k=1}^{+\infty}k^{2s+\delta-1}(f(x_{k})-\min_{\mathcal{H}}f)<+\infty, \sum_{k=1}^{+\infty}k^{2s-q}\|x_{k}-x_{k-1}\|^{2}<+\infty\text{ and }\sum_{k=2}^{+\infty}k^{2s+2\delta}\|u_{k}\|^{2}<+\infty.\] _Moreover, the following ergodic type convergence results hold._ \[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f_{k-1}(x_{k-1})-f_{k-1 }(\overline{x}_{k-1}))}{\ln n}<+\infty,\limsup_{n\to+\infty}\frac{\sum_{k=1}^ {n}k\|x_{k}-x_{k-1}\|^{2}}{\ln n}<+\infty\] \[\text{ and }\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+1+2 \delta}\|u_{k}\|^{2}}{\ln n}<+\infty.\] _Additionally, if_ \(\delta<0\) _one has_ \[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f(x_{k-1})-\min_{ \mathcal{H}}f)}{\ln n}<+\infty.\]
Proof.: Consider first \(a_{k}=ak^{u}\), \(b_{k}=k^{v}\), \(u,v\in\mathbb{R},\,a>0,\,u+1\geq v\geq u+q\) and define, for every \(k\geq 2\), the following discrete energy functional.
\[E_{k} =\mu_{k-1}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+\|a_{k-1 }(x_{k-1}-\overline{x}_{k-1})+b_{k-1}(x_{k}-x_{k-1}+\lambda_{k-1}u_{k})\|^{2}\] \[+\nu_{k-1}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+\sigma_{k-1}\|x_{k-1 }\|^{2}, \tag{35}\]
where the sequences \((\mu_{k})\), \((\nu_{k})\) and \((\sigma_{k})\) will be specified later.
**I. Lyapunov analysis**
All the following estimates hold after an index \(k\) big enough. Now, if we denote \(v_{k}=\|a_{k-1}(x_{k-1}-\overline{x}_{k-1})+b_{k-1}(x_{k}-x_{k-1}+\lambda_{k-1}u_{k})\|^{2}\), then, proceeding as in the proof of Theorem 2.1, we obtain
\[v_{k}= a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2}+(a_{k-1}^{2}-a_{k-1}b_{k-1})\|x_{k-1}-\overline{x}_{k-1}\|^{2}+(b_{k-1}^{2}-a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},x_{k}-\overline{x}_{k-1}\rangle\] \[+(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k}-x_{k-1}\rangle. \tag{36}\]
Further, from (2) we have
\[v_{k+1}=\|a_{k}(x_{k}-\overline{x}_{k})+b_{k}(\alpha_{k}(x_{k}-x_{k-1})-c_{k}x _{k})\|^{2}.\]
Therefore, after development we get
\[v_{k+1}= a_{k}^{2}\|x_{k}-\overline{x}_{k}\|^{2}+\alpha_{k}^{2}b_{k}^{2}\|x_{k }-x_{k-1}\|^{2}+b_{k}^{2}c_{k}^{2}\|x_{k}\|^{2}+2\alpha_{k}a_{k}b_{k}\langle x_{k }-x_{k-1},x_{k}-\overline{x}_{k}\rangle\] \[-2\alpha_{k}b_{k}^{2}c_{k}\langle x_{k}-x_{k-1},x_{k}\rangle-2a_{k} b_{k}c_{k}\langle x_{k},x_{k}-\overline{x}_{k}\rangle. \tag{37}\]
Further,
\[2\alpha_{k}a_{k}b_{k}\langle x_{k}-x_{k-1},x_{k}-\overline{x}_{k}\rangle=-\alpha_{k}a_{k}b_{k}(\|x_{k-1}-\overline{x}_{k}\|^{2}-\|x_{k}-x_{k-1}\|^{2}-\|x_{k}-\overline{x}_{k}\|^{2})\] \[-2\alpha_{k}b_{k}^{2}c_{k}\langle x_{k}-x_{k-1},x_{k}\rangle=\alpha_{k}b_{k}^{2}c_{k}(\|x_{k-1}\|^{2}-\|x_{k}-x_{k-1}\|^{2}-\|x_{k}\|^{2})\] \[-2a_{k}b_{k}c_{k}\langle x_{k},x_{k}-\overline{x}_{k}\rangle=a_{k}b_{k}c_{k}(\|\overline{x}_{k}\|^{2}-\|x_{k}-\overline{x}_{k}\|^{2}-\|x_{k}\|^{2}).\]
Hence, (37) yields
\[v_{k+1} =(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k})\|x_{k}-\overline {x}_{k}\|^{2}-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k}\|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{ 2}c_{k})\|x_{k}-x_{k-1}\|^{2}+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{ k}b_{k}c_{k})\|x_{k}\|^{2}\] \[+\alpha_{k}b_{k}^{2}c_{k}\|x_{k-1}\|^{2}+a_{k}b_{k}c_{k}\| \overline{x}_{k}\|^{2}. \tag{38}\]
Consequently, one has
\[v_{k+1}-v_{k} =(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k})\|x_{k}-\overline {x}_{k}\|^{2}-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2}\] \[-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k}\|^{2}-(a_{k-1}^{2 }-a_{k-1}b_{k-1})\|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{ 2}c_{k}-b_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k})\| x_{k}\|^{2}+\alpha_{k}b_{k}^{2}c_{k}\|x_{k-1}\|^{2}-b_{k-1}^{2}\lambda_{k-1}^{2} \|u_{k}\|^{2}\] \[+(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k-1} -x_{k}\rangle\] \[+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},\overline{x}_{k-1}-x_ {k}\rangle+a_{k}b_{k}c_{k}\|\overline{x}_{k}\|^{2}. \tag{39}\]
Now, by using the sub-gradient inequality we get
\[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k-1} -x_{k}\rangle+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},\overline{x}_{k-1}-x_{ k}\rangle\] \[\leq -2b_{k-1}^{2}\lambda_{k-1}f(x_{k})+(2b_{k-1}^{2}-2a_{k-1}b_{k-1}) \lambda_{k-1}f(x_{k-1})+2a_{k-1}b_{k-1}\lambda_{k-1}f(\overline{x}_{k-1})\] \[= -2b_{k-1}^{2}\lambda_{k-1}(f_{k}(x_{k})-f_{k}(\overline{x}_{k})) +2b_{k-2}^{2}\lambda_{k-2}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}-2b_{k-2}^{2} \lambda_{k-2}](f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+2b_{k-1}^{2}\lambda_{k-1}(f_{k-1}(\overline{x}_{k-1})-f_{k}( \overline{x}_{k}))\] \[+b_{k-1}^{2}\lambda_{k-1}c_{k}\|x_{k}\|^{2}+(a_{k-1}b_{k-1} \lambda_{k-1}c_{k-1}-b_{k-1}^{2}\lambda_{k-1}c_{k-1})\|x_{k-1}\|^{2}\] \[-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}. \tag{40}\]
Further, according to (33) one has \(f_{k-1}(\overline{x}_{k})-f_{k-1}(\overline{x}_{k-1})\geq\frac{c_{k-1}}{2}\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}\), hence
\[2b_{k-1}^{2}\lambda_{k-1}(f_{k-1}(\overline{x}_{k-1})-f_{k}( \overline{x}_{k})) =2b_{k-1}^{2}\lambda_{k-1}\left(f_{k-1}(\overline{x}_{k-1})-f_ {k-1}(\overline{x}_{k})+\frac{c_{k-1}-c_{k}}{2}\|\overline{x}_{k}\|^{2}\right)\] \[\leq 2b_{k-1}^{2}\lambda_{k-1}\left(-\frac{c_{k-1}}{2}\| \overline{x}_{k}-\overline{x}_{k-1}\|^{2}+\frac{c_{k-1}-c_{k}}{2}\|\overline{x }_{k}\|^{2}\right)\]
hence (40) becomes
\[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}\langle u_{k},x_{k-1}-x_{k}\rangle+2a_{k-1}b_{k-1}\lambda_{k-1}\langle u_{k},\overline{x}_{k-1}-x_{k}\rangle\] \[\leq -2b_{k-1}^{2}\lambda_{k-1}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))+2b_{k-2}^{2}\lambda_{k-2}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}-2b_{k-2}^{2}\lambda_{k-2}](f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+b_{k-1}^{2}\lambda_{k-1}c_{k}\|x_{k}\|^{2}+(a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}-b_{k-1}^{2}\lambda_{k-1}c_{k-1})\|x_{k-1}\|^{2}\] \[+b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}-b_{k-1}^{2}\lambda_{k-1}c_{k-1}\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}. \tag{41}\]
Combining (39) and (41) we get
\[v_{k+1}-v_{k}+2b_{k-1}^{2}\lambda_{k-1}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))-2b_{k-2}^{2}\lambda_{k-2}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[-[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}-2b_{k-2}^{2}\lambda_{k-2}](f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}\] \[\leq(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k})\|x_{k}-\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2}\] \[-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k}\|^{2}-(a_{k-1}^{2}-a_{k-1}b_{k-1})\|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2}c_{k}-b_{k-1}^{2}+a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k}+b_{k-1}^{2}\lambda_{k-1}c_{k})\|x_{k}\|^{2}\] \[+(a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}-b_{k-1}^{2}\lambda_{k-1}c_{k-1}+\alpha_{k}b_{k}^{2}c_{k})\|x_{k-1}\|^{2}\] \[+[b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}]\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}\] \[-b_{k-1}^{2}\lambda_{k-1}c_{k-1}\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}. \tag{42}\]
In what follows we estimate the terms \(-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2}\) and \(-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k}\|^{2}.\) Using the elementary inequality \(\pm 2\langle a,b\rangle\leq\frac{1}{s}\|a\|^{2}+s\|b\|^{2}\), valid for all \(s>0\), we obtain that
\[-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2} =-a_{k-1}b_{k-1}\|(x_{k}-\overline{x}_{k})+(\overline{x}_{k}- \overline{x}_{k-1})\|^{2}=-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k}\|^{2}\] \[-a_{k-1}b_{k-1}\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}-2a_{k-1 }b_{k-1}\langle x_{k}-x_{k-1},\overline{x}_{k}-\overline{x}_{k-1}\rangle\] \[-2a_{k-1}b_{k-1}\langle x_{k-1}-\overline{x}_{k-1},\overline{x}_{k }-\overline{x}_{k-1}\rangle+2a_{k-1}b_{k-1}\langle\overline{x}_{k}-\overline{ x}_{k-1},\overline{x}_{k}-\overline{x}_{k-1}\rangle\] \[\leq-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k}\|^{2}+2a_{k-1}b_{k-1} \|x_{k}-x_{k-1}\|^{2}\] \[+\left(1+\frac{1}{2}\right)a_{k-1}b_{k-1}\|\overline{x}_{k}- \overline{x}_{k-1}\|^{2}+2a_{k-1}b_{k-1}\langle x_{k-1}-\overline{x}_{k-1}, \overline{x}_{k-1}-\overline{x}_{k}\rangle. \tag{43}\]
Further,
\[-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k}\|^{2} =-\alpha_{k}a_{k}b_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}-\alpha_{ k}a_{k}b_{k}\|\overline{x}_{k-1}-\overline{x}_{k}\|^{2}\] \[-2\alpha_{k}a_{k}b_{k}\langle x_{k-1}-\overline{x}_{k-1}, \overline{x}_{k-1}-\overline{x}_{k}\rangle, \tag{44}\]
and for \(s_{k-1}=\frac{s}{(k-1)^{p-q}}\) with \(s<\frac{c}{\alpha}\) one has
\[(2a_{k-1}b_{k-1}-2\alpha_{k}a_{k}b_{k}) \langle x_{k-1}-\overline{x}_{k-1},\overline{x}_{k-1}-\overline{x} _{k}\rangle\leq\] \[(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k})\left(s_{k-1}\|x_{k-1}- \overline{x}_{k-1}\|^{2}+\frac{1}{s_{k-1}}\|\overline{x}_{k-1}-\overline{x}_{k} \|^{2}\right) \tag{45}\]
Now, combining (43), (44) and (45), it holds that
\[-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k-1}\|^{2}-\alpha_{k}a_{k}b_ {k}\|x_{k-1}-\overline{x}_{k}\|^{2}\leq-a_{k-1}b_{k-1}\|x_{k}-\overline{x}_{k} \|^{2}\] \[+2a_{k-1}b_{k-1}\|x_{k}-x_{k-1}\|^{2}+(-\alpha_{k}a_{k}b_{k}+(a_{k-1 }b_{k-1}-\alpha_{k}a_{k}b_{k})s_{k-1})\|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k}a _{k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}\right)\|\overline{x}_{k}-\overline{x}_{k-1} \|^{2}. \tag{46}\]
Injecting (46) in (42) we get
\[v_{k+1}-v_{k}+2b_{k-1}^{2}\lambda_{k-1}(f_{k}(x_{k})-f_{k}(\overline {x}_{k}))-2b_{k-2}^{2}\lambda_{k-2}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[-[(2b_{k-1}^{2}-2a_{k-1}b_{k-1})\lambda_{k-1}-2b_{k-2}^{2}\lambda_{ k-2}](f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k} \|^{2}\] \[\leq(a_{k}^{2}+\alpha_{k}a_{k}b_{k}-a_{k}b_{k}c_{k}-a_{k-1}b_{k-1} )\|x_{k}-\overline{x}_{k}\|^{2}\] \[+(-a_{k-1}^{2}+(1+s_{k-1})(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k})) \|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2 }c_{k}-b_{k-1}^{2}+3a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k}+b_{ k-1}^{2}\lambda_{k-1}c_{k})\|x_{k}\|^{2}\] \[+(a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}-b_{k-1}^{2}\lambda_{k-1}c_{k -1}+\alpha_{k}b_{k}^{2}c_{k})\|x_{k-1}\|^{2}\] \[+\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k} \right)\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{ x}_{k-1}\|^{2}\] \[+\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k} a_{k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}-b_{k-1}^{2}\lambda_{k-1}c_{k-1}\right)\| \overline{x}_{k}-\overline{x}_{k-1}\|^{2}. \tag{47}\]
Consider now \(u=r-1,\,v=r\) and assume that \(a>1+q,\,r\in\left(\frac{1}{2},\frac{q+1}{2}\right]\). Further, let \(\mu_{k}=2b_{k-1}^{2}\lambda_{k-1}\), \(\nu_{k}=-a_{k}^{2}-\alpha_{k}a_{k}b_{k}+a_{k}b_{k}c_{k}+a_{k-1}b_{k-1}\) and \(\sigma_{k}=-b_{k}^{2}c_{k}^{2}+\alpha_{k}b_{k}^{2}c_{k}+a_{k}b_{k}c_{k}-b_{k-1 }^{2}\lambda_{k-1}c_{k}\) for all \(k\geq 1\).
Next we show that all the sequences defined above are positive after an index \(K_{0}\) big enough. For easier readability we emphasize that by \(h_{k}-\mathcal{O}(k^{l})\) we understand the difference of a sequence \(h_{k}\) and a positive sequence of order \(\mathcal{O}(k^{l})\) as \(k\to+\infty.\) Similarly, by \(h_{k}+\mathcal{O}(k^{l})\) we understand the sum of a sequence \(h_{k}\) and a positive sequence of order \(\mathcal{O}(k^{l})\) as \(k\to+\infty.\) Further, by \(s\mathcal{O}(k^{l})\), \(s>0\), we understand a positive sequence \((u_{k})\) that satisfies \(u_{k}\leq sk^{l}\) after some index. All the estimates below hold after an index \(K_{0}\) big enough.
Obviously, one has
\[\mu_{k}=2\lambda(k-1)^{2r+\delta}>0\text{ and }\mu_{k}=\mathcal{O}(k^{2r+\delta}). \tag{48}\]
Further, since \(q<1<p\) one has
\[\nu_{k} =-a^{2}k^{2r-2}-\left(1-\frac{\alpha}{k^{q}}\right)ak^{2r-1}+ack^{2 r-1-p}+a(k-1)^{2r-1}\] \[=a\alpha k^{2r-1-q}-\mathcal{O}(k^{2r-2})+\mathcal{O}(k^{2r-1-p} )>0\text{ and }\nu_{k}=\mathcal{O}(k^{2r-1-q}). \tag{49}\]
Now, since \(\lambda_{k}=\lambda k^{\delta}<1\) for \(k\) big enough (that is, \(\delta<0\), or \(\delta=0\) and \(0<\lambda<1\)), one has
\[\sigma_{k} =-c^{2}k^{2r-2p}+\left(1-\frac{\alpha}{k^{q}}\right)ck^{2r-p}+ack^ {2r-1-p}-\lambda c(k-1)^{2r+\delta}k^{-p}\] \[=ck^{-p}(k^{2r}-\lambda(k-1)^{2r+\delta})+ack^{2r-1-p}-\alpha ck^{2 r-q-p}-c^{2}k^{2r-2p}\] \[=ck^{-p}(k^{2r}-\lambda(k-1)^{2r+\delta})+\mathcal{O}(k^{2r-1-p} )-\mathcal{O}(k^{2r-q-p})-\mathcal{O}(k^{2r-2p})>0\text{ and }\sigma_{k}=\mathcal{O}(k^{2r-p}). \tag{50}\]
Consequently, \(E_{k}\geq 0\) for all \(k\geq K_{0}\).
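Since the positivity claims (48)-(50) rest on delicate comparisons between several interacting power sequences, a quick numerical sanity check can be reassuring. The following Python sketch (all concrete values of \(a,\alpha,c,\lambda,\delta,p,q,r\) below are hypothetical choices satisfying the standing assumptions, not values taken from this paper) evaluates \(\mu_{k}\), \(\nu_{k}\) and \(\sigma_{k}\) on a large grid and compares their empirical log-log slopes with the predicted orders.

```python
import numpy as np

# Hypothetical parameters with q < 1 < p < q + 1, a > 1 + q,
# r in (1/2, (q+1)/2], delta <= 0 (and 0 < lambda < 1 when delta = 0).
q, p = 0.5, 1.2
a, alpha, c, lam, delta = 2.0, 1.0, 1.0, 0.5, -0.1
r = (q + 1) / 2

k = np.arange(10.0, 1e6)                 # index grid
a_ = lambda j: a * j**(r - 1)            # a_k = a k^{r-1}
b_ = lambda j: j**r                      # b_k = k^r
c_ = lambda j: c * j**(-p)               # c_k = c k^{-p}
al_ = lambda j: 1 - alpha / j**q         # alpha_k = 1 - alpha / k^q
lm_ = lambda j: lam * j**delta           # lambda_k = lambda k^delta

mu = 2 * b_(k - 1)**2 * lm_(k - 1)
nu = -a_(k)**2 - al_(k)*a_(k)*b_(k) + a_(k)*b_(k)*c_(k) + a_(k - 1)*b_(k - 1)
sigma = (-b_(k)**2*c_(k)**2 + al_(k)*b_(k)**2*c_(k)
         + a_(k)*b_(k)*c_(k) - b_(k - 1)**2*lm_(k - 1)*c_(k))

# Positivity on the grid, plus empirical log-log slopes to be compared
# with the predicted orders 2r+delta, 2r-1-q and 2r-p from (48)-(50).
for name, s, order in [("mu", mu, 2*r + delta),
                       ("nu", nu, 2*r - 1 - q),
                       ("sigma", sigma, 2*r - p)]:
    slope = np.polyfit(np.log(k[-1000:]), np.log(s[-1000:]), 1)[0]
    print(f"{name}: min = {s.min():.3e}, slope = {slope:.3f}, predicted = {order:.3f}")
```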
In other words (47) can be written as
\[E_{k+1}-E_{k}+(-\alpha_{k}^{2}b_{k}^{2}-\alpha_{k}a_{k}b_{k}+ \alpha_{k}b_{k}^{2}c_{k}+b_{k-1}^{2}-3a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[+b_{k-1}^{2}\lambda_{k-1}^{2}\|u_{k}\|^{2}+(2a_{k-1}b_{k-1}\lambda_ {k-1}+2b_{k-2}^{2}\lambda_{k-2}-2b_{k-1}^{2}\lambda_{k-1})(f_{k-1}(x_{k-1})-f_{k- 1}(\overline{x}_{k-1}))\] \[+(-\alpha_{k-1}a_{k-1}b_{k-1}+a_{k-1}b_{k-1}c_{k-1}+a_{k-2}b_{k-2} -(1+s_{k-1})(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k}))\|x_{k-1}-\overline{x}_{k-1} \|^{2}\] \[+(-b_{k-1}^{2}c_{k-1}^{2}+\alpha_{k-1}b_{k-1}^{2}c_{k-1}+a_{k-1}b_{ k-1}c_{k-1}-b_{k-2}^{2}\lambda_{k-2}c_{k-1}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}+\] \[+b_{k-1}^{2}\lambda_{k-1}c_{k-1}-\alpha_{k}b_{k}^{2}c_{k})\|x_{k-1} \|^{2}\] \[\leq\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k} \right)\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\| \overline{x}_{k-1}\|^{2}\] \[+\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k} a_{k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}-b_{k-1}^{2}\lambda_{k-1}c_{k-1}\right)\| \overline{x}_{k}-\overline{x}_{k-1}\|^{2}. \tag{51}\]
For simplicity, let us denote
\[\xi_{k}=b_{k-1}^{2}\lambda_{k-1}^{2}\] \[m_{k}=2a_{k-1}b_{k-1}\lambda_{k-1}+2b_{k-2}^{2}\lambda_{k-2}-2b_{k -1}^{2}\lambda_{k-1}\] \[n_{k}=-\alpha_{k-1}a_{k-1}b_{k-1}+a_{k-1}b_{k-1}c_{k-1}+a_{k-2}b_ {k-2}-(1+s_{k-1})(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k})\] \[\eta_{k}=-\alpha_{k}^{2}b_{k}^{2}-\alpha_{k}a_{k}b_{k}+\alpha_{k}b _{k}^{2}c_{k}+b_{k-1}^{2}-3a_{k-1}b_{k-1}\] \[t_{k}=-b_{k-1}^{2}c_{k-1}^{2}+\alpha_{k-1}b_{k-1}^{2}c_{k-1}+a_{k -1}b_{k-1}c_{k-1}-b_{k-2}^{2}\lambda_{k-2}c_{k-1}-a_{k-1}b_{k-1}\lambda_{k-1}c _{k-1}+\] \[\quad+b_{k-1}^{2}\lambda_{k-1}c_{k-1}-\alpha_{k}b_{k}^{2}c_{k},\]
and we show that all the sequences above are positive after an index \(K_{1}\geq K_{0}\) big enough.
First one has
\[\xi_{k}=\lambda^{2}(k-1)^{2r+2\delta}>0\text{ and }\xi_{k}=\mathcal{O}(k^{2r+2 \delta}). \tag{52}\]
Obviously, since \(a>1+q\geq 2r\) one has
\[m_{k} =2a\lambda(k-1)^{2r-1+\delta}+2\lambda((k-2)^{2r+\delta}-(k-1)^{2 r+\delta})\] \[=2a\lambda(k-1)^{2r-1+\delta}-2\lambda(2r+\delta)\mathcal{O}(k^{2 r-1+\delta})>0\text{ and }m_{k}=\mathcal{O}(k^{2r-1+\delta}). \tag{53}\]
If \(q<1\) and \(1+q>p>1\), then, taking into account that \((k-1)^{2r-1-q}-k^{2r-1-q}=0\) if \(r=\frac{q+1}{2}\), that \((k-1)^{2r-1-q}-k^{2r-1-q}=\mathcal{O}(k^{2r-2-q})\) if \(r<\frac{q+1}{2}\), and that \(s<\frac{c}{\alpha}\), one has
\[n_{k} =-\left(1-\frac{\alpha}{(k-1)^{q}}\right)a(k-1)^{2r-1}+ac(k-1)^{2 r-1-p}+a(k-2)^{2r-1}\] \[-\left(1+\frac{s}{(k-1)^{p-q}}\right)\left(a(k-1)^{2r-1}-\left(1- \frac{\alpha}{k^{q}}\right)ak^{2r-1}\right)\] \[=ac(k-1)^{2r-1-p}+a((k-2)^{2r-1}+k^{2r-1}-2(k-1)^{2r-1})\] \[-\frac{as}{(k-1)^{p-q}}((k-1)^{2r-1}-k^{2r-1}+\alpha k^{2r-1-q})+ a\alpha((k-1)^{2r-1-q}-k^{2r-1-q})\] \[=ac(k-1)^{2r-1-p}-\mathcal{O}(k^{2r-3})-as\alpha\mathcal{O}(k^{2 r-1-p})+a\alpha\mathcal{O}((k-1)^{2r-1-q}-k^{2r-1-q})>0\] \[\text{ and }n_{k}=\mathcal{O}(k^{2r-1-p}). \tag{54}\]
If \(q<1\) and \(1+q=p\), then \(s_{k}=\frac{s}{k}\), and taking into account that \((k-1)^{2r-1-q}-k^{2r-1-q}=0\) if \(r=\frac{q+1}{2}\), that \((k-1)^{2r-1-q}-k^{2r-1-q}=(1+q-2r)\mathcal{O}(k^{2r-2-q})\) if \(r<\frac{q+1}{2}\), and that \(s<\frac{c}{\alpha}\), one has
\[n_{k} =ac(k-1)^{2r-2-q}+a((k-2)^{2r-1}+k^{2r-1}-2(k-1)^{2r-1})\] \[-\frac{as}{k-1}((k-1)^{2r-1}-k^{2r-1}+\alpha k^{2r-1-q})+a\alpha( (k-1)^{2r-1-q}-k^{2r-1-q})\] \[=ac(k-1)^{2r-2-q}-\frac{as\alpha}{k-1}k^{2r-1-q}+\mathcal{O}(k^{2 r-3})-\mathcal{O}(k^{2r-3})\] \[+a\alpha\mathcal{O}((k-1)^{2r-1-q}-k^{2r-1-q})>0\text{ and }n_{k}= \mathcal{O}(k^{2r-2-q}). \tag{55}\]
Concerning \(\eta_{k}\), since \(p>1>q\) one has
\[\eta_{k} =-\left(1-\frac{\alpha}{k^{q}}\right)^{2}k^{2r}-\left(1-\frac{ \alpha}{k^{q}}\right)ak^{2r-1}+\left(1-\frac{\alpha}{k^{q}}\right)ck^{2r-p}+(k -1)^{2r}-3a(k-1)^{2r-1}\] \[=2\alpha k^{2r-q}+((k-1)^{2r}-k^{2r})-\alpha^{2}k^{2r-2q}-ak^{2r-1 }+\alpha ak^{2r-1-q}+\left(1-\frac{\alpha}{k^{q}}\right)ck^{2r-p}\] \[-3a(k-1)^{2r-1}=2\alpha k^{2r-q}-\mathcal{O}(k^{2r-1})>0\text{ and } \eta_{k}=\mathcal{O}(k^{2r-q}). \tag{56}\]
Now, since \(\lambda_{k}=\lambda k^{\delta}\leq 1\) for \(k\) big enough (that is, \(\delta<0\), or \(\delta=0\) and \(0<\lambda<1\)), and since \(a>1+q\geq 2r\), hence \(a>|2r-p|\) if \(\delta<0\), and \(a>\frac{(2r-p)-2\lambda r}{1-\lambda}\) if \(\delta=0\), one has
\[t_{k} =-c^{2}(k-1)^{2r-2p}+\left(1-\frac{\alpha}{(k-1)^{q}}\right)c(k-1) ^{2r-p}+ac(k-1)^{2r-1-p}-\lambda c(k-2)^{2r+\delta}(k-1)^{-p}\] \[-a\lambda c(k-1)^{2r-1+\delta-p}+\lambda c(k-1)^{2r+\delta-p}- \left(1-\frac{\alpha}{k^{q}}\right)ck^{2r-p}\] \[=(ac(k-1)^{2r-1-p}-a\lambda c(k-1)^{2r-1+\delta-p})+c((k-1)^{2r-p }-k^{2r-p})\] \[+\lambda c(k-1)^{-p}((k-1)^{2r+\delta}-(k-2)^{2r+\delta})+\alpha c (k^{2r-q-p}-(k-1)^{2r-q-p})-c^{2}(k-1)^{2r-2p}\] \[=(ac(k-1)^{2r-1-p}-a\lambda c(k-1)^{2r-1+\delta-p})-c(2r-p) \mathcal{O}(k^{2r-1-p})+\lambda c(2r+\delta)\mathcal{O}(k^{2r-1-p+\delta})\] \[-\mathcal{O}(k^{2r-q-1-p})-\mathcal{O}(k^{2r-2p})>0\text{ and }t_{k}= \mathcal{O}(k^{2r-1-p}). \tag{57}\]
Concerning the right hand side of (51), in what follows we show that
\[\sum_{k=1}^{+\infty}\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}- \alpha_{k}a_{k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}\right)\|\overline{x}_{k}- \overline{x}_{k-1}\|^{2}<+\infty.\]
Let us denote
\[S_{k}:=\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_ {k})+\frac{a_{k-1}b_{k-1}}{2}\right)\|\overline{x}_{k}-\overline{x}_{k-1}\|^{ 2}. \tag{58}\]
Note that according to (31) one has \(\|\overline{x}_{k}-\overline{x}_{k-1}\|\leq\frac{p_{1}}{k}\|\overline{x}_{k}\|\) for some \(p_{1}>p\) and all \(k\) big enough. Further \(\|\overline{x}_{k}\|^{2}\leq\|x^{*}\|^{2}\), hence we have
\[\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}\leq\frac{p_{1}^{2}}{k^{2}}\|x^{*} \|^{2}.\]
Therefore, it is enough to show that \(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k})+\frac{a_ {k-1}b_{k-1}}{2}=\mathcal{O}(k^{l})\) as \(k\to+\infty\), with \(l<1\).
Indeed,
\[\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_ {k})+\frac{a_{k-1}b_{k-1}}{2}=\left(1+\frac{(k-1)^{p-q}}{s}\right)\left(a(k-1) ^{2r-1}-\left(1-\frac{\alpha}{k^{q}}\right)ak^{2r-1}\right)\] \[+\frac{a}{2}(k-1)^{2r-1}\leq C_{1}(k-1)^{\max(2r-1-2q+p,2r-1)}.\]
Observe that by assumption \(q<1\) and \(2r\leq q+1\); hence, if \(p<q+1\), one can take \(l=\max(2r-1-2q+p,2r-1)<1\) and we obtain that \((S_{k})\) is summable.
Further, for \(p=q+1\) if \(2r<q+1\) we obtain that \(l=\max(2r-1-2q+p,2r-1)<1\), so also in this case \((S_{k})\) is summable.
However, in case \(p=q+1\) and \(2r=q+1\) one has \(l=1\), hence \(S_{k}=\mathcal{O}(k^{-1})\).
Now, since \(\|\overline{x}_{k}\|^{2}\leq\|x^{*}\|^{2}\) and
\[b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k} =c\lambda(k-1)^{2r+\delta}((k-1)^{-p}-k^{-p})+ack^{2r-1-p}\] \[=\mathcal{O}(k^{2r-1-p}),\]
the right hand side of (51) leads to
\[\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}\right)\| \overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{x}_{k-1}\| ^{2}\] \[-b_{k-1}^{2}\lambda_{k-1}c_{k-1}\|\overline{x}_{k}-\overline{x}_{ k-1}\|^{2}+S_{k}\leq C_{2}k^{2r-1-p}+S_{k}\text{ for some }C_{2}>0.\]
Consequently, (51) leads to
\[E_{k+1}-E_{k}+\xi_{k}\|u_{k}\|^{2}+m_{k}(f_{k-1}(x_{k-1})-f_{k-1 }(\overline{x}_{k-1}))+n_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+\eta_{k}\|x_{k}-x_{k-1}\|^{2}+t_{k}\|x_{k-1}\|^{2}\leq C_{2}k^{2r -1-p}+S_{k}\text{ for all }k\geq K_{1}. \tag{59}\]
Summing up (59) from \(k=K_{1}\) to \(k=n\geq K_{1}\) we obtain
\[E_{n+1}+\sum_{k=K_{1}}^{n}\xi_{k}\|u_{k}\|^{2}+\sum_{k=K_{1}}^{n}m_{ k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+\sum_{k=K_{1}}^{n}n_{k}\|x_{k-1}- \overline{x}_{k-1}\|^{2}\] \[+\sum_{k=K_{1}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}+\sum_{k=K_{1}}^{ n}t_{k}\|x_{k-1}\|^{2}\leq C_{2}\sum_{k=K_{1}}^{n}k^{2r-1-p}+\sum_{k=K_{1}}^{n}S _{k}+E_{K_{1}}\] \[\leq C_{2}\sum_{k=K_{1}}^{n}k^{2r-1-p}+C\text{ for some }C>0. \tag{60}\]
## II. Rates
In what follows \(x^{*}\) denotes the element of minimum norm from the set \(\operatorname{argmin}f\).
We treat first the case \(p<q+1\).
Now, if \(2r-1-p>-1\), that is \(r\in\left(\frac{p}{2},\frac{q+1}{2}\right],\) it is obvious that \(\sum_{k=K_{1}}^{n}k^{2r-1-p}\to+\infty\) as \(n\to+\infty.\) However, it can easily be seen that \(\sum_{k=K_{1}}^{n}k^{2r-1-p}=\mathcal{O}(n^{2r-p}).\)
Hence, dividing (60) by \(n^{2r-p}\), we obtain at once that there exists \(L>0\) such that \(\frac{E_{n+1}}{n^{2r-p}}<L;\) consequently
\[\frac{\mu_{n}}{n^{2r-p}}(f_{n}(x_{n})-f_{n}(\overline{x}_{n}))\leq L\text{ and }\frac{\sigma_{n}}{n^{2r-p}}\|x_{n}\|^{2}\leq L\text{ for all }n\geq K_{1}.\]
But according to (50) one has \(\sigma_{n}=\mathcal{O}(n^{2r-p})\); consequently, \((x_{n})\) is bounded.
From (48) we have \(\mu_{n}=\mathcal{O}(n^{2r+\delta}),\) hence
\[f_{n}(x_{n})-f_{n}(\overline{x}_{n})=\mathcal{O}(n^{-p-\delta}).\]
Consequently, for every \(\rho<p+\delta-1\) one has
\[\sum_{k=1}^{+\infty}k^{\rho}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))<+\infty.\]
Now, according to (34) one has \(f(x_{n})-f(x^{*})\leq f_{n}(x_{n})-f_{n}(\overline{x}_{n})+\frac{c}{2n^{p}}\| x^{*}\|^{2}\) hence, since \(\delta\leq 0\) we obtain
\[f(x_{n})-f(x^{*})=\mathcal{O}(n^{-p-\delta}).\]
Further, one has \(\frac{v_{n+1}}{n^{2r-p}}<L,\) hence
\[\frac{\|a_{n}(x_{n}-\overline{x}_{n})+b_{n}(\alpha_{n}(x_{n}-x_{n-1})-c_{n}x_{ n})\|^{2}}{n^{2r-p}}<L\text{ for all }n\geq K_{1}.\]
Consequently, \(\|an^{\frac{p}{2}-1}(x_{n}-\overline{x}_{n})+n^{\frac{p}{2}}(\alpha_{n}(x_{n} -x_{n-1})-cn^{-p}x_{n})\|^{2}\) is bounded. But \((x_{n})\) is bounded and \(p<2\), hence \(an^{\frac{p}{2}-1}(x_{n}-\overline{x}_{n})\to 0\) as \(n\to+\infty\) and \(-cn^{-\frac{p}{2}}x_{n}\to 0\) as \(n\to+\infty,\) consequently \(\|n^{\frac{p}{2}}\alpha_{n}(x_{n}-x_{n-1})\|^{2}\) is bounded. In other words
\[\|x_{n}-x_{n-1}\|^{2}=\mathcal{O}(n^{-p}).\]
Hence, for every \(\rho<p-1\) one has
\[\sum_{k=1}^{+\infty}k^{\rho}\|x_{k}-x_{k-1}\|^{2}<+\infty.\]
Now, using the definition of \(u_{n}\) we have \(\lambda_{n-1}u_{n}=\alpha_{n-1}(x_{n-1}-x_{n-2})-(x_{n}-x_{n-1})-c_{n-1}x_{n-1}\), hence
\[\|\lambda(n-1)^{\delta}u_{n}\|\leq\|x_{n}-x_{n-1}\|+\alpha_{n-1}\|x_{n-1}-x_{ n-2}\|+c_{n-1}\|x_{n-1}\|=\mathcal{O}(n^{-\frac{p}{2}}).\]
Consequently, \(\|u_{n}\|^{2}=\mathcal{O}(n^{-p-2\delta})\) and for every \(\rho<p+2\delta-1\) one has
\[\sum_{k=1}^{+\infty}k^{\rho}\|u_{k}\|^{2}<+\infty.\]
Further, by taking \(r=\frac{q+1}{2}\) we obtain the following ergodic convergence results.
\[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}m_{k}(f_{k-1}(x_{k-1})-f_{k-1}( \overline{x}_{k-1}))}{n^{q+1-p}}<+\infty.\]
But according to (53) we have \(m_{k}=\mathcal{O}(k^{q+\delta}),\) hence
\[\limsup_{n\rightarrow+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f_{k-1}(x_{k-1})-f _{k-1}(\overline{x}_{k-1}))}{n^{q+1-p}}<+\infty.\]
Similarly, according to (56) one has \(\eta_{k}=\mathcal{O}(k^{1}),\) hence
\[\limsup_{n\rightarrow+\infty}\frac{\sum_{k=1}^{n}k\|x_{k}-x_{k-1}\|^{2}}{n^{q +1-p}}<+\infty.\]
Finally, according to (52) one has \(\xi_{k}=\mathcal{O}(k^{q+1+2\delta}),\) hence
\[\limsup_{n\rightarrow+\infty}\frac{\sum_{k=1}^{n}k^{q+1+2\delta}\|u_{k}\|^{2} }{n^{q+1-p}}<+\infty.\]
Now, if \(2r-1-p<-1,\) that is \(r\in\left(\frac{1}{2},\frac{p}{2}\right),\) then the right hand side of (60) is finite, hence there exists \(C_{3}>0\) such that
\[E_{n+1}+\sum_{k=K_{1}}^{n}\xi_{k}\|u_{k}\|^{2}+\sum_{k=K_{1}}^{n }m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+\sum_{k=K_{1}}^{n}n_{k}\| x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+\sum_{k=K_{1}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}+\sum_{k=K_{1}}^{ n}t_{k}\|x_{k-1}\|^{2}\leq C_{2}\sum_{k=K_{1}}^{n}k^{2r-1-p}+\sum_{k=K_{1}}^{n}S_{ k}+E_{K_{1}}\leq C_{3}. \tag{61}\]
From (61) by using (52), (53) and (56) we obtain the estimates
\[\sum_{k=1}^{+\infty}k^{2r+2\delta}\|u_{k}\|^{2}<+\infty,\]
\[\sum_{k=1}^{+\infty}k^{2r-1+\delta}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1 }))<+\infty\]
and
\[\sum_{k=1}^{+\infty}k^{2r-q}\|x_{k}-x_{k-1}\|^{2}<+\infty.\]
But according to (34) we have \(f(x_{k-1})-f(x^{*})\leq f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1})+\frac{c} {2k^{p}}\|x^{*}\|^{2}.\)
Further \(\sum_{k=1}^{+\infty}k^{2r-1+\delta}\frac{c}{2k^{p}}\|x^{*}\|^{2}<+\infty\) therefore
\[\sum_{k=1}^{+\infty}k^{2r-1+\delta}(f(x_{k-1})-\min_{\mathcal{H}}f)<+\infty.\]
In case \(p=q+1\) we have seen earlier that \(S_{k}\) defined by (58) is summable provided \(2r<q+1.\) Further, for \(2r=q+1\) one has \(S_{k}=\mathcal{O}(k^{-1}).\) Consequently, the right hand side of (60), that is \(C_{2}\sum_{k=K_{1}}^{n}k^{2r-1-p}+\sum_{k=K_{1}}^{n}S_{k}+E_{K_{1}}\), is finite for \(2r<q+1\), while for \(2r=q+1\) its general term is of order \(\mathcal{O}(k^{-1}).\)
So assume first that \(r\in\left(\frac{1}{2},\frac{q+1}{2}\right).\) Then (60) becomes:
\[E_{n+1}+\sum_{k=K_{1}}^{n}\xi_{k}\|u_{k}\|^{2}+\sum_{k=K_{1}}^{n }m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+\sum_{k=K_{1}}^{n}n_{k}\| x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+\sum_{k=K_{1}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}+\sum_{k=K_{1}}^{ n}t_{k}\|x_{k-1}\|^{2}\leq C,\text{ for some }C>0. \tag{62}\]
From (62), for all \(r\in\left(\frac{1}{2},\frac{q+1}{2}\right)\) we obtain at once the following estimates:
\(\sum_{k=1}^{+\infty}k^{2r+2\delta}\|u_{k}\|^{2}<+\infty,\)\(\sum_{k=1}^{+\infty}k^{2r-1+\delta}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))<+\infty\) and \(\sum_{k=1}^{+\infty}k^{2r-q}\|x_{k}-x_{k-1}\|^{2}<+\infty.\)
But according to (34) we have \(f(x_{k-1})-f(x^{*})\leq f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1})+\frac{c}{2k^{p}}\|x^{*}\|^{2}.\) Further \(\sum_{k=1}^{+\infty}k^{2r-1+\delta}\frac{c}{2k^{p}}\|x^{*}\|^{2}<+\infty\) therefore
\[\sum_{k=1}^{+\infty}k^{2r-1+\delta}(f(x_{k-1})-\min_{\mathcal{H}}f)<+\infty.\]
Assume now that \(r=\frac{q+1}{2}.\) Then (60) becomes:
\[E_{n+1}+\sum_{k=K_{1}}^{n}\xi_{k}\|u_{k}\|^{2}+\sum_{k=K_{1}}^{n} m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+\sum_{k=K_{1}}^{n}n_{k}\|x_{k- 1}-\overline{x}_{k-1}\|^{2}\] \[+\sum_{k=K_{1}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}+\sum_{k=K_{1}}^ {n}t_{k}\|x_{k-1}\|^{2}\leq C\sum_{k=K_{1}}^{n}\frac{1}{k},\text{ for some }C>0. \tag{63}\]
But \(\sum_{k=1}^{n}\frac{1}{k}=\mathcal{O}(\ln n)\), hence by dividing (63) by \(\ln n\) we get at once that there exists \(L>0\) such that \(\frac{E_{n+1}}{\ln n}<L\). Consequently, by arguing analogously as in the case \(p<q+1\), we have
\[f_{n}(x_{n})-f_{n}(\overline{x}_{n})=\mathcal{O}(n^{-p-\delta}\ln n)\]
and
\[f(x_{n})-f(x^{*})=\mathcal{O}(n^{-p-\delta}\ln n).\]
Further, in this case \(\nu_{n}=\mathcal{O}(1)\) and \(\sigma_{n}=\mathcal{O}(1)\) hence \(\frac{1}{\ln n}\|x_{n}-\overline{x}_{n}\|^{2}<L\) and \(\frac{1}{\ln n}\|x_{n}\|^{2}<L\). Combining the latter relations with the fact that \(\frac{v_{n+1}}{\ln n}<L\) we obtain that
\[\|x_{n}-x_{n-1}\|^{2}=\mathcal{O}(n^{-p}\ln n).\]
Now, using the definition of \(u_{n}\) we have
\[\|u_{n}\|^{2}=\mathcal{O}(n^{-p-2\delta}\ln n).\]
Finally, the following ergodic convergence results also hold here.
\[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f_{k-1}(x_{k-1})-f_{k-1} (\overline{x}_{k-1}))}{\ln n}<+\infty,\]
\[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k\|x_{k}-x_{k-1}\|^{2}}{\ln n}<+\infty\]
and
\[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+1+2\delta}\|u_{k}\|^{2}}{\ln n}<+\infty.\]
Also here, for \(\delta<0\) it holds \(\sum_{k=1}^{+\infty}k^{q+\delta}\frac{c}{2k^{p}}\|x^{*}\|^{2}<+\infty\), hence according to (34) one has
\[\limsup_{n\to+\infty}\frac{\sum_{k=1}^{n}k^{q+\delta}(f(x_{k-1})-\min_{ \mathcal{H}}f)}{\ln n}<+\infty.\]
### Strong convergence results
Now, in order to show the strong convergence of the sequences generated by (1) to an element of minimum norm of the nonempty, convex and closed set \(\operatorname{argmin}f\), we state the following results.
**Theorem 3.2**.: _Assume that \(0<q<1\), \(1<p<q+1\) and \(\lambda_{k}=\lambda k^{\delta}\) with \(p-q-1<\delta<0,\,\lambda>0\) or \(\delta=0\) and \(\lambda\in]0,1[\). Let \((x_{k})\) be a sequence generated by (1). Let \(x^{*}\) be the minimal norm element from \(\operatorname{argmin}f\). Then, \(\liminf_{k\to+\infty}\|x_{k}-x^{*}\|=0\). Further, \((x_{k})\) converges strongly to \(x^{*}\) whenever \((x_{k})\) is in the interior or the complement of the ball \(B(0,\|x^{*}\|)\) for \(k\) big enough._
Proof.: We will use the notations and the energy functional \(E_{k}\) used in the proof of Theorem 3.1.
**Case I.** Assume that \(\|x_{k}\|\geq\|x^{*}\|\) for all \(k\geq K_{2}\), where \(K_{2}\geq K_{1}\) and \(K_{1}\) was defined in the proof of Theorem 3.1. Let us add \(-\sigma_{k}\|x^{*}\|^{2}+\sigma_{k-1}\|x^{*}\|^{2}\) to both sides of (51). Note that \(E_{k}-\sigma_{k-1}\|x^{*}\|^{2}\geq 0\) for
all \(k>K_{2}.\) Further, since \(\|\overline{x}_{k}\|\leq\|x^{*}\|,\) we get that \(\|x_{k}\|^{2}-\|\overline{x}_{k}\|^{2}\geq 0\) for all \(k\geq K_{2}.\) Then we obtain for all \(k>K_{2}\) that
\[(E_{k+1}-\sigma_{k}\|x^{*}\|^{2})-(E_{k}-\sigma_{k-1}\|x^{*}\|^{2} )+\xi_{k}\|u_{k}\|^{2}+m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+n_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+\eta_{k}\|x_{k}-x_{k-1} \|^{2}+t_{k}\|x_{k-1}\|^{2}\] \[\leq\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k} \right)\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline {x}_{k-1}\|^{2}+(-\sigma_{k}+\sigma_{k-1})\|x^{*}\|^{2}+S_{k}. \tag{64}\]
The right hand side of (64) can be written as
\[\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}\right)\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}\lambda_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}+(-\sigma_{k}+\sigma_{k-1})\|x^{*}\|^{2}+S_{k}\] \[=\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}\right)\|\overline{x}_{k}\|^{2}-\left(b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+a_{k-1}b_{k-1}c_{k-1}\right)\|\overline{x}_{k-1}\|^{2}\] \[+\left(b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+(1-\lambda_{k-1})a_{k-1}b_{k-1}c_{k-1}\right)\|\overline{x}_{k-1}\|^{2}+(-\sigma_{k}+\sigma_{k-1})\|x^{*}\|^{2}+S_{k},\]
hence (64) becomes
\[(E_{k+1}-\sigma_{k}\|x^{*}\|^{2})-(E_{k}-\sigma_{k-1}\|x^{*}\|^{2})+\xi_{k}\|u_{k}\|^{2}+m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[+n_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+\eta_{k}\|x_{k}-x_{k-1}\|^{2}+t_{k}(\|x_{k-1}\|^{2}-\|x^{*}\|^{2})\] \[\leq\left(b_{k-1}^{2}\lambda_{k-1}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}\right)\|\overline{x}_{k}\|^{2}-\left(b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+a_{k-1}b_{k-1}c_{k-1}\right)\|\overline{x}_{k-1}\|^{2}\] \[+\left(b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+(1-\lambda_{k-1})a_{k-1}b_{k-1}c_{k-1}-\sigma_{k}+\sigma_{k-1}-t_{k}\right)\|x^{*}\|^{2}+S_{k}. \tag{65}\]
Now, according to (57), (50) and the form of \(a_{k},\)\(b_{k},\)\(c_{k}\) and \(\lambda_{k}\) we deduce that there exists \(K_{3}>K_{2}\) such that
\[b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+(1-\lambda_{k-1})a_{k- 1}b_{k-1}c_{k-1}-\sigma_{k}+\sigma_{k-1}-t_{k}\] \[=\lambda c(k-2)^{2r+\delta}((k-2)^{-p}-(k-1)^{-p})+(1-\lambda(k-1 )^{\delta})ac(k-1)^{2r-1-p}\] \[-ck^{-p}(k^{2r}-\lambda(k-1)^{2r+\delta})-ack^{2r-1-p}+\alpha ck^{ 2r-q-p}+c^{2}k^{2r-2p}\] \[+c(k-1)^{-p}((k-1)^{2r}-\lambda(k-2)^{2r+\delta})+ac(k-1)^{2r-1-p }-\alpha c(k-1)^{2r-q-p}-c^{2}(k-1)^{2r-2p}\] \[-(ac(k-1)^{2r-1-p}-a\lambda c(k-1)^{2r-1+\delta-p})-c((k-1)^{2r-p }-k^{2r-p})\] \[-\lambda c(k-1)^{-p}((k-1)^{2r+\delta}-(k-2)^{2r+\delta})-\alpha c (k^{2r-q-p}-(k-1)^{2r-q-p})+c^{2}(k-1)^{2r-2p}\] \[=\lambda c((k-2)^{2r+\delta-p}-(k-1)^{2r+\delta-p}+k^{-p}(k-1)^{2 r+\delta}-(k-1)^{-p}(k-2)^{2r+\delta})\] \[+ac((k-1)^{2r-1-p}-k^{2r-1-p})+c^{2}k^{2r-2p}=c^{2}k^{2r-2p}+ \mathcal{O}(k^{2r-2-p})<Ck^{2r-2p}\text{ for some }C>0. \tag{66}\]
Hence, \(\sum_{k=K_{3}}^{+\infty}\left(b_{k-2}^{2}\lambda_{k-2}(c_{k-2}-c_{k-1})+(1-\lambda_{k-1})a_{k-1}b_{k-1}c_{k-1}-\sigma_{k}+\sigma_{k-1}-t_{k}\right)\|x^{*}\|^{2}<+\infty,\) provided \(2r-2p<-1.\) So in what follows we assume that \(\max(p-\delta,1)<2r<\min(q+1,2p-1).\) Then, by summing (65) from \(k=K_{3}\) to \(k=n>K_{3}\) we obtain that there exists \(L>0\) such that
\[\mu_{n}(f_{n}(x_{n})-f_{n}(\overline{x}_{n}))\leq L,\text{ for all }n>K_{3}.\]
Now, by (33) we get
\[\|x_{n}-\overline{x}_{n}\|^{2}<L\frac{2n^{p}}{c\mu_{n}}=\frac{L}{c\lambda}\frac{n^{p}}{(n-1)^{2r+\delta}}\text{ for all }n>K_{3}.\]
Consequently, \(\|x_{n}-\overline{x}_{n}\|\to 0\) as \(n\rightarrow+\infty\), which, combined with the fact that \(\overline{x}_{n}\to x^{*}\) as \(n\rightarrow+\infty\), leads to
\[\|x_{n}-x^{*}\|\to 0\text{ as }n\rightarrow+\infty.\]
**Case II.**
Assume that there exists \(k_{0}\in\mathbb{N}\) such that \(\|x_{n}\|<\|x^{*}\|\) for all \(n\geq k_{0}.\)
Now, let \(\bar{x}\in\mathcal{H}\) be a weak sequential cluster point of \((x_{n}),\) which exists since \((x_{n})\) is bounded. This means that there exists a sequence \(\left(k_{n}\right)_{n\in\mathbb{N}}\subseteq[k_{0},+\infty)\cap\mathbb{N}\) such that \(k_{n}\rightarrow+\infty\) and \(x_{k_{n}}\) converges weakly to \(\bar{x}\) as \(n\rightarrow+\infty\). According to Theorem 3.1 and the fact that \(f\) is lower semicontinuous one has
\[f(\bar{x})\leq\liminf_{n\rightarrow+\infty}f\left(x_{k_{n}}\right)=\lim_{n \rightarrow+\infty}f\left(x_{k_{n}}\right)=\min_{\mathcal{H}}f\,,\]
hence \(\bar{x}\in\operatorname*{argmin}f.\) Now, since the norm is weakly lower semicontinuous one has that
\[\|\bar{x}\|\leq\liminf_{n\rightarrow+\infty}\|x_{k_{n}}\|\leq\|x^{*}\|\]
which, from the definition of \(x^{*}\), implies that \(\bar{x}=x^{*}\). This shows that \((x_{n})\) converges weakly to \(x^{*}\). So
\[\|x^{*}\|\leq\liminf_{n\to+\infty}\|x_{n}\|\leq\limsup_{n\to+\infty}\|x_{n}\| \leq\|x^{*}\|\,,\]
hence we have
\[\lim_{n\to+\infty}\|x_{n}\|=\|x^{*}\|\,.\]
From the previous relation and the fact that \(x_{n}\rightharpoonup x^{*}\) as \(n\to+\infty\), we obtain the strong convergence, that is
\[\lim_{n\to+\infty}x_{n}=x^{*}.\]
**Case III.** We suppose that there exists \(k_{0}\in\mathbb{N}\) such that for every \(n\geq k_{0}\) there exists \(l\geq n\) such that \(\|x^{*}\|>\|x_{l}\|\) and also there exists \(m\geq n\) such that \(\|x^{*}\|\leq\|x_{m}\|\).
So let \(k_{1}\geq k_{0}\) and \(l_{1}\geq k_{1}\) such that \(\|x^{*}\|>\|x_{l_{1}}\|\). Let \(k_{2}>l_{1}\) and \(l_{2}\geq k_{2}\) such that \(\|x^{*}\|>\|x_{l_{2}}\|\). Continuing the procedure we obtain \((x_{l_{n}})\), a subsequence of \((x_{n})\) with the property that \(\|x_{l_{n}}\|<\|x^{*}\|\) for all \(n\in\mathbb{N}\). Now reasoning as in **Case II** we obtain that \(\lim_{n\to+\infty}x_{l_{n}}=x^{*}\). Consequently,
\[\liminf_{n\to+\infty}\|x_{n}-x^{*}\|=0.\]
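To complement the proof, Theorem 3.2 can be illustrated numerically by running the recursion behind (1) in proximal form: from the identity stated in Theorem 3.3 below (extended here, as an assumption, to a general step size \(\lambda_{k-1}\)) one gets \(x_{k}=\operatorname{prox}_{\lambda_{k-1}f}\big((1-c_{k-1})x_{k-1}+\alpha_{k-1}(x_{k-1}-x_{k-2})\big).\) The sketch below is a minimal illustration, not the paper's setting: it uses \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) with a whole line of minimizers, so the iterates should approach the minimum norm minimizer; all parameter values are hypothetical choices satisfying the hypotheses of Theorem 3.2.

```python
import numpy as np

# f(x) = 0.5*||Ax - b||^2 with argmin f = {x : x1 + x2 = 2};
# the minimum norm element of this line is (1, 1).
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

def prox(z, t):
    # prox_{t f}(z) = (I + t A^T A)^{-1} (z + t A^T b), a small linear solve
    return np.linalg.solve(np.eye(len(z)) + t * A.T @ A, z + t * A.T @ b)

# Hypothetical parameters with 0 < q < 1 < p < q + 1 and p - q - 1 < delta < 0.
q, p, delta = 0.5, 1.2, -0.2
alpha, c, lam = 1.0, 1.0, 0.5

x_prev = np.array([5.0, -3.0])   # x_0
x = x_prev.copy()                # x_1
for k in range(2, 200001):
    y = (1 - c / k**p) * x + (1 - alpha / k**q) * (x - x_prev)
    x_prev, x = x, prox(y, lam * k**delta)

print("x_k =", x, " distance to (1, 1):",
      np.linalg.norm(x - np.array([1.0, 1.0])))
```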
### Full strong convergence for the case \(\delta=0,\,\lambda=1\)
Now we are able to show that in case \(\lambda=1\) the sequences generated by Algorithm (1) converge strongly to the minimum norm minimizer of the objective function \(f\). The following theorem is the main result of the present section.
**Theorem 3.3**.: _Assume that \(0<q<1\), \(1<p<q+1\), \(\lambda_{k}\equiv 1\). Let \((x_{k})\) be a sequence generated by (1). For every \(k\geq 2\) let us denote by \(u_{k}\) the element from \(\partial f(x_{k})\) that satisfies (2) with equality, i.e.,_
\[x_{k}=\alpha_{k-1}(x_{k-1}-x_{k-2})-u_{k}+\left(1-c_{k-1}\right)x_{k-1}.\]
_Then the following results are valid._
* _If_ \(p\leq 2q\) _then_ \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{\frac{p-q-1}{2}})\) _as_ \(n\to+\infty\)_, hence_ \(\lim_{n\to+\infty}x_{n}=x^{*}\)_. Further,_ \(\|x_{n}-x_{n-1}\|^{2}\)_,_ \(\|u_{n}\|^{2}\in\mathcal{O}(n^{-q-1})\) _as_ \(n\to+\infty\) _and_ \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) _as_ \(n\to+\infty\)_._
* _If_ \(2q<p\leq\frac{3q+1}{2}\) _then_ \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{\frac{q-1}{2}})\) _as_ \(n\to+\infty\) _and_ \(\lim_{n\to+\infty}x_{n}=x^{*}\)_. Further,_ \(f_{n}(x_{n})-f_{n}(\overline{x}_{n})\)_,_ \(\|x_{n}-x_{n-1}\|^{2}\)_,_ \(\|u_{n}\|^{2}\in\mathcal{O}(n^{-q-1})\) _as_ \(n\to+\infty\) _and_ \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) _as_ \(n\to+\infty\)_. The following sum estimates also hold._ \(\sum_{k=1}^{+\infty}k^{q}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))<+\infty,\,\sum_ {k=1}^{+\infty}k^{2q}\|u_{k}\|^{2}<+\infty\) _and_ \(\sum_{k=1}^{+\infty}k^{q}\|x_{k+1}-x_{k}\|^{2}<+\infty\)_._
* _If_ \(\frac{3q+1}{2}<p<q+1\)_, then_ \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{p-q-1})\) _as_ \(n\to+\infty\)_, hence_ \(\lim_{n\to+\infty}x_{n}=x^{*}\)_. Further,_ \(f_{n}(x_{n})-f_{n}(\overline{x}_{n})\)_,_ \(\|x_{n}-x_{n-1}\|^{2}\)_,_ \(\|u_{n}\|^{2}\in\mathcal{O}(n^{2p-4q-2})\) _as_ \(n\to+\infty\)_. Additionally, if_ \(2q<p<\frac{4q+2}{3}\)_, then_ \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) _as_ \(n\to+\infty\) _and if_ \(\frac{4q+2}{3}\leq p<q+1\)_, then_ \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{2p-4q-2})\) _as_ \(n\to+\infty\)_. Moreover,_ \(\sum_{k=1}^{+\infty}k^{q}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))<+\infty,\,\sum_ {k=1}^{+\infty}k^{2q}\|u_{k}\|^{2}<+\infty\) _and_ \(\sum_{k=1}^{+\infty}k^{q}\|x_{k+1}-x_{k}\|^{2}<+\infty\)
Proof.: We use the notations from the proof of Theorem 3.1. Then, for \(\lambda=1\) and \(\delta=0\), (47) becomes
\[v_{k+1}-v_{k}+2b_{k-1}^{2}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))-2b _{k-2}^{2}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))\] \[-(2b_{k-1}^{2}-2a_{k-1}b_{k-1}-2b_{k-2}^{2})(f_{k-1}(x_{k-1})-f_{k- 1}(\overline{x}_{k-1}))+b_{k-1}^{2}\|u_{k}\|^{2}\] \[+(-a_{k}^{2}-\alpha_{k}a_{k}b_{k}+a_{k}b_{k}c_{k}+a_{k-1}b_{k-1}) \|x_{k}-\overline{x}_{k}\|^{2}\] \[-(-a_{k-1}^{2}-\alpha_{k-1}a_{k-1}b_{k-1}+a_{k-1}b_{k-1}c_{k-1}+a_ {k-2}b_{k-2})\|x_{k-1}-\overline{x}_{k-1}\|^{2}\] \[+(-\alpha_{k-1}a_{k-1}b_{k-1}+a_{k-1}b_{k-1}c_{k-1}+a_{k-2}b_{k-2 }-(1+s_{k-1})(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k}))\|x_{k-1}-\overline{x}_{k-1 }\|^{2}\] \[-(\alpha_{k}^{2}b_{k}^{2}+\alpha_{k}a_{k}b_{k}-\alpha_{k}b_{k}^{2 }c_{k}-b_{k-1}^{2}+3a_{k-1}b_{k-1})\|x_{k}-x_{k-1}\|^{2}\] \[\leq(b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k}+b _{k-1}^{2}c_{k})\|x_{k}\|^{2}\] \[-(b_{k-1}^{2}c_{k-1}^{2}-\alpha_{k-1}b_{k-1}^{2}c_{k-1}-a_{k-1}b_{k -1}c_{k-1}+b_{k-2}^{2}c_{k-1})\|x_{k-1}\|^{2}\] \[+(b_{k-1}^{2}c_{k-1}^{2}+\alpha_{k}b_{k}^{2}c_{k}-\alpha_{k-1}b_{ k-1}^{2}c_{k-1}+b_{k-2}^{2}c_{k-1}-b_{k-1}^{2}c_{k-1})\|x_{k-1}\|^{2}\] \[+\left(b_{k-1}^{2}(c_{k-1}-c_{k})+a_{k}b_{k}c_{k}\right)\|\overline {x}_{k}\|^{2}-a_{k-1}b_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}\] \[+\left(\left(1+\frac{1}{s_{k-1}}\right)(a_{k-1}b_{k-1}-\alpha_{k} a_{k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}-b_{k-1}^{2}c_{k-1}\right)\|\overline{x}_{k}- \overline{x}_{k-1}\|^{2}. \tag{67}\]
We will assume from now on that \(a_{k}\equiv a\), \(\alpha>a>0\) and \(b_{k}=k^{q}.\) Then, concerning the right hand side of (67) we conclude the following.
\[-\sigma_{k}=b_{k}^{2}c_{k}^{2}-\alpha_{k}b_{k}^{2}c_{k}-a_{k}b_{k}c_{k}+b_{k-1 }^{2}c_{k}\geq 0\text{ after an index $k$ big enough}.\]
Further, \(-\sigma_{k}=\mathcal{O}(k^{q-p}).\) Note that for \(k\) big enough one has
\[-t_{k}=b_{k-1}^{2}c_{k-1}^{2}+\alpha_{k}b_{k}^{2}c_{k}-\alpha_{k-1}b_{k-1}^{2} c_{k-1}+b_{k-2}^{2}c_{k-1}-b_{k-1}^{2}c_{k-1}\leq 0.\]
Now, since \(\|\overline{x}_{k}\|\leq\|x^{*}\|\) we conclude that there exists \(C_{1}>0\) such that
\[b_{k-1}^{2}(c_{k-1}-c_{k})\|\overline{x}_{k}\|^{2}\leq C_{1}k^{2q-p-1}\text{ for $k$ big enough}.\]
We recall that \(S_{k}=\left(\left(1+\frac{(k-1)^{p-q}}{s}\right)(a_{k-1}b_{k-1}-\alpha_{k}a_{ k}b_{k})+\frac{a_{k-1}b_{k-1}}{2}\right)\|\overline{x}_{k}-\overline{x}_{k-1}\|^{2}\) and by using (31) we conclude that there exists \(C_{2}>0\) such that for \(k\) big enough one has
\[S_{k}\leq C_{2}k^{\max(p-q-2,q-2)}.\]
Consider now the energy functional \(e_{k}=\mu_{k-1}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+v_{k}+\nu_{k-1} \|x_{k-1}-\overline{x}_{k-1}\|^{2}.\) Obviously for our setting, one has \(\mu_{k}=\mathcal{O}(k^{2q})\) and \(\nu_{k}=\mathcal{O}(1).\) Then, (67) yields
\[e_{k+1}-e_{k}+m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1} ))+\xi_{k}\|u_{k}\|^{2}+n_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+\eta_{k}\|x_{k} -x_{k-1}\|^{2}\] \[\leq-\sigma_{k}\|x_{k}\|^{2}+\sigma_{k-1}\|x_{k-1}\|^{2}+a_{k}b_{ k}c_{k}\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}\] \[\quad+C_{1}k^{2q-p-1}+C_{2}k^{\max(p-q-2,q-2)}. \tag{68}\]
Note that for \(k\) big enough one has \(m_{k}=-(2b_{k-1}^{2}-2a_{k-1}b_{k-1}-2b_{k-2}^{2})\geq 0\) and \(m_{k}=\mathcal{O}(k^{q})\), \(n_{k}=-\alpha_{k-1}a_{k-1}b_{k-1}+a_{k-1}b_{k-1}c_{k-1}+a_{k-2}b_{k-2}-(1+s_{k- 1})(a_{k-1}b_{k-1}-\alpha_{k}a_{k}b_{k})\geq 0\) and \(n_{k}=\mathcal{O}(k^{q-p})\), \(\xi_{k}=b_{k-1}^{2}=(k-1)^{2q}\geq 0\) and \(\xi_{k}=\mathcal{O}(k^{2q})\), further \(\eta_{k}=-\alpha_{k}^{2}b_{k}^{2}-\alpha_{k}a_{k}b_{k}+\alpha_{k}b_{k}^{2}c_{k}+b _{k-1}^{2}-3a_{k-1}b_{k-1}\geq 0\) and \(\eta_{k}=\mathcal{O}(k^{q})\). By using the fact that
\[v_{k} =\|a_{k-1}(x_{k-1}-\overline{x}_{k-1})+b_{k-1}(x_{k}-x_{k-1}+u_{k} )\|^{2}\] \[\leq 2a^{2}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+4(k-1)^{2q}\|x_{k}-x_{ k-1}\|^{2}+4(k-1)^{2q}\|u_{k}\|^{2},\]
we deduce that there exists \(H>0\) such that
\[\frac{H}{k^{p-q}}e_{k}\leq m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline{x}_{k-1}))+ \xi_{k}\|u_{k}\|^{2}+n_{k}\|x_{k-1}-\overline{x}_{k-1}\|^{2}+\eta_{k}\|x_{k}-x_ {k-1}\|^{2}.\]
Consequently, according to (68) there exists an index \(K_{0}\in\mathbb{N}\) such that for all \(k>K_{0}\) it holds
\[e_{k+1}-e_{k}+\frac{H}{k^{p-q}}e_{k}\leq-\sigma_{k}\|x_{k}\|^{2}+ \sigma_{k-1}\|x_{k-1}\|^{2}+a_{k}b_{k}c_{k}\|\overline{x}_{k}\|^{2}-a_{k-1}b_{ k-1}c_{k-1}\|\overline{x}_{k-1}\|^{2}\] \[\quad+C_{1}k^{2q-p-1}+C_{2}k^{\max(p-q-2,q-2)}. \tag{69}\]
Now, by multiplying (69) by \(\pi_{k}=\frac{1}{\prod_{i=K_{0}}^{k}\left(1-\frac{H}{i^{p-q}}\right)}\) we obtain
\[\pi_{k}e_{k+1}-\pi_{k-1}e_{k}\leq \pi_{k}((-\sigma_{k})\|x_{k}\|^{2}-(-\sigma_{k-1})\|x_{k-1}\|^{2})\] \[+\pi_{k}(a_{k}b_{k}c_{k}\|\overline{x}_{k}\|^{2}-a_{k-1}b_{k-1}c _{k-1}\|\overline{x}_{k-1}\|^{2})\] \[+C_{1}\pi_{k}k^{2q-p-1}+C_{2}\pi_{k}k^{\max(p-q-2,q-2)}. \tag{70}\]
Now, by summing (70) from \(k=K_{0}+1\) to \(n>K_{0}+1\) big enough and using Lemma A.2 with \(\beta=p-q<1\) we obtain that there exist some positive constants denoted still by \(C_{1},C_{2},C_{3}\) such that
\[\pi_{n}e_{n+1}\leq\pi_{n}(-\sigma_{n})\|x_{n}\|^{2}+\pi_{n}a_{n}b_{n}c_{n}\| \overline{x}_{n}\|^{2}+C_{1}\pi_{n}n^{q-1}+C_{2}\pi_{n}n^{\max(2p-2q-2,p-2)}+ C_{3}.\]
Now, taking into account that \(-\sigma_{n}\) and \(a_{n}b_{n}c_{n}\) are of order \(\mathcal{O}(n^{q-p})\) as \(n\to+\infty\), that \((x_{n})\) is bounded according to Theorem 2.6, and that \(\|\overline{x}_{n}\|\leq\|x^{*}\|\), the above relation leads to
\[e_{n+1}\leq C_{0}n^{q-p}+C_{1}n^{q-1}+C_{2}n^{\max(2p-2q-2,p-2)}+\frac{C_{3}}{ \pi_{n}}<Cn^{q-1}+Cn^{\max(2p-2q-2,p-2)}, \tag{71}\]
for some constant \(C>0\).
Let us discuss the order of the right hand side of (71).
If \(\max(2p-2q-2,p-2)=p-2\), that is, \(p\leq 2q\), then \(p-2<q-1\) (since \(p<q+1\)), so the right hand side of (71) is less than \(Cn^{q-1}\) for a constant \(C>0\) appropriately chosen.
If \(\max(2p-2q-2,p-2)=2p-2q-2\), that is, \(p\geq 2q\), then the right hand side of (71) is less than \(Cn^{q-1}\) provided \(2q\leq p\leq\frac{3q+1}{2}\), and it is less than \(Cn^{2p-2q-2}\) provided \(\frac{3q+1}{2}<p<q+1\), for a constant \(C>0\) appropriately chosen.
So using (33), (71) and the form of \(e_{n+1}\) we conclude the following.
**a.** If \(p\leq 2q\) then for some \(C^{\prime}>0\) it holds
\[\|x_{n}-\overline{x}_{n}\|^{2}\leq\frac{2n^{p}}{c}(f_{n}(x_{n})-f_{n}( \overline{x}_{n}))\leq\frac{2n^{p}}{c\mu_{n}}e_{n+1}\leq C^{\prime}n^{p-q-1}.\]
Consequently, \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{\frac{p-q-1}{2}})\) as \(n\to+\infty.\) Since \(\overline{x}_{n}\to x^{*}\) as \(n\to+\infty\), we obtain in particular that \(\lim_{n\to+\infty}x_{n}=x^{*}.\)
Further, \(f_{n}(x_{n})-f_{n}(\overline{x}_{n})\leq\frac{1}{\mu_{n}}e_{n+1}\) and \(v_{n+1}\leq e_{n+1}\), hence
\[f_{n}(x_{n})-f_{n}(\overline{x}_{n}),\,\|x_{n}-x_{n-1}\|^{2},\,\|u_{n}\|^{2} \in\mathcal{O}(n^{-q-1})\text{ as }n\to+\infty.\]
According to (34) we have \(f(x_{n})-\min_{\mathcal{H}}f\leq f_{n}(x_{n})-f_{n}(\overline{x}_{n})+\frac{c} {2n^{p}}\|x^{*}\|^{2},\) and since \(-q-1<-p\), we obtain that \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) as \(n\to+\infty.\)
**b.** If \(2q<p\leq\frac{3q+1}{2}\) then by using the fact that \(\nu_{n}=\mathcal{O}(1)\) we obtain from (71) that
\[\|x_{n}-\overline{x}_{n}\|^{2}\leq\frac{1}{\nu_{n}}e_{n+1}\leq C^{\prime}n^{q -1},\text{ for some }C^{\prime}>0.\]
Consequently, \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{\frac{q-1}{2}})\) as \(n\to+\infty\) and since \(q<1\) we obtain in particular that \(\lim_{n\to+\infty}x_{n}=x^{*}.\) Analogously to the previous case, one can deduce that
\[f_{n}(x_{n})-f_{n}(\overline{x}_{n}),\,\|x_{n}-x_{n-1}\|^{2},\,\|u_{n}\|^{2}\in \mathcal{O}(n^{-q-1})\text{ as }n\to+\infty\]
and \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) as \(n\to+\infty.\)
**c.** If \(\frac{3q+1}{2}<p<q+1\), then by the same argument as in the previous case we deduce that \(\|x_{n}-\overline{x}_{n}\|=\mathcal{O}(n^{p-q-1})\) as \(n\to+\infty\), hence \(\lim_{n\to+\infty}x_{n}=x^{*}.\) Further, one has
\[f_{n}(x_{n})-f_{n}(\overline{x}_{n}),\,\|x_{n}-x_{n-1}\|^{2},\,\|u_{n}\|^{2}\in \mathcal{O}(n^{2p-4q-2})\text{ as }n\to+\infty.\]
Here, by using (34), concerning the rate of the potential energy \(f(x_{n})-\min_{\mathcal{H}}f\) we conclude the following.
On the one hand, if \(-p>2p-4q-2\), that is \(2q<p<\frac{4q+2}{3}\), then \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{-p})\) as \(n\to+\infty.\)
On the other hand, if \(\frac{4q+2}{3}\leq p<q+1\), then \(f(x_{n})-\min_{\mathcal{H}}f=\mathcal{O}(n^{2p-4q-2})\) as \(n\to+\infty.\)
In order to obtain sum estimates, let us return to (68) which holds from an index \(K_{0}\) big enough. By summing (68) from \(k=K_{0}\) to \(k=n\) we obtain
\[e_{n+1}+\sum_{k=K_{0}}^{n}m_{k}(f_{k-1}(x_{k-1})-f_{k-1}(\overline {x}_{k-1}))+\sum_{k=K_{0}}^{n}\xi_{k}\|u_{k}\|^{2}+\sum_{k=K_{0}}^{n}n_{k}\|x_{ k-1}-\overline{x}_{k-1}\|^{2}\] \[+\sum_{k=K_{0}}^{n}\eta_{k}\|x_{k}-x_{k-1}\|^{2}\leq-\sigma_{n}\| x_{n}\|^{2}+a_{n}b_{n}c_{n}\|\overline{x}_{n}\|^{2}+C_{1}\sum_{k=K_{0}}^{n}k^{2q-p-1}\] \[+C_{2}\sum_{k=K_{0}}^{n}k^{\max(p-q-2,q-2)}+C_{3},\text{ for some }C_{3}>0. \tag{72}\]
Now, since \((x_{n}),\,(\overline{x}_{n})\) are bounded and \(\sigma_{n},\,a_{n}b_{n}c_{n}\in\mathcal{O}(n^{q-p})\) as \(n\to+\infty,\) further \(q<1<p<1+q,\) we deduce that for \(p>2q\) the right hand side of (72) is finite. So taking into account the form of \(m_{k},\eta_{k}\) and \(\xi_{k}\) we obtain that \(\sum_{k=1}^{+\infty}k^{q}(f_{k}(x_{k})-f_{k}(\overline{x}_{k}))<+\infty,\,\sum _{k=1}^{+\infty}k^{2q}\|u_{k}\|^{2}<+\infty\) and \(\sum_{k=1}^{+\infty}k^{q}\|x_{k+1}-x_{k}\|^{2}<+\infty.\)
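The rates in Theorem 3.3 can be probed empirically in the same toy setting; the following sketch (again a hypothetical quadratic with \(\min_{\mathcal{H}}f=0\), not the paper's experiments) runs the recursion with \(\lambda_{k}\equiv 1\) and parameters from case a., and estimates the decay exponent of \(f(x_{n})-\min_{\mathcal{H}}f\) by a log-log regression, to be compared with the predicted \(\mathcal{O}(n^{-p})\).

```python
import numpy as np

# Same quadratic toy problem: f(x) = 0.5*||Ax - b||^2, min f = 0.
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
prox = lambda z, t: np.linalg.solve(np.eye(2) + t * A.T @ A, z + t * A.T @ b)

# Hypothetical parameters for case a.: q < 1, 1 < p <= 2q and p < q + 1.
q, p, alpha, c = 0.8, 1.5, 1.0, 1.0
x_prev = np.array([5.0, -3.0]); x = x_prev.copy()
ks, vals = [], []
for k in range(2, 100001):
    y = (1 - c / k**p) * x + (1 - alpha / k**q) * (x - x_prev)
    x_prev, x = x, prox(y, 1.0)          # lambda_k = 1: plain proximal step
    if k % 1000 == 0:
        ks.append(k); vals.append(0.5 * np.linalg.norm(A @ x - b)**2)

slope = np.polyfit(np.log(ks), np.log(vals), 1)[0]
print(f"empirical decay exponent = {slope:.2f} (predicted about {-p})")
```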
## 4. Conclusions, perspectives
In the present paper we showed that the constellation \(q=1,\,\lambda_{k}\equiv 1\) is not necessarily the best choice for Algorithm (1), since in case \(0<q<1\) the control on the stepsize parameter \(\lambda_{k}\) allows us to obtain arbitrary rates for the potential energy \(f(x_{k})-\min_{\mathcal{H}}f\). Further, our analysis reveals that the inertial parameter \(\alpha_{k}\), the stepsize \(\lambda_{k}\) and the Tikhonov regularization parameter \(c_{k}\) are strongly correlated: in case \(q+1<p,\,\delta\geq 0\) weak convergence of the generated sequences and fast convergence of the function values can be obtained, whereas in case \(p<q+1,\,\delta\leq 0\) strong convergence of the generated sequences and fast convergence of the function values can be provided.
Another important achievement of the present paper is that for the case \(\lambda_{k}\equiv 1,\,p<q+1\) we succeeded in obtaining "full" strong convergence of the generated sequences to the minimal norm solution \(x^{*}\), that is, \(\lim_{k\to+\infty}\|x_{k}-x^{*}\|=0.\) For the same constellation of parameters, we also obtained fast convergence of the function values and velocity, together with some sum estimates. To the best of our knowledge this is the first result of this type in the literature concerning discrete dynamical systems; in the continuous case, similar results have already been obtained in the recent papers [2, 25, 17]. Nevertheless, in order to obtain strong convergence we had to develop some original new techniques.
In our context, one can observe that the case \(p=q+1\) is critical in the sense that separates the two cases: the case when we obtain fast convergence of the function values and weak convergence of the generated sequences to a minimizer and the case when the strong convergence of the generated sequences to a minimizer of minimum norm is assured. However, even in this case we can obtain fast convergence of the function values and velocity and also sum estimates, both for the case \(\delta\geq 0\) and \(\delta<0.\) These facts are in concordance with the results obtained for continuous dynamics in [6], [15] and [1].
Some other subjects for future investigations are the gradient type algorithms obtained via explicit discretization from (3) and the dynamical systems studied in the papers mentioned above.
## Appendix A Auxiliary results
The following lemma summarizes several important results which are behind the Tikhonov regularization techniques and are used in our proofs.
**Lemma A.1**.: _Let \(f:\mathcal{H}\to\overline{\mathbb{R}}\) be a proper, convex and lower semicontinuous function and let \((\varepsilon_{k})\) be a positive non-increasing sequence that converges to \(0.\) By \(\overline{x}_{k}\) we denote the unique solution of the strongly convex minimization problem_
\[\min_{x\in\mathcal{H}}\left(f(x)+\frac{\varepsilon_{k}}{2}\|x\|^{2}\right).\]
_Then, for all \(k\geq 1\) one has_
\[\frac{\varepsilon_{k}-\varepsilon_{k+1}}{\varepsilon_{k+1}}\langle\overline{x} _{k},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}\]
_and_
\[\frac{\varepsilon_{k}-\varepsilon_{k+1}}{\varepsilon_{k}}\langle\overline{x} _{k+1},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}.\]
_Consequently, the sequence \((\|\overline{x}_{k}\|)_{k\geq 1}\) is non-decreasing and one has \(\langle\overline{x}_{k+1},\overline{x}_{k}\rangle\geq 0\) for all \(k\geq 1.\) Additionally, the following statements hold for all \(k\geq 1\)._
* \(\|\overline{x}_{k+1}\|^{2}-\|\overline{x}_{k}\|^{2}\geq\frac{\varepsilon_{k}+ \varepsilon_{k+1}}{\varepsilon_{k}-\varepsilon_{k+1}}\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}.\)__
* \(\|\overline{x}_{k}\|^{2}+\frac{\varepsilon_{k+1}}{\varepsilon_{k}- \varepsilon_{k+1}}\|\overline{x}_{k+1}-\overline{x}_{k}\|^{2}\leq\langle \overline{x}_{k+1},\overline{x}_{k}\rangle\leq\|\overline{x}_{k+1}\|^{2}- \frac{\varepsilon_{k}}{\varepsilon_{k}-\varepsilon_{k+1}}\|\overline{x}_{k+1 }-\overline{x}_{k}\|^{2}.\)__
* \(\|\overline{x}_{k+1}-\overline{x}_{k}\|\leq\min\left(\frac{\varepsilon_{k}- \varepsilon_{k+1}}{\varepsilon_{k+1}}\|\overline{x}_{k}\|,\frac{\varepsilon_ {k}-\varepsilon_{k+1}}{\varepsilon_{k}}\|\overline{x}_{k+1}\|\right).\)__
Proof.: Since \(\overline{x}_{k}\) is the unique minimizer of the strongly convex function \(f_{k}(x)=f(x)+\frac{\varepsilon_{k}}{2}\|x\|^{2},\) one obviously has
\[\partial f_{k}(\overline{x}_{k})=\partial f(\overline{x}_{k})+\varepsilon_{k }\overline{x}_{k}\ni 0. \tag{73}\]
Hence, we have \(-\varepsilon_{k}\overline{x}_{k}\in\partial f(\overline{x}_{k})\) and \(-\varepsilon_{k+1}\overline{x}_{k+1}\in\partial f(\overline{x}_{k+1})\) and by using the monotonicity of \(\partial f\) we get
\[\langle-\varepsilon_{k+1}\overline{x}_{k+1}+\varepsilon_{k}\overline{x}_{k}, \overline{x}_{k+1}-\overline{x}_{k}\rangle\geq 0.\]
In other words
\[-\varepsilon_{k+1}\langle\overline{x}_{k+1}-\overline{x}_{k},\overline{x}_{k+ 1}-\overline{x}_{k}\rangle+(\varepsilon_{k}-\varepsilon_{k+1})\,\langle \overline{x}_{k},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq 0\]
or, equivalently
\[\frac{\varepsilon_{k}-\varepsilon_{k+1}}{\varepsilon_{k+1}}\langle\overline{x }_{k},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}. \tag{74}\]
But, \(\langle\overline{x}_{k},\overline{x}_{k+1}-\overline{x}_{k}\rangle=-\| \overline{x}_{k+1}-\overline{x}_{k}\|^{2}+\langle\overline{x}_{k+1},\overline{ x}_{k+1}-\overline{x}_{k}\rangle\) hence
\[\frac{\varepsilon_{k}-\varepsilon_{k+1}}{\varepsilon_{k+1}}\langle\overline{ x}_{k+1},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq\frac{\varepsilon_{k}}{ \varepsilon_{k+1}}\|\overline{x}_{k+1}-\overline{x}_{k}\|^{2}.\]
Equivalently, we can write
\[\frac{\varepsilon_{k}-\varepsilon_{k+1}}{\varepsilon_{k}}\langle\overline{x}_ {k+1},\overline{x}_{k+1}-\overline{x}_{k}\rangle\geq\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}. \tag{75}\]
In order to prove a) note that \(\langle\overline{x}_{k},\overline{x}_{k+1}-\overline{x}_{k}\rangle=\frac{1}{2 }(\|\overline{x}_{k+1}\|^{2}-\|\overline{x}_{k}\|^{2}-\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}),\) hence (74) leads to
\[\|\overline{x}_{k+1}\|^{2}-\|\overline{x}_{k}\|^{2}\geq\frac{\varepsilon_{k}+ \varepsilon_{k+1}}{\varepsilon_{k}-\varepsilon_{k+1}}\|\overline{x}_{k+1}- \overline{x}_{k}\|^{2}. \tag{76}\]
Observe that b) is actually equivalent to (74) and (75).
To prove c), we apply the Cauchy-Schwarz inequality in (74) and (75) and simplify by \(\|\overline{x}_{k+1}-\overline{x}_{k}\|.\)
Finally, note that a) implies that the sequence \((\|\overline{x}_{k}\|)_{k\geq 1}\) is non-decreasing and b) implies that \(\langle\overline{x}_{k+1},\overline{x}_{k}\rangle\geq 0\) for all \(k\geq 1.\)
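For a quadratic \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) the Tikhonov minimizer is available in closed form, \(\overline{x}(\varepsilon)=(A^{\top}A+\varepsilon I)^{-1}A^{\top}b\), which makes the statements of Lemma A.1 easy to check numerically; in the sketch below, the random rank-deficient matrix and the sequence \(\varepsilon_{k}=1/k\) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # rank-deficient, so argmin f is an affine set
b = rng.standard_normal(3)

# Closed-form Tikhonov minimizer for f(x) = 0.5*||Ax - b||^2
x_bar = lambda eps: np.linalg.solve(A.T @ A + eps * np.eye(5), A.T @ b)

eps = 1.0 / np.arange(1, 200)     # a positive non-increasing sequence -> 0
traj = [x_bar(e) for e in eps]
norms = [np.linalg.norm(x) for x in traj]
print("||x_bar_k|| non-decreasing:",
      all(n2 >= n1 - 1e-12 for n1, n2 in zip(norms, norms[1:])))

ok = True
for k in range(len(eps) - 1):
    d = np.linalg.norm(traj[k + 1] - traj[k])
    bound = min((eps[k] - eps[k + 1]) / eps[k + 1] * norms[k],
                (eps[k] - eps[k + 1]) / eps[k] * norms[k + 1])
    ok = ok and d <= bound + 1e-12    # estimate c) of Lemma A.1
print("estimate c) holds along the sequence:", ok)
```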
The following result is used in the proofs of our strong convergence results.
**Lemma A.2**.: _Let \(H>0\) and \(\beta>0\), and for \(K_{0}\in\mathbb{N}\) with \(K_{0}>H^{\frac{1}{\beta}}\) consider the sequence \(\pi_{k}=\frac{1}{\prod_{i=K_{0}}^{k}\left(1-\frac{H}{i^{\beta}}\right)}.\) Then \((\pi_{k})\) is obviously a positive non-decreasing sequence, and it has the following properties._
* _If_ \(\beta\in]0,1[\) _then there exists_ \(C_{1},C_{2}>0\) _such that after an index_ \(n_{0}\in\mathbb{N}\) _it holds_ \[e^{C_{1}n^{1-\beta}}\leq\pi_{n}\leq e^{C_{2}n^{1-\beta}},\text{ for all }n\geq n_{0}.\] _Further, if_ \(\beta=1\) _then_ \(\pi_{n}=\mathcal{O}(n^{H})\) _as_ \(n\to+\infty\)_._
* _If_ \(\beta\in]0,1[\) _then for all_ \(\gamma\in\mathbb{R}\) _and_ \(n\) _big enough, one has_ \[C_{1}n^{\gamma+\beta}\pi_{n}\leq\sum_{k=K_{0}}^{n}k^{\gamma}\pi_{k}\leq C_{2}n^{ \gamma+\beta}\pi_{n},\text{ for some }C_{1},C_{2}>0.\]
* _For every nonnegative sequence_ \((a_{k})\) _one has_ \[\sum_{k=K_{0}+1}^{n}\pi_{k}(a_{k}-a_{k-1})\leq a_{n}\pi_{n}.\]
Proof.: a) In case \(\beta\in]0,1[\), by applying the Cesaro-Stolz theorem, we have
\[\lim_{n\rightarrow+\infty}\frac{\ln\pi_{n}}{n^{1-\beta}}=\lim_{n\rightarrow+ \infty}\frac{\ln\frac{\pi_{n+1}}{\pi_{n}}}{(n+1)^{1-\beta}-n^{1-\beta}}=\lim_{n \rightarrow+\infty}\frac{\frac{H}{(n+1)^{\beta}}\ln\left(1-\frac{H}{(n+1)^{ \beta}}\right)^{-\frac{(n+1)^{\beta}}{H}}}{(n+1)^{1-\beta}-n^{1-\beta}}\]
But \(\lim_{n\rightarrow+\infty}\frac{\frac{H}{(n+1)^{\beta}}}{(n+1)^{1-\beta}-n^{1 -\beta}}=\frac{H}{1-\beta}\) and \(\lim_{n\rightarrow+\infty}\ln\left(1-\frac{H}{(n+1)^{\beta}}\right)^{-\frac{ (n+1)^{\beta}}{H}}=1\), hence
\[\lim_{n\rightarrow+\infty}\frac{\ln\pi_{n}}{n^{1-\beta}}=\frac{H}{1-\beta}.\]
In other words, for every \(\varepsilon>0\) there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) one has
\[e^{\left(\frac{H}{1-\beta}-\varepsilon\right)n^{1-\beta}}\leq\pi_{n}\leq e^{ \left(\frac{H}{1-\beta}+\varepsilon\right)n^{1-\beta}}\]
and the conclusion follows.
In case \(\beta=1\), by applying the Cesaro-Stolz theorem, we have
\[\lim_{n\rightarrow+\infty}\frac{\ln\pi_{n}}{\ln n}=\lim_{n\rightarrow+\infty }\frac{\frac{H}{n+1}\ln\left(1-\frac{H}{n+1}\right)^{-\frac{n+1}{H}}}{\frac{1 }{n}\ln\left(1+\frac{1}{n}\right)^{n}}=H\]
and the conclusion follows.
b) Note that it is enough to show that \(\lim_{n\rightarrow+\infty}\frac{\sum_{k=K_{0}+1}^{n}k^{\gamma}\pi_{k}}{n^{\gamma+\beta}\pi_{n}}\) exists and is finite. Observe that according to a) one has \(\lim_{n\rightarrow+\infty}n^{\gamma+\beta}\pi_{n}=+\infty\) for every \(\gamma\in\mathbb{R}.\) Further, for every \(\gamma\in\mathbb{R}\) one has \(\frac{(n+1)^{\gamma+\beta}\pi_{n+1}}{n^{\gamma+\beta}\pi_{n}}=\left(1+\frac{1}{n}\right)^{\gamma+\beta}\frac{(n+1)^{\beta}}{(n+1)^{\beta}-H}>1\), hence the sequence \((n^{\gamma+\beta}\pi_{n})\) is increasing. Consequently, the Cesaro-Stolz theorem can be applied in order to find the limit \(\lim_{n\rightarrow+\infty}\frac{\sum_{k=K_{0}+1}^{n}k^{\gamma}\pi_{k}}{n^{\gamma+\beta}\pi_{n}}.\) We have
\[\lim_{n\rightarrow+\infty}\frac{\sum_{k=K_{0}+1}^{n}k^{\gamma}\pi_{k}}{n^{ \gamma+\beta}\pi_{n}}=\lim_{n\rightarrow+\infty}\frac{(n+1)^{\gamma}\pi_{n+1} }{(n+1)^{\gamma+\beta}\pi_{n+1}-n^{\gamma+\beta}\pi_{n}}=\lim_{n\rightarrow+ \infty}\frac{(n+1)^{\gamma}}{(n+1)^{\gamma+\beta}-n^{\gamma+\beta}\frac{\pi_{n }}{\pi_{n+1}}}.\]
Further,
\[\lim_{n\rightarrow+\infty}\frac{(n+1)^{\gamma}}{(n+1)^{\gamma+\beta}-n^{ \gamma+\beta}\frac{\pi_{n}}{\pi_{n+1}}}=\lim_{n\rightarrow+\infty}\frac{1}{ \frac{(n+1)^{\gamma+\beta}-n^{\gamma+\beta}}{(n+1)^{\gamma}}+\frac{Hn^{\gamma+ \beta}}{(n+1)^{\gamma+\beta}}}=\frac{1}{H}.\]
c) We have \(\pi_{k}a_{k-1}=\pi_{k-1}a_{k-1}+\frac{H}{k^{\beta}-H}\pi_{k-1}a_{k-1}\), hence
\[\sum_{k=K_{0}+1}^{n}\pi_{k}(a_{k}-a_{k-1})\leq\sum_{k=K_{0}+1}^{n}(\pi_{k}a_{k}- \pi_{k-1}a_{k-1})\leq a_{n}\pi_{n}.\]
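The asymptotics in a) are also easy to verify numerically, since \(\ln\pi_{n}=-\sum_{i=K_{0}}^{n}\ln\left(1-\frac{H}{i^{\beta}}\right)\); the sketch below (with illustrative values of \(H\) and \(\beta\)) compares \(\frac{\ln\pi_{n}}{n^{1-\beta}}\) with the predicted limit \(\frac{H}{1-\beta}\).

```python
import numpy as np

H, beta = 0.7, 0.6                       # illustrative values, beta in (0, 1)
K0 = int(H**(1 / beta)) + 2              # K0 > H^{1/beta}
n = np.arange(K0, 2 * 10**6)

log_pi = -np.cumsum(np.log1p(-H / n**beta))   # ln(pi_n), accumulated
print("ln(pi_n)/n^(1-beta) =", log_pi[-1] / n[-1]**(1 - beta),
      " predicted limit H/(1-beta) =", H / (1 - beta))
```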
2305.14908 | PURR: Efficiently Editing Language Model Hallucinations by Denoising
Language Model Corruptions | The remarkable capabilities of large language models have been accompanied by
a persistent drawback: the generation of false and unsubstantiated claims
commonly known as "hallucinations". To combat this issue, recent research has
introduced approaches that involve editing and attributing the outputs of
language models, particularly through prompt-based editing. However, the
inference cost and speed of using large language models for editing currently
bottleneck prompt-based methods. These bottlenecks motivate the training of
compact editors, which is challenging due to the scarcity of training data for
this purpose. To overcome these challenges, we exploit the power of large
language models to introduce corruptions (i.e., noise) into text and
subsequently fine-tune compact editors to denoise the corruptions by
incorporating relevant evidence. Our methodology is entirely unsupervised and
provides us with faux hallucinations for training in any domain. Our Petite
Unsupervised Research and Revision model, PURR, not only improves attribution
over existing editing methods based on fine-tuning and prompting, but also
achieves faster execution times by orders of magnitude. | Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu | 2023-05-24T08:59:00Z | http://arxiv.org/abs/2305.14908v1 | # PURR: Efficiently Editing Language Model Hallucinations
###### Abstract
The remarkable capabilities of large language models have been accompanied by a persistent drawback: the generation of false and unsubstantiated claims commonly known as "hallucinations". To combat this issue, recent research has introduced approaches that involve editing and attributing the outputs of language models, particularly through prompt-based editing. However, the inference cost and speed of using large language models for editing currently bottleneck prompt-based methods. These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose. To overcome these challenges, we exploit the power of large language models to introduce corruptions (_i.e._, noise) into text and subsequently fine-tune compact editors to denoise the corruptions by incorporating relevant evidence. Our methodology is entirely unsupervised and provides us with faux hallucinations for training in any domain. Our _Petite Unsupervised Research and Revision_ model, PURR, not only improves attribution over existing editing methods based on fine-tuning and prompting, but also achieves faster execution times by orders of magnitude.1
Footnote 1: The data generation pipeline, training data, and PURR checkpoints will be released.
## 1 Introduction
As the strengths of large language models (LLMs) have become prominent Brown et al. (2020); Chowdhery et al. (2022); Touvron et al. (2023), so too have their weaknesses Bender et al. (2021). A glaring weakness of LLMs is their penchant for generating false, biased, or misleading claims in a phenomena broadly referred to as "hallucinations" Maynez et al. (2020); Krishna et al. (2021); Longpre et al. (2021); Raunak et al. (2021). Most LLMs also do not ground their generations to any source, exacerbating this weakness Rashkin et al. (2021).
Post-hoc attribution and edit strategies offer promising solutions to tackle the problems of grounding and hallucination in language models Thorne and Vlachos (2020); Gao et al. (2022). These approaches retrieve supporting evidence to attribute the output (referred to as a claim) of a language model, followed by an editor that corrects factual errors in the claim, ensuring consistency with the evidence. A notable advantage of post-hoc methods is their modularity: they can be easily applied to any text regardless of their generation source. However, existing editors exhibit distinct strengths and weaknesses. Sufficiently large language models can be few-shot prompted to perform editing Bai et al. (2022); Gao et al. (2022). However, there is currently a steep compute-quality tradeoff, where only the largest, most expensive models can perform this task well. Even then, significant quality headroom remains, as we will show. In contrast, much smaller, cheaper models can be fine-tuned to perform editing, but are limited to specific domains where adequate training data is available Iv et al. (2022); Schick et al. (2022).
Instead of utilizing LLMs as prompted editors, we leverage their general-purpose capabilities to introduce challenging corruptions (_i.e._, noise) to clean pieces of text. Subsequently, we fine-tune compact editors to denoise these corruptions by grounding onto relevant evidence. While text to corrupt is readily available, we do not assume that paired relevant evidence is provided. To tackle this, our data generation pipeline first searches for a collection of topically related evidence. We then employ an LLM to summarize the evidence into a claim, which is then noised (Fig. 1a). The evidence is then used to ground the denoising. In contrast to existing work that assumes access to relevant paired evidence to ground the edit when training Balachandran et al. (2022) or assumes edit data is provided for training Schick et al. (2022); Iv et al. (2022), our approach eliminates these assumptions.
Furthermore, unlike distillation, where a challenging distillation set is vital and the student model generally under-performs the teacher (Beyer et al., 2021; Stanton et al., 2021), our noising process introduces challenging corruptions, and the resulting editor trained on these corruptions surpasses the performance of the LLM used for noising when that same LLM is employed for prompted editing, on multiple datasets.
Our _Petite Unsupervised Research and Revision_ model, PURR, is built by fine-tuning a fusion-in-decoder T5 model on denoising data from our data generation pipeline (Raffel et al., 2020; Izacard and Grave, 2021). Because our goal is to improve attribution broadly across tasks and domains, we evaluate PURR on outputs of large language models on multiple question answering and dialog datasets. On all benchmarks, PURR outperforms much more expensive LLM-prompted editors in improving attribution while being orders of magnitude faster.
## 2 Editing for Attribution
### Task Overview
While there are various ways to apply editing to the outputs of large language models, the primary objective of PURR is to present efficient methods for attributing the outputs of language models and rectifying inaccuracies, referred to as _Editing for Attribution_ (Gao et al., 2022). In this task, a system is provided with a textual statement, \(x\), and is tasked to produce an _attribution report_. The attribution report consists of a collection of evidence snippets, \(A=\{e_{1},e_{2},\ldots,e_{n}\}\), that grounds the information in \(x\). Additionally, the system is asked to produce a revised statement (_i.e._, edit), \(y\), that fixes any inaccuracies in \(x\) that contradict the content in \(A\). For completeness, we present a summary of the task and refer interested readers to Gao et al. (2022) for a more comprehensive discussion.
### Evaluation Metrics
Following Gao et al. (2022), we evaluate editing-for-attribution systems along two dimensions: **attribution**, the extent to which the original and revised statements can be attributed to the attribution report, and **preservation**, which measures how much information has changed from \(x\) to \(y\). The objective of the task is to maximally attribute a textual statement while preserving the original intent of the language model generation to the greatest extent possible. We use automated metrics developed by Gao et al. (2022) to measure both attribution and preservation, which were found to correlate strongly with human ratings. It is important to note that this evaluation setup does not require reference edits and only relies on the grounding between the textual statements and the attribution report.
**Attribution** A textual statement is generally said to be attributable to a set of evidence if one could reasonably say that, given the evidence set, the statement is entailed (Rashkin et al., 2021). To formalize this, Gao et al. (2022) introduce an evaluation metric based on a sentence-level natural language inference (NLI) model. Given an attribution report, \(A\), and a textual statement \(y\) consisting of sentences, \(y=\{s_{1},s_{2},\ldots\}\), we use an NLI model to measure the likelihood that each sentence is entailed by an evidence snippet in \(A\): \(\text{NLI}(e,s_{i})\). The attribution of the entire statement, \(y\), is computed as the average over the maximum attribution score for each constituent sentence.
\[\text{Attr}_{(s,A)}=\max_{e\in A}\text{NLI}(e,s) \tag{1}\]
\[\text{Attr}_{(y,A)}=\text{avg}_{s\in y}\text{Attr}_{(s,A)} \tag{2}\]
The goal of editing is to have \(\text{Attr}_{(y,A)}\) be higher than \(\text{Attr}_{(x,A)}\).
Figure 1: **Training and Using PURR.**
**Preservation** Preservation is measured using character-level Levenshtein distance between \(x\) and \(y\). Preservation is 1 if the statements are the same and 0 if \(y\) has completely changed all textual information in \(x\).
\[\text{Pres}_{(x,y)}=\max\left(1-\frac{\text{Lev}(x,y)}{\text{length}(x)},0\right) \tag{3}\]
To capture our goal of maximal attribution with maximal preservation, we unify these two metrics by computing the harmonic mean, \(F1_{AP}\), of \(\text{Attr}_{(y,A)}\) and \(\text{Pres}_{(x,y)}\).
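To make the metric definitions concrete, the following is a minimal Python sketch of Eqs. (1)-(3) and \(F1_{AP}\). It is illustrative only: `nli_score` stands in for the sentence-level NLI model (any scorer returning an entailment probability in \([0,1]\) would do), and the Levenshtein distance is computed with a small dynamic program rather than a library call.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    # Character-level edit distance via the standard dynamic program.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def attribution(sentences, report, nli_score):
    # Eqs. (1)-(2): max over evidence snippets, averaged over sentences.
    return float(np.mean([max(nli_score(e, s) for e in report)
                          for s in sentences]))

def preservation(x: str, y: str) -> float:
    # Eq. (3): 1 - normalized edit distance, clipped at 0.
    return max(1.0 - levenshtein(x, y) / len(x), 0.0)

def f1_ap(attr_y: float, pres_xy: float) -> float:
    # Harmonic mean of attribution and preservation.
    total = attr_y + pres_xy
    return 2.0 * attr_y * pres_xy / total if total else 0.0
```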
### Evaluation Sets
Our goal is to improve attribution broadly across tasks and domains on the outputs of strong generation systems. Gao et al. (2022) construct evaluation sets by prompting strong LLMs to generate outputs on three tasks: Natural Questions (factoid question answering) (Kwiatkowski et al., 2019), StrategyQA (reasoning-chain question answering) (Geva et al., 2021), and QReCC (knowledge-intensive dialogue) (Anantha et al., 2021). Gao et al. (2022) generate 150 validation and 150 test instances for each dataset using PALM for Natural Questions and StrategyQA and LaMDA on QReCC (Chowdhery et al., 2022; Thoppilan et al., 2022). We use these sets, tuning on the validation sets and reporting results on the test sets.
### Baselines
PURR and all baselines follow a **research-and-revision** pipeline. In the **research** stage, the objective is to search for relevant pieces of evidence to ground the information in the textual statement, \(x\). This stage remains consistent across all baselines. We first prompt a large language model to generate a set of queries \(Q=\{q_{1},q_{2},\ldots,q_{m}\}\) that attempts to cover all pieces of information in \(x\) that need verification. Subsequently, we use Google Search in conjunction with a passage extractor to find the most relevant evidence snippet for each query, constituting an evidence set \(E=\{e_{1},e_{2},\ldots,e_{m}\}\).
In the **revision** stage, an editor is given the original statement, \(x\), the set of queries, \(Q\), and the evidence set, \(E\), and asked to produce a revised statement, \(y\). \(y\) can be the same as \(x\) in the event the editor deems the original statement cannot be edited further to increase attribution. We measure the ability of different editors to abstain from editing later on. We compare PURR against two baseline editors.
**EFEC** is a fine-tuned T5 editor trained on FEVER (Aly et al., 2021). EFEC was trained using evidence retrieved from Wikipedia and concatenates all pieces of evidence with the text statement to produce an edited statement. Notably, EFEC does not use the query set when making an edit. Gao et al. (2022) found EFEC often improves attribution at the expense of preservation.
**RARR** is a prompt-based editing approach that builds upon PALM, a language model with 540 billion parameters (Chowdhery et al., 2022). Unlike EFEC, which incorporates all evidence simultaneously to produce an edit, RARR iteratively examines each evidence snippet, \(e_{i}\), checking whether there are any contradictions between the text statement, \(x\), and the evidence, and editing in the event there are. The process of contradiction checking and editing is performed using distinct few-shot prompts. Gao et al. (2022) demonstrate that this iterative approach to editing combined with few-shot prompting leads to improvements in attribution and preservation, albeit at the cost of multiple computationally expensive and slow calls to a large language model.
### Generating the Attribution Report
To maintain a manageable scope, we limit the attribution report, \(A\), to include only the five most relevant pieces of evidence from the evidence set, \(E\). An attribution report of five evidence snippets was found to be sufficient to attribute the information in the claims of the datasets we evaluate on. It is worth noting that when editing, there are no restrictions on the number of evidence snippets an editor can utilize. Given the evidence set, \(E\), and the query set, \(Q\), from the research stage, we employ a scoring module that evaluates the relevance of each evidence snippet \(e_{j}\) to each query \(q_{i}\), \(S(q_{i},e_{j})\). Our objective is to identify a subset of evidence that maximizes the coverage across all queries to form the attribution report. This coverage is quantified as the sum of the highest relevance scores achieved by each query with respect to any evidence. For scoring, we use a cross-encoder2.
Footnote 2: [https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2)
\[\text{Cov}_{(E,Q)}=\sum_{i=1}^{m}\max_{e_{j}\in E}S(q_{i},e_{j}) \tag{4}\]
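Selecting the five snippets that maximize Eq. (4) is a set-cover-style problem, so a natural approach is the standard greedy heuristic for monotone coverage objectives. The sketch below is our illustrative reading of this step, not necessarily the exact procedure used: `scores[i][j]` is assumed to hold the cross-encoder relevance \(S(q_{i},e_{j})\).

```python
def build_attribution_report(scores, k=5):
    """Greedily pick up to k evidence indices maximizing coverage (Eq. 4)."""
    n_q, n_e = len(scores), len(scores[0])
    chosen = []
    best = [0.0] * n_q  # best relevance each query has achieved so far

    def gain(j):
        # Marginal improvement in coverage from adding evidence j.
        if j in chosen:
            return -1.0
        return sum(max(scores[i][j] - best[i], 0.0) for i in range(n_q))

    for _ in range(min(k, n_e)):
        j_star = max(range(n_e), key=gain)
        if gain(j_star) <= 0.0:
            break  # no remaining snippet improves coverage
        chosen.append(j_star)
        best = [max(best[i], scores[i][j_star]) for i in range(n_q)]
    return chosen
```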
## 3 Efficient Editing by Denoising
In this section, we present an overview of PURR, highlight its distinguishing features compared to baselines, and describe the denoising training strategy.
### Overview of PURR at Inference Time
We first describe how PURR is used at inference time and highlight the differences between PURR and baselines (Fig. 1b). Similar to EFEC, PURR is built upon the T5 model, specifically T5-large. Furthermore, our editing framework adopts a similar approach to EFEC in terms of incorporating all available evidence simultaneously when making an edit. However, instead of concatenating the evidence in the input, we employ fusion-in-decoder (FiD) to effectively aggregate information across evidence (Izacard and Grave, 2021). This approach has demonstrated superior performance in merging information and allows us to surpass the context length limits imposed by modern language models. Finally, rather than employing a prompted language model for query generation during the research stage, we employ distillation to train a T5-large query generation model. Although our primary focus lies in enhancing the editing process, we opt for distillation during query generation as well to ensure that our editing pipeline does not rely on prompting.
### Creating Training Data via Noising
To train an editor to fix hallucinations, we need a dataset consisting of clean statements, \(y\), which are paired with a set of supporting evidence \(E=\{e_{1},e_{2},\ldots,e_{n}\}\), as well as a corrupted statement, \(x\). While collecting this data manually is feasible, doing so can be expensive, requiring scouring for evidence to ground an LLM generation followed by removing any inaccuracies in the generation. Instead, we remove this bottleneck by leveraging the general-purpose generation capabilities of LLMs to create a training set in a completely unsupervised fashion. We generate clean statements by providing a set of topically related evidence to the LLM, and then corrupt the statements to create simulated hallucinations (Fig. 1a). We provide the prompts used for summarization and corruption in Appendix A.
**Generating Clean Statements With Evidence** The first step is to create a statement, \(y\), paired with a set of evidence, \(E\), that attributes (_i.e.,_ grounds) the statement. Our pipeline only requires a set of queries in the domain of interest to get started. We start with a query, \(q\), and use a search engine to find evidence related to the question. We take the top web pages from the search engine and chunk them into passages. Using the same cross-encoder from the attribution report scoring module, we bin the passages that have the highest relevance score (beyond some threshold) to \(q\) into a set of gold evidence \(E^{+}=\{e_{1}^{+},e_{2}^{+},\ldots,e_{j}^{+}\}\) and the rest of the passages into a set of hard negative evidence \(E^{-}=\{e_{1}^{-},e_{2}^{-},\ldots,e_{k}^{-}\}\). In our pipeline, we restrict the size of \(E^{+}\) to contain at most four pieces of evidence. The resulting evidence set is the union of the gold and hard negative evidence, \(E=E^{+}\cup E^{-}\). We then prompt a large language model to do zero-shot multi-document summarization of the gold evidence set, \(E^{+}\). We use the resulting summary as the clean statement, \(y\); upon manual inspection, the summaries have a high degree of faithfulness to the evidence set.
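A minimal sketch of the binning step, using the cross-encoder named in the footnote via the `sentence-transformers` library. The cap of four gold passages mirrors the description above; the specific threshold value is an assumption for illustration.

```python
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def bin_passages(query, passages, threshold=0.0, max_gold=4):
    """Split passages into gold evidence E+ and hard negatives E-."""
    scores = scorer.predict([(query, p) for p in passages])
    ranked = sorted(zip(scores, passages), key=lambda sp: -sp[0])
    gold = [p for s, p in ranked if s > threshold][:max_gold]
    gold_set = set(gold)
    negatives = [p for _, p in ranked if p not in gold_set]
    return gold, negatives
```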
\begin{table}
\begin{tabular}{l p{142.3pt}}
\(q\): & Who will be the new coach of the Detroit Lions? \\
\(E^{+}\): & **-** On Jan. 20, 2021 the Detroit Lions named Dan Campbell the franchise's new head coach... \\
 & **-** Campbell possesses 23 years of NFL experience, including 12 years as a coach and 11 as a player. In his first year... \\
\(x\): & Dan Campbell was appointed the new head assistant coach of the Detroit Lions on January 20, 2021. \\
\end{tabular}
\end{table}
Table 1: An example training instance from our data generation pipeline: a seed query \(q\), gold evidence \(E^{+}\), and the corrupted statement \(x\) produced by noising the clean summary (here the corruption changes "head coach" to "head assistant coach").
**Noising and Conditional Denoising** We take the clean statement, \(y\), and noise it by prompting a large language model to corrupt the text, resulting in the corruption \(x\). Our prompt contains examples of corruptions, and covers a wide range of linguistic phenomena we observe when it comes to LLM hallucinations. These include incorrect dates and entities, semantic role errors, and quantification errors. Once noised claims paired with evidence are available, an editor can be trained by fine-tuning a sequence-to-sequence model to maximize \(P(y|x,E)\). We call the resulting editor from denoising PURR.
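Training then reduces to a standard sequence-to-sequence objective. The sketch below shows the loss for one instance with Hugging Face `transformers`; for brevity it concatenates the evidence into a single encoder input, whereas PURR itself encodes each passage separately and fuses them in the decoder (FiD). The `statement:`/`evidence:` prompt format is an illustrative assumption.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

def denoising_loss(x, evidence, y):
    """Cross-entropy loss for maximizing P(y | x, E)."""
    src = f"statement: {x} evidence: {' '.join(evidence)}"
    enc = tok(src, return_tensors="pt", truncation=True)
    labels = tok(y, return_tensors="pt", truncation=True).input_ids
    return model(input_ids=enc.input_ids,
                 attention_mask=enc.attention_mask,
                 labels=labels).loss
```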
### Dataset Statistics and Training Details
We utilized GPT-3.5 text-davinci-003 to facilitate the process of generating summaries and introducing corruptions. Our choice of this particular model ensures that our generation strategy can be easily replicated. We started with roughly 6,000 seed queries covering a variety of domains and topics, resulting in an edit dataset of 6,000 instances (Tab. 1). We reserve 10% for validation and use the remaining 90% for training. Each instance cost roughly 4 cents to generate, for a total cost of roughly $250.
We fine-tune T5-large on our dataset using the validation loss to tune hyperparameters and determine training stoppage. During training, we pair each corrupted statement, \(x\), with four pieces of evidence from the accompanying gold evidence set, \(E^{+}\), to ground the edit and produce the clean statement, \(y\). In the event that the gold evidence set has fewer than four evidence snippets, we randomly sample evidence from the negative evidence set, \(E^{-}\), until we hit four snippets. We found adding negative evidence during training helps PURR ignore irrelevant evidence during inference.
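The evidence-padding step just described can be sketched as follows; the uniform sampling of negatives matches the description above, while the final shuffle is an illustrative assumption to avoid a fixed gold-first ordering.

```python
import random

def training_evidence(gold, negatives, n_ctx=4, rng=random.Random(0)):
    """Top up the gold evidence with sampled hard negatives so every
    training instance carries exactly n_ctx passages for the FiD encoder."""
    evidence = list(gold[:n_ctx])
    n_missing = n_ctx - len(evidence)
    if n_missing > 0 and negatives:
        evidence += rng.sample(negatives, min(n_missing, len(negatives)))
    rng.shuffle(evidence)  # illustrative: mix gold and negative positions
    return evidence
```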
## 4 Results
### Primary Quantitative Results
We provide results on the editing-for-attribution task in Table 2. We report the attribution of the claim before and after editing and the preservation of the edited claim. Our primary metric, \(F1_{AP}\), is the harmonic mean between the attribution and preservation of the edited claim. We first turn our attention to the baselines. EFEC, the editor that was fine-tuned with evidence largely from Wikipedia, struggles on this task. While EFEC improves attribution, this comes at the large expense of preservation, and we see this in practice as EFEC tends to make large changes to the claim. RARR, the prompted editor, does not improve attribution as much as EFEC. However, it is significantly better at preserving the intent of the original claim. Because of this, RARR is much better on the unified \(F1_{AP}\) metric.
PURR improves upon the results of RARR by generally making smaller changes to the claim while improving attribution within this more limited edit. Because of this, PURR pushes the state of the art on the unified \(F1_{AP}\) metric on all three tasks. Moreover, PURR is significantly more efficient to use by virtue of its size.
### Breaking Down the Numbers
We dig into the edits to get a better sense of where PURR improves on the baselines. Based on the preservation, \(\text{Pres}_{(x,y)}\), and attribution scores of the original statement, \(\text{Attr}_{(x,A)}\), and edited statement, \(\text{Attr}_{(y,A)}\), we say an edit can fall into one of the following sets:
* **Huge Edit**: We say an edit is "huge" if preservation is low: \(\text{Pres}_{(x,y)}<0.5\).
* **Bad Edit**: We say an edit is "bad" if the attribution after editing is lower than before: \(\text{Attr}_{(y,A)}-\text{Attr}_{(x,A)}<-0.1\).
* **Unnecessary Edit**: We say an edit is "unnecessary" if it is a bad edit and also \(\text{Attr}_{(x,A)}>0.9\). This means the editor made a poor edit when the attribution was already near perfect before editing.
* **Good Edit**: We say an edit is "good" if attribution has significantly improved while preservation is high: \(\text{Attr}_{(y,A)}-\text{Attr}_{(x,A)}>0.3\) and \(\text{Pres}_{(x,y)}>0.7\).
\begin{table}
\begin{tabular}{l c c|c} \hline \hline Model & Attr. (\(x\to y\)) & Pres. & \(F1_{AP}\) \\ \hline \multicolumn{4}{c}{**PALM outputs on NQ**} \\ EFEC & 44.7 \(\rightarrow\) **63.9** & 39.6 & 48.5 \\ RARR & 44.7 \(\rightarrow\) 53.8 & 89.6 & 67.2 \\ PURR & 44.8 \(\rightarrow\) 59.8 & **91.0** & **72.2** \\ \hline \multicolumn{4}{c}{**PALM outputs on SQA**} \\ EFEC & 37.2 \(\rightarrow\) **58.2** & 31.0 & 40.4 \\ RARR & 37.2 \(\rightarrow\) 44.6 & 89.9 & 59.6 \\ PURR & 36.9 \(\rightarrow\) 47.1 & **92.0** & **62.3** \\ \hline \multicolumn{4}{c}{**LaMDA outputs on QReCC**} \\ EFEC & 18.4 \(\rightarrow\) **47.2** & 39.0 & 42.7 \\ RARR & 18.4 \(\rightarrow\) 28.7 & 80.1 & 42.2 \\ PURR & 16.8 \(\rightarrow\) 33.0 & **85.8** & **47.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results on the _Editing for Attribution_ task. We report the attribution of the statement before and after editing, preservation after editing, and \(F1_{AP}\) which combines attribution and preservation. Results are on LLM outputs on factoid question answering, long reasoning question answering, and dialog.**
sary" if it is a bad edit and also \(\text{Attr}_{(x,A)}>0.9\). This means the editor made a poor edit when the attribution was already near perfect before editing.
* **Good Edit**: We say an edit is "good" if attribution has significantly improved while preservation is high: \(\text{Attr}_{(y,A)}-\text{Attr}_{(x,A)}>0.3\) and \(\text{Pres}_{(x,y)}>0.7\).
Note that unnecessary edits are a subset of bad edits. We take the 150 instances in the Natural Questions test set and categorize the edits each editor makes in Figure 2. On a majority of claims, EFEC makes large edits while rarely making edits that improve attribution while preserving the original claim. RARR does a much better job at minimizing large edits, but there are still cases where RARR edits a claim in a way that reduces the attribution. PURR almost never makes large edits and never edits a claim when it is near-perfect in a way that reduces attribution. PURR also makes more good edits compared to the baselines.
### Qualitative Analysis
We dig into PURR's predictions to diagnose its strengths and examine where there is room for improvement. We show examples in Table 3 that we find representative of the strengths of PURR and areas of potential improvement. We find that PURR is extremely strong at fixing entity and numerical hallucinations as well as longer spans. Additionally, because PURR uses fusion-in-decoder, it is adept at merging information across multiple pieces of evidence to make an edit. We noticed several instances where challenging distractors in the evidence can lead to an erroneous edit. Future work will introduce stronger corruptions in the data generation pipeline to better handle this case.
We next analyze the entire inference pipeline of PURR (Fig. 1b), which includes the question generation model, the search engine, and the editor itself. Our goal is to determine, when there is an error, which component is responsible. On the Natural Questions subset of the evaluation, we examine 20 instances where the attribution after editing, \(\text{Attr}_{(y,A)}\), is less than 0.30.
Figure 2: **Breakdown of edit types each editor makes on the Natural Questions test set. EFEC makes huge edits while RARR sometimes over-edits. PURR does a much better job at balancing attribution and preservation while rarely over-editing.**
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Correctly revising an entity/span.**} \\ \(x\): The US stopped printing \$2 bills in 1966. The reason was that vending machines would not accept it. In 1976, the \$2 bill was reintroduced... \\ \(E\): During that time, most goods and services were less than a dollar, making paper currency impractical. As need and use declined, the Federal Reserve stopped printing \$2 bills in 1966. In 1976, the Federal Reserve brought back the \$2-dollar bill. \\ \(y\): The US stopped printing \$2 bills in 1966. The reason was that most goods and services were less than a dollar. In 1976, the \$2 bill was reintroduced... \\ \(\text{Pres}_{(x,y)}\): 0.81 \(\text{Attr}_{(x,A)}\to\text{Attr}_{(y,A)}\): 0.61 \(\to\) 0.84 \\ \hline \hline \multicolumn{1}{c}{**Reasoning across multiple pieces of evidence**} \\ \(x\): In 1899, the first driver's license was issued in France. The first license in the United States was issued in 1902 in Massachusetts. \\ \(E\): - France introduced the first driving licences under the Paris Police Ordinance of 14 August 1893... \\ - Massachusetts and Missouri were the first states to require a license in 1903, but there was no test associated with it... \\ \(y\): In 1893, the first driver's license was issued in France. The first license in the United States was issued in 1903 in Massachusetts. \\ \(\text{Pres}_{(x,y)}\): 0.90 \(\text{Attr}_{(x,A)}\to\text{Attr}_{(y,A)}\): 0.03 \(\to\) 0.96 \\ \hline \hline \multicolumn{1}{c}{**Challenging distractors in evidence**} \\ \(x\): The first orca put in captivity was in 1965. It was at Seattle, Washington's Seattle Aquarium. The orca's name was Wanda. \\ \(E\): - On Nov. 18, 1961 a killer whale had been spotted and a collection crew from the **Marineland of the Pacific** scrambled to capture her. Never had an orca been captured alive, and Wanda was trying to avoid being the first... \\ - The first orca to be captured in 1961 was found off the coast of California and given the name Wanda. \\ \(y\): The first orca put in captivity was in 1961. It was at **Marineland of the Pacific**. The orca's name was Wanda. \\ \(\text{Pres}_{(x,y)}\): 0.77 \(\text{Attr}_{(x,A)}\to\text{Attr}_{(y,A)}\): 0.33 \(\to\) 0.77 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Examples of good and bad revisions with PURR.** \(x\) = claim; \(E\) = relevant evidence; \(y\) = edited claim using \(E\). PURR can handle hallucinated entities and spans as well as merge information across evidence to edit. PURR can struggle when there are challenging distractors in a piece of evidence.
Our qualitative analysis is provided in Table 4. Roughly 80% of the instances have low attribution after editing because either the question generation model we used did not fully cover the information in the claim, or our search procedure did not find the best evidence for editing. We believe question generation is the easier problem to fix, while search is a much harder problem. Editing is a fairly small issue in comparison. Finally, some claims fall into a "miscellaneous" category, either because the claim was not contextualized enough to properly edit or because the automatic metric erroneously assigned a low score.
### Inference Speed and Cost Comparisons of Fine-tuned vs Prompted Editors
A key advantage of PURR over prompt-based editors is its lower computational cost. RARR, a prompt-based editor built upon 540B PALM, runs on dozens of TPUs and takes approximately 40 seconds to edit a single statement. In comparison, PURR can run on a 12GB GPU and takes approximately 2 seconds to edit a single statement on a Titan-RTX. Considering that generating our training set cost less than $300 USD, which is quickly amortized, we recommend our synthetic data generation strategy for large-scale deployment given the speed and cost savings of PURR.
## 5 Related Work
**Editing for Attribution** PURR builds upon previous research on post-hoc editing methods aimed at enhancing the attribution and accuracy of generated text (Balachandran et al., 2021; Cao et al., 2020; Iso et al., 2020). Notably, RARR (Gao et al., 2022) and Rethinking-with-Retrieval (He et al., 2022) employ few-shot prompting to rectify language model outputs, exhibiting similarities to our work. FRUIT (Iv et al., 2022) and EFEC (Thorne and Vlachos, 2020) also utilize fine-tuned editors to achieve similar objectives, leveraging Wikipedia as a source of training data. PEER is trained on Wikipedia edits (Schick et al., 2022) and includes a component for enhancing factuality through editing, but its primary focus lies in collaborative writing. Our denoising approach combines the speed advantages of fine-tuned editors while circumventing the reliance on training data that is typically constrained to specific domains like Wikipedia.
**Improving Trust in Large Language Models** Ensuring the safe deployment of large language models encompasses various considerations beyond just factuality and attribution. Large language models have demonstrated the potential to regurgitate protected information (Carlini et al., 2020), spew hateful content (Gehman et al., 2020), and exhibit high sensitivity to input variations (Zhao et al., 2021). A common approach to addressing these issues has been via additional training such as instruction fine-tuning (Sanh et al., 2021; Min
\begin{table}
\begin{tabular}{l p{142.3pt}}
\hline \hline
\multicolumn{2}{c}{**Query Generation Missing Coverage** (_35\%_)} \\
\(x\): & Legends of Tomorrow season 3 finale aired on April 9, 2018. Its title is No Country for Old Dads and is a 42-minute episode. \\
\(Q\): & - When did the season 3 finale of Legends of Tomorrow air? \\
 & - What is the title of the Legends of Tomorrow season 3 finale? \\
 & - How long is the season 3 finale of Legends of Tomorrow? \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Qualitative analysis of instances with low attribution after editing; in the category shown here, the generated queries miss part of the information in the claim.
et al., 2021; Chung et al., 2022; Ye et al., 2022), fine-tuning from human feedback (Ziegler et al., 2019; Stiennon et al., 2020), and more recently pre-training from human feedback (Korbak et al., 2023). In a similar vein to RARR, Bai et al. (2022) proposes to edit the outputs of LLMs using prompted LLMs to remove unsafe aspects of generated text. As part of our future research, we aim to apply our denoising strategy to train efficient compact editors for addressing such undesired generation behaviors.
**Distilling Large Language Models** Given their generation prowess, LLMs have been incorporated into data generation pipelines, essentially distilling the knowledge of the language model when its outputs are used for training (Wang et al., 2021; Bartolo et al., 2022; Lang et al., 2022; Smith et al., 2022). Eisenstein et al. (2022) follow a multi-step distillation pipeline like ours, chaining the outputs of multiple LLM calls and distilling the output into an explainable question answering model. Liu et al. (2022) use the outputs of LLMs followed by filtering and human refinement to create WANLI, a challenging natural language inference dataset. On the evaluation side, Ribeiro and Lundberg (2022) use LLMs to generate evaluation sets for testing LLMs. While similar, our denoising approach _implicitly_ distills the information in a large language model while simultaneously producing challenging training instances.
## 6 Conclusion
Factuality and attribution are vital for the safe deployment of large language models. However, these mechanisms are inherently lacking in LLMs. Recent work has proposed augmenting the outputs of LLMs by retrieving evidence to attribute their outputs, followed by prompting another LLM to edit the outputs to remove inconsistencies. However, the heavy computational cost that bottlenecks these methods motivates a need to develop efficient editors, which is in turn hindered by training data scarcity. To overcome these challenges, we use LLMs to corrupt text and fine-tune compact editors to denoise these faux hallucinations. Our denoising method is completely unsupervised, and our resulting editor, PURR, improves attribution performance across various datasets over prompted editors, while being orders of magnitude faster to execute.
|
2303.01735 | Automatic Increase Market Systems (AIMS): Towards a deterministic theory
for cryptocurrencies | The popularity of cryptocurrencies has grown significantly in recent years,
and they have become an important asset for internet trading. One of the main
drawbacks of cryptocurrencies is the high volatility and fluctuation in value.
The value of cryptocurrencies can change rapidly and dramatically, making them
a risky investment. Cryptocurrencies are largely unregulated, which can
exacerbate their volatility. The high volatility of cryptocurrencies has also
led to a speculative bubble, with many investors buying and selling
cryptocurrencies based on short-term price fluctuations rather than their
underlying values. Therefore, how to reduce the fluctuation risk introduced by
exchanges, transform uncertain prices to deterministic value, and promote the
benefits of decentralized finance are critical for the future development of
cryptos and Web 3.0.
To address the issues, this paper proposes a novel theory as Automatic
Increase Market Systems (AIMS) for cryptos, which could potentially be designed
to automatically adjust the value of a cryptocurrency helping to stabilize the
price and increase its value over time in a deterministic manner. We build a
crypto, WISH (https://wishbank.wtf), based on AIMS in order to demonstrate how
the automatic increase market system would work in practice, and how it would
influence the supply of the cryptocurrency in response to market demand and
finally make itself to be a stable medium of exchange, ensuring that the AIMS
is fair and transparent. | Wantall Newby, Nickuk Nishikawa | 2023-03-03T06:30:16Z | http://arxiv.org/abs/2303.01735v1 | # Automatic Increase Market Systems (AIMS): Towards a deterministic theory for cryptocurrencies
###### Abstract
The popularity of cryptocurrencies has grown significantly in recent years, and they have become an important asset for internet trading. One of the main drawbacks of cryptocurrencies is the high volatility and fluctuation in value. The value of cryptocurrencies can change rapidly and dramatically, making them a risky investment. Cryptocurrencies are largely unregulated, which can exacerbate their volatility. The high volatility of cryptocurrencies has also led to a speculative bubble, with many investors buying and selling cryptocurrencies based on short-term price fluctuations rather than their underlying values. Therefore, how to reduce the fluctuation risk introduced by exchanges, transform uncertain prices to deterministic value, and promote the benefits of decentralized finance are critical for the future development of cryptos and Web 3.0.
To address these issues, this paper proposes a novel theory, Automatic Increase Market Systems (AIMS), for cryptos, which could potentially be designed to automatically adjust the value of a cryptocurrency, helping to stabilize the price and increase its value over time in a deterministic manner. We build a crypto, WISH ([https://wishbank.wtf](https://wishbank.wtf)), based on AIMS in order to demonstrate how the automatic increase market system would work in practice, how it would influence the supply of the cryptocurrency in response to market demand, and how it would ultimately become a stable medium of exchange, ensuring that the AIMS is fair and transparent.
**Keywords:** Automatic increase market systems, cryptocurrency, stablecoin, decentralized finance.
## 1 Introduction
Cryptocurrencies are virtual tokens that use cryptography for security and operate independently of a central bank [16]. They are decentralized and use blockchain technology to keep track of transactions, making them transparent and immutable [17]. Many businesses and individuals now accept cryptocurrencies as a form of payment, and there are numerous exchanges and trading platforms that allow users to buy, sell, and trade cryptocurrencies [14]. However, it is important to note that cryptocurrencies are still a relatively new and volatile asset, and there are huge risks involved in investing in or trading cryptos [18, 19, 20, 21]. The high volatility of cryptos may lead to significant losses for investors who are not prepared for price fluctuations, and the fluctuations make cryptos less attractive for use as a medium of exchange. Many merchants might be hesitant to accept cryptocurrencies due to the high volatility, which can result in significant losses if the value falls sharply. This markedly discourages the adoption of the corresponding decentralized applications [19, 1].
It is difficult to completely eliminate the speculative nature of cryptocurrencies and the associated risks, as market forces and human behavior are inherently unpredictable, especially over unregulated exchanges [15] and crypto projects [16, 17]. However, there have been proposals for various systems and mechanisms to reduce the volatility of cryptocurrencies and address some of the risks associated with them. For example, an automatic market maker (AMM) system could help to stabilize the price of a cryptocurrency by adjusting the supply and demand of the currency in response to market conditions. An AMM system uses algorithms to set the price of a cryptocurrency based on the amount of liquidity available in the market, and adjusts the supply of the currency in response to changes in demand [1]. Another potential approach is the use of stablecoins, which are cryptocurrencies that are pegged to the value of a fiat currency or other asset [14].
In addition, increased regulation and oversight of the cryptocurrency market could help to reduce the risks associated with cryptocurrency investment and trading. This could include measures such as requiring exchanges to comply with anti-money laundering and know-your-customer regulations, and establishing clearer guidelines for the issuance and trading of cryptocurrencies [1]. It is important to note that the volatility of the current design of cryptocurrencies and the associated risks are inherent to the nature of the market, which means the above solutions cannot guarantee elimination of the risks [13].
This paper proposes a novel theory, Automatic Increase Market Systems (AIMS), for cryptocurrencies, which could potentially be designed to automatically adjust the value of a cryptocurrency. The adjustment is encoded in a smart contract on the blockchain, making it transparent; this means that the increase of its value over time is deterministic regardless of the exchanges and market conditions. Thus, AIMS is a blockchain-based mechanism that aims to create a new type of crypto financial market where the value of a cryptocurrency only increases over time. This is achieved by using an automated system deployed with a smart contract that adjusts the price of the cryptocurrency in response to a deterministic mathematical function, ensuring that the price of the cryptocurrency only goes up. The use of AIMS could help to reduce the risks, stabilize the prices, and make cryptocurrencies a more deterministic, stable, and reliable asset for investment and trading. The contribution of the paper is threefold, as follows:
* To propose and formulate the Automatic Increase Market Systems (AIMS) for cryptocurrencies which enables deterministic value change of a cryptocurrency and complete decentralized exchange.
* To summarize the features and benefits of using Automatic Increase Market Systems (AIMS), with an analysis comparing them with traditional cryptocurrencies.
* To demonstrate the use, design, and implementation of Automatic Increase Market Systems (AIMS) for WISH ([https://wishbank.wtf](https://wishbank.wtf)) in practice. Source code of the smart contract and deployed application are both available for public reference.
The rest of the paper is organized as follows: Section 2 presents the concepts and theories of AIMS, and Section 3 describes the WISH crypto project implementation. Section 4 reviews the literature, and Section 5 concludes the paper.
## 2 Automatic Increase Market Systems (AIMS)
Let \(P\) be the set of all participants in a blockchain network \(N\), where each participant \(i\) has a public key \(\pi\) and a private key \(\psi\). Let \(T\) be the set of all transactions in the network. Let \(C\) be the smart contract code deployed on \(N\), which is a program that executes automatically when certain conditions are met. Let \(\gamma\) be a USDT-like stablecoin on \(N\).
Figure 1: The illustration of an AIMS contract.
The value of a cryptocurrency \(c\) issued by \(C\) is denoted \(\Delta\). The value \(\Delta\) is a deterministic increasing function \(f_{C}()\) of time \(t\), where \(t\) is a variable within a timespan (\(t\in(t_{i},t^{\prime})\)) taken as the input to the function; namely, we have the following formula,
\[\Delta_{C,t}=f_{C}(t),t\in(t_{i},t^{\prime}),\]
where \(t_{i}\) is an initial time point from which the price of \(c\) with respect to \(\gamma\) is automatically produced, and \(t^{\prime}\) is the ultimate time point for the use of \(f_{C}()\). We can gain insight into its properties and characteristics from Figure 1, which shows that the total crypto volume and value are deterministic with respect to the market demand, taken as an input of \(C\) for \(f_{C}()\) within a timespan.
Given such a function \(f_{C}()\) for \(c\) on \(N\), we have the following lemmas.
**Lemma 1**: _Let \(t_{x}<t_{y}\), \(t_{x},t_{y}\in(t_{i},t^{\prime})\); then we have \(\Delta_{C,t_{x}}<\Delta_{C,t_{y}}\)._
The lemma shows the deterministic-increase feature of \(c\) under AIMS: the value of \(c\) on \(N\) always increases with time within the timespan, according to the given function.
**Lemma 2**: _Given the amount of \(\gamma\) to be \(\xi\) for a participant \(i\in P\) on \(N\), the following formula always holds,_
\[\frac{\xi}{\Delta_{C,t}}\geq 0.\]
The lemma holds for a non-negative value of \(\xi\), and the produced price of \(c\) at \(t\) is always positive. The total coins minted over the time \(T\) for a participant \(i\) is \(\sum_{t\in T}\frac{\xi}{\Delta_{C,t}}\).
**Lemma 3**: _Given the amount of \(\gamma\) to be \(\xi_{t}\) for a participant \(i\in P\) at time \(t\) on \(N\), the total locked value \(\Omega>0\) of \(C\) on \(N\) for \(i\) at time \(t_{m}\) is,_
\[\Omega=\sum_{t\in T}\frac{\xi_{t}}{\Delta_{C,t}}\times f_{C}(t_{m}).\]
The total locked value of \(c\) shows the potential profit-earning ability of a participant \(i\). Thus, for all the participants, the total locked value is \(\sum_{t\in T}^{P}\frac{\xi_{t}}{\Delta_{C,t}}\times f_{C}(t_{m})\), which is the total market value of \(C\). When \(t_{m}>t^{\prime}\), the growth rate of \(\Omega\) is 0.
**Lemma 4**: _Given the amount of \(\gamma\) to be \(\xi_{t}\) for a participant \(i\in P\) at time \(t\) on \(N\), the total net profits \(\Lambda\) of \(C\) on \(N\) within \(T\) (\(t<t_{m}\) for all \(t\in T\)) for \(P\) is,_
\[\Lambda=\sum_{t\in T}^{P}\frac{\xi_{t}}{\Delta_{C,t}}\times(f_{C}(t_{m})- \Delta_{C,t}).\]
**Theorem 1**: _Given the amount of \(\gamma\) to be \(\xi_{t}\) for a participant \(i\in P\) at time \(t\) on \(N\), we have_
\[\Omega>\Lambda.\]
Let \(\Xi\) be a function of activities encouraging the participants on \(N\) to destroy \(c\). Theorem 2 then states that there always exists a \(\Xi\) that balances the total locked value against the cryptocurrency's net profits. This means it is possible for a cryptocurrency to reach a stable status, without a price inflated relative to the values invested in \(\gamma\), and to provide a pure price-stabilizing function. However, to realize such a \(\Xi\), a significant effort has to be invested.
**Theorem 2**: _Given the amount of \(\gamma\) to be \(\xi_{t}\) for a participant \(i\in P\) at time \(t\) on \(N\), and \(\Xi\) to be a function of a set of activities destroying \(c\),_
\[\Xi=\Omega-\Lambda\neq\emptyset.\]
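A small numerical sketch of Lemmas 3 and 4, useful for sanity-checking Theorem 1. The value function below is a placeholder (any strictly increasing \(f_{C}\) works), and `purchases` maps purchase times to the stablecoin amounts \(\xi_{t}\) a participant spent.

```python
def f_C(t):
    # Placeholder increasing value function; doubles each year.
    return 2.0 ** t

def locked_value_and_profit(purchases, t_m):
    """Omega (Lemma 3) and Lambda (Lemma 4) for one participant at t_m."""
    omega = sum(xi / f_C(t) * f_C(t_m) for t, xi in purchases.items())
    lam = sum(xi / f_C(t) * (f_C(t_m) - f_C(t)) for t, xi in purchases.items())
    return omega, lam

omega, lam = locked_value_and_profit({0.0: 100.0, 1.0: 50.0}, t_m=2.0)
assert omega > lam  # Theorem 1: Omega exceeds Lambda by the invested amounts
```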
## 3 wishbank.wtf Demonstration
In this section, we demonstrate AIMS with the cryptocurrency WISH ([https://wishbank.wtf](https://wishbank.wtf)) to illustrate the mechanism in a straightforward way. Following the previous sections, WISH is designed to have several key features, as follows.
* Automated investment management: AIMS uses algorithms to help investors make more informed investment decisions and optimize their trading strategies. This includes features such as automated rebalancing, risk management, and portfolio optimization.
* Automatic increase market system: The AIMS system uses an automated increase market system to passively adjust the volume of the cryptocurrency in response to market conditions and demand. This ensures that the price and supply of the cryptocurrency are totally transparent, providing a more predictable investment option.
* Decentralized exchange: AIMS operates as a decentralized exchange, which means that trades are executed directly on the blockchain and users retain control over their own private keys and funds. This helps to ensure the security and transparency of trades, and eliminates the need for intermediaries such as centralized exchanges.
For the automated management of WISH's value increase, \(f_{C}()\) is defined as
\[f_{C}(n)=6.4428653^{n},\quad n\ \text{measured in years within [2023 Mar 6th, 2033 Jan 21st]},\]
where \(n\) is time in units of a year (365 days) from 2023 Mar 6th. Given the initial price of \$0.0000001 for a WISH, the price at \(n\) is \(f_{C}(n)\times\$0.0000001\). Therefore, for WISH the increase function of its value is an exponential function of time with base 6.4428653, multiplying its initial value. The value of \(f_{C}(n)\) depends on the time \(n\): when \(n\) is a positive integer, raising 6.4428653 to the power of \(n\) means multiplying 6.4428653 by itself \(n\) times. The function over the timespan of nearly 10 years to 2033 Jan 21st is plotted in Figure 2. As the graph shows, the function starts at \$0.0000001 and increases rapidly as \(n\) increases. By 2033 Jan 21st, the value of the function approximately stabilizes at \$1.00000005841. The curve is smooth and continuous, indicating that the function is well-behaved over the interval.
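The deterministic price schedule can be reproduced off-chain with a few lines of Python. This is a sketch, not the deployed contract: the day-count convention (365-day years, clamped at the endpoints) is an assumption for illustration.

```python
from datetime import date

START, END = date(2023, 3, 6), date(2033, 1, 21)
P0 = 1e-7  # initial WISH price in stablecoin units

def wish_price(on: date) -> float:
    """P0 * 6.4428653^n, with n in years since START; frozen after END."""
    clamped = min(max(on, START), END)
    n = (clamped - START).days / 365.0
    return P0 * 6.4428653 ** n

def wish_minted(xi: float, on: date) -> float:
    """Coins minted for a purchase of xi stablecoins (cf. Lemma 2)."""
    return xi / wish_price(on)
```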
The gradual growth of the value gives the community sufficient time to leverage applications of the cryptocurrency over a relatively long term, rather than fearing rapid price fluctuation. For WISH, a few activities \(\Xi\), including donation and wish redemption, are provided for users to destroy the coins.
Finally, for the time after 2033 Jan 21st, we can observe that the price and value of WISH is stable at \$1.00000005841, which can be leveraged as a trusted medium of exchange, especially when the term is sufficiently long and the potential growth profit is low.
## 4 Literature Review
Cryptocurrency has gained increasing attention in recent years due to its potential to transform the traditional financial system. A literature review was conducted to explore the current state of research on cryptocurrency, its history, technical aspects, and potential applications. The history of cryptocurrency dates back to the late 1990s when the concept of digital currencies was first introduced. However, it was not until the launch of Bitcoin in 2009 [20] that cryptocurrency gained mainstream attention. Since then, numerous cryptocurrencies have been developed, each with its own unique features and use cases [14]. From a technical perspective, cryptocurrency relies on blockchain technology to create a decentralized and secure system for digital transactions. The blockchain is a distributed ledger that records all transactions and is maintained by a network of computers, making it nearly impossible to hack or manipulate [1]. This feature has led to the emergence of numerous decentralized applications and smart contracts, which are built on top of blockchain technology. The potential applications of cryptocurrency are vast and varied, ranging from online purchases and international money transfers to voting systems and secure record-keeping. However, the adoption of cryptocurrency has
Figure 2: The value of WISH plotted as a function of time
been hindered by several challenges, including regulatory issues, lack of mainstream acceptance, and concerns over security and volatility [14].
In recent years, researchers have focused on addressing these challenges and exploring new use cases for cryptocurrency. Some studies have examined the effectiveness of different consensus algorithms, such as proof-of-stake, in ensuring the security and reliability of blockchain networks. Others have explored the potential for cryptocurrency to improve financial inclusion, particularly in underbanked and developing countries. Despite the potential of cryptocurrency, there are still many questions and uncertainties surrounding its future. Ongoing research and development are necessary to address the challenges and explore new use cases for this transformative technology.
To sum up, cryptocurrency is a rapidly evolving technology with many open questions. Its history, technical aspects, and potential applications have been explored in various studies, but many challenges and uncertainties remain. Further research and development are necessary to fully understand and harness the power of cryptocurrency [21, 1].
## 5 Conclusion and Future Work
AIMS has several benefits over traditional cryptocurrencies, including:

* Reduced volatility. The automatic increase market system used by AIMS helps to reduce the volatility of the cryptocurrency market by providing a more stable and predictable investment option.
* Increased transparency. The decentralized exchange used by AIMS helps to increase transparency and reduce the potential for market manipulation.
* Enhanced security. The use of blockchain technology and decentralized exchanges helps to enhance the security of trades and ensure the integrity of the system.

Overall, AIMS represents a novel approach and theory for cryptocurrency design and investment that could have significant benefits for investors and help to address some of the challenges and risks associated with traditional cryptocurrency markets.
In the future, decentralized exchanges and asset swaps among blockchains, including NFTs [15, 2], could be designed purely on AIMS, which introduces fewer risks and higher transparency compared to the existing state of the art for cryptocurrencies.
|
2307.02913 | Numerical Methods with Coordinate Transforms for Efficient Brownian
Dynamics Simulations | Many stochastic processes in the physical and biological sciences can be
modelled as Brownian dynamics with multiplicative noise. However, numerical
integrators for these processes can lose accuracy or even fail to converge when
the diffusion term is configuration-dependent. One remedy is to construct a
transform to a constant-diffusion process and sample the transformed process
instead. In this work, we explain how coordinate-based and time-rescaling-based
transforms can be used either individually or in combination to map a general
class of variable-diffusion Brownian motion processes into constant-diffusion
ones. The transforms are invertible, thus allowing recovery of the original
dynamics. We motivate our methodology using examples in one dimension before
then considering multivariate diffusion processes. We illustrate the benefits
of the transforms through numerical simulations, demonstrating how the right
combination of integrator and transform can improve computational efficiency
and the order of convergence to the invariant distribution. Notably, the
transforms that we derive are applicable to a class of multibody, anisotropic
Stokes-Einstein diffusion that has applications in biophysical modelling. | Dominic Phillips, Charles Matthews, Benedict Leimkuhler | 2023-07-06T10:56:20Z | http://arxiv.org/abs/2307.02913v3 | # Numerical Methods with Coordinate Transforms for Efficient Brownian Dynamics Simulations
###### Abstract
Many stochastic processes in the physical and biological sciences can be modelled as Brownian dynamics with multiplicative noise. However, numerical integrators for these processes can lose accuracy or even fail to converge when the diffusion term is configuration-dependent. One remedy is to construct a transform to a constant-diffusion process and sample the transformed process instead. In this work, we explain how coordinate-based and time-rescaling-based transforms can be used either individually or in combination to map a general class of variable-diffusion Brownian motion processes into constant-diffusion ones. The transforms are invertible, thus allowing recovery of the original dynamics. We motivate our methodology using examples in one dimension before then considering multivariate diffusion processes. We illustrate the benefits of the transforms through numerical simulations, demonstrating how the right combination of integrator and transform can improve computational efficiency and the order of convergence to the invariant distribution. Notably, the transforms that we derive are applicable to a class of multibody, anisotropic Stokes-Einstein diffusion that has applications in biophysical modelling.
## 1 Introduction
Many problems in finance and the physical and biological sciences can be modelled as instances of Brownian dynamics. Examples include portfolio optimization [21], options pricing [6], diffusion in biological membranes and nanocomposites [29, 11], cell migration [30], protein folding [5], neuronal dynamics [13], population genetics [20], MRI imaging [2], ecological modelling [32] and score-based diffusion for generative AI [31]. In these contexts, configuration-dependent diffusion is often critical to the modelling assumption but can introduce problems for numerical modelling. It can make the problem stiffer by introducing unbounded noise or bounds on the state variables. Additionally, it can reduce the weak order of convergence of an integrator. This is a problem for simulation because sampling becomes more expensive. It is also a problem for estimation, such as when fitting a Brownian dynamics "grey-box" model, since high accuracy is required for the Extended Kalman Filter approximations to be meaningful [15].
One remedy for these problems is to design sophisticated, derivative-free numerical integrators that maintain high-accuracy convergence for certain classes of state-dependent diffusion. In recent years, many authors have contributed to a series of improvements and various integrators have been proposed [22, 27, 28, 16, 3]. However, a common drawback of these integrators is the requirement of multiple evaluations of the force and diffusion tensor per time step. This can be prohibitively expensive for multi-body simulations, where the evaluation of these terms is the computational bottleneck [1]. Furthermore, many of these integrators place restrictions on the class of state-dependent diffusion, often requiring commutative noise, which is not suitable for all applications.
An alternative approach, preferred whenever possible, is to transform the original process into a process with constant diffusion, thereby mitigating the sampling challenges introduced by multiplicative noise [4]. For certain classes of stochastic differential equations (SDEs), this is achieved through a Lamperti transform, a type of non-linear change of state variables [25, 19]. The resulting constant-diffusion process might exhibit enhanced numerical stability and
can be sampled with computationally cheap, high weak-order integrators. Take for example the Black-Scholes model from financial mathematics, which describes geometric Brownian motion on the positive real axis. When simulated with sufficiently large step sizes, positivity can be violated, which results in numerical instability. Here the Lamperti transform approach is especially valuable since it is possible to simultaneously construct a transform to unit diffusion whilst also removing the positivity constraint [10].
An alternative to a spatial coordinate transform is to apply a smooth, configuration-dependent time-rescaling [33, 1]. Recently, this has been explored as a method for adaptive stepsize control in Langevin dynamics sampling [18]. In this work, we take a different perspective and consider time-rescaling alongside the Lamperti transform as a strategy to remove multiplicative noise.
In this article, we derive conditions for applying the Lamperti and time-rescaling transforms, either separately or in combination, to achieve constant diffusion in multivariate Brownian dynamics with multiplicative noise. Through numerical experiments, we show how if the right choice of numerical integrator is used for the transformed process, then this leads to an efficient, second-order weak sampling method that involves just one force and one diffusion evaluation per time step. Furthermore, we show how the original autocorrelation function and evolving distribution can be accurately recovered by applying an inverse transform to the samples.
The article is structured as follows. Section 2 introduces Brownian dynamics and the Lamperti and time-rescaling transformations. Section 3 explores in detail how these transforms apply to one-dimensional Brownian dynamics. Section 4 extends the theory of transforms to multivariate Brownian dynamics. Numerical experiments in one dimension are presented in Section 5 and multivariate experiments are presented in Section 6. Conclusions are presented in Section 7.
## 2 Preliminaries
### Brownian Dynamics
Brownian dynamics is defined through an Ito stochastic differential equation (SDE), which in one dimension reads [26]
\[dx_{t}=-D(x_{t})\frac{dV(x_{t})}{dx}dt+kT\frac{dD(x_{t})}{dx}dt+\sqrt{2kTD(x_{ t})}dW_{t}, \tag{1}\]
where \(t\in\mathbb{R}_{>0}\) is time, \(x_{t}\in\mathbb{R}\) is the state variable, \(W_{t}\) is a one-dimensional Wiener process, \(V:\mathbb{R}\rightarrow\mathbb{R}\) is a potential energy function, \(D:\mathbb{R}\rightarrow\mathbb{R}_{>0}\) is the diffusion coefficient, \(k\) is the Boltzmann constant and \(T\) is the temperature in Kelvin. Note that the diffusion coefficient \(D(x)\) is a function of \(x\) which means that we have configuration-dependent noise, also known as multiplicative noise.
In higher dimensions, (1) generalises to
\[d\textbf{X}_{t}=-(\textbf{D}(\textbf{X}_{t})\textbf{D}(\textbf{X}_{t})^{T}) \nabla V(\textbf{X}_{t})dt+kT\text{div}(\textbf{DD}^{T})(\textbf{X}_{t})dt+ \sqrt{2kT}\textbf{D}(\textbf{X}_{t})d\textbf{W}_{t}, \tag{2}\]
where \(\textbf{X}_{t}\in\mathbb{R}^{n}\) is the state variable, \(\textbf{W}_{t}\) is an n-dimensional Wiener process, \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a potential function, and \(\textbf{DD}^{T}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\times\mathbb{R}^{n}\) is a configuration-dependent diffusion tensor that is everywhere positive definite. The matrix divergence in (2) is defined as the column vector resulting from applying the vector divergence to each matrix row. Note that we identify \(\textbf{DD}^{T}\) (not **D**) as the diffusion tensor to avoid taking a matrix square root in the noise term.
We assume that \(V\) is confining in a way that ensures ergodicity of the dynamics. This is true for any \(V\) that grows sufficiently quickly as \(|\textbf{X}|\rightarrow\infty\), for details see Pavliotis (2014) [26]. One consequence of ergodicity is that there exists a unique invariant distribution \(\rho(\textbf{X})\) - a probability distribution that does not change under the process dynamics. For Brownian dynamics, the invariant distribution is the canonical ensemble; \(\rho(\textbf{X})\propto\exp{(-V(\textbf{X})/kT)}\). Another consequence of ergodicity is that the long-time average of any \(L^{1}\)-integrable function \(f\) converges to its phase-space average as the simulation time goes to infinity, i.e.
\[\int_{\mathbb{R}^{n}}f(\textbf{X})\rho(\textbf{X})d\textbf{X}=\lim_{T \rightarrow\infty}\frac{1}{T}\int_{t=0}^{T}f(\textbf{X}_{t})dt. \tag{3}\]
In the remainder of this paper, we shall refer to (3) as "the ergodic theorem".
### The Lamperti Transformation
Consider a time-homogeneous Ito SDE of the form
\[d\mathbf{X}_{t}=f(\mathbf{X}_{t})dt+\sigma(\mathbf{X}_{t})\mathbf{R}d\mathbf{W}_{t}. \tag{4}\]
Here, \(\mathbf{X}_{t}\in\mathbb{R}^{n}\) is the state variable, \(\mathbf{W}_{t}\in\mathbb{R}^{m}\) is an \(m\)-dimensional Wiener process, \(f:\mathbb{R}^{n}\xrightarrow{}\mathbb{R}^{n}\) is a drift vector, \(\sigma:\mathbb{R}^{n}\xrightarrow{}\mathbb{R}^{n}\times\mathbb{R}^{m}\) is a diffusion matrix, and \(\mathbf{R}\in\mathbb{R}^{m}\times\mathbb{R}^{m}\) is an arbitrary matrix of constant coefficients.
The _Lamperti transform_, \(\mathbf{Y}_{t}=\xi(\mathbf{X}_{t})\), is an invertible coordinate transformation \(\xi:\mathbb{R}^{n}\xrightarrow{}\mathbb{R}^{n}\) that when applied to an SDE of the form (4), results in a process with unit diffusion [25]. The transform is derived by applying the multivariate version of Ito's lemma and setting the coefficients of the noise terms to unity. This gives a set of ODEs that the transform \(\xi\) must satisfy. A consistent solution of these ODEs exists if and only if: (i) the dimensions of the state variable and the noise are the same, (ii) \(\mathbf{R}\) is invertible, and (iii) the diffusion matrix \(\sigma(\mathbf{X}_{t})\) has diagonal form:
\[\sigma(\mathbf{X}_{t})=\text{diag}(\sigma_{1}(X_{1,t}),\sigma_{2}(X_{2,t}), \ldots,\sigma_{n}(X_{n,t})), \tag{5}\]
where \(\sigma_{i}:\mathbb{R}\xrightarrow{}\mathbb{R}_{>0}\) for all \(i\in\{1,2,\ldots,n\}\). The solution is given by
\[\mathbf{Y}_{t}=\xi(\mathbf{X}_{t})=\mathbf{R}^{-1}\phi(\mathbf{X}_{t}), \tag{6}\]
where \(\phi(\mathbf{X}_{t})=[\phi_{1}(X_{1,t}),\phi_{2}(X_{2,t}),\ldots,\phi_{n}(X_{ n,t})]^{T}\) and \(\phi_{j}:\mathbb{R}\xrightarrow{}\mathbb{R}\) is the invertible function:
\[\phi_{j}(x):=\int_{x_{j,0}}^{x}\frac{1}{\sigma_{j}(z)}dz, \tag{7}\]
with \(x_{j,0}\) being an arbitrary constant chosen from the state space of \(X_{j}\). By applying Ito's lemma to (4), it can be shown that the transformed process \(\mathbf{Y}_{t}\) obeys unit-diffusion dynamics given by:
\[dY_{i,t}=\sum_{j=1}^{n}R_{ij}^{-1}\left(\frac{f_{j}(\phi^{-1}(\mathbf{R} \mathbf{Y}_{t}))}{\sigma_{j}(\phi_{j}^{-1}((\mathbf{R}\mathbf{Y}_{t})_{j}))} -\frac{1}{2}\frac{\partial}{\partial x}\sigma_{j}\left(x\right)\Bigg{|}_{x= \phi_{j}^{-1}((\mathbf{R}\mathbf{Y}_{t})_{j})}\right)dt+dW_{i,t}. \tag{8}\]
The Lamperti transform can be used as a tool to find exact solutions for specific classes of SDEs [10] or to perform statistical inference for SDEs [9], but the extent to which the Lamperti transform is useful in practice is limited by the restriction on the diffusion term in (5). Here we considered only time-homogeneous SDEs, although the Lamperti transform can be extended to certain time-inhomogeneous problems [25].
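When the integral in (7) has no closed form, \(\phi_{j}\) can be tabulated numerically. The following is a minimal sketch in Julia (the language of our reference implementation); the trapezoidal tabulation, the grid-lookup inverse and the choice \(\sigma(z)=1+z^{2}\) are illustrative assumptions rather than part of the reference code.

```julia
# Tabulate the scalar Lamperti map phi(x) = ∫_{x0}^{x} dz / sigma(z), eq. (7),
# on a uniform grid using the trapezoidal rule.
function lamperti_map(sigma, x0, xmin, xmax; npts=1001)
    xs = range(xmin, xmax; length=npts)
    f = 1 ./ sigma.(xs)
    dx = step(xs)
    phi = zeros(npts)
    for i in 2:npts                        # cumulative trapezoidal integral
        phi[i] = phi[i-1] + 0.5 * dx * (f[i-1] + f[i])
    end
    phi .-= phi[argmin(abs.(xs .- x0))]    # enforce phi(x0) = 0
    return xs, phi
end

xs, phi = lamperti_map(z -> 1 + z^2, 0.0, -5.0, 5.0)
# sigma > 0 makes phi strictly increasing, so a sorted grid lookup inverts it:
phi_inv(y) = xs[clamp(searchsortedfirst(phi, y), 1, length(xs))]
```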
### The Time-Rescaling Transform
An alternative method for transforming an SDE to constant diffusion is the time-rescaling transformation (see, for instance, [33] Chapter 8 and [1] Chapter 8), which is applicable to a different class of SDEs than the Lamperti transformation. As before, we start by considering an SDE of the form
\[d\mathbf{X}_{t}=f(\mathbf{X}_{t})dt+\sigma(\mathbf{X}_{t})\mathbf{R}d\mathbf{ W}_{t}, \tag{9}\]
where the notation follows Equation (4). We introduce a configuration-dependent time rescaling, denoted as \(t\rightarrow\tau(t)\), with Jacobian \(\frac{dt}{d\tau}=g(\mathbf{X}_{\tau})\). The governing equation for the time-rescaled process becomes
\[d\mathbf{X}_{\tau}=f(\mathbf{X}_{\tau})g\left(\mathbf{X}_{\tau}\right)d\tau+ \sigma(\mathbf{X}_{\tau})\mathbf{R}\sqrt{g\left(\mathbf{X}_{\tau}\right)}d \mathbf{W}_{\tau}, \tag{10}\]
where we have replaced \(dt\) with \(\frac{dt}{d\tau}d\tau=g(\mathbf{X}_{\tau})d\tau\) using a change of variables. The factor \(\sqrt{g(\mathbf{X}_{\tau})}\) in the noise arises from the scaling property of Brownian motion.
A transformation to unit diffusion is possible if and only if: (i) the dimensions of the state variable and the noise are the same, (ii) \(\mathbf{R}\) is invertible, and (iii) the diffusion matrix \(\sigma(\mathbf{X}_{t})\) has diagonal form:
\[\sigma(\mathbf{X}_{t})=\text{diag}(D(\mathbf{X}_{t}),D(\mathbf{X}_{t}),\ldots,D (\mathbf{X}_{t})), \tag{11}\]
an isotropic matrix with arbitrary configuration dependence.
To remove the configuration dependence from the diffusion term, we can choose \(g(\mathbf{X})=1/D^{2}(\mathbf{X})\). Substituting this and the diffusion ansatz (11) into (10) simplifies the governing equations to
\[d\mathbf{X}_{\tau}=\frac{f(\mathbf{X}_{\tau})}{D^{2}(\mathbf{X}_{\tau})}d\tau+ \mathbf{R}d\mathbf{W}_{\tau}. \tag{12}\]
We may then transform to unit diffusion through a linear transform
\[\mathbf{Y}_{\tau}=\mathbf{R}^{-1}\mathbf{X}_{\tau}. \tag{13}\]
Note that the time-rescaling method can also be used to transform to an arbitrary isotropic diffusion \(\tilde{D}(\mathbf{X})\) by making the choice \(g(\mathbf{X})=(\tilde{D}(\mathbf{X})/D(\mathbf{X}))^{2}\).
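For concreteness, a minimal Julia sketch of one Euler-Maruyama step of the rescaled process (12) is given below; the drift, diffusion and the example call are illustrative placeholders, not part of the reference implementation.

```julia
using LinearAlgebra

# Euler–Maruyama step for the time-rescaled process (12):
#   dX_tau = f(X)/D(X)^2 dtau + R dW_tau,
# where the choice g = 1/D^2 has absorbed the configuration dependence of the noise.
function rescaled_em_step(X, h, f, D, R)
    drift = f(X) ./ D(X)^2
    return X .+ h .* drift .+ R * (sqrt(h) .* randn(length(X)))
end

# Illustrative choices: a linear drift and an isotropic, state-dependent D.
fdrift(X) = -X
Ddiff(X) = 1 + 0.5 * norm(X)
Xnew = rescaled_em_step([0.1, -0.2], 1e-3, fdrift, Ddiff, Matrix(1.0I, 2, 2))
```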
## 3 Transforms for 1D Brownian Dynamics
In this section, we consider the Lamperti and time-rescaling transforms applied to one-dimensional Brownian dynamics, comparing the two approaches. For detailed proofs of all results, see C.
### The Lamperti Transform
In one dimension, the Lamperti transform emerges as an instance of a transformational symmetry inherent in Brownian dynamics. This symmetry states that, under an invertible coordinate transformation \(x\xrightarrow{}y(x)\), the one-dimensional Brownian dynamics (1) with potential \(V(x)\) and diffusion function \(D(x)\) is transformed into another Brownian dynamics process with potential \(\hat{V}(y)\) and diffusion function \(\hat{D}(y)\) given by
\[\begin{split}\hat{V}(y)&=V(x(y))+kT\ln\left|\frac{ dy}{dx}(x(y))\right|,\\ \hat{D}(y)&=D(x(y))\left(\frac{dy}{dx}(x(y))\right)^ {2},\end{split} \tag{14}\]
where \(y\xrightarrow{}x(y)\) is the inverse transformation.
Setting \(\hat{D}(y)=1\) and solving for \(y(x)\) yields the one-dimensional Lamperti transform:
\[y(x)=\int_{x_{0}}^{x}\left(\frac{1}{D(z)}\right)^{\frac{1}{2}}dz. \tag{15}\]
From (15) we have \(\frac{dy}{dx}=\left(\frac{1}{D(x)}\right)^{\frac{1}{2}}\). Substituting this result into (14), we arrive at the transformed, constant-diffusion dynamics:
\[dy_{t}=-\frac{d\hat{V}(y_{t})}{dy}dt+\sqrt{2kT}dW_{t}, \tag{16}\]
with an effective potential given by
\[\hat{V}(y)=V(x(y))-\frac{kT}{2}\ln D(x(y)). \tag{17}\]
Note that \(\hat{V}(y)\) implicitly depends on \(x_{0}\) in (15) through the inverse transform \(x(y)\). Since \(x_{0}\) changes the vertical offset of \(y(x)\), it therefore changes the horizontal offset of \(x(y)\). Changing \(x_{0}\) thus corresponds to horizontally translating \(\hat{V}(y)\), which shifts the mean position but otherwise has no physical consequence for the dynamics.
By writing down the ergodic theorem for the process (16) and transforming back to \(x\)-space, it can be shown that
\[\int_{-\infty}^{\infty}f(x)\rho(x)dx=\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{ T}f(x(y_{t}))dt, \tag{18}\]
so trajectories of the transformed process can be used directly to approximate ensemble averages with respect to \(\rho(x)\), the invariant distribution of the original process. Furthermore, by choosing \(f(x)=I(x\in[a,b])\) (the indicator function on the interval \([a,b]\)) it can be shown that the invariant distribution \(\rho(x)\) of the original process and the invariant distribution \(\hat{\rho}(y)\) of the Lamperti-transformed process are related by the equation \(\rho(x)=\hat{\rho}(y(x))\frac{dy}{dx}\). Similarly, if we have a trajectory of discrete samples \(y_{n}\) with constant stepsize \(h\), then choosing \(f(x)=I(x\in[a,b])\) in (18) leads to a simple counting formula to estimate finite-width integrals of the original invariant distribution:
\[\int_{a}^{b}\rho(x)dx\approx\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N}I(x(y_{n })\in[a,b]). \tag{19}\]
This approximation becomes exact in the limit \(h\to 0\).
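In practice, (19) is a one-line computation. A minimal Julia sketch follows; the closed-form inverse map in the example, \(D(x)=1+x^{2}\) with \(y(x)=\operatorname{asinh}(x)\) and hence \(x(y)=\sinh(y)\), is an illustrative choice.

```julia
# Counting estimate (19) of ∫_a^b rho(x) dx from Lamperti samples y_n:
# map each sample back with x(y) and count hits in [a, b].
occupancy(ys, x_of_y, a, b) = count(y -> a <= x_of_y(y) <= b, ys) / length(ys)

# Illustrative: D(x) = 1 + x^2 gives y(x) = asinh(x), hence x(y) = sinh(y).
ys = randn(10_000)   # placeholder samples; in practice a y-space trajectory
p_ab = occupancy(ys, sinh, -1.0, 1.0)
```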
### The Time-Rescaling Transform
Consider a configuration-dependent time rescaling \(t\xrightarrow{}\tau(t)\) with \(\frac{dt}{d\tau}(x)=g(x)\). It can be shown that applying this to the original Brownian dynamics (1) results in another Brownian dynamics process but with a modified potential \(\hat{V}(x)\)
and diffusion coefficient \(\hat{D}(x)\), given by:
\[\hat{V}(x) =V(x)+kT\ln g(x), \tag{20}\] \[\hat{D}(x) =g(x)D(x).\]
Setting \(\hat{D}(x)=1\) implies \(g(x)=\frac{1}{D(x)}\). Substituting this result into (20), we arrive at the time-rescaled, constant-diffusion dynamics:
\[dx_{\tau}=-\frac{d\hat{V}(x_{\tau})}{dx}d\tau+\sqrt{2kT}dW_{\tau}, \tag{21}\]
with an effective potential given by
\[\hat{V}(x)=V(x)-kT\ln D(x). \tag{22}\]
Notably, these dynamics differ from those obtained through the Lamperti transform.
By applying a time rescaling to the ergodic theorem of the original process \(x_{t}\), it can be shown that
\[\int_{-\infty}^{\infty}f(x)\rho(x)dx=\lim_{T\to\infty}\frac{\int_{\tau=0}^{T}f (x_{\tau})g(x_{\tau})d\tau}{\int_{\tau=0}^{T}g(x_{\tau})d\tau}, \tag{23}\]
so trajectories of the transformed process can be used directly to approximate ensemble averages with respect to \(\rho(x)\), the invariant distribution of the original process.
Discretising with a constant stepsize \(h\) in \(\tau\)-time, and setting \(f(x)=I(x\in[a,b])\), leads to a counting formula to estimate finite-width integrals of the original invariant distribution:
\[\int_{a}^{b}\rho(x)dx\approx\lim_{N\to\infty}\frac{\sum_{n=0}^{N}g(x_{\tau_{n} })I(x_{\tau_{n}}\in[a,b])}{\sum_{n=0}^{N}g(x_{\tau_{n}})}. \tag{24}\]
This approximation becomes exact in the limit \(h\to 0\).
_Remark_.: The proof of (23) and (24) does not require Brownian dynamics, hence these results hold more generally for one-dimensional, time-homogeneous SDEs.
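The weighted counting formula (24) is equally direct to implement. A minimal Julia sketch, with \(D(x)=1+|x|\) and placeholder samples as illustrative assumptions:

```julia
# Weighted counting estimate (24) for time-rescaled samples x_{tau_n}:
# each sample carries weight g(x) = 1/D(x) to undo the time rescaling.
function weighted_occupancy(xs, g, a, b)
    w = g.(xs)
    return sum(w .* (a .<= xs .<= b)) / sum(w)
end

g(x) = 1 / (1 + abs(x))   # illustrative: D(x) = 1 + |x|
xs = randn(10_000)        # placeholder samples of the rescaled process
p_ab = weighted_occupancy(xs, g, -1.0, 1.0)
```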
### Comparing the Two Transform Approaches
In one dimension, both the Lamperti and time-rescaling transforms are applicable for any \(D(x)>0\). However, whilst for known \(D(x)\) the time-rescaling transform can be computed exactly, the Lamperti transform often requires a numerical approximation due to the intractability of the integral (15). The two transforms also yield different effective potentials. The Lamperti transform tends to increase confinement of the potential in \(y\)-space wherever \(D(x)>1\) and decrease it wherever \(0<D(x)<1\). Conversely, the time-rescaled effective potential is more confining where \(\frac{dD}{dx}<0\) and less confining where \(\frac{dD}{dx}>0\). Therefore, deciding which transform is more useful can be problem-specific. For instance, in rare event sampling, it is preferable to choose the transform that results in the least-confining effective potential since this improves numerical stability at larger step sizes. In Figure 1 we consider the case \(V(x)=x^{2}\) and illustrate the different effective potentials resulting from the two transforms for various initial diffusion coefficients.
**Example 3.1**.: Consider a diffusion coefficient given by \(D(x)=1+|x|\). For this choice of \(D(x)\), the Lamperti transform to constant diffusion is (setting \(x_{0}=0\)) \(y(x)=\int_{0}^{x}\sqrt{\frac{1}{1+|z|}}dz=2\text{sgn}(x)(\sqrt{1+|x|}-1)\) and hence \(\frac{dy}{dx}=\sqrt{\frac{1}{1+|x|}}\), \(x(y)=\frac{y}{4}(|y|+4)\). The Lamperti-transformed effective potential is
\[\hat{V}(y)=V\left(\frac{y}{4}(|y|+4)\right)-\frac{kT}{2}\ln\left|1+|y|+\frac{y ^{2}}{4}\right|.\]
Alternatively, for this choice of \(D(x)\), the time rescaling to constant diffusion is \(g(x)=\frac{1}{1+|x|}\). The time-rescaled effective potential is
\[\hat{V}(x)=V(x)-kT\ln\left(1+|x|\right).\]
Sketches of \(V(x)\), \(\hat{V}(y)\) and \(\hat{V}(x)\) are shown below for \(kT=1\) and \(V(x)=\frac{x^{2}}{2}+\sin(1+3x)\). In this case, the Lamperti transform stiffens the potential, while the time-rescaling softens it.
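For reference, the closed forms of this example translate directly into Julia; the sketch below simply evaluates them (the evaluation grid is an illustrative choice).

```julia
# The two effective potentials of Example 3.1: D(x) = 1 + |x|,
# V(x) = x^2/2 + sin(1 + 3x), kT = 1.
kT = 1.0
V(x) = x^2 / 2 + sin(1 + 3*x)
x_of_y(y) = (y / 4) * (abs(y) + 4)                  # inverse Lamperti map
V_lamperti(y) = V(x_of_y(y)) - kT/2 * log(1 + abs(y) + y^2/4)
V_rescaled(x) = V(x) - kT * log(1 + abs(x))

ys = range(-4, 4; length=201)
V_lamperti.(ys)     # evaluate on a grid, e.g. to reproduce Figure 2
```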
## 4 Transforms for Multivariate Brownian Dynamics
We now examine how the Lamperti and time-rescaling transforms generalise to multivariate Brownian dynamics. For detailed proofs of all results, see C.
### The Multivariate Lamperti Transform
Consider multivariate Brownian dynamics with **D** matrix
\[\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})R_{ij}, \tag{25}\]
where \(R_{ij}\) is an invertible, constant matrix. For this class of diffusion, a Lamperti transform to unit diffusion can be constructed (Section 2.2), however, the transformed dynamics is only Brownian dynamics with a conservative drift force if **R** is proportional to the identity. Specifically, when \(\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})\delta_{ij}\), the Lamperti-transformed process is \(Y_{i,t}=\int_{x_{0}}^{X_{i,t}}\frac{1}{D_{i}(x)}dx:=\phi_{i}(X_{i,t})\), and obeys
\[dY_{i,t}=-\nabla_{Y_{i}}\hat{V}(\textbf{Y})dt+\sqrt{2kT}dW_{i}, \tag{26}\]
Figure 1: Comparison of the Lamperti and time-rescaling transforms when applied to the same quadratic potential \(V(x)=x^{2}\) for a variety of different diffusion coefficients. The abscissa is the \(x\) coordinate for the time-rescaled potential and the \(y\) coordinate for the Lamperti transform. The original potential is shown in black for reference.
with an effective potential
\[\hat{V}(\mathbf{Y})=V(\phi^{-1}(\mathbf{Y}))-kT\sum_{k=1}^{n}\ln D_{k}(\phi_{k}^{-1}(Y_{k})), \tag{27}\]
and where the ergodic theorem (18) generalises to
\[\int_{\mathbb{R}^{n}}f(\mathbf{X})\rho(\mathbf{X})d\mathbf{X}=\lim_{T\to \infty}\frac{1}{T}\int_{t=0}^{T}f(\phi^{-1}(\mathbf{Y}_{t}))dt. \tag{28}\]
In the above, the map \(\phi^{-1}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is constructed by individually applying \(\phi_{i}^{-1}\) to each component of its argument, \(1\leq i\leq n\). We observe that there is an independent contribution to the effective potential for every diagonal component of \(\mathbf{D}\).
### The Multivariate Time Rescaling Transform
Consider multivariate Brownian dynamics with \(\mathbf{D}\) matrix
\[\mathbf{D}(\mathbf{X})=D(\mathbf{X})\mathbf{R}, \tag{29}\]
where \(\mathbf{R}\) is an invertible matrix. For this class of variable diffusion, a time rescaling to unit-diffusion Brownian dynamics can be constructed (Section 2.3). The time-rescaled process is given by \(\mathbf{Y}_{\tau}=\mathbf{R}^{-1}\mathbf{X}_{\tau}\) where \(\frac{dt}{d\tau}=g(\mathbf{X}):=1/D^{2}(\mathbf{X})\) and it obeys
\[d\mathbf{Y}_{\tau}=-\nabla_{\mathbf{Y}}\hat{V}(\mathbf{Y})d\tau+\sqrt{2kT}d\mathbf{W}_{\tau}, \tag{30}\]
with an effective potential
\[\hat{V}(\mathbf{Y})=V(\mathbf{R}\mathbf{Y})-2kT\ln D(\mathbf{R}\mathbf{Y}), \tag{31}\]
and where the ergodic theorem (23) generalises to
\[\int_{\mathbb{R}^{n}}f(\mathbf{X})\rho(\mathbf{X})d\mathbf{X}=\lim_{T\to \infty}\frac{\int_{\tau=0}^{T}f(\mathbf{R}\mathbf{Y}_{\tau})g(\mathbf{R} \mathbf{Y}_{\tau})d\tau}{\int_{\tau=0}^{T}g(\mathbf{R}\mathbf{Y}_{\tau})d \tau}. \tag{32}\]
_Remark_.: The proof of (32) does not require the assumption of Brownian dynamics and therefore it holds more generally for SDEs of the form considered in Section 2.3.
### Combining Multivariate Transforms
The Lamperti and time-rescaling transforms can be combined to transform a wider class of diffusion processes to constant diffusion than is possible when using either transformation in isolation. However, naively combining the
Figure 2: Comparison of Lamperti and time-rescaling transforms with original diffusion \(D(x)=1+|x|\), \(V(x)=\frac{x^{2}}{2}+\sin(1+3x)\) and \(kT=1\). The original potential is in black, the Lamperti-transformed potential is in red and the time-rescaled potential is in blue.
transforms and considering \(\mathbf{D}(\mathbf{X})=\mathbf{D}^{(1)}(\mathbf{X})\mathbf{R}\mathbf{D}^{(2)}( \mathbf{X})\), where
\[\mathbf{D}^{(1)}(\mathbf{X})=\begin{bmatrix}D(\mathbf{X})&&\\ &\ddots&\\ &&D(\mathbf{X})\end{bmatrix},\quad\mathbf{D}^{(2)}(\mathbf{X})=\begin{bmatrix} D_{1}(X_{1})&&\\ &\ddots&\\ &&D_{n}(X_{n})\end{bmatrix}, \tag{33}\]
results in a transformed process with a non-conservative drift force unless \(\mathbf{R}\) is proportional to the identity. However, when \(\mathbf{D}(\mathbf{X})=\mathbf{D}^{(1)}(\mathbf{X})\mathbf{D}^{(2)}(\mathbf{X})\) the process can be transformed to a constant-diffusion Brownian dynamics process \(\mathbf{Y}_{\tau}\) through a time rescaling followed by a Lamperti transform, represented schematically as
\[\mathbf{X}_{t}\xrightarrow{\ \frac{dt}{d\tau}=g(\mathbf{X}):=D(\mathbf{X})^{-2}\ }\mathbf{X}_{\tau}\xrightarrow{\ Y_{i,\tau}=\int^{X_{i,\tau}}D_{i}(x)^{-1}dx\ }\mathbf{Y}_{\tau}. \tag{34}\]
The effective potential of the transformed process is then
\[\hat{V}(\mathbf{Y})=V(\phi^{-1}(\mathbf{Y}))-2kT\ln D(\phi^{-1}(\mathbf{Y}))-kT\sum_{i=1}^{n}\ln D_{i}(\phi_{i}^{-1}(Y_{i})), \tag{35}\]
and the ergodic theorem generalises to
\[\int_{\mathbb{R}^{n}}f(\mathbf{X})\rho(\mathbf{X})d\mathbf{X}=\lim_{T\to \infty}\frac{\int_{0}^{T}f(\phi^{-1}(\mathbf{Y}_{\tau}))g(\phi^{-1}(\mathbf{Y} _{\tau}))d\tau}{\int_{0}^{T}g(\phi^{-1}(\mathbf{Y}_{\tau}))d\tau}. \tag{36}\]
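As an illustration of (35), the following Julia sketch evaluates the combined-transform effective potential in 2D under illustrative assumptions: a global factor \(D(\mathbf{X})=1+|\mathbf{X}|^{2}\), componentwise factors \(D_{i}(x)=1+x^{2}\) (so \(\phi_{i}(x)=\arctan(x)\) and \(\phi_{i}^{-1}(y)=\tan(y)\), valid for \(|y|<\pi/2\)) and a quadratic \(V\).

```julia
using LinearAlgebra

V(X) = 0.5 * dot(X, X)         # quadratic potential (illustrative)
Dglob(X) = 1 + dot(X, X)       # global isotropic factor D(X)
kT = 1.0

# Effective potential (35) of the combined time-rescaling + Lamperti transform.
function V_hat(Y)
    X = tan.(Y)                # componentwise phi^{-1}, valid for |Y_i| < pi/2
    return V(X) - 2*kT * log(Dglob(X)) - kT * sum(log.(1 .+ X.^2))
end

V_hat([0.3, -0.2])
```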
## 5 Numerical Experiments in One Dimension
We simulate Brownian dynamics trajectories of the system defined in Example 3.1, i.e. \(D(x)=1+|x|\), \(V(x)=\frac{x^{2}}{2}+\sin(1+3x)\) and \(kT=1\). We consider various numerical integrators (introduced below) with and without transforms. For this example, we compare the weak convergence to the invariant distribution, the sampling efficiency, and the effect of transforms on estimates of the autocorrelation function and the evolving distribution. All experiments are run on a Thinkpad P17 with a 12-core, 2.60GHz Intel i7-10750H CPU, using code implemented in Julia 1.8.5 (see Footnote 1).
Footnote 1: GitHub repository: [https://github.com/dominicp6/Transforms-For-Brownian-Dynamics](https://github.com/dominicp6/Transforms-For-Brownian-Dynamics)
### Numerical Integrators
We examine the performance of the following numerical integrators: Euler-Maruyama (EM), Milstein Method (MM), Leimkuhler-Matthews (LM), Hummer-Leimkuhler-Matthews (HLM), Stochastic Heun (SH), and Limit Method with Variable Diffusion (LMVD). For detailed definitions of these integrators in the context of one-dimensional Brownian dynamics, refer to A. The integrators can be summarised as follows:
The Euler-Maruyama (EM) integrator extends the Euler method to SDEs. It has a strong convergence order of \(1/2\) and a weak convergence order of \(1\)[12]. The Milstein Method (MM) modifies EM by incorporating a second-order correction derived from a stochastic Taylor series expansion. It is strong order \(1\) and weak order \(1\) and reduces to EM for constant diffusion [23]. The Leimkuhler-Matthews (LM) integrator is derived from the high-friction limit of the BAOAB-splitting method in the constant diffusion regime [17]. It has weak convergence order \(2\) for constant diffusion but is invalid (does not converge) for multiplicative noise. The Hummer-Leimkuhler-Matthews (HLM) integrator is an extension of LM that ensures that the expectation of position is exact in the case of locally linear diffusion, and is conjectured to improve convergence in the variable diffusion regime. It reduces to LM for constant diffusion. The Stochastic Heun (SH) integrator is a two-stage Runge-Kutta method. It has weak convergence order of \(2\) for constant diffusion and \(1\) for variable diffusion [7]. However, the accuracy gains of SH come at the cost of higher computational requirements, as it involves two force evaluations, two diffusion coefficient evaluations, and two diffusion gradient evaluations per iteration. The Limit Method with Variable Diffusion (LMVD) is a scheme that has a weak convergence order of \(2\) for both constant and variable diffusion. It stems from the high-friction limit of the BAOAB-splitting method in the variable diffusion regime. Unlike SH, it requires only one force evaluation per iteration; however, it requires two ODE solves per timestep. It is a novel integrator that we introduce in this work; the derivation can be found in B. LMVD reduces to LM for constant diffusion.
### Error in Infinite Time
We compare weak convergence to the invariant distribution \(\rho(x)\propto\exp{(-V(x)/kT)}\) with varying stepsize \(h\), using trajectories generated by the different integrators both with and without transforms to constant diffusion. For untransformed dynamics, we compare EM, MM, HLM, SH, and LMVD. For the Lamperti-transformed dynamics and the time-rescaled dynamics, we compare the EM, LM, and SH integrators. We omit MM since it reduces to EM for constant diffusion, while LMVD and HLM both reduce to LM for constant diffusion. For each method, we run trajectories of length \(T=7.5\times 10^{7}\), and 12 independent runs are averaged to reduce sampling errors.
To assess the convergence of the invariant distribution, we divide a subset of the \(x\) domain into \(M\) equal-length intervals and compute the mean error between the empirical probabilities and the exact probabilities given by the invariant distribution. For Lamperti-transformed experiments, we derive empirical probabilities using equation (19), and for time-rescaled experiments, we use equation (24). We use the L1 error:
\[\text{Error}:=\frac{1}{M}\sum_{i=1}^{M}|\omega_{i}-\hat{\omega}_{i}|, \tag{37}\]
where \(\omega_{i}\) is the exact occupancy probability of the \(i^{th}\) interval and \(\hat{\omega}_{i}\) is the empirical estimate. We use \(30\) equal-width intervals in the range \(-5\) to \(5\) and run each integrator using \(10\) different step sizes, equally spaced in log space between \(10^{-3}\) and \(10^{-1}\). Steps are in \(\tau\)-time for time-rescaled methods, but \(t\)-time for all other methods. The error is plotted against the step size on a log-log scale, so first-order weak methods have a gradient of one, and second-order weak methods have a gradient of two. The results are shown in Figure 3.
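The error metric (37) amounts to a histogram comparison. A minimal Julia sketch follows, in which the exact bin probabilities are approximated by a midpoint rule (an illustrative shortcut; any accurate quadrature of the normalised \(\rho\) would do):

```julia
# L1 histogram error (37): empirical bin probabilities versus exact bin
# probabilities of a normalised invariant density rho.
function l1_error(samples, rho, lo, hi, M)
    edges = range(lo, hi; length=M+1)
    width = step(edges)
    emp = [count(x -> edges[i] <= x < edges[i+1], samples) / length(samples)
           for i in 1:M]
    exact = [rho(edges[i] + width/2) * width for i in 1:M]   # midpoint rule
    return sum(abs.(emp .- exact)) / M
end
```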
Figure 3 confirms the expected orders of weak convergence for the untransformed methods. Interestingly, MM has a larger error constant than EM, illustrating that improved strong convergence does not guarantee improved weak convergence. Examining Figure 3, we see that the effect of applying a transform is method-dependent. Applying a transform to EM yields negligible changes in convergence properties, while applying a transform to constant diffusion for SH or LM restores their second-order convergence behaviour. The transformed LM method becomes comparable in convergence properties to the LMVD method. Notably, the time-transformed methods are shifted relative to the Lamperti-transformed methods. We show in the following section that this shift largely arises because of the discrepancy between a step size in \(\tau\)-time and an equivalent numerical step size in \(t\)-time, and not because applying time-rescaling results in a significantly more efficient sampler than when applying a Lamperti transform.
### Computational Efficiency and Numerical Stability
We assess the computational efficiency of each method by comparing the wall-clock time required to achieve a fixed L1 error in the invariant measure, as defined by Equation (37). We estimate the wall-clock cost per iteration of each
Figure 3: Rates of convergence to the invariant measure. The simulation time was fixed at \(T=7.5\times 10^{7}\) and \(12\) independent runs were averaged to further reduce sampling errors. (a) The untransformed methods. (b) When applying a transform to constant diffusion, either a Lamperti transform or a time rescaling. The untransformed methods are shown in faint in panel (b) to facilitate comparison.
method by timing \(10^{8}\) iterations with a fixed step size of \(h=0.01\), averaging over 12 runs. For transformed methods, any additional cost of applying the counting formulas (19) or (24) is included in these timings. We then fix a target error and run trajectories with various step sizes for each method, stopping when first reaching the target error. For each step size, we average 6,000 repeats and find the minimum number of iterations to reach the specified error. The total wall-clock time is then estimated as the minimum number of iterations over the various step sizes times the cost per iteration. This calculation is repeated for 5 target errors logarithmically spaced between \(10^{-3.5}\) and \(10^{-3}\). The resulting cost-error diagram is illustrated in Figure 4.
Numerical stability is estimated by determining the smallest step size, in logarithmic increments of \(10^{0.1}\), where numerical blow-up occurs before \(T=10^{6}\). These stability threshold estimates as well as timing results for \(10^{8}\) iterations are shown in Table 1.
Comparing the untransformed methods in Figure 4(a), we note that the method with the best weak convergence (LMVD) is not necessarily the most computationally efficient for a given range of target errors (HLM). Furthermore, Figure 4(b) shows how applying coordinate transforms can significantly improve the computational efficiency of certain integrators but have a more modest impact on others. For example, the Lamperti-transformed/time-rescaled LM is the most computationally efficient method overall: approximately 5 times more efficient than LMVD. However, applying transforms only slightly improves the efficiency of SH, and even slightly reduces the efficiency of EM. In general, both types of transform have very similar effects on efficiency, and any differences can be attributed to small differences in
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Untransformed**} & \multicolumn{2}{c|}{**Lamperti**} & \multicolumn{2}{c|}{**Time-rescaling**} \\ \cline{2-7} & \(t\left(s\right)\) & \(h^{*}\) & \(t\left(s\right)\) & \(h^{*}\) & \(t\left(s\right)\) & \(h^{*}\) \\ \hline EM & \(10.83(8)\) & \(0.20\) & \(12.77(8)\) & \(0.25\) & \(13.18(9)\) & \(0.25\) \\ \hline SH & \(14.47(13)\) & \(0.25\) & \(17.22(8)\) & \(0.32\) & \(15.56(14)\) & \(2.5\) \\ \hline LM & - & - & \(12.54(8)\) & \(0.25\) & \(13.09(4)\) & \(2.5\) \\ \hline MM & \(11.97(18)\) & \(0.16\) & - & - & - & - \\ \hline HLM & \(10.72(10)\) & \(0.20\) & - & - & - & - \\ \hline LMVD & \(48.20(12)\) & \(0.25\) & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 1: The time taken for \(10^{8}\) iterations (\(t\)) and stability threshold (\(h^{*}\)) are compared for both untransformed and transformed methods. Standard errors in \(t\) were computed by averaging over 12 runs with a constant step size of 0.01. Stability thresholds were determined as the first step size (in geometric increments of \(10^{0.1}\)) that resulted in numerical blow-up. Errors are in bracket notation, e.g., \(10.83(8)=10.83\pm 0.08\).
Figure 4: Cost-error diagram to compare numerical efficiency. Error is defined as per Equation (37). The cost is the wall-clock time to reach the target error (number of iterations times cost per iteration), relative to the wall-clock time for untransformed EM to reach a target error of \(10^{-3}\). (a) Untransformed methods. (b) When applying constant-diffusion transform: Lamperti or time rescaling. The untransformed methods are shown in faint for comparison.
the iteration cost, see Table 1. Overall, transformations only improve the efficiency of numerical integrators that have better convergence properties for constant noise.
Examining Table 1, we see that time-rescaling significantly improves the stability threshold of SH and LM in this case whereas the Lamperti transform does not. This is explained by the fact that, for this diffusion coefficient, the time-rescaled potential is the softer of the two transformed potentials (see Figure 2). Thus, choosing the time-rescaling approach might be preferable if simulations with large step sizes are required.
### Error in Finite Time
In the continuous limit, the Lamperti and time-rescaling transforms are invertible, allowing recovery of the original dynamics. However, the transforms could still introduce bias in numerical discretisations. This section explores the effect of transforms on estimates of dynamic quantities, specifically the autocorrelation function and the evolving distribution.
**Autocorrelation Function** To obtain a reference estimate of the autocorrelation function, we run \(200\) randomised trajectories of length \(T=5000\) using the Stochastic Heun integrator with step size \(h=0.01\), and with initial positions drawn from a standard normal distribution. For each trajectory, we use the Fast Fourier Transform (FFT) algorithm to estimate the normalised autocorrelation function, and we compute the mean and standard deviation in the mean across trajectories. Additionally, we run trajectories under the same parameters but separately apply a Lamperti transform and a time-rescaling transform. We then transform these trajectories back to \(x\)-space and \(t\)-time respectively and compute the normalised autocorrelation function. The three autocorrelation functions so obtained (reference, Lamperti and time-rescaling) are shown in Figure 5(a). We also calculate the mean and standard deviation in the mean of the difference between the Lamperti/time-rescaling autocorrelation functions and the reference estimate. These results are displayed in Figure 5(b).
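A minimal Julia sketch of the FFT-based estimator is given below; it uses the FFTW.jl package, and the per-lag normalisation shown is an assumption that may differ in detail from the repository implementation.

```julia
using FFTW, Statistics

# Normalised autocorrelation estimate via the Wiener–Khinchin route:
# zero-pad, take the periodogram, invert, and normalise by the lag-0 value.
function acf_fft(x)
    n = length(x)
    xc = x .- mean(x)
    fx = fft(vcat(xc, zeros(n)))      # pad to length 2n to avoid wrap-around
    r = real.(ifft(abs2.(fx)))[1:n]
    r ./= n .- (0:n-1)                # divide by the sample count per lag
    return r ./ r[1]
end
```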
Overall, at short times (approximately \(t\lesssim 8\)) the differences in autocorrelation estimates are generally minimal and often not statistically significant. As time increases (between \(10\) and \(20\)), the fractional error in the mean becomes more significant, but still much smaller than the standard deviation across runs (which is \(\sqrt{200}\approx 14\) times larger than the standard error in this case).
Note that inverting the Lamperti transform is straightforward, but undoing the time-rescaling process requires more attention. This is because the conversion from \(\tau\)-time to \(t\)-time results in an irregular time series, making it unsuitable for direct application of the FFT algorithm. To address this, we perform linear interpolation on a \(t\) grid with the same regular spacing of \(h=0.01\) before applying the FFT. This interpolation step introduces bias. However, as we have seen in Figure 5, the overall bias remains small and unlikely to be practically significant. Alternatively, methods
Figure 5: Comparing normalized autocorrelation function estimates with and without transforms. Panel (a) shows the mean and standard deviation in the mean of autocorrelation function estimates obtained using the Stochastic Heun integrator with 200 trajectories of length \(T=5000\) and step size \(h=0.01\). In black is the reference (no transform applied), in blue is when using a Lamperti transform and in orange is when using time rescaling. Panel (b) displays mean and standard error of the differences in autocorrelation function estimates using the Lamperti transform (blue) and time-rescaling transform (orange) compared to the reference (\(ACF_{ref}\)). Overall, biases introduced by the transforms are small and only practically significant at large times.
designed for unevenly spaced time series, such as least-squares spectral analysis, could be used but these add significant computational cost, which would likely negate any efficiency benefits of the time transform.
**Evolving Distribution** With initial positions drawn from a standard normal distribution, we estimate the evolving distribution using \(2.5\times 10^{7}\) independent trajectories generated with the SH integrator and step size \(h=10^{-4}\). We compare this to the evolving distribution estimates computed with step size \(h=0.02\) for each method in Section 5.2. For each method, L1 errors with respect to the reference distribution are computed at time snapshots at intervals of \(\delta t=0.04\) using the same histogram binning as introduced in Section 5.2.
For methods involving the Lamperti transform, the initial condition is transformed to \(y\)-space, trajectories are evaluated and then transformed back to \(x\)-space. For methods involving time rescaling, trajectories are evaluated in \(\tau\)-time and then transformed back to \(t\)-time. However, the conversion from \(\tau\)-time to \(t\)-time is problematic since this transform is unique to each trajectory. The approach we use is to simulate each trajectory in \(\tau\)-time (in steps of \(h=0.02\)) until slightly overshooting the \(\delta t=0.04\) interval. The position at the required \(t\)-time is then estimated by linear interpolation. This approach is inexpensive but can introduce bias.
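The \(\tau\)-to-\(t\) conversion can be sketched as follows in Julia; the left Riemann accumulation of (62) and the interpolation details are the illustrative choices described above.

```julia
# Map a tau-time trajectory onto a regular t-grid: accumulate t with the
# Jacobian dt/dtau = g(x) (discretised eq. (62)), then linearly interpolate.
function to_t_grid(xs, g, h, dt, Tmax)
    ts = cumsum([0.0; g.(xs[1:end-1]) .* h])  # physical time at each tau-step
    tq = collect(0.0:dt:Tmax)
    out = zeros(length(tq))
    j = 1
    for (i, t) in enumerate(tq)
        while j < length(ts) && ts[j+1] < t
            j += 1
        end
        if j == length(ts)
            out[i] = xs[end]                  # beyond the simulated horizon
        else
            w = (t - ts[j]) / (ts[j+1] - ts[j])
            out[i] = (1 - w) * xs[j] + w * xs[j+1]
        end
    end
    return tq, out
end
```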
The resulting errors for the untransformed and transformed methods are shown in Figures 6(a) and 6(b), respectively. Additionally, the figures include a dotted black line representing the L1 difference between the reference evolving distribution and the invariant distribution. For points below this line, the evolving distribution is distinguishable from the invariant distribution.
Examining the untransformed methods in Figure 6, we observe that by \(t=6\), the errors for each method have already converged to their corresponding infinite-time errors depicted in Figure 3 (a), which is consistent with the correlation timescale implied in Figure 5. However, we see that MM, EM and LM are unsuitable at this step size if high accuracy is required, as their errors soon exceed the difference between the evolving and invariant distributions.
Examining the transformed methods, we see that the Lamperti transform has no detrimental impact on finite-time errors. In particular, the Lamperti-transformed LM method has the same finite-time error as the more expensive LMVD method. By contrast, the time-rescaled methods show a noticeable bias, likely originating from the need for linear interpolation when computing the evolving distribution at fixed \(t\). This bias makes these methods ill-suited for high-accuracy simulations of the evolving distribution.
Figure 6: Finite-time errors of the evolving distribution in the interval \(t\in[0,6]\) for fixed step size \(h=0.02\). The reference distribution at time \(t\) is computed by averaging over \(2.5\times 10^{7}\) independent trajectories using the SH method with small stepsize \(h=10^{-4}\). In each plot, the dotted black line represents the L1 difference between the reference evolving distribution and the invariant distribution. (a) Errors of the untransformed methods. (b) Errors when applying a transform to constant diffusion, either a Lamperti transform or a time rescaling. The untransformed methods are shown in faint to facilitate comparison. Integrators are Leimkuhler-Matthews (LM), Milstein Method (MM), Euler-Maruyama (EM), Hummer-Leimkuhler-Matthews (HLM), Stochastic Heun (SH), Limit Method with Variable Diffusion (LMVD).
Figure 8: Rates of convergence to the invariant measure for Brownian dynamics in a 2D quadruple well potential with the Moro-Cardin diffusion tensor. The simulation time was fixed at \(T=5\times 10^{6}\) and \(12\) independent runs were averaged to further reduce sampling errors. (a) The untransformed methods. (b) When applying a transform to constant diffusion through a time rescaling. The untransformed methods are shown in faint to facilitate comparison. Integrators are Leimkuhler-Matthews (LM), Euler-Maruyama (EM), Hummer-Leimkuhler-Matthews (HLM) and Stochastic Heun (SH).
Figure 7: Heatmap of the quadruple-well potential function (38). White circles depict contours of the Frobenius norm of the Moro-Cadin diffusion tensor (39). The black path shows a Euler-Maruyama trajectory of Brownian dynamics of \(1000\) steps with time step of \(h=0.01\) and \(kT=1\). Note the small norm of the diffusion tensor in the vicinity of the origin. This inhibits hopping between the wells, making this a challenging sampling problem.
## 6 Multivariate Numerical Experiments
As an example of multivariate Brownian dynamics, consider Stokes-Einstein diffusion which models the diffusion of a low concentration of non-interacting, spherical particles suspended in a fluid. In \(n\) dimensions, each particle obeys multivariate Brownian dynamics with the Stokes-Einstein diffusion tensor
\[\textbf{D}_{SE}=\frac{k_{B}T}{6\pi\eta r}\textbf{1}_{n},\]
where \(T\) is the temperature in Kelvin, \(\eta\) is the fluid viscosity and \(r\) is the particle radius. If the temperature field or the fluid's material properties are non-homogeneous, then the diffusion tensor \(\textbf{D}_{SE}(\textbf{X})\) is position dependent. Furthermore, the viscosity and temperature are functionally related through a constitutive relation \(\eta(\textbf{X})=\eta(T(\textbf{X}),\textbf{a}(\textbf{X}))\), where \(\textbf{a}(\textbf{X})\) are a set of possibly position-dependent material properties whose values and functional relationship with \(\eta(\textbf{X})\) depend on the details of the specific fluid model. The general nature of the Stokes-Einstein model lends itself to widespread applications, particularly in materials science [8, 14] and in the modelling of water diffusion in biological tissues, which has medical applications in diffusion-tensor MRI imaging [2]. To account for coordinate-dependent diffusion anisotropy, the Stokes-Einstein diffusion model can be generalised to \(\textbf{D}(\textbf{X})=\textbf{D}_{SE}(\textbf{X})\textbf{D}^{(2)}(\textbf{X})\), where \(\textbf{D}^{(2)}(\textbf{X})\) is of the functional form given by equation (33). This kind of generalised diffusion can occur in protein transport in biological tissues, where the diffusion anisotropy derives from the matrix of actin filaments. Note that \(\textbf{D}_{SE}(\textbf{X})\) is of the form (29), and so this process can be transformed to constant-diffusion Brownian dynamics as described in Section 4.3.
As a non-trivial example of Stokes-Einstein diffusion, we will consider multivariate Brownian dynamics in a 2D quadruple-well potential given by
\[V(x,y)=\sqrt{\frac{17}{16}-2x^{2}+x^{4}}+\sqrt{\frac{17}{16}-2y^{2}+y^{4}}, \tag{38}\]
with a Stokes-Einstein diffusion tensor given by the Moro-Cardin tensor [24]
\[D(x,y)=\left(1+A\exp(-\frac{x^{2}+y^{2}}{2\sigma^{2}})\right)^{-1}\textbf{1}, \tag{39}\]
where \(A=5\) and \(\sigma=0.3\), see Figure 7. Since this diffusion tensor is isotropic, it can be mapped to constant diffusion through time rescaling. Figure 8 illustrates the comparison of weak convergence to the invariant measure for the LM, EM, SH, and HLM integrators. In Figure 8 (a), the comparison is shown without any transforms, while in Figure 8 (b), a time-rescaling transform to constant diffusion is applied. We follow the same general approach as first outlined in Section 5.2. We run trajectories of length \(T=5\times 10^{6}\) and average over \(12\) independent runs and we run each integrator using \(10\) different step sizes, equally spaced in log-space between \(10^{-2.5}\) and \(10^{-0.5}\). For histogram computation, we use a \(30\times 30\) grid of equal-width square bins covering the domain \([-3,3]\times[-3,3]\) in the x-y plane.
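For reference, the model functions of this experiment are straightforward to express in Julia; the sketch below restates (38), (39) and the time-rescaled effective potential from (31) (with \(\mathbf{R}=\mathbf{I}\)).

```julia
# Quadruple-well potential (38), Moro-Cardin diffusion (39), and the
# time-rescaled effective potential V - 2kT ln D of eq. (31) with R = I.
A, sig, kT = 5.0, 0.3, 1.0

V(x, y) = sqrt(17/16 - 2*x^2 + x^4) + sqrt(17/16 - 2*y^2 + y^4)
D(x, y) = 1 / (1 + A * exp(-(x^2 + y^2) / (2*sig^2)))
g(x, y) = 1 / D(x, y)^2                      # time-rescaling Jacobian dt/dtau
Vhat(x, y) = V(x, y) - 2*kT * log(D(x, y))

Vhat(0.0, 0.0)   # small diffusion near the origin raises the effective barrier
```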
We observe similar behavior to the 1D numerical experiments discussed in Section 5.2. It is noteworthy that applying a time-rescaling transform enhances the convergence rate for both the SH and LM integrators. Just like in the 1D case, the transformed LM integrator exhibits a lower error constant compared to SH, indicating its superior efficiency for this particular problem.
## 7 Conclusions
We examined two types of transform to constant diffusion for Brownian dynamics with multiplicative noise: the Lamperti transform and the time-rescaling transform. We derived conditions on the noise term for these transforms to be applied and combined. Furthermore, through numerical experiments in one and two dimensions, we have shown how using these transforms with an appropriate integrator can lead to a highly efficient sampling method for certain classes of multivariate noise.
For one-dimensional Brownian dynamics, we showed that both transforms are always applicable, regardless of the form of the diffusion coefficient. However, the two transforms affect the dynamics differently, so the choice of transform may depend on the specific problem at hand. We showed numerically that applying either transform with the Leimkuhler-Matthews (LM) integrator significantly improves the convergence to the invariant measure, resulting in a method that has approximately five times higher sampling efficiency than the Limit Method with Variable Diffusion (LMVD) - a highly-performant integrator for multiplicative noise that does not utilise transformations. This transformed method also significantly outperformed the popular Euler-Maruyama integrator, with a 10 to 25 times higher computational efficiency for the problem investigated. Crucially, this method only requires one force and one diffusion tensor evaluation per
iteration, thus scaling better to high-dimensional problems than competing methods that require multiple force and/or diffusion evaluations per step.
In addition to investigating convergence to the invariant measure, we also verified whether dynamics information, in the form of the autocorrelation function and the evolving distribution, can be recovered after simulating a transformed process and then applying the inverse transform. We found that the Lamperti transform introduced no appreciable bias for estimates of either quantity, but that the time-rescaling transform is less suitable for recovering finite-time distributions.
For multivariate Brownian dynamics, the Lamperti and time-rescaling transforms have somewhat limited application. However, the two transformations can be combined to transform non-homogeneous, anisotropic Stokes-Einstein diffusion into a constant diffusion process. This is a broad class of diffusion tensors with applications in biomolecular diffusion, among other areas. We anticipate that this approach will improve the efficiency of Brownian dynamics simulations in various contexts.
## Appendix A Numerical Integrators
We use the shorthand notation
\[a(x) :=-D(x)\frac{dV}{dx}+kT\frac{dD}{dx}, \tag{40}\] \[\sigma(x) :=\sqrt{2kTD(x)},\] \[\tilde{a}(x) :=a(x)-\frac{1}{2}\sigma(x)\frac{d\sigma}{dx}=-D(x)\frac{dV}{dx}+ \frac{1}{2}kT\frac{dD}{dx},\]
where \(a(x)\) is the drift term, \(\sigma(x)\) the diffusion term, and \(\tilde{a}(x)\) is the Stratonovich-corrected drift [26]. We consider the following integrators, where \(w_{n},w_{n+1}\stackrel{{\text{iid}}}{{\sim}}\mathcal{N}(0,1)\) and \(h\) is the step size (a short Julia sketch of the explicit one-step schemes follows the list):
1. Euler-Maruyama (EM) \[x_{n+1}=x_{n}+a(x_{n})h+\sigma(x_{n})\sqrt{h}w_{n};\] (41)
2. Milstein Method (MM) \[x_{n+1}=x_{n}+a(x_{n})h+\sigma(x_{n})\sqrt{h}w_{n}+\frac{1}{2}kT\frac{dD}{dx} (x_{n})(w_{n}^{2}-1)h;\] (42)
3. Leimkuhler-Matthews (LM) \[x_{n+1}=x_{n}+a(x_{n})h+\sigma(x_{n})\sqrt{h}\frac{w_{n}+w_{n+1}}{2};\] (43)
4. Hummer-Leimkuhler-Matthews (HLM) \[x_{n+1}=x_{n}+\left(a(x_{n})+\frac{1}{4}kT\frac{dD}{dx}(x_{n})\right)h+\sigma (x_{n})\sqrt{h}\frac{w_{n}+w_{n+1}}{2};\] (44)
5. Stochastic Heun (SH) \[x_{n+1}^{*} =x_{n}+\tilde{a}(x_{n})h+\sigma(x_{n})\sqrt{h}w_{n}\] (45) \[x_{n+1} =x_{n}+\frac{1}{2}\left(\tilde{a}(x_{n})+\tilde{a}(x_{n+1}^{*}) \right)h+\frac{1}{2}\left(\sigma(x_{n})+\sigma(x_{n+1}^{*})\right)\sqrt{h}w_{ n};\]
6. Limit Method with Variable Diffusion (LMVD) \[\dot{x}_{n+1}=\sqrt{kT}w_{n}-\sqrt{2hD(x_{n})}\frac{dV}{dx}(x_{n})+kT\sqrt{ \frac{h}{2D(x_{n})}}\frac{dD}{dx}(x_{n})\] (46) \[\tilde{x}_{n+1}=\left\{x\left(\sqrt{\frac{h}{2}}\right)\bigg{|} \ x(0)=x_{n},\ dx=\sqrt{D(x)}\dot{x}_{n+1}dt\right\}\] \[x_{n+1}=\left\{x\left(\sqrt{\frac{h}{2}}\right)\bigg{|}\ x(0)= \tilde{x}_{n+1},\ dx=\sqrt{kTD(x)}w_{n+1}dt\right\}.\]
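As forecast above, here is a short Julia sketch of the explicit one-step schemes; the model functions are the Example 3.1 choices and are an illustrative assumption, not part of the scheme definitions.

```julia
# One-step maps for EM (41), LM (43) and HLM (44), using the shorthand of (40).
dVdx(x) = x + 3*cos(1 + 3*x)     # V(x) = x^2/2 + sin(1 + 3x)  (illustrative)
D(x)    = 1 + abs(x)
dDdx(x) = sign(x)
kT      = 1.0

a(x) = -D(x) * dVdx(x) + kT * dDdx(x)        # drift a(x)
s(x) = sqrt(2 * kT * D(x))                   # diffusion sigma(x)

em_step(x, h, w)      = x + a(x) * h + s(x) * sqrt(h) * w
lm_step(x, h, w, wn)  = x + a(x) * h + s(x) * sqrt(h) * (w + wn) / 2
hlm_step(x, h, w, wn) = x + (a(x) + kT * dDdx(x) / 4) * h + s(x) * sqrt(h) * (w + wn) / 2

# A short LM trajectory, reusing the correlated noise increments:
x, w, h = 0.0, randn(), 1e-2
for _ in 1:1000
    wn = randn()
    global x = lm_step(x, h, w, wn)
    global w = wn
end
```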
## Appendix B Derivation of the Limit Method with Variable Diffusion
Consider the dynamics originally proposed in [7], Equation 6,
\[d\textbf{X}_{t} =\textbf{B}(\textbf{X}_{t})\textbf{P}_{t}dt, \tag{47}\] \[d\textbf{P}_{t} =-\textbf{B}(\textbf{X}_{t})^{T}\nabla V(\textbf{X}_{t})dt+kT \text{div}(\textbf{B}^{T})(\textbf{X}_{t})dt-\gamma\textbf{P}_{t}dt+\sqrt{2 \gamma kT}d\textbf{W}_{t},\]
where \(\textbf{B}(\textbf{X})\) is a positive definite matrix, \(\gamma>0\) is a friction parameter and \(\textbf{P}\in\mathbb{R}^{n}\) denotes the instantaneous system momentum. It is straightforward to check that these dynamics preserve the canonical distribution
\[\rho(\textbf{X},\textbf{P})\propto\exp\left(-V(\textbf{X})/kT-\|\textbf{P}\| ^{2}/2kT\right),\]
for any positive definite matrix \(\mathbf{B}(\mathbf{X})\), where the marginal distribution of position satisfies
\[\int\rho(\mathbf{X},\mathbf{P})d\mathbf{P}\propto\rho(\mathbf{X}).\]
We now consider discretizations of (47) built via splitting the SDE into three pieces denoted A, B, and O:
\[\begin{split} d\left[\begin{array}{c}\mathbf{X}_{t}\\ \mathbf{P}_{t}\end{array}\right]=\underbrace{\left[\begin{array}{c}\mathbf{ B}(\mathbf{X}_{t})\mathbf{P}_{t}\\ \mathbf{0}\end{array}\right]}_{\hat{\mathbf{A}}}dt&+\underbrace{\left[ \begin{array}{c}\mathbf{0}\\ -\mathbf{B}(\mathbf{X}_{t})^{T}\nabla V(\mathbf{X}_{t})+kT\text{div}(\mathbf{B} ^{T})(\mathbf{X}_{t})\end{array}\right]dt}_{\hat{\mathbf{B}}}\\ &+\underbrace{\left[\begin{array}{c}\mathbf{0}\\ -\gamma\mathbf{P}_{t}dt+\sqrt{2\gamma kT}d\mathbf{W}_{t}\end{array}\right]}_{ \hat{\mathbf{O}}}.\end{split}\]
Note that when \(\mathbf{B}\) is a constant matrix (47) reduces to conventional Langevin dynamics, and the above splitting matches the pieces given in [17].
Taking any of the A, B or O pieces in isolation, we may solve the implied SDE exactly (in distribution) for time \(t>0\). Denoting the solution to each piece as \(\phi_{t}(\mathbf{X},\mathbf{P})\), given the initial conditions at \(t=0\) are \((\mathbf{X},\mathbf{P})\), we can write
\[\begin{split}\phi_{t}^{\text{A}}(\mathbf{X},\mathbf{P})& =(\{\mathbf{Y}(t)|\mathbf{Y}(0)=\mathbf{X},d\mathbf{Y}=\mathbf{B}( \mathbf{Y})\mathbf{P}dt\}\,,\mathbf{P}),\\ \phi_{t}^{\text{B}}(\mathbf{X},\mathbf{P})&=(\mathbf{X},\mathbf{P}-t\mathbf{B}(\mathbf{X})^{T}\nabla V(\mathbf{X})+tkT\text{div}( \mathbf{B}^{T})(\mathbf{X})),\\ \phi_{t}^{\text{O}}(\mathbf{X},\mathbf{P})&=(\mathbf{X},e^{-\gamma t}\mathbf{P}+\sqrt{kT}\sqrt{1-e^{-2\gamma t}}\mathbf{R}),\end{split}\]
where \(\mathbf{R}\sim N(\mathbf{0},\mathbf{I})\) is a normal random vector. As \(\phi^{\text{A}}\) has no explicit closed form, we write the update purely as the solution to the underlying ODE.
We now consider the overdamped limit \(\gamma\to\infty\) with a time step \(s>0\), using the discretization scheme
\[(\mathbf{X}_{n+1},\mathbf{P}_{n+1}):=\phi_{s/2}^{\text{A}}\circ\phi_{s}^{ \text{O}}\circ\phi_{s/2}^{\text{A}}\circ\phi_{s}^{\text{B}}(\mathbf{X}_{n}, \mathbf{P}_{n}).\]
Writing out the resulting steps, we obtain
\[\begin{split}\hat{\mathbf{P}}_{n}&=\mathbf{P}_{n}-s \mathbf{B}(\mathbf{X}_{n})^{T}\nabla V(\mathbf{X}_{n})+skT\text{div}(\mathbf{ B}^{T})(\mathbf{X}_{n})\\ \hat{\mathbf{X}}_{n}&=\Big{\{}\mathbf{Y}(s/2)\Big{|} \mathbf{Y}(0)=\mathbf{X}_{n},d\mathbf{Y}=\mathbf{B}(\mathbf{Y})\hat{\mathbf{P }}_{n}\Big{\}}\\ \mathbf{P}_{n+1}&=\sqrt{kT}\mathbf{R}_{n+1}\\ \mathbf{X}_{n+1}&=\Big{\{}\mathbf{Y}(s/2)\Big{|} \mathbf{Y}(0)=\hat{\mathbf{X}}_{n},d\mathbf{Y}=\mathbf{B}(\mathbf{Y})\mathbf{ P}_{n+1}\Big{\}}\end{split}\]
which we may simplify by recognizing that \(\mathbf{P}_{n}\equiv\sqrt{kT}\mathbf{R}_{n}\).
We recover the LMVD method given in (46) by considering the one-dimensional case where \(B(x)=\sqrt{D(x)}\) and choosing \(s=\sqrt{2h}\) for a time step of \(h>0\) to ensure consistency between schemes [1].
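A minimal Julia sketch of one LMVD step (46) is given below; resolving each inner ODE with a few classical RK4 substeps, and the Example 3.1 model functions in the call, are illustrative choices rather than prescriptions of the scheme.

```julia
# Classical RK4 solve of dx/dt = f(x) from 0 to tf, used for the inner ODEs.
function rk4(x0, tf, f; nsub=4)
    x, dt = x0, tf / nsub
    for _ in 1:nsub
        k1 = f(x); k2 = f(x + dt/2 * k1); k3 = f(x + dt/2 * k2); k4 = f(x + dt * k3)
        x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    end
    return x
end

# One LMVD step, eq. (46).
function lmvd_step(x, h, w, wn; dVdx, D, dDdx, kT=1.0)
    xdot = sqrt(kT)*w - sqrt(2*h*D(x))*dVdx(x) + kT*sqrt(h / (2*D(x)))*dDdx(x)
    s = sqrt(h/2)
    xt = rk4(x, s, y -> sqrt(D(y)) * xdot)      # first inner ODE solve
    return rk4(xt, s, y -> sqrt(kT*D(y)) * wn)  # second inner ODE solve
end

# Illustrative call with the Example 3.1 model:
x1 = lmvd_step(0.0, 1e-2, randn(), randn();
               dVdx = x -> x + 3*cos(1 + 3*x),
               D    = x -> 1 + abs(x),
               dDdx = x -> sign(x))
```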
## Appendix C Proofs
This section contains proofs of all results stated in Section 3 and Section 4.
**Theorem C.1**.: _Applying a continuous coordinate transform to a one-dimensional Brownian dynamics process results in another Brownian dynamics process with potential and diffusion function given by (14)._
Proof.: Applying Ito's Lemma with \(y=y(x)\), where \(x\) obeys (1), we have,
\[\begin{split}& dy=\left(-D(x(y))\frac{dV}{dx}(x(y))+kT\frac{dD}{ dx}(x(y))\right)\frac{dy}{dx}(x(y))dt\\ +& kTD(x(y))\frac{d^{2}y}{dx^{2}}(x(y))dt+\sqrt{2kTD(x (y))}\frac{dy}{dx}(x(y))dW_{t}.\end{split} \tag{48}\]
Now substituting in the transformations (14),
\[\begin{split}&=-\hat{D}\frac{d}{dx}\left(\hat{V}-kT\ln\left|\frac{dy}{dx}\right|\right)\left(\frac{dy}{dx}\right)^{-1}dt+kT\frac{d}{dx}\left(\hat{D}\left(\frac{dy}{dx}\right)^{-2}\right)\frac{dy}{dx}dt\\ &\quad+kT\hat{D}\left(\frac{dy}{dx}\right)^{-2}\frac{d^{2}y}{dx^{2}}dt+\sqrt{2kT\hat{D}}dW_{t}\\ &=-\hat{D}\frac{d\hat{V}}{dy}dt+kT\hat{D}\frac{d^{2}y}{dx^{2}}\left(\frac{dy}{dx}\right)^{-2}dt+kT\frac{d\hat{D}}{dy}dt-2kT\hat{D}\frac{d^{2}y}{dx^{2}}\left(\frac{dy}{dx}\right)^{-2}dt\\ &\quad+kT\hat{D}\left(\frac{dy}{dx}\right)^{-2}\frac{d^{2}y}{dx^{2}}dt+\sqrt{2kT\hat{D}}dW_{t}\\ &=-\hat{D}(y)\frac{d\hat{V}}{dy}dt+kT\frac{d\hat{D}}{dy}dt+\sqrt{2kT\hat{D}(y)}dW_{t},\end{split}\tag{49}\]
which is Brownian dynamics with potential \(\hat{V}(y)\) and diffusion function \(\hat{D}(y)\).
**Theorem C.2**.: _In one dimension, the Lamperti-transformed process can be used to compute phase space averages through (18)._
Proof.: We assume that the transformed process \(y_{t}\) is ergodic and therefore satisfies
\[\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}f(y_{t})dt=\int_{-\infty}^{\infty}f( y)\hat{\rho}(y)dy, \tag{50}\]
where \(\hat{\rho}(y)=\frac{1}{\hat{Z}}\exp\left(-\frac{\hat{V}(y)}{kT}\right)\) is the invariant distribution of the transformed process. Substituting in the effective potential (17), the right-hand side of (50) becomes
\[\int_{-\infty}^{\infty}f(y)\frac{\exp(-\frac{V(x(y))}{kT})}{\hat{Z}}\sqrt{D(x (y))}dy=\int_{-\infty}^{\infty}f(y(x))\frac{\exp(-\frac{V(x)}{kT})}{\hat{Z}} \sqrt{D(x)}\frac{dy}{dx}dx. \tag{51}\]
Using the fact \(\frac{dy}{dx}=\frac{1}{\sqrt{D(x)}}\), this equation simplifies to
\[\int_{-\infty}^{\infty}f(y(x))\frac{\exp(-\frac{V(x)}{kT})}{\hat{Z}}dx=\frac{ Z}{\hat{Z}}\int_{-\infty}^{\infty}f(y(x))\rho(x)dx \tag{52}\]
where \(Z=\int_{-\infty}^{\infty}\exp\left(-\frac{V(x)}{kT}\right)dx\) and \(\hat{Z}=\int_{-\infty}^{\infty}\exp\left(-\frac{\hat{V}(y)}{kT}\right)dy\) are the partition functions of the original and transformed processes respectively. But \(Z=\hat{Z}\) since, by change of variables:
\[Z =\int_{-\infty}^{\infty}\exp\left(-\frac{V(x)}{kT}\right)dx=\int_ {-\infty}^{\infty}\exp\left(-\frac{V(x(y))}{kT}\right)\left(\frac{dy}{dx}(x(y) )\right)^{-1}dy\] \[=\int_{-\infty}^{\infty}\exp\left(-\frac{V(x(y))+kT\ln\left|\frac {dy}{dx}(x(y))\right|}{kT}\right)dy=\int_{-\infty}^{\infty}\exp\left(-\frac{ \hat{V}(y)}{kT}\right)dy \tag{53}\] \[=\hat{Z}.\]
Hence, from (50) and (52) we have
\[\int_{-\infty}^{\infty}f(y(x))\rho(x)dx=\lim_{T\to\infty}\frac{1}{T}\int_{t=0 }^{T}f(y_{t})dt. \tag{54}\]
Finally, if we redefine \(f\) as \(f\circ x\), then we obtain:
\[\int_{-\infty}^{\infty}f(x)\rho(x)dx=\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{ T}f(x(y_{t}))dt, \tag{55}\]
as required.
**Theorem C.3**.: _In one dimension, the Lamperti-transformed invariant measure \(\hat{\rho}(y)\) and the original invariant measure \(\rho(x)\) are related by \(\rho(x)=\hat{\rho}(y(x))\frac{dy}{dx}\)._
Proof.: Set \(f(x)=I(x\in[a,b])\), the indicator function on the interval \([a,b]\), in (55). This gives
\[\begin{split}\int_{-\infty}^{\infty}f(x)\rho(x)dx&=\int_{a}^{b}\rho(x)dx=\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}I(x(y_{t})\in[ a,b])dt\\ &=\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}I(y_{t}\in[y(a),y(b)] )dt=\int_{y(a)}^{y(b)}\hat{\rho}(y)dy\\ &=\int_{a}^{b}\hat{\rho}(y(x))\frac{dy}{dx}dx,\end{split} \tag{56}\]
where in the second line we re-expressed the indicator function in terms of \(y_{t}\) and then applied the ergodic theorem for the \(y_{t}\) process. (56) implies
\[\int_{a}^{b}\left(\rho(x)-\hat{\rho}(y(x))\frac{dy}{dx}\right)dx=0\]
which, from the arbitrariness of the constants \(a\) and \(b\), proves the result.
**Theorem C.4**.: _Applying a time-rescaling to a one-dimensional Brownian dynamics process results in another Brownian dynamics process with potential and diffusion given by (20)._
Proof.: Applying a version of the time rescaling appearing in equation (10) to one-dimensional Brownian dynamics we arrive at,
\[dx_{\tau}=-g(x)D(x)\frac{dV}{dx}d\tau+kTg(x)\frac{dD(x)}{dx}d\tau+\sqrt{2kTg(x )D(x)}dW_{\tau}. \tag{57}\]
Inserting the identities from equation (20) into (57) we obtain
\[\begin{split} dx_{\tau}&=-\hat{D}(x)\frac{d}{dx} \left(\hat{V}(x)-kT\ln g(x)\right)d\tau+kTg(x)\frac{d}{dx}\left(\frac{\hat{D}( x)}{g(x)}\right)d\tau+\sqrt{2kT\hat{D}(x)}dW_{\tau}\\ &=-\hat{D}(x)\frac{d\hat{V}}{dx}d\tau+kT\hat{D}(x)\frac{g^{\prime }(x)}{g(x)}d\tau+kT\frac{d\hat{D}(x)}{dx}d\tau-kT\hat{D}(x)\frac{g^{\prime}(x )}{g(x)}d\tau+\sqrt{2kT\hat{D}(x)}dW_{\tau}\\ &=-\hat{D}(x)\frac{d\hat{V}}{dx}d\tau+kT\frac{d\hat{D}(x)}{dx}d \tau+\sqrt{2kT\hat{D}(x)}dW_{\tau},\end{split} \tag{58}\]
which is a transformed version of the original one-dimensional Brownian dynamics but in an effective potential \(\hat{V}(x)\) and a rescaled diffusion coefficient \(\hat{D}(x)\), as required.
**Theorem C.5**.: _In one dimension, the time-rescaled process can be used to compute phase space averages through (23)._
Proof.: From the ergodic theorem applied to the original process,
\[\int_{-\infty}^{\infty}f(x)\rho(x)dx=\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{ T}f(x_{t})dt. \tag{59}\]
Changing variables \(t\to\tau\) in the integration, the right-hand side becomes,
\[\lim_{T\to\infty}\frac{1}{T}\int_{\tau=0}^{\tau(T)}f(x_{\tau})\frac{dt}{d\tau }d\tau=\lim_{T\to\infty}\frac{1}{T}\int_{\tau=0}^{\tau(T)}f(x_{\tau})g(x_{\tau })d\tau. \tag{60}\]
Redefining \(T\) this can be alternatively written as
\[\lim_{T\to\infty}\frac{1}{t(T)}\int_{\tau=0}^{T}f(x_{\tau})g(x_{\tau})d\tau. \tag{61}\]
Finally, we note that by integrating \(\frac{dt}{d\tau}=g(x)\) between \(0\) and \(T\) we can obtain an expression for \(t(T)\),
\[t(T)=\int_{\tau=0}^{T}g(x_{\tau})d\tau. \tag{62}\]
Substituting this into the above equation completes the proof.
**Theorem C.6**.: _The effective potential of a Lamperti-transformed Brownian dynamics process with original **D** matrix \(\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})\delta_{ij}\) is given by (27)._
Proof.: The required transformation is a multivariate Lamperti transformation with
\[\begin{split}\textbf{R}=\textbf{1},\quad f(\textbf{X})=-\textbf {D}(\textbf{X})\textbf{D}(\textbf{X})^{T}\nabla V(\textbf{X})+kT\text{div}( \textbf{D}\textbf{D}^{T})(\textbf{X}),\\ \sigma(\textbf{X})=\sqrt{2kT}\textbf{D}(\textbf{X}).\end{split} \tag{63}\]
From (8), the transformed process, therefore, satisfies (we transform to constant diffusion \(\sqrt{2kT}d\textbf{W}\)):
\[\begin{split} dY_{i,t}=\sqrt{2kT}\left(\frac{-\sum_{k=1}^{n}( \textbf{D}\textbf{D}^{T})_{ik}\partial_{k}V}{\sqrt{2kT}D_{i}}+\frac{kT\sum_{k =1}^{n}\partial_{k}(\textbf{D}\textbf{D}^{T})_{ik}}{\sqrt{2kT}D_{i}}-\frac{1} {2}\sqrt{2kT}\partial_{i}D_{i}\right)dt\\ +\sqrt{2kT}dW_{i},\end{split} \tag{64}\]
where \(\partial_{j}:=\frac{\partial}{\partial X_{j}}\) and \(V\), **D** and \(D_{j}\) are functions of \(\textbf{Y}_{t}\) through the relations
\[V(\textbf{X}_{t})=V(\phi^{-1}(\textbf{Y}_{t})),\quad\textbf{D}(\textbf{X}_{t} )=\textbf{D}(\phi^{-1}(\textbf{Y}_{t})),\quad D(X_{j,t})=D(\phi_{j}^{-1}(Y_{ j,t})). \tag{65}\]
Substituting \(\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})\delta_{ij}\), this becomes
\[dY_{i,t}=\left(-D_{i}\partial_{i}V+kT\frac{\partial_{i}(D_{i}^{2})}{D_{i}}-kT \partial_{i}D_{i}\right)dt+\sqrt{2kT}dW_{i}. \tag{66}\]
Expanding, this simplifies to
\[dY_{i,t}=-D_{i}\partial_{i}Vdt+kT\partial_{i}D_{i}dt+\sqrt{2kT}dW_{i}. \tag{67}\]
Changing variables so that the derivatives are with respect to **Y** we get
\[\frac{\partial}{\partial X_{i}}=\sum_{j=1}^{n}\frac{\partial Y_{j}}{\partial X _{i}}\frac{\partial}{\partial Y_{j}}=\sum_{j=1}^{n}\frac{\delta_{ij}}{D_{i}(X _{i})}\frac{\partial}{\partial Y_{j}}=\frac{1}{D_{i}(X_{i})}\frac{\partial}{ \partial Y_{i}}, \tag{68}\]
and the transformed equation becomes
\[dY_{i,t}=\left(-\nabla_{Y_{i}}V(\phi^{-1}(\textbf{Y}_{t}))+kT\nabla_{Y_{i}} \ln D_{i}(\phi_{i}^{-1}(Y_{i,t}))\right)dt+\sqrt{2kT}dW_{i}, \tag{69}\]
which we identify as constant-diffusion Brownian dynamics with an effective potential
\[\hat{V}(\textbf{Y})=V(\phi^{-1}(\textbf{Y}))-kT\sum_{k=1}^{n}\ln D_{k}(\phi_{ k}^{-1}(Y_{k})). \tag{70}\]
In constructing this, we have used the fact that \(\nabla_{Y_{i}}\ln D_{k}(\phi_{k}^{-1}(Y_{k}))=\delta_{ik}\nabla_{Y_{k}}\ln D_{k}(\phi_{k}^{-1}(Y_{k}))\).
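A one-dimensional instance (\(n=1\)) makes the construction concrete. With the hypothetical choice \(D(x)=\sqrt{1+x^{2}}\), the Lamperti map has the closed form \(y=\phi(x)=\operatorname{arcsinh}(x)\), so \(D(\phi^{-1}(y))=\cosh y\) and the effective potential (70) becomes \(\hat{V}(y)=V(\sinh y)-kT\ln\cosh y\). The sketch below (all parameter choices are illustrative) integrates the constant-diffusion dynamics in \(\hat{V}\) and checks that the mapped-back samples \(x=\sinh y\) follow the original Boltzmann density \(\exp(-V/kT)\).

```python
import numpy as np

# 1D illustration of the Lamperti transform with D(x) = sqrt(1 + x^2),
# for which phi(x) = arcsinh(x) and D(phi^{-1}(y)) = cosh(y). Choices of
# V, step size, and sample count are arbitrary.
kT = 1.0
V  = lambda x: 0.5*x**2
dV = lambda x: x

def dVhat(y):
    # derivative of V_hat(y) = V(sinh y) - kT*ln(cosh y), cf. equation (70)
    return dV(np.sinh(y))*np.cosh(y) - kT*np.tanh(y)

rng = np.random.default_rng(2)
dt, nsteps = 1e-3, 500_000
y = 0.0
xs = np.empty(nsteps)
for n in range(nsteps):
    y += -dVhat(y)*dt + np.sqrt(2.0*kT*dt)*rng.standard_normal()
    xs[n] = np.sinh(y)                 # map back to the original coordinate

hist, edges = np.histogram(xs, bins=60, range=(-3.0, 3.0), density=True)
centers = 0.5*(edges[:-1] + edges[1:])
target = np.exp(-V(centers)/kT)
target /= target.sum()*(centers[1] - centers[0])
print("max abs deviation from exp(-V/kT):", np.abs(hist - target).max())
```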
**Theorem C.7**.: _A Lamperti-transformed process with original **D** matrix of the form \(\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})\delta_{ij}\) can be used to compute phase-space averages through (28)._
Proof.: We assume that the effective potential (70) is such that geometric ergodicity holds for \(\textbf{Y}_{t}\). Then, by applying the ergodic theorem to the transformed process, we obtain:
\[\lim_{T\rightarrow\infty}\frac{1}{T}\int_{t=0}^{T}f(\textbf{Y}_{t})dt=\int_{ \mathbb{R}^{n}}f(\textbf{Y})\hat{\rho}(\textbf{Y})d\textbf{Y}. \tag{71}\]
Substituting in the effective potential, we have:
\[=\int_{\mathbb{R}^{n}}f(\textbf{Y})\frac{1}{\hat{Z}}\exp\left(-\frac{V(\phi^{-1}(\textbf{Y}))}{kT}\right)\prod_{i=1}^{n}\left(D_{i}(\phi_{i}^{-1}(Y_{i}))\right)d\textbf{Y}. \tag{72}\]
Next, we change variables from **Y** to **X**. The Jacobian factor is given by:
\[J=\left|\frac{d\textbf{Y}}{d\textbf{X}}\right|=\left|\frac{1}{D_{i}(X_{i})} \delta_{ij}\right|=\prod_{i=1}^{n}\frac{1}{D_{i}(X_{i})}, \tag{73}\]
which exactly cancels with the diffusion coefficients in the integral, and we have
\[=\int_{\mathbb{R}^{n}}f(\phi(\textbf{X}))\frac{1}{\hat{Z}}\exp\bigg{(}-\frac{V( \textbf{X})}{kT}\bigg{)}d\textbf{X}=\frac{Z}{\hat{Z}}\int_{\mathbb{R}^{n}}f( \phi(\textbf{X}))\rho(\textbf{X})d\textbf{X}. \tag{74}\]
Choosing \(f(\textbf{Y})=1\) leads to \(\hat{Z}=Z\), hence
\[\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}f(\textbf{Y}_{t})dt=\int_{\mathbb{R }^{n}}f(\phi(\textbf{X}))\rho(\textbf{X})d\textbf{X}. \tag{75}\]
Finally, if we redefine \(f\) as \(f\circ\phi^{-1}\), then we obtain:
\[\lim_{T\to\infty}\frac{1}{T}\int_{t=0}^{T}f(\phi^{-1}(\textbf{Y}_{t}))dt=\int_ {\mathbb{R}^{n}}f(\textbf{X})\rho(\textbf{X})d\textbf{X}, \tag{76}\]
as required.
**Theorem C.8**.: _The effective potential of a time-rescaled Brownian dynamics process with original **D** matrix \(\textbf{D}(\textbf{X})=D(\textbf{X})\textbf{R}\) is given by (31)._
Proof.: The time-rescaling transform follows (12), (13) with \(f(\textbf{X})=-\textbf{D}(\textbf{X})\textbf{D}(\textbf{X})^{T}\nabla V( \textbf{X})+kT\text{div}(\textbf{D}(\textbf{X})\textbf{D}(\textbf{X})^{T})\), which gives (we transform to constant diffusion \(\sqrt{2kT}\)):
\[d\textbf{Y}_{\tau}=-\textbf{R}^{-1}\frac{\textbf{D}\textbf{D}^{T}\nabla_{\textbf{X}}V-kT\text{div}(\textbf{D}\textbf{D}^{T})}{D^{2}}d\tau+\sqrt{2kT}d\textbf{W}_{\tau}. \tag{77}\]
Here, \(V\), **D** and \(D\) are functions of \(\textbf{Y}_{\tau}\) through the relations:
\[V(\textbf{X}_{\tau})=V(\textbf{R}\textbf{Y}_{\tau}),\quad\textbf{D}(\textbf{ X}_{\tau})=\textbf{D}(\textbf{R}\textbf{Y}_{\tau}),\quad D(\textbf{X}_{\tau})=D( \textbf{R}\textbf{Y}_{\tau}). \tag{78}\]
Substituting \(\textbf{D}(\textbf{X})=D(\textbf{X})\textbf{R}\), in components this becomes
\[dY_{i,\tau}=-\frac{\sum_{j}D^{2}R_{ji}\partial_{j}V-kT\sum_{j}R_{ji}\partial_{j}(D^{2})}{D^{2}}d\tau+\sqrt{2kT}dW_{i,\tau}, \tag{79}\]
which simplifies to
\[dY_{i,\tau}=\sum_{j}R_{ji}\left(-\partial_{j}V+2kT\partial_{j}\ln D\right)d\tau+\sqrt{2kT}dW_{i,\tau}. \tag{80}\]
Changing variables so that the derivatives are with respect to **Y** we get
\[\frac{\partial}{\partial X_{i}}=\sum_{j=1}^{n}\frac{\partial Y_{j}}{\partial X _{i}}\frac{\partial}{\partial Y_{j}}=\sum_{j=1}^{n}R_{ji}^{-1}\frac{\partial}{ \partial Y_{j}}. \tag{81}\]
The **R** matrix then cancels with its inverse, and the dynamics now reads
\[d\textbf{Y}_{\tau}=\left(-\nabla_{\textbf{Y}}V(\textbf{R}\textbf{Y})+2kT\nabla_{\textbf{Y}}\ln D(\textbf{R}\textbf{Y})\right)d\tau+\sqrt{2kT}d\textbf{W}_{\tau}, \tag{82}\]
which is constant-diffusion Brownian dynamics in an effective potential \(\hat{V}(\textbf{Y})\) given by
\[\hat{V}(\textbf{Y})=V(\textbf{R}\textbf{Y})-2kT\ln D(\textbf{R}\textbf{Y}). \tag{83}\]
This completes the proof.
**Theorem C.9**.: _A time-rescaled process with original **D** matrix of the form \(\textbf{D}(\textbf{X})=D(\textbf{X})\textbf{R}\) can be used to compute phase-space averages through (32)._
Proof.: We begin with the ergodic theorem of the original process, which states
\[\int_{\mathbb{R}^{n}}f(\textbf{X})\rho(\textbf{X})d\textbf{X}=\lim_{T\to \infty}\frac{1}{T}\int_{t=0}^{T}f(\textbf{X}_{t})dt. \tag{84}\]
To express this in terms of the time-rescaled process, we change the variable \(t\to\tau\) in the integration, resulting in:
\[\lim_{T\to\infty}\frac{1}{T}\int_{\tau=0}^{\tau(T)}f(\textbf{X}_{\tau})\frac{ dt}{d\tau}d\tau=\lim_{T\to\infty}\frac{1}{T}\int_{\tau=0}^{\tau(T)}f( \textbf{X}_{\tau})g(\textbf{X}_{\tau})d\tau. \tag{85}\]
By redefining \(T\), we can alternatively write this as
\[\lim_{T\rightarrow\infty}\frac{1}{t(T)}\int_{\tau=0}^{T}f(\textbf{X}_{\tau})g( \textbf{X}_{\tau})d\tau, \tag{86}\]
where \(g(\textbf{X})=1/D^{2}(\textbf{X})\) by the definition of the time rescaling. Next, we integrate \(\frac{dt}{d\tau}=g(\textbf{X})\) from \(0\) to \(T\) to obtain an expression for \(t(T)\),
\[t(T)=\int_{\tau=0}^{T}g(\textbf{X}_{\tau})d\tau. \tag{87}\]
Substituting equation (87) and the relation \(\textbf{X}_{\tau}=\textbf{RY}_{\tau}\) into equation (86), we have:
\[\int_{\mathbb{R}^{n}}f(\textbf{X})\rho(\textbf{X})d\textbf{X}=\lim_{T \rightarrow\infty}\frac{\int_{\tau=0}^{T}f(\textbf{RY}_{\tau})g(\textbf{RY}_ {\tau})d\tau}{\int_{\tau=0}^{T}g(\textbf{RY}_{\tau})d\tau}, \tag{88}\]
as required.
**Theorem C.10**.: _A Lamperti-transformed process with original **D** matrix of the form **D**\((\textbf{X})_{ij}=D_{i}(X_{i})R_{ij}\) is an instance of Brownian dynamics if and only if the matrix **M** with components_
\[M_{ij}=\sum_{k=1}^{n}\left(R_{ij}^{-1}R_{jk}R_{jk}R_{jj}^{-1}\right)+R_{ji}R_{ jj}^{-1}-R_{ij}^{-1}R_{jj}^{-1},\]
_is diagonal. Here, \(R_{ij}^{-1}\) is the \(i,j\) component of the inverse matrix **R**. In particular, **M** is diagonal if **R** is diagonal._
Proof.: The stated transformation is a multivariate Lamperti transform (8) with
\[f(\textbf{X})=-\textbf{D}(\textbf{X})\textbf{D}(\textbf{X})^{T}\nabla V( \textbf{X})+kT\text{div}(\textbf{DD}^{T})(\textbf{X}),\quad\sigma(\textbf{X}) =\sqrt{2kT}\textbf{D}(\textbf{X}). \tag{89}\]
The transformed process therefore satisfies
\[\begin{split} dY_{i,t}=&\sum_{j=1}^{n}R_{ij}^{-1} \sqrt{2kT}\left(\frac{-\sum_{k=1}^{n}(\textbf{DD}^{T})_{jk}\partial_{k}V}{ \sqrt{2kT}D_{j}}+\frac{kT\sum_{k=1}^{n}\partial_{k}(\textbf{DD}^{T})_{jk}}{ \sqrt{2kT}D_{j}}-\frac{1}{2}\sqrt{2kT}\partial_{j}D_{j}\right)dt\\ &\hskip 113.811024pt+\sqrt{2kT}dW_{i},\end{split} \tag{90}\]
where \(\partial_{j}:=\frac{\partial}{\partial X_{j}}\) and \(V\), **D** and \(D_{j}\) are functions of \(\textbf{Y}_{t}\) through the relations
\[V(\textbf{X}_{t})=V(\phi^{-1}(\textbf{RY}_{t})),\quad\textbf{D}(\textbf{X}_{t })=\textbf{D}(\phi^{-1}(\textbf{RY}_{t})),\quad D(X_{j,t})=D(\phi_{j}^{-1}(( \textbf{RY})_{j,t})). \tag{91}\]
Substituting \(\textbf{D}(\textbf{X})_{ij}=D_{i}(X_{i})R_{ij}\), this becomes
\[\begin{split} dY_{i,t}=\sum_{j,k,l=1}^{n}R_{ij}^{-1}\left(\frac{-R_{jl}R_{kl}D_{j}D_{k}\partial_{k}V}{D_{j}}+kT\frac{R_{jl}R_{kl}\partial_{k}(D_{j}D_{k})}{D_{j}}\right)dt-kT\sum_{j=1}^{n}R_{ij}^{-1}\partial_{j}D_{j}dt\\ +\sqrt{2kT}dW_{i}.\end{split} \tag{92}\]
Expanding,
\[\begin{split} dY_{i,t}=\sum_{k=1}^{n}-R_{ki}D_{k}\partial_{k}Vdt+ kT\left(\sum_{j,k,l=1}^{n}R_{ij}^{-1}R_{jl}R_{kl}\left(\partial_{k}D_{j}\frac{D_{k}}{D _{j}}+\partial_{k}D_{k}\right)-\sum_{j=1}^{n}R_{ij}^{-1}\partial_{j}D_{j} \right)dt\\ +\sqrt{2kT}dW_{i}.\end{split} \tag{93}\]
Noting that \(\partial_{k}D_{j}=\delta_{kj}\partial_{j}D_{j}\), this becomes
\[\begin{split} dY_{i,t}=\sum_{k=1}^{n}-R_{ki}D_{k}\partial_{k}Vdt+kT\left(\sum_{j,l}R_{ij}^{-1}R_{jl}R_{jl}\partial_{j}D_{j}+\sum_{k}R_{ki}\partial_{k}D_{k}-\sum_{j=1}^{n}R_{ij}^{-1}\partial_{j}D_{j}\right)dt\\ +\sqrt{2kT}dW_{i}.\end{split} \tag{94}\]
Changing variables so that the derivatives are with respect to \(\mathbf{Y}\) we get
\[\frac{\partial}{\partial X_{k}}=\sum_{l=1}^{n}\frac{\partial Y_{l}}{\partial X_{ k}}\frac{\partial}{\partial Y_{l}}=\sum_{l=1}^{n}\frac{R_{lk}^{-1}}{D_{k}(X_{k})} \frac{\partial}{\partial Y_{l}}, \tag{95}\]
and the transformed equation becomes
\[dY_{i,t}=-\partial_{i}Vdt+kT\sum_{k=1}^{n}\left(\left(\sum_{l=1}^{n}R_{ik}^{-1}R_{kl}R_{kl}R_{kk}^{-1}\right)+R_{ki}R_{kk}^{-1}-R_{ik}^{-1}R_{kk}^{-1}\right)\nabla_{Y_{k}}\ln D_{k}dt \tag{96}\] \[+\sqrt{2kT}dW_{i},\]
or equivalently:
\[dY_{i,t}=-\partial_{i}Vdt+kT\sum_{k=1}^{n}M_{ik}\nabla_{Y_{k}}\ln D_{k}dt+ \sqrt{2kT}dW_{i}, \tag{97}\]
where \(\mathbf{M}\) is the matrix defined in the theorem statement. Note that only if the matrix \(\mathbf{M}\) is diagonal is it possible to express the drift term as a gradient of a potential:
\[\hat{V}(\mathbf{Y})=V(\phi^{-1}(\mathbf{R}\mathbf{Y}))-kT\sum_{i=1}^{n}M_{ii}\ln D_{i}(\phi_{i}^{-1}((\mathbf{R}\mathbf{Y})_{i})). \tag{98}\]
_Remark 1_.: The functions \(D_{i}\) can be arbitrarily scaled in such a manner that for all \(i\), \(M_{ii}=1\) in equation (98). The transformed process then becomes equivalent to the case \(\mathbf{R}=\mathbf{I}\), as discussed in Section 4.1.
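In practice, the diagonality condition of Theorem C.10 is easy to test for a given mixing matrix **R**. The helper below is a small sketch (not from the original text) that builds **M** directly from the formula in the theorem statement and reports whether the transformed process remains Brownian dynamics.

```python
import numpy as np

# Build the matrix M of Theorem C.10 from a given R and test diagonality.
def m_matrix(R):
    Rinv = np.linalg.inv(R)
    n = R.shape[0]
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = (sum(Rinv[i, j]*R[j, k]*R[j, k]*Rinv[j, j]
                           for k in range(n))
                       + R[j, i]*Rinv[j, j]
                       - Rinv[i, j]*Rinv[j, j])
    return M

def is_diagonal(M, tol=1e-12):
    return np.allclose(M, np.diag(np.diag(M)), atol=tol)

# A diagonal R preserves Brownian dynamics (M diagonal) ...
print(is_diagonal(m_matrix(np.diag([1.0, 2.0]))))                 # True
# ... while a generic non-diagonal R typically does not.
print(is_diagonal(m_matrix(np.array([[1.0, 0.5], [0.0, 1.0]]))))  # False
```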
**Theorem C.11**.: _Consider a multivariate Brownian dynamics process \(\mathbf{X}_{t}\) following (2), where the diffusion tensor \(\mathbf{D}\) is defined as_
\[\mathbf{D}(\mathbf{X})=\mathbf{D}^{(1)}(\mathbf{X})\mathbf{D}^{(2)}(\mathbf{X}),\]
_where \(\mathbf{D}^{(1)}\) and \(\mathbf{D}^{(2)}\) are given by (33). Then the transformed process \(\mathbf{Y}_{\tau}=\phi(\mathbf{X}_{\tau})\), resulting from a time rescaling where \(\frac{dt}{d\tau}=g(\mathbf{X}):=1/D^{2}(\mathbf{X})\), followed by a Lamperti transform given by_
\[Y_{i,\tau}=\sqrt{2kT}\int_{x_{0}}^{X_{i,\tau}}\frac{1}{D_{i}(x)}dx:=\sqrt{2kT }\phi_{i}(X_{i,\tau}),\]
_satisfies the constant-diffusion Brownian dynamics process:_
\[dY_{i,\tau}=-\nabla_{Y_{i}}\hat{V}(\mathbf{Y})d\tau+\sqrt{2kT}dW_{i,\tau},\]
_where \(\hat{V}(\mathbf{Y})\) is the effective potential defined as_
\[\hat{V}(\mathbf{Y})=V(\phi^{-1}(\mathbf{Y}))-2kT\ln D(\phi^{-1}(\mathbf{Y}))-kT\sum_{k=1}^{n}\ln D_{k}(\phi_{k}^{-1}(Y_{k})).\]
Proof.: The time-rescaling transformation gives
\[d\mathbf{X}_{\tau}=-\frac{\mathbf{D}\mathbf{D}^{T}\nabla_{\mathbf{X}}V-kT\text{div}(\mathbf{D}\mathbf{D}^{T})}{D^{2}}d\tau+\sqrt{2kT}\mathbf{D}^{(2)}(\mathbf{X})d\mathbf{W}_{\tau}. \tag{99}\]
Applying the Lamperti transform then gives
\[dY_{i,\tau}=\sqrt{2kT}\left(-\frac{D_{ij}D_{kj}\partial_{k}V-kT\partial_{k}(D_{ij}D_{kj})}{\sqrt{2kT}D^{2}D_{i}}-\frac{1}{2}\sqrt{2kT}\partial_{i}D_{i}\right)d\tau+\sqrt{2kT}dW_{i,\tau}, \tag{100}\]
where \(V(\mathbf{X}_{\tau})\), \(\mathbf{D}(\mathbf{X}_{\tau})\), \(D(\mathbf{X}_{\tau})\) and \(D_{i}(X_{i,\tau})\) are functions of \(\mathbf{Y}_{\tau}\) through the relation \(\mathbf{X}_{\tau}=\phi^{-1}(\mathbf{Y}_{\tau})\). Since
\[D_{ij}(\mathbf{X})=\sum_{k=1}^{n}D_{ik}^{(1)}(\mathbf{X})D_{kj}^{(2)}(\mathbf{X })=\sum_{k=1}^{n}\delta_{ik}\delta_{kj}D(\mathbf{X})D_{k}(X_{k})=D(\mathbf{X})D _{i}(X_{i}), \tag{101}\]
this becomes
\[dY_{i,\tau}=-D_{i}\partial_{i}Vd\tau+kT\frac{\partial_{i}(D^{2}D_{i}^{2})}{D^{2}D_{i}}d\tau-kT\partial_{i}D_{i}d\tau+\sqrt{2kT}dW_{i,\tau} \tag{102}\]
which simplifies to
\[dY_{i,\tau}=-D_{i}\partial_{i}Vd\tau+2kT\frac{D_{i}}{D}\partial_{i}Dd\tau+kT\partial_{i}D_{i}d\tau+\sqrt{2kT}dW_{i,\tau}. \tag{103}\]
Changing variables so that the derivatives are with respect to \(\mathbf{Y}\) we get
\[\frac{\partial}{\partial X_{k}}=\sum_{l=1}^{n}\frac{\partial Y_{l}}{\partial X _{k}}\frac{\partial}{\partial Y_{l}}=\sum_{l=1}^{n}\frac{\delta_{lk}}{D_{k}(X_ {k})}\frac{\partial}{\partial Y_{l}}=\frac{1}{D_{k}}\frac{\partial}{\partial Y _{k}}, \tag{104}\]
and the transformed equation becomes
\[dY_{i,\tau}=-\nabla_{Y_{i}}Vd\tau+2kT\nabla_{Y_{i}}\ln Dd\tau+kT\nabla_{Y_{i}}\ln D_{i}d\tau+\sqrt{2kT}dW_{i,\tau}, \tag{105}\]
which we identify as constant-diffusion Brownian dynamics in an effective potential
\[\hat{V}(\mathbf{Y})=V(\phi^{-1}(\mathbf{Y}))-2kT\ln D(\phi^{-1}(\mathbf{Y}))-kT\sum_{i=1}^{n}\ln D_{i}(\phi_{i}^{-1}(Y_{i})), \tag{106}\]
as required.
**Theorem C.12**.: _Performing a time rescaling followed by a Lamperti transform to a multivariate Brownian dynamics process with \(\mathbf{D}\) matrix \(\mathbf{D}(\mathbf{X})=\mathbf{D}^{(1)}(\mathbf{X})\mathbf{R}\mathbf{D}^{(2) }(\mathbf{X})\), where \(\mathbf{R}\) is not diagonal, results in a constant-diffusion process that is not Brownian dynamics._
Proof.: First, consider the time-rescaled process \(\mathbf{X}_{\tau}\), where \(\frac{dt}{d\tau}=1/D^{2}(\mathbf{X})\), which obeys the dynamics
\[d\mathbf{X}_{\tau}=-\frac{\mathbf{D}\mathbf{D}^{T}\nabla_{\mathbf{X}}V-kT\text{div}(\mathbf{D}\mathbf{D}^{T})}{D^{2}}d\tau+\sqrt{2kT}\mathbf{R}\mathbf{D}^{(2)}d\mathbf{W}_{\tau}. \tag{107}\]
Defining a transformed process \(\mathbf{Y}_{\tau}=\mathbf{R}^{-1}\mathbf{X}_{\tau}\), then applying the multidimensional Itô formula gives
\[d\mathbf{Y}_{\tau}=-\mathbf{R}^{-1}\left(\frac{\mathbf{D}\mathbf{D}^{T}\nabla_{\mathbf{X}}V|_{\mathbf{R}\mathbf{Y}_{\tau}}-kT\text{div}(\mathbf{D}\mathbf{D}^{T})|_{\mathbf{R}\mathbf{Y}_{\tau}}}{D^{2}}\right)d\tau+\sqrt{2kT}\mathbf{D}^{(2)}d\mathbf{W}_{\tau}, \tag{108}\]
where
\[V=V(\mathbf{R}\mathbf{Y}_{\tau}),\quad\mathbf{D}=\mathbf{D}(\mathbf{R} \mathbf{Y}_{\tau}),\quad\mathbf{D}^{(2)}=\mathbf{D}^{(2)}(\mathbf{R}\mathbf{Y }_{\tau}),\quad D=D(\mathbf{R}\mathbf{Y}_{\tau}). \tag{109}\]
Finally, we apply a Lamperti transform to remove the noise dependence on \(\mathbf{D}^{(2)}\). The transformed process \(\mathbf{Z}_{\tau}=\phi(\mathbf{Y}_{\tau})\) then satisfies (using Einstein summation convention for sums over repeated indices)
\[dZ_{i,\tau}=-R_{ij}^{-1}\left(\frac{D_{jk}D_{lk}\partial_{l}V|_{\mathbf{R}\phi^{-1}(\mathbf{Z}_{\tau})}-kT\partial_{l}(D_{jk}D_{lk})|_{\mathbf{R}\phi^{-1}(\mathbf{Z}_{\tau})}}{D^{2}D_{i}}\right)d\tau+\sqrt{2kT}dW_{i,\tau}. \tag{110}\]
Changing variables,
\[\frac{\partial}{\partial X_{k}}=\sum_{l=1}^{n}\frac{\partial Z_{l}}{\partial X _{k}}\frac{\partial}{\partial Z_{l}}=\sum_{l=1}^{n}\frac{R_{lk}^{-1}}{D_{k}(X_ {k})}\frac{\partial}{\partial Z_{l}} \tag{111}\]
and substituting \(D_{ij}=\delta_{ik}\delta_{lj}R_{kl}DD_{l}\) this becomes
\[dZ_{i,\tau}=-R_{ij}^{-1}\left(\frac{\delta_{jm}\delta_{nk}\delta_{lp}\delta_{qk}R_{mn}R_{pq}D^{2}D_{n}D_{q}R_{rl}^{-1}\nabla_{Z_{r}}V-kT\delta_{jm}\delta_{nk}\delta_{lp}\delta_{qk}R_{rl}^{-1}\nabla_{Z_{r}}\left(R_{mn}R_{pq}D^{2}D_{n}D_{q}\right)}{D^{2}D_{i}D_{l}}\right)d\tau+\sqrt{2kT}dW_{i,\tau}, \tag{112}\]
which simplifies to
\[dZ_{i,\tau}=-R_{ij}^{-1}\left(\frac{R_{jk}R_{lk}D^{2}D_{k}^{2}R_{rl}^{-1}\nabla_{Z_{r}}V-kTR_{rl}^{-1}\nabla_{Z_{r}}(R_{jk}R_{lk}D^{2}D_{k}^{2})}{D^{2}D_{i}D_{l}}\right)d\tau+\sqrt{2kT}dW_{i,\tau}, \tag{113}\]
\[dZ_{i,\tau}\stackrel{{\text{(no sum $i$)}}}{{=}}-\left(\frac{R_{li}D^{2}D_{i}^{2}R_{rl}^{-1}\nabla_{Z_{r}}V-kTR_{rl}^{-1}R_{li}\nabla_{Z_{r}}(D^{2}D_{i}^{2})}{D^{2}D_{i}D_{l}}\right)d\tau+\sqrt{2kT}dW_{i,\tau}. \tag{114}\]
Expanding this expression does not lead to a great simplification of terms. In particular, the non-vanishing of the \(\mathbf{R}\) matrix elements in the \(dt\) term means that the drift term cannot be written as a gradient of a potential energy, hence this is not Brownian dynamics. |
2302.09040 | Automated Graph Genetic Algorithm based Puzzle Validation for Faster
Game Design | Many games are reliant on creating new and engaging content constantly to
maintain the interest of their player-base. One such example is puzzle games,
where it is common to have a recurrent need to create new puzzles. Creating
new puzzles requires guaranteeing that they are solvable and interesting to
players, both of which require significant time from the designers. Automatic
validation of puzzles provides designers with a significant time saving and
potential boost in quality. Automation allows puzzle designers to estimate
different properties, increase the variety of constraints, and even personalize
puzzles to specific players. Puzzles often have a large design space, which
renders exhaustive search approaches infeasible, as they require significant
time. Specifically, those puzzles can be formulated as quadratic combinatorial
optimization problems. This paper presents an evolutionary algorithm, empowered
by expert-knowledge informed heuristics, for solving logical puzzles in video
games efficiently, leading to a more efficient design process. We discuss
multiple variations of hybrid genetic approaches for constraint satisfaction
problems that allow us to find a diverse set of near-optimal solutions for
puzzles. We demonstrate our approach on a fantasy Party Building Puzzle game,
and discuss how it can be applied more broadly to other puzzles to guide
designers in their creative process. | Karine Levonyan, Jesse Harder, Fernando De Mesentier Silva | 2023-02-17T18:15:33Z | http://arxiv.org/abs/2302.09040v2 | # Automated Graph Genetic Algorithm based Puzzle Validation for Faster Game Design
###### Abstract
Many games are reliant on creating new and engaging content constantly to maintain the interest of their player-base. One such example is puzzle games, where it is common to have a recurrent need to create new puzzles. Creating new puzzles requires guaranteeing that they are solvable and interesting to players, both of which require significant time from the designers. Automatic validation of puzzles provides designers with a significant time saving and potential boost in quality. Automation allows puzzle designers to estimate different properties, increase the variety of constraints, and even personalize puzzles to specific players. Puzzles often have a large design space, which renders exhaustive search approaches infeasible, as they require significant time. Specifically, those puzzles can be formulated as quadratic combinatorial optimization problems. This paper presents an evolutionary algorithm, empowered by expert-knowledge informed heuristics, for solving logical puzzles in video games efficiently, leading to a more efficient design process. We discuss multiple variations of hybrid genetic approaches for constraint satisfaction problems that allow us to find a diverse set of near-optimal solutions for puzzles. We demonstrate our approach on a fantasy Party Building Puzzle game, and discuss how it can be applied more broadly to other puzzles to guide designers in their creative process.
## I Introduction
Puzzles have long been a staple of video games. They can be just enjoyable to play, or necessary to advance in the game, and usually provide in-game rewards upon their completion. In terms of design, puzzles can affect the in-game economy, if completion of harder puzzles can lead to better rewards, and rewards can be purchased with, or traded for, in-game currency. Unlike traditional logic puzzles like Sudoku, Word Wheel, Tower of Hanoi, Refraction, crosswords, etc., where game rules are predefined and static [1], puzzles in video games commonly have changing constraints in order to diversify gameplay. With many games having ever-evolving puzzles which continue to grow in complexity and difficulty, the task of designing these puzzles becomes time-consuming and challenging.
The game design process is usually complex and labor intensive. Exploring the design space is how game creators discover and develop the rules and mechanics of the game. One of the principal challenges in designing puzzles, and potentially the most important for game designers, is guaranteeing they can be solved by players [2]. Unsolvable puzzles are frustrating to players for obvious reasons, but puzzles that become exceptionally hard due to virtually infeasible constraints, such as having barely enough time to perform all necessary moves, can be just as frustrating. In order to guarantee the quality of the player experience, designers then need to certify that a solution for the puzzle exists. With ever growing constraints and numerous puzzles to create, the task quickly becomes more complex. In addition to saving time, automating certain aspects of the puzzle creation process can also assist in discovering new solutions that might not have been noticed initially. This allows designers to validate and quickly iterate over different versions of their puzzles. It also answers questions such as: how many solutions exist for a particular puzzle, what is the optimal solution, or what is the "cheapest price" solution.
For the game showcased in this paper, game designers need to produce a significant amount of new puzzles every day. In order for our system to be of assistance to their workflow, it is necessary that it is able to find solutions within a time frame that allows designers to make quick iterations of their design. This scenario and the sheer size of the search space of potential solutions render straightforward techniques, such as exhaustive search, impractical. In order to meet such requirements we propose an evolutionary algorithm to search the space of candidates, making use of an expert-knowledge-derived heuristic to guide exploration of potential puzzle solutions. Although this technique does not guarantee we will find the optimal solution, which is also of interest to the designers, it consistently finds diverse, close-to-optimal solutions under our strict time constraints. The diverse set of solutions found provides valuable insight to the designers, allowing them to quickly analyze the attributes of the solution space and compare different design iterations.
The puzzles discussed in this paper are deterministic, single-player, and fully observable at all steps. These puzzles are defined as constraint satisfaction problems with one or more constraints, where the objective is to select and assign items, from a pool of candidates, in order to complete the requirements. If the puzzle is solvable, there is rarely (if ever) a unique solution to it. In this paper, we describe a randomized, heuristic-driven, constructive state-space search methodology to validate the solvability of a puzzle. With thousands of possible combinations to build a solution from, search efficiency is key. In addition, each solution also has a 'solution cost', which is the sum of the individual item prices comprising the solution. The cost of the solution should be minimized to better assess the 'value' of the puzzle offered to the players. We further expand this methodology to find a set of near-optimal solutions efficiently using a custom-designed
evolutionary algorithm. It is important to emphasize that in our work we focus solely on puzzle validation, regardless of whether the puzzles were generated manually or automatically.
The rest of the paper is organized as follows: Section II discusses how our approach fits in the context of previous research. Section III defines the party building puzzle, our example game, followed by the constructive approach to build a solution in Section IV. Section V further develops our approach, showing how to harness the power of a Genetic Algorithm not only to find a solution but also to optimize the search for puzzle solutions, while trying to approximate the global price optimum. Lastly, we conclude by explaining our findings and future work in Section VI.
## II Related Work
The goal of this work is to develop an automated solution strategy targeted at validating a puzzle design, which lends itself well to the concept of automated playtesting. Commonly, automation in playtesting revolves around agents that can play through the game, or particular game scenarios. In particular, artificial intelligence agents could be used to test the boundaries of the rules/design constraints as in [3, 4]. Our approach, in contrast, is not conducted by a game-playing agent (our puzzles are not action-based); rather, the player needs to find a solution by selecting and organizing items that can fulfill the requirements presented.
It is important to note that for our problem, rather than automatically trying to change the design or to create a new puzzle, we instead assist in evaluating the feasibility of a proposed puzzle. Perhaps more similar to our approach, Bhatt et al. evolve Hearthstone decks, by selecting from existing cards, to fit the strategy of game playing agents [5].
The strategy of using custom Genetic Algorithm based solvers for Jigsaw puzzles was first introduced in [6] for binary image puzzles. The large set of selection items presents a unique challenge in choosing the solution optimization approach. The closest example of applying an evolutionary approach to effectively solve puzzles, rather than applying a greedy approach, is demonstrated by [7] for very large Jigsaw puzzles, and later extended to multiple puzzles by [8]. Specifically, in [7] the authors propose an approach that iteratively improves the initial population via the means of natural selection (mutation, selection, crossover) to find more accurate solutions (i.e. the correct image) with a novel puzzle representation and a custom crossover approach.
The puzzles considered in our work have a similarly large pool of items to (pre)select and assign, which poses a comparable computational hurdle. The video game puzzles we consider pose the added challenge of non-uniqueness of a solution as well as the absence of a clearly defined benchmark solution to compare with.
## III Example Problem: Party Building Puzzles
To illustrate our proposed Graph-Based Genetic Algorithm approach to finding a set of near-optimal solutions, we first describe an example problem to introduce all the necessary terminology, followed by a constructive search based method to build a single solution, which becomes the basis for the evolutionary computations.
Party building fantasy combat games have been rising in popularity recently, e.g. Idle Champions, Firestone Idle RPG, and others (Fig. 1). In these games, a player is tasked with selecting and filling out a combat formation with different heroes. Each combatant has different properties, making them suitable for different positions in the combat formation as well as making them strategically preferable to combine with other fighters. The genre of party-based, fantasy combat games is a suitable candidate for demonstrating our approach. The game we are analyzing is called "Party Building Puzzle", or PBP for short. In a PBP, the player has to select a number of items (heroes), and then assign those items to the combat formation such that the heroes in the formation meet the puzzle requirements. An important thing to note is that the puzzle requirements are defined by the game designers, which differentiates this type of formation filling from more general strategy based games, and hence the use of the term 'puzzle'.
Each hero has a number of properties such as race, religion, nation, total level, gear quality level, price, and how rare it is to obtain that particular hero. Puzzle requirements are constraints the player needs to honor when building their formation, and are usually related to the hero properties. For example, a puzzle might require the player use at least 3 orcs in their party, from at least 4 different nations, while another puzzle requires that the average level of the party be at least 60. Heroes additionally contribute to something called party synergy, a graph-based metric of how well the party works together, which will be described in more detail in the next section. Requesting players to form a party of a certain synergy level creates a complex challenge in which the player must not only choose the right heroes, but also arrange them appropriately within the party's formation.
Currently the game possesses a total number of unique cards on the order of \(10^{5}\), and the number keeps expanding as the designers are constantly releasing new cards. The number of possible heroes is near limitless, which creates a combinatorially complex search space over which a solver must iterate to find solutions that satisfy the requirements. In that
Fig. 1: These figures illustrate fantasy parties arranged in typical combat formations. For example, melee fighters are positioned in the front and ranged fighters or support characters are in the back. Even if the core game play differs, these concepts are shared, suggesting the wide applicability of formation-building puzzles within this game genre.
scenario, the task of constructing all possible solutions for a given puzzle becomes infeasible. For this reason, we use the optimization heuristics of an evolutionary strategy to efficiently search the space of possible solutions, while optimizing for what is defined as the 'cheapest' priced option.
## IV Puzzle Solver Part I: Formation Building and Playtesting
### _Problem Formulation_
The puzzles we are considering, with both linear and quadratic constraints, can be formulated as an asymmetric quadratic assignment problem (QAP), which consists of selecting \(N\) items out of a pool of \(M\), where \(N\ll M\), and placing them into a graph of locations in an optimal way such that certain conditions are satisfied. Each item, i.e. hero, is characterized by a list of properties, so-called traits. Each can be selected once, and its selection and corresponding placement are advised by those traits. In our example of the Party Building Puzzle, \(N=10\) is the number of nodes in the puzzle graph to fill out and \(M=10000\) is the number of possible items to choose from without repetition.
Linear constraints are referred to as simple, and can be either continuous or discrete. For continuous properties, constraints are formulated as 'the accumulated value of the property \(P\) over all positions has to be greater or equal to some \(a\)': \(\sum_{i=1}^{N}x_{i}^{P}\geq a\). For example, _minimum team level is 84_. Discrete property constraints require 'not more than \(b\) items with a property \(P\) are allowed in the placement graph': \(\sum_{i=1}^{N}I(x_{i}^{P})\leq b,\) where \(I\) is a boolean indicator function defining the presence or absence of a certain trait in the feature vector for each item. For example, _at most 8 elves are allowed_ or _at most 4 humans and at least 2 elves are allowed_. In our PBP, the number of properties for a hero does not exceed \(P=8\).
In addition, we have a nonlinear complex constraint, the so-called 'synergy', which is dependent on both item placement and the compatibility between adjacent items in the graph. Hence, its value is not defined until all the items are selected and assigned. Synergy is present within each puzzle, but its required value can vary from 0 to 1.
\[\sum_{j=1,j\neq i}^{N}\sum_{i=1}^{N}\sum_{p=1}^{P}w(x_{j}^{p},x_{i}^{p})\geq Synergy, \tag{1}\]
where \(w(x_{j}^{p},x_{i}^{p})\) is the weight of the edge in the graph between two neighbouring nodes \(i\) and \(j\), and the summations run over each edge and each trait. The weight function is specific to the puzzle and is a custom-defined, non-linear relation.
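To make the synergy constraint (1) concrete, the following sketch evaluates the synergy of a placement on a small formation graph. The trait encoding and the weight function here are stand-ins invented for illustration (the game's actual weight function is custom and non-linear, as noted above), and the result is normalized by the number of edges so that it lies in \([0,1]\).

```python
# Hypothetical trait encoding: each hero is a dict of traits. The weight
# function below simply rewards shared trait values between adjacent heroes;
# the real game uses a custom, non-linear weight, so this is only a stand-in.
def edge_weight(hero_a, hero_b):
    shared = sum(1 for p in hero_a if hero_a[p] == hero_b.get(p))
    return shared / len(hero_a)                 # normalized to [0, 1]

def synergy(placement, edges):
    """Average edge weight over the formation graph, cf. equation (1)."""
    total = sum(edge_weight(placement[i], placement[j]) for i, j in edges)
    return total / len(edges)

heroes = [
    {"race": "orc", "religion": "sun",  "nation": "north"},
    {"race": "orc", "religion": "moon", "nation": "north"},
    {"race": "elf", "religion": "sun",  "nation": "south"},
]
edges = [(0, 1), (1, 2), (0, 2)]                # a tiny triangular formation
print(synergy(heroes, edges))                   # prints 0.333...
```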
We formulate a quadratic optimization problem to maximize the synergy constraint, following the Koopmans and Beckmann QAP formulation [9], over all possible selection and assignment permutations. Our solution must search for which items to use, out of the available ones, as well as which position to place the items in.
In addition to the classical QAP, we need to also pre-select \(N\) out of \(M\) possible items before assigning them to locations, which makes the assignment asymmetrical. The asymmetric nature of the placement presents unique challenges, since the number of candidate items \(M\) can be on the order of \(10^{5}\), while the graph consists of at most a few dozen nodes. So, in addition to the optimal placement, the selection of items is part of the optimization.
### _Our Proposed Solution_
In this section we first describe the design to automatically build a deck by finding a feasible solution to a challenge, i.e. one that honors all requirements specific to that puzzle while maximizing the synergy. Using this design, we further employ an evolutionary approach to find a set of optimal solutions, as described in the following section.
It is known that QAPs are difficult to solve and are among the hardest NP-complete problems [1], especially given the size of the combinatorial search space. Any algorithm that guarantees an exact solution (given that it exists) has to consider every item (hero) and every combination, and thus has exponential computational time. Heuristic driven approaches are usually employed to either linearize the problem or find approximate solutions [10].
Potentially, training a Reinforcement Learning agent to solve a puzzle could lead to fast performance. However, with dynamically changing sets of requirements as well as a large pool of candidate items, it could lead to overfitting, would overall result in data inefficiency in training, and might not be practical. There are additional complications of defining an adaptable state representation, as well as catastrophic forgetting (both during training and/or during re-training with new data). This is due to the fact that, though there is a limited, constant set of well-defined requirements that may appear in a given puzzle, each puzzle has a unique combination of completion requirements. Compare to Sudoku solvers [11] as an example, where similar graph-type constraints are defined, but the rules of the game are always the same even when the initial state of the board is different for each puzzle. In addition, placing numbers from 1 to 9 is not comparable to selecting 10 items out of tens of thousands and then placing them optimally.
Another set of approaches commonly applied to similar problems are classical search methods like A-star or Monte-Carlo tree search that are based on heuristic-driven or probability-based backtracking [12]. However, for problems with quadratic constraints, like the graph-dependent synergy value that we are trying to maximize, those approaches have limited applicability. In particular, deriving admissible heuristics for quadratic optimization is a challenge by itself. Additionally, the size of the search space we are dealing with makes those approaches impractical.
As a reminder, our goal is to select \(N\) items, each of which has \(P\) features, from a list of \(M\) items, where \(N\ll M\). These items must be selected and assigned to positions within a graph so as to satisfy a list of requirements and avoid repetitions. The solution strategy we chose is a constructive, randomized, heuristic search as described in Algorithm 1. To avoid deterministic behaviour, we first define a random
traversal order in which to visit each node in the graph. Then, for any unvisited node, we first filter the list of candidates according to the constraint requirements, such that any item chosen from the filtered list is valid. Once the list is filtered, we search through it to find the item that maximizes the synergy with the neighbouring positions that are already filled. The random path traversal ensures diversity, so that the process can be repeated until the synergy requirement is satisfied.
#### IV-B1 Filter the candidate pool
Prior to selecting items at each position that optimize for the synergy, at each step of the algorithm iteration we apply heuristics specific to the problem and the set of requirements in order to filter out all the candidate items that are not eligible for a valid assignment. This look-ahead approach allows us to do forward checking between the current and future candidates and, if at any point the candidate pool is empty, gives an early signal to start over. This step is applied sequentially \(N\) times for each linear constraint for each position until each node in the graph is visited. This heuristic rule is applied for discrete and continuous feature constraints.
Consider a given step of the solution building process when \(L\) positions out of \(N\) are already assigned (or, equivalently, visited), \(0\leq L<N\), and at least \(a\) items of a specific property \(P\) are required: \(\sum_{i=1}^{N}I(x_{i}^{P})\geq a\). First, we compute the intermediate value \(l\) of that constraint using the assigned items: \(l=\sum_{i=1}^{L}I(x_{i}^{P})\). Then, \(l\geq a\) means that the constraint is already satisfied and no action is taken. Alternatively, if \(l<a\), we have \(K=N-L\) nodes yet to be visited, out of which the remaining \(k=a-l\) items have to have property \(P\). Then, if \(K>k\), again no action is taken. Else, if \(K=k\), we filter out all items in our candidate pool that do not have property \(P\), forcing, in subsequent iterations, the selection of items with property \(P\) to honor the requirements. This process is repeated for each requirement. Note that, since at each iteration only one item is selected to fill up a given node, the repetitive application of these heuristics guarantees that \(K\) is never less than \(k\), and at the end all requirements are satisfied.
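A compact version of this filtering rule is sketched below. The data model is hypothetical (constraints given as predicate/threshold pairs of the 'at least \(a\) items with property \(P\)' form, heroes as dicts); it returns a shrunken pool, or an empty one as the early restart signal described above.

```python
# Sketch of the look-ahead filtering rule (hypothetical data model):
# each constraint is a pair (has_property, a) meaning "at least `a` items
# with property P"; `assigned` holds the items already placed.
def filter_pool(pool, assigned, n_total, constraints):
    remaining = n_total - len(assigned)          # K unvisited nodes
    for has_property, a in constraints:
        l = sum(1 for item in assigned if has_property(item))
        k = a - l                                # items with P still needed
        if k <= 0:
            continue                             # constraint already satisfied
        if remaining == k:                       # K == k: force property P
            pool = [item for item in pool if has_property(item)]
        elif remaining < k:
            return []                            # dead end: signal a restart
    return pool

# Example: 10 slots, 7 filled (2 of them orcs), but at least 5 orcs required:
# with K = 3 and k = 3, only orcs survive the filter.
is_orc = lambda h: h["race"] == "orc"
assigned = [{"race": "orc"}]*2 + [{"race": "elf"}]*5
pool = [{"race": "orc"}, {"race": "elf"}, {"race": "orc"}]
print(filter_pool(pool, assigned, 10, [(is_orc, 5)]))
```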
#### IV-B2 Select the best matching item
Once the item candidate pool is filtered, any randomly selected item will satisfy all the linear constraints. At this step the goal is to select the best matching candidate in the partially filled graph, i.e. the one which maximizes the complex constraints. The complex constraints are computed for the position that has to be filled with respect to its immediate neighbors in the partially filled graph. For example, if node \(E\) has to be filled as shown in Fig. 2, it is only influenced by nodes \(A\) and \(C\). For an empty graph, a random item is picked.
Of note is the fact that this intermediate synergy value at any given step does not guarantee that the final solution will comply with the complex requirements, since these are quadratic requirements. If they should fail to be satisfied, the algorithm starts over with a new randomly selected traversal path. Fig. 3 demonstrates two sample solutions which satisfy all linear constraints, which concern the selection of the items, but only the right one satisfies the quadratic constraint (both selection and optimal arrangement). Since the synergy requirement is usually hard to satisfy, potentially requiring a number of attempts that makes the problem intractable for a given search space, our proposed guided, randomized search increases the chances of obtaining feasible solutions in only a few iterations. Fig. 4 shows the results of running the algorithm on a given set of linear requirements with and without heuristics guiding it toward synergy maximization. In both cases the same constructive approach is used to build a solution, so linear constraints are satisfied through the candidate filtering; the only difference is that we do not select the best matching candidate in the latter case. Out of 5000 independent attempts, the maximum synergy value reached without heuristic guidance was 0.35. This is poor performance, as required values in the game are typically above 0.7-0.8, demonstrating the value of heuristic guidance. Solution construction is summarized in Algorithm 1.
```
Initialize: random graph traversal path
while Synergy is not satisfied do
    for each unvisited node in the graph do
        Look ahead: filter the list of candidate items
        for all linear requirements do
            Apply the heuristics rule
        end
        Select the best matching candidate (synergy-wise)
    end
    Compute synergy (Eq. 1)
    if synergy requirement is satisfied then
        return the solution: selected items x, permutation σ
    else
        Pick a new random path and iterate until a solution is obtained
    end
end
```
**Algorithm 1** Guided Randomized Heuristic Search to Construct a Solution
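A compact Python sketch of Algorithm 1 is given below. It reuses the hypothetical helpers from the earlier snippets (`filter_pool`, `edge_weight`, `synergy`) and is only an illustration of the control flow, not the game's actual implementation; nodes are indexed \(0,\dots,N-1\) and edges are index pairs.

```python
import random

# Guided randomized heuristic search (sketch of Algorithm 1). Assumes the
# hypothetical helpers filter_pool, edge_weight and synergy defined earlier.
def build_solution(n_nodes, edges, pool, constraints, min_synergy,
                   max_attempts=10, seed=0):
    rng = random.Random(seed)
    neighbors = {v: [] for v in range(n_nodes)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for _ in range(max_attempts):
        path = list(range(n_nodes))
        rng.shuffle(path)                       # random traversal order
        placement, used = {}, []
        for v in path:
            candidates = [h for h in
                          filter_pool(pool, used, n_nodes, constraints)
                          if h not in used]     # real items carry unique ids
            if not candidates:
                break                           # dead end: restart with new path
            filled = [placement[u] for u in neighbors[v] if u in placement]
            if filled:                          # best synergy with neighbours
                best = max(candidates, key=lambda h:
                           sum(edge_weight(h, fh) for fh in filled))
            else:
                best = rng.choice(candidates)   # empty neighbourhood
            placement[v] = best
            used.append(best)
        if len(placement) == n_nodes:
            solution = [placement[v] for v in range(n_nodes)]
            if synergy(solution, edges) >= min_synergy:
                return solution                 # feasible solution found
    return None                                 # no solution within the budget
```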
The computational complexity of the algorithm is \(O(kNM)=O(NM)\) per iteration in the worst case, where \(k\) is the fixed finite number of linear requirements, on the order of \(1\) to \(6\), \(N\) is \(10\), the number of nodes in the graph, and \(M\) is the size of the candidate pool, on the order of \(10^{4}\). The number of iterations is fixed, typically to a value of \(10\) or less for the purposes of our experiments. In the best case scenario, however, linear requirements lead to a significant reduction
Fig. 2: Graph traversal for searching for the best matching item based on synergy with neighbors. The order of traversal is alphabetical, so that node \(E\) is influenced only by nodes \(A\) and \(C\), since they were the only neighbours visited.
in the size of the candidate pool such that it approaches the size of the graph. In these cases, the complexity approximately reduces to \(O(N^{2})\) per iteration.
It is important to note that exhaustive search is intractable for this type of problem, since the number of all possible combinations of (a) selecting 10 items out of 10 000 and (b) arranging those 10 items into a specific order is roughly of the order of \(10^{39}\). Depending on the requirements of the puzzles, there could be a few hundred thousand possible feasible solutions out of those combinations.
At the same time, human players are able to find solutions without extensively covering the entire search space by using the limited number of items available to them and obtaining a few missing ones, as well as by taking advantage of their intuitive knowledge of the game mechanics and items' properties. Note that the heuristics used in the algorithm are inspired by how human players act: the way they select items and use in-game tools to filter out certain item traits, and how they use their intuition to arrange the items to increase synergy. However, the task of a game designer to find not just a solution, but the 'best' solution, requires extensive coverage of the entire search space, not just using those few items available to each player (which could of course be different for each player).
## V Puzzle Solver Part 2: GA-Based Solution Optimizer
The solution space of the type of puzzles described often consists of multiple solutions, each of which is valid, honoring the constraints; however, some solutions are more desirable than others, for example by having a low price or using items which are easier to obtain. Knowing the optimal solution allows content creators to gauge the desired reward value and puzzle difficulty accordingly.
Optimizing for a solution in the class of constraint satisfaction problems considered poses unique challenges. Since they form discrete combinatorial problems, gradient based optimization methods are not applicable. Various population based optimization methods have been applied to solving similar types of puzzles. In many cases, memetic variants of genetic algorithms, i.e. evolutionary approaches hybridized with constraint satisfaction tools, are used successfully for combinatorial optimization problems [10]. Similar ideas were also used as a hybrid genetic algorithm, such as when applied to the Light-up puzzle [13]. In those approaches, the genetic algorithms always work with feasible solutions, and if the individuals become infeasible after crossover and mutation operations, they are "healed" to restore feasibility. In this work we propose a customized hybrid GA algorithm for finding the optimal solution, in terms of an objective not part of the constraints, by searching the solution space.
### _Graph-Based Hybrid Genetic Algorithm for Solution Optimization_
There are various ways in which we can choose to set up the architecture of the genetic algorithm, such as how to represent the solution space, whether to allow vertical or horizontal crossover, how parents are selected for crossover based on their fitness value, which mutation rate, mutation strategy, and selection strategy to use, and the settings for initial population size, offspring size, and crossover and mutation rates. We are using a Graph-Based Genetic Algorithm (GB-GA) in which the relative locations of nodes to be filled are important, so reshuffling of genes is not allowed [14]. The algorithm steps are described below, highlighting the specifics of our implementation.
#### V-A1 Representation and Initialization
In our terminology, a chromosome is an individual solution to the puzzle, which consists of selected items from the candidate pool assigned to the formation. The same items assigned in a different order are considered two separate solutions. To start off the genetic algorithm, Algorithm 1 is used to randomly generate a number of independent feasible candidate solutions, i.e. chromosomes.
#### V-A2 Crossover and Mutation
After examining several variants of crossover, we have chosen a uniform crossover operation such that each unoccupied position in the offspring solution (chromosome) is assigned an item (gene) from one of the parent solutions with probability \(p=1/2\), while enforcing that no item may be repeated in the offspring solution. The selection of parents for the crossover is performed by a rank-based selection rule [15]. The mutation rate is set at 20%,
Fig. 4: Comparison of Synergy values normalized to [0,1] for an independent set of 5000 attempts for a given challenge with and without heuristics guidance. Vertical axes are frequencies as percentage over all the samples. As can be seen, without the heuristics guidance to continuously select the best matching items, the synergy value never reaches more than 0.3. Results are for a sample challenge.
Fig. 3: Sample solutions to a puzzle with the requirements ‘At least 7 Gobins and at least 2 races.’. In both cases the same heroes are used. However, the arrangements are different which result in a higher synergy (edges) for the solution on the right as party members are placed more optimally.
meaning that each item (gene) has a 20% chance of being replaced, resulting in an average of \(0.2N\) items removed from the graph during a given mutation pass. In detail, a solution is traversed in a random order and at every step a probabilistic decision is made whether to remove a given item from both the solution and the remainder of the candidate pool. The resultant partial solution is then populated again, with the missing positions randomly filled with new items (heroes).
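The two operators can be sketched as follows; a chromosome is represented as a list of \(N\) genes (unique hero identifiers), positions matter, and mutated or conflicting positions are left as holes (`None`) for the healing step described next. This is an illustrative sketch, not the production implementation.

```python
import random

# Uniform crossover with the no-repetition rule, and the 20% mutation pass.
def uniform_crossover(parent_a, parent_b, rng):
    child, seen = [], set()
    for gene_a, gene_b in zip(parent_a, parent_b):
        pick = gene_a if rng.random() < 0.5 else gene_b    # p = 1/2 per slot
        alt = gene_b if pick == gene_a else gene_a
        if pick in seen:
            pick = alt                     # enforce no repetition
        if pick in seen:
            pick = None                    # leave a hole for healing
        child.append(pick)
        if pick is not None:
            seen.add(pick)
    return child

def mutate(chromosome, rng, rate=0.2):
    # each gene has a `rate` chance of removal (about 0.2*N holes on average)
    return [None if rng.random() < rate else gene for gene in chromosome]

rng = random.Random(0)
print(uniform_crossover(list("ABCDE"), list("CDEFG"), rng))
print(mutate(list("ABCDE"), rng))
```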
#### V-A3 Hybrid Approach to Find Feasible Solutions
Note that solutions produced by recombination and mutation are not necessarily feasible, since those operations are not guaranteed to satisfy the challenge constraints. As an extra step before accepting them, we repair those solutions by a process we call 'healing' to ensure that the final offspring solution is feasible, or valid [13, 16].
More specifically, first, those few items in the offspring that were replaced during mutation are removed. Then, the same Algorithm 1 we used to build a new solution is used to fill in the missing items of the partial solution. If no such replacements are possible, the algorithm rejects that solution in favor of a new randomly generated one.
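A sketch of this healing step is shown below, reusing the hypothetical `filter_pool` helper from the earlier snippet; for brevity it refills holes with random valid items, whereas the actual procedure reuses the synergy-guided selection of Algorithm 1.

```python
import random

# Healing sketch: refill holes (None) so the offspring stays feasible, or
# return None to reject the offspring in favor of a fresh random solution.
def heal(child, pool, constraints, rng=random.Random(0)):
    filled = [gene for gene in child if gene is not None]
    for i, gene in enumerate(child):
        if gene is not None:
            continue
        candidates = [h for h in
                      filter_pool(pool, filled, len(child), constraints)
                      if h not in filled]
        if not candidates:
            return None                    # infeasible offspring: reject it
        child[i] = rng.choice(candidates)  # Algorithm 1 would pick by synergy
        filled.append(child[i])
    return child
```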
#### V-A4 Selection and Refreshment
The selection process favors solutions with better fitness and diversity. The new generation of solutions is selected from the offspring and the previous population. If the selected solutions comprise a diverse enough population, the algorithm moves to the next generation by selecting the best-fit individuals. Otherwise, if diversity drops below a threshold, a fixed number of new randomly generated individuals are added to the offspring pool.
### _Maintaining Diversity_
One of the key reasons the algorithm encounters premature stagnation, or trapping, is when the population loses diversity among its individual solutions. If that happens, recombinations and local perturbations through mutations are not able to help the individual solutions escape local minima [17]. We have chosen phenotype diversity as a representative measure relating to the optimal fitness of the solution.
We explicitly track diversity within the population at every generation, and if it falls below a certain user defined threshold (measured by a normalized coefficient of variation), we randomly replace a third of the existing solutions with new individuals1. While maintaining diversity is essential to avoid algorithm stagnation, the diversity measure is still a proxy, as it does not capture the full variation in the solution space. In addition, adding new solutions that are far from the optimal reduces the rate of convergence. Alternatively, one can introduce an adaptive mutation rate, which increases the number of mutations when the diversity of the population stagnates. These approaches, however, slow down the generation process. Additionally, mutation does not always lead out of a local minimum.
Footnote 1: Both the diversity measure and threshold, as well as the percentage of the solutions to be replaced, are engineering hyperparameters that have been tested to be efficient on average.
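A sketch of this refresh step follows; the threshold value, the one-third replacement fraction, and the `new_solution` generator (in our setting, a call to Algorithm 1) are the tunable hyperparameters mentioned in the footnote, and the names used here are hypothetical.

```python
import statistics

# Replace a third of the population with fresh individuals when the
# normalized coefficient of variation of the fitness values collapses.
# `rng` is a random.Random instance; `new_solution` generates a fresh
# feasible individual (e.g. via the constructive search of Algorithm 1).
def refresh_if_stagnant(population, fitness, new_solution, rng,
                        threshold=0.05):
    values = [fitness(p) for p in population]
    mean = statistics.mean(values)
    cv = statistics.pstdev(values) / (abs(mean) + 1e-12)
    if cv < threshold:                         # diversity has collapsed
        k = len(population) // 3
        for idx in rng.sample(range(len(population)), k):
            population[idx] = new_solution()   # inject a fresh random solution
    return population
```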
One of the most successful solutions to this problem is the "multi-population" or "multi-island" model [17, 18, 19], which allows a unique search trajectory to be followed by each island, resulting in a more efficient exploration of the search space. An additional benefit is that it is inherently parallelizable and can be implemented employing distributed computing.
In our current work we have designed a multi-island approach with migration to help the genetic process maintain diversity. The overall population is partitioned into sub-populations based on their similarity, and each sub-population is assigned to an island. After that, each island evolves independently, and then, after a fixed number of sub-generations or epochs, migration allows the islands to interact. The traditional island model requires additional parameters such as the number of islands, the size of the population of each island, the migration frequency, migration rate, migration topology, and migration policy. The migration strategy between the islands directly affects the performance. It also impacts the optimization of the algorithm by balancing exploration and exploitation and indirectly supporting diversity. As a result, each island explores a separate optimization path, resulting in broader coverage of the search space.
The exact mechanism of migration between islands used in this paper follows a fully-connected migration pattern: after a fixed number of sub-generations within islands (10 sub-generations), all solutions across all islands are combined, sorted by their fitness similarity, and then divided amongst the islands equally such that the most similar solutions are grouped together.
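This migration step reduces to a few lines of code, sketched below under the same illustrative assumptions as the earlier snippets: all islands are pooled, sorted by fitness (the similarity measure used here), and re-split so that similar solutions share an island.

```python
# Fully-connected migration: pool all islands, sort by fitness similarity,
# and redistribute so that the most similar solutions are grouped together.
def migrate(islands, fitness):
    pooled = sorted((s for island in islands for s in island), key=fitness)
    size = len(pooled) // len(islands)
    return [pooled[i*size:(i + 1)*size] for i in range(len(islands))]
```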
There are multiple choices of migration strategy [20], since islands can be clustered together using various similarity measures, either according to their fitness values, or through other measures of diversity like entropy [21], or even dynamically through spectral clustering based on the pair-wise similarities between individuals [22]. In general, having more densely connected islands gives a higher accuracy of the lower bound, but it is more computationally expensive. In the current work we have chosen the fully-connected island model, where migration of individuals is not constrained, and we use fitness as our similarity measure.
### _Experimental Setup_
To demonstrate the performance of our approach, we defined 3 types of puzzles that cover different requirements and sizes of the search space. Certain requirements increase the difficulty of achieving a specific synergy threshold if they create a large search space. Opposed to these are constraints that greatly reduce the number of candidate items, resulting in quicker convergence, since synergy is easier to satisfy with more similar items. Our representative puzzles of each type are:
* **Type 1** Min Synergy 0.8, Min Team Level 84
* **Type 2** Min Synergy 0.9, Min Team Level 75, at least 8 religions, at least 6 hometowns, at most 2 heroes with the same religion
* **Type 3** Min Synergy 0.8, Min Team Level 84, Min of 8 elves
For each of the selected challenges the optimal solutions are provided by the game designers as benchmarks. The parameters of the Genetic Approach were selected through trial and error to perform reasonably well across various puzzles, as summarized in Table I. Our goal was to demonstrate the advantage of the multi-island approach in maintaining diversity, in comparison with a single-population approach, thus leading to more reliable results, while the actual model parameters can be tuned for a problem of interest. For example, the authors in [23] present an N-Tuple Bandit Evolutionary approach to automatically optimize hyperparameters.
### _Numerical Results_
The experiments were conducted for the three types of puzzles, comparing the "vanilla" hybrid genetic algorithm with the "multi-island" approach. Fig. 5 compares the behaviour of the two approaches across 25 independent restarts with a fixed computational budget for each scenario, so that average performance can be traced. Performance is defined by the rate of convergence and solution population diversity. During the early generations, the "vanilla" genetic approach slightly outperforms the multi-island one, since it has more individuals to choose from. Averaged over time, however, the multi-island approach better approximates the lower bound with less variance, as it manages to escape local minima through interaction between the islands. We conclude that overall the multi-island approach consistently outperforms the standard GA within the same time budget, while maintaining at least twice the diversity. The effect of the initial generations fades out fast for both approaches. The steps on the convergence curves reflect the reshuffling between the islands.
### _Notes on Algorithm Feasibility_
As with any Genetic Algorithm for combinatorial non-convex optimization, there is no way to prove that any of the local minima are actually the global minima. The approach we took is to show that even though the algorithm is random in nature (both because of the initial population and the recombinations), running it repetitively for the same puzzle shows that the initial population, as well as the series of random crossovers/mutations, all lead to the same minimal fitness value solution, on average. Even more, by maintaining the diversity of the population through a multi-island approach, we can avoid algorithm stagnation and keep exploring the solution space until a satisfying minimal solution is found.
In theory, the actual global minimum is not known for these types of problems. So we are effectively comparing our genetic approach to the alternative of finding and enumerating all possible solutions and selecting the 'best' one. That could possibly mean finding a few hundred thousand solutions. Even though each of them takes about 10 seconds on average to find on a standard CPU machine, it would overall take up to \(300\) hours (\(10\times 100000/3600\)), while with the Genetic Algorithm we can limit that time to at most \(10\) minutes, by reducing the number of independent solutions we need to find to only those that initialize the population, while recombinations are relatively cheap to compute. The number of puzzles generated by designers then
\begin{table}
\begin{tabular}{l l l} \hline Parameter & vanilla GA & multi-island GA \\ \hline mutation rate & \(0.2N\) & \(0.2N\) \\ crossover rate & 0.5 & 0.5 \\ pop/offspring size & 50/100 & 10/20 per island \\ migration type & N/A & similarity based \\ similarity measure & N/A & fitness \\ migration frequency & N/A & 10 \\ islands epochs & N/A & 10 \\ generations & 100 & 100 \\ \hline \end{tabular}
\end{table} TABLE I: GA parameters for “vanilla” and “multi-islands”
Fig. 5: **Top row**: Convergence curves of best fit solutions for challenges Type1, Type2 and Type3, comparing the two approaches while maintaining diversity in both cases, averaged over 50 independent runs. The solid lines indicate the best fitness value across all the runs, the dotted lines are median values at each generation, and shaded regions cover each run. The solid black line represents the optimal solution. **Bottom row**: Diversity at each generation, solid lines: minimum, dotted line: median.
becomes an important factor, as they are constantly generating new puzzles (on average \(20\) per day) and need the solver to run in actionable time to gain knowledge of the attributes of the solution space.
## VI Conclusion and Future Work
Our approach is motivated by the recurring problem designers face of improving and optimizing the task of creating new quality puzzle variations. We demonstrate our use case on the Party Building Puzzle game, where players collect heroes and later select from their collection which ones to use and how to arrange them in order to satisfy the puzzle constraints. Our approach optimizes for the minimum in-game resource value of a solution, to help designers evaluate and compare the puzzles they create. In addition, the proposed solution had to be capable of running in feasible time, to allow designers to constantly iterate over their designs in just a few hours.
In this work we have proposed an efficient constructive randomized search algorithm that builds a solution using heuristics specific to the puzzle's constraints, and shown that a hybrid, graph-based genetic approach allows us to find near-optimal solutions to the puzzles. One of the challenges of this type of combinatorial optimization is an early drop into a local optimum. We have experimentally demonstrated how the multi-island approach with a randomized selection strategy allows us to reach a near-optimal solution.
As mentioned above, instead of decoupling the constrained optimization problem into two different ones, namely constraint satisfaction for a QAP and a combinatorial optimization to find the best performing solution, one could alternatively formulate the problem as a multi-objective optimization, where constraints like synergy, ratings, etc. are optimized in combination with the fitness. A comparative study of the performance of these approaches is left for future work.
An avenue to explore is methods that can further investigate the search space of solutions. In an effort to increase the diversity of potential solutions, and thus provide designers with a more detailed insight into their own design, we plan to explore algorithms for illuminating search spaces, such as Map-Elites [24, 25]. Map-Elites explores different dimensions of the search space, and in doing so also provides alternative solutions that are more robust in avoiding the local minima problem discussed above.
We have demonstrated that the power of Genetic Algorithms can be exploited for NP-hard combinatorial optimization problems with a large non-unique search space and in the presence of additional constraints. Our proposed framework, with the novel puzzle representation and the custom-designed multi-island graph-based genetic approach, could be adapted to other problems with similar properties, as long as the genetic operations are specified for the problems of interest. The method could also be extended to solve puzzles starting from a partial state, since there is no dependency on a prior state.
## Acknowledgment
We would like to genuinely thank the anonymous Reviewers for all valuable comments and suggestions, which helped us to improve the quality of the manuscript.
|
2301.10041 | A parallel solver for FSI problems with fictitious domain approach | We present and analyze a parallel solver for the solution of fluid structure
interaction problems described by a fictitious domain approach. In particular,
the fluid is modeled by the non-stationary incompressible Navier-Stokes
equations, while the solid evolution is represented by the elasticity
equations. The parallel implementation is based on the PETSc library and the
solver has been tested in terms of robustness with respect to mesh refinement
and weak scalability by running simulations on a Linux cluster. | Daniele Boffi, Fabio Credali, Lucia Gastaldi, Simone Scacchi | 2023-01-24T14:32:35Z | http://arxiv.org/abs/2301.10041v2 | # A parallel solver for FSI problems
###### Abstract.
We present and analyze a parallel solver for the solution of fluid structure interaction problems described by a fictitious domain approach. In particular, the fluid is modeled by the non-stationary incompressible Navier-Stokes equations, while the solid evolution is represented by the elasticity equations. The parallel implementation is based on the PETSc library and the solver has been tested by running simulations on a Linux cluster.
**Keywords**: fluid-structure interactions, fictitious domain, preconditioners, parallel solver.
2020 Mathematics Subject Classification: 65N30, 65N12, 74F10, 65F08
## 1. Introduction
In this work, we continue the analysis of the parallel solver for fluid structure interaction problems with fictitious domain approach, recently introduced in [7]. In particular, here we focus our attention on robustness with respect to mesh refinement, with different choices of time step, and weak scalability.
Our fictitious domain approach with distributed Lagrange multiplier was introduced in [5] as evolution of the immersed boundary method [11, 9]. The fluid is governed by the incompressible time dependent Navier-Stokes equations, while the immersed structure is characterized by linear and nonlinear constitutive laws. For the finite element discretization, we choose the \((\mathcal{Q}_{2},\mathcal{P}_{1})\) element for velocity and pressure of the fluid and the \(\mathcal{Q}_{1}\) element for the structure variables; the time marching is a first order semi-implicit finite difference scheme. Moreover, the fluid-structure coupling matrix is assembled by exact computations over non-matching meshes as described in [6].
At each time step, the linear system arising from the discretization is solved by employing the GMRES method, combined with either a block-diagonal or a block-triangular preconditioner.
In Section 2 we present the mathematical model describing fluid structure interaction problems in the spirit of the fictitious domain approach. In Section 3, we describe the numerical method we implemented for our simulations and in Section 4 we introduce two possible choices of preconditioner for our parallel solver. Finally, in Section 5 we present some numerical tests in terms of robustness with respect to mesh refinement and weak scalability.
## 2. Continuous formulation
We simulate fluid-structure interaction problems characterized by a visco-elastic incompressible solid body immersed in a viscous incompressible fluid. We denote by \(\Omega_{t}^{f}\) and \(\Omega_{t}^{s}\) the two regions in \(\mathbb{R}^{d}\) (with \(d=2,3\)) occupied by the fluid and the structure, respectively, at the time instant \(t\); the interface between these two regions is denoted by \(\Gamma_{t}\). The evolution of such a system takes place inside \(\Omega\), that is the union of \(\Omega_{t}^{f}\) and \(\Omega_{t}^{s}\): this new domain is independent of time.
In this paper we consider a _Newtonian fluid_ characterized by density \(\rho_{f}\) and viscosity \(\nu_{f}>0\). The immersed solid is visco-elastic, so that its Cauchy stress tensor \(\boldsymbol{\sigma}_{s}\) can be seen as the sum of two contributions: a viscous part, similar to the one of the fluid
\[\boldsymbol{\sigma}_{s}^{v}=-p_{s}\mathbb{I}+\nu_{s}\,\boldsymbol{\varepsilon}( \mathbf{u}_{s}), \tag{1}\]
and an elastic part which can be written in terms of the Piola-Kirchhoff stress tensor \(\mathbb{P}\)
\[\mathbb{P}(\mathbb{F}(\mathbf{s},t))=J(\mathbf{s},t)\boldsymbol{\sigma}_{s}^{ e}\mathbb{F}(\mathbf{s},t)^{-\top}\quad\text{for $\mathbf{x}=\mathbf{X}(\mathbf{s},t)$}. \tag{2}\]
In particular, hyperelastic materials are characterized by a positive energy density \(W(\mathbb{F})\) which is related with \(\mathbb{P}\) since \(\mathbb{P}(\mathbb{F})=\partial W/\partial\mathbb{F}\).
Finally, the system is described by the following equations in strong form
\[\begin{split}&\rho_{f}\bigg{(}\frac{\partial\mathbf{u}_{f}}{\partial t}+\mathbf{u}_{f}\cdot\boldsymbol{\nabla}\,\mathbf{u}_{f}\bigg{)}=\operatorname{div}\boldsymbol{\sigma}_{f}&\text{in $\Omega_{t}^{f}$}\\ &\operatorname{div}\mathbf{u}_{f}=0&\text{in $\Omega_{t}^{f}$}\\ &\rho_{s}\,\frac{d\mathbf{u}_{s}}{dt}=\operatorname{div}\big{(}\boldsymbol{\sigma}_{s}^{v}+J^{-1}\mathbb{P}\mathbb{F}^{\top}\big{)}&\text{in $\Omega_{t}^{s}$}\\ &\operatorname{div}\mathbf{u}_{s}=0&\text{in $\Omega_{t}^{s}$}\\ &\frac{\partial\mathbf{X}}{\partial t}(\mathbf{s},t)=\mathbf{u}_{s}(\mathbf{X}(\mathbf{s},t),t)&\text{for $\mathbf{s}\in\mathcal{B}$}\end{split} \tag{6}\]
completed by two transmission conditions along the interface \(\Gamma_{t}\)
\[\begin{array}{ll}\mathbf{u}_{f}=\mathbf{u}_{s}&\text{on }\Gamma_{t}\\ \boldsymbol{\sigma}_{f}\mathbf{n}_{f}=-(\boldsymbol{\sigma}_{s}^{v}+J^{-1} \mathbb{P}\mathbb{F}^{\top})\mathbf{n}_{s}&\text{on }\Gamma_{t},\end{array} \tag{7}\]
where \(\mathbf{n}_{f}\) and \(\mathbf{n}_{s}\) denote the outer normals to \(\Omega_{t}^{f}\) and \(\Omega_{t}^{s}\), respectively. Moreover, we consider the following initial and boundary conditions
\[\begin{array}{ll}\mathbf{u}_{f}(0)=\mathbf{u}_{f,0}&\text{in }\Omega_{0}^{f}\\ \mathbf{u}_{s}(0)=\mathbf{u}_{s,0}&\text{in }\Omega_{0}^{s}\\ \mathbf{X}(0)=\mathbf{X}_{0}&\text{in }\mathcal{B}\\ \mathbf{u}_{f}=0&\text{on }\partial\Omega.\end{array} \tag{8}\]
The idea of the fictitious domain approach is to extend the first two equations in (6) to the whole domain \(\Omega\); consequently, following [5], we introduce two new unknowns
\[\mathbf{u}=\left\{\begin{array}{ll}\mathbf{u}_{f}&\text{in }\Omega_{t}^{f}\\ \mathbf{u}_{s}&\text{in }\Omega_{t}^{s}\end{array}\right.\qquad p=\left\{ \begin{array}{ll}p_{f}&\text{in }\Omega_{t}^{f}\\ p_{s}&\text{in }\Omega_{t}^{s}.\end{array}\right. \tag{9}\]
In this new setting, the kinematic condition in (6) becomes a constraint on \(\mathbf{u}\), since we have to impose that
\[\mathbf{u}(\mathbf{X}(\mathbf{s},t),t)=\frac{\partial\mathbf{X}}{\partial t}( \mathbf{s},t)\quad\text{for }\mathbf{s}\in\mathcal{B}. \tag{10}\]
This condition can be weakly enforced by employing a distributed Lagrange multiplier. To this end, we introduce a suitable functional space \(\boldsymbol{\Lambda}\) and a continuous bilinear form \(\mathbf{c}\) such that
\[\begin{array}{l}\mathbf{c}:\boldsymbol{\Lambda}\times H^{1}(\mathcal{B})^{d }\longrightarrow\mathbb{R}\\ \mathbf{c}(\boldsymbol{\mu},\mathbf{Y})=0\quad\forall\boldsymbol{\mu}\in \boldsymbol{\Lambda}\implies\mathbf{Y}=0.\end{array} \tag{11}\]
We set \(\mathbf{c}\) to be the duality pairing between \(H^{1}(\mathcal{B})^{d}\) and its dual \(\boldsymbol{\Lambda}=(H^{1}(\mathcal{B})^{d})^{\prime}\), so that we have
\[\mathbf{c}(\boldsymbol{\mu},\mathbf{Y})=\langle\boldsymbol{\mu},\mathbf{Y} \rangle\quad\forall\boldsymbol{\mu}\in(H^{1}(\mathcal{B})^{d})^{\prime},\, \forall\mathbf{Y}\in H^{1}(\mathcal{B})^{d}. \tag{12}\]
At this point, following [5, 8], the equations in (6), endowed with conditions (7) and (8), can be written in variational form.
For our simulations, we consider a simplified version of the problem: we assume that fluid and solid materials have the same density, i.e., \(\rho_{s}=\rho_{f}\), and we drop the convective term of the Navier-Stokes equations.
**Problem 1**.: _Given \(\mathbf{u}_{0}\in H^{1}_{0}(\Omega)^{d}\) and \(\mathbf{X}_{0}\in W^{1,\infty}(\mathcal{B})\), find \(\mathbf{u}(t)\in H^{1}_{0}(\Omega)^{d}\), \(p(t)\in L^{2}_{0}(\Omega)\), \(\mathbf{X}(t)\in H^{1}(\mathcal{B})^{d}\), and \(\boldsymbol{\lambda}(t)\in\boldsymbol{\Lambda}\) such that for almost all \(t\in(0,T)\):_
\[\rho_{f}\left(\frac{\partial}{\partial t}\mathbf{u}(t),\mathbf{v }\right)_{\Omega}+a\left(\mathbf{u}(t),\mathbf{v}\right)\] \[\qquad\qquad\qquad-\left(\operatorname{div}\mathbf{v},p(t) \right)_{\Omega}+\langle\boldsymbol{\lambda}(t),\mathbf{v}(\mathbf{X}(\cdot,t ))\rangle=0 \forall\mathbf{v}\in H^{1}_{0}(\Omega)^{d}\] \[\left(\operatorname{div}\mathbf{u}(t),q\right)_{\Omega}=0 \forall q\in L^{2}_{0}(\Omega)\] \[\left(\mathbb{P}(\mathbb{F}(t)),\nabla_{s}\mathbf{Y}\right)_{ \mathcal{B}}-\langle\boldsymbol{\lambda}(t),\mathbf{Y}\rangle=0 \forall\mathbf{Y}\in H^{1}(\mathcal{B})^{d}\] \[\langle\boldsymbol{\mu},\mathbf{u}(\mathbf{X}(\cdot,t),t)-\frac {\partial\mathbf{X}}{\partial t}(t)\rangle=0 \forall\boldsymbol{\mu}\in\boldsymbol{\Lambda}\] \[\mathbf{u}(\mathbf{x},0)=\mathbf{u}_{0}(\mathbf{x}) \text{in }\Omega\] \[\mathbf{X}(\mathbf{s},0)=\mathbf{X}_{0}(\mathbf{s}) \text{in }\mathcal{B}.\]
In particular, \(a(\mathbf{u},\mathbf{v})=\nu\big{(}\boldsymbol{\varepsilon}(\mathbf{u}),\boldsymbol{\varepsilon}(\mathbf{v})\big{)}_{\Omega}\), where \(\nu\) is the extended viscosity with value \(\nu_{f}\) in \(\Omega^{f}_{t}\) and \(\nu_{s}\) in \(\Omega^{s}_{t}\).
## 3. Discrete formulation
Before discussing the discrete formulation, we remark that, from now on, we focus on two dimensional problems (\(d=2\)).
The time semi-discretization of Problem 1 is based on the Backward Euler scheme. The time interval \([0,T]\) is partitioned into \(N\) parts with uniform size \(\Delta t=T/N\). We denote the subdivision nodes by \(t_{n}=n\Delta t\). For a generic function \(g\) depending on time, setting \(g^{n}=g(t_{n})\), the time derivative is approximated as
\[\frac{\partial g}{\partial t}(t_{n+1})\approx\frac{g^{n+1}-g^{n}}{\Delta t}. \tag{13}\]
Therefore, we obtain a semi-implicit first order scheme in time.
On the other hand, for the discretization in space, we work with quadrilateral meshes for both fluid and solid.
For the fluid, we consider a mesh \(\mathcal{T}^{\Omega}_{h}\) for \(\Omega\) with meshsize \(h_{\Omega}\). We then consider two finite element spaces \(\mathbf{V}_{h}\subset H^{1}_{0}(\Omega)^{d}\) and \(Q_{h}\subset L^{2}_{0}(\Omega)\) for velocity and pressure, respectively, satisfying the inf-sup condition for the Stokes problem. In particular, we work with the \((\mathcal{Q}_{2},\mathcal{P}_{1})\) pair.
On the other hand, for \(\mathcal{B}\), we choose a mesh \(\mathcal{T}^{\mathcal{B}}_{h}\) with meshsize \(h_{\mathcal{B}}\), independent of \(\mathcal{T}^{\Omega}_{h}\). We then consider two finite dimensional spaces \(\mathbf{S}_{h}\subset H^{1}(\mathcal{B})^{d}\) and \(\boldsymbol{\Lambda}_{h}\subset\boldsymbol{\Lambda}\). We assume that \(\mathbf{S}_{h}=\boldsymbol{\Lambda}_{h}\) and we approximate both the variable \(\mathbf{X}\) and the Lagrange multiplier with piecewise bilinear elements on quadrilaterals.
We notice that, since \(\boldsymbol{\Lambda}_{h}\) is included in \(L^{2}(\mathcal{B})^{d}\), at discrete level the coupling bilinear form \(\mathbf{c}\) can be replaced by the scalar product in \(L^{2}(\mathcal{B})^{d}\)
\[\mathbf{c}(\boldsymbol{\mu}_{h},\mathbf{Y}_{h})=(\boldsymbol{\mu}_{h},\mathbf{ Y}_{h})_{\mathcal{B}}\qquad\forall\boldsymbol{\mu}_{h}\in\boldsymbol{\Lambda}_{h} \,\forall\mathbf{Y}_{h}\in\mathbf{S}_{h}. \tag{14}\]
Therefore, we get the following full discretized problem.
**Problem 2**.: _Given \(\mathbf{u}_{0,h}\in\mathbf{V}_{h}\) and \(\mathbf{X}_{0,h}\in\mathbf{S}_{h}\), for all \(n=0,\ldots,N-1\) find \(\mathbf{u}_{h}^{n+1}\in\mathbf{V}_{h}\), \(p_{h}^{n+1}\in Q_{h}\), \(\mathbf{X}_{h}^{n+1}\in\mathbf{S}_{h}\), and \(\boldsymbol{\lambda}_{h}^{n+1}\in\boldsymbol{\Lambda}_{h}\) fulfilling:_
\[\rho_{f}\left(\frac{\mathbf{u}_{h}^{n+1}-\mathbf{u}_{h}^{n}}{\Delta t},\mathbf{v}_{h}\right)_{\Omega}+a\left(\mathbf{u}_{h}^{n+1},\mathbf{v}_{h}\right)\] \[\qquad\qquad-\left(\operatorname{div}\mathbf{v}_{h},p_{h}^{n+1}\right)_{\Omega}+\left(\boldsymbol{\lambda}_{h}^{n+1},\mathbf{v}_{h}(\mathbf{X}_{h}^{n})\right)_{\mathcal{B}}=0 \forall\mathbf{v}_{h}\in\mathbf{V}_{h}\] \[\left(\operatorname{div}\mathbf{u}_{h}^{n+1},q_{h}\right)_{\Omega}=0 \forall q_{h}\in Q_{h}\] \[\left(\mathbb{P}(\mathbb{F}_{h}^{n+1}),\nabla_{s}\mathbf{Y}_{h}\right)_{\mathcal{B}}-\left(\boldsymbol{\lambda}_{h}^{n+1},\mathbf{Y}_{h}\right)_{\mathcal{B}}=0 \forall\mathbf{Y}_{h}\in\mathbf{S}_{h}\] \[\left(\boldsymbol{\mu}_{h},\mathbf{u}_{h}^{n+1}(\mathbf{X}_{h}^{n})-\frac{\mathbf{X}_{h}^{n+1}-\mathbf{X}_{h}^{n}}{\Delta t}\right)_{\mathcal{B}}=0 \forall\boldsymbol{\mu}_{h}\in\boldsymbol{\Lambda}_{h}\] \[\mathbf{u}_{h}^{0}=\mathbf{u}_{0,h},\quad\mathbf{X}_{h}^{0}=\mathbf{X}_{0,h}.\]
Assuming for simplicity \(\mathbb{P}(\mathbb{F})=\kappa\mathbb{F}\), Problem 2 can be represented in matrix form as
\[\begin{bmatrix}\mathsf{A}_{f}&-\mathsf{B}^{\top}&\mathsf{0}&\mathsf{C}_{f}(\mathbf{X}_{h}^{n})^{\top}\\ -\mathsf{B}&\mathsf{0}&\mathsf{0}&\mathsf{0}\\ \mathsf{0}&\mathsf{0}&\mathsf{A}_{s}&-\mathsf{C}_{s}^{\top}\\ \mathsf{C}_{f}(\mathbf{X}_{h}^{n})&\mathsf{0}&-\frac{1}{\Delta t}\mathsf{C}_{s}&\mathsf{0}\end{bmatrix}\begin{bmatrix}\mathbf{u}_{h}^{n+1}\\ p_{h}^{n+1}\\ \mathbf{X}_{h}^{n+1}\\ \boldsymbol{\lambda}_{h}^{n+1}\end{bmatrix}=\begin{bmatrix}\mathsf{g}_{1}\\ \mathsf{0}\\ \mathsf{0}\\ \mathsf{g}_{2}\end{bmatrix}, \tag{15}\]
with
\[\mathsf{A}_{f}=\frac{\rho_{f}}{\Delta t}\mathsf{M}_{f}+\mathsf{K}_{f}\] \[(\mathsf{M}_{f})_{ij}=\left(\boldsymbol{\phi}_{j},\boldsymbol{ \phi}_{i}\right)_{\Omega},\quad(\mathsf{K}_{f})_{ij}=a\left(\boldsymbol{\phi }_{j},\boldsymbol{\phi}_{i}\right)\] \[\mathsf{B}_{ki}=\left(\operatorname{div}\boldsymbol{\phi}_{i}, \psi_{k}\right)_{\Omega}\] \[(\mathsf{A}_{s})_{ij}=\kappa\left(\nabla_{s}\boldsymbol{\chi}_{j}, \nabla_{s}\boldsymbol{\chi}_{i}\right)_{\mathcal{B}}\] \[(\mathsf{C}_{f}(\mathbf{X}_{h}^{n}))_{\ell j}=\left(\boldsymbol{ \chi}_{\ell},\boldsymbol{\phi}_{j}(\mathbf{X}_{h}^{n})\right)_{\mathcal{B}}, \quad(\mathsf{C}_{s})_{\ell j}=\left(\boldsymbol{\chi}_{\ell},\boldsymbol{ \chi}_{j}\right)_{\mathcal{B}}\] \[\mathsf{g}_{1}=\frac{\rho_{f}}{\Delta t}\mathsf{M}_{f}\mathbf{u}_ {h}^{n},\quad\mathsf{g}_{2}=-\frac{1}{\Delta t}\mathsf{C}_{s}\mathbf{X}_{h}^{n}.\]
In particular, \(\boldsymbol{\phi}_{i}\) and \(\psi_{k}\) denote the basis functions for \(\mathbf{V}_{h}\) and \(Q_{h}\) respectively, while \(\boldsymbol{\chi}_{j}\) are the basis functions for the space defined on \(\mathcal{B}\). We observe that, due to our choices, \(\mathsf{C}_{s}\) represents a mass matrix.
We can observe that the matrix in (15) splits into four blocks, defined as follows:
\[\mathcal{A}_{11}=\begin{bmatrix}\mathsf{A}_{f}&-\mathsf{B}^{\top}\\ -\mathsf{B}&\mathsf{0}\end{bmatrix} \mathcal{A}_{12}=\begin{bmatrix}\mathsf{0}&\mathsf{C}_{f}(\mathbf{X}_ {h}^{n})^{\top}\\ \mathsf{0}&\mathsf{0}\end{bmatrix}\] \[\mathcal{A}_{21}=\begin{bmatrix}\mathsf{0}&\mathsf{0}\\ \mathsf{C}_{f}(\mathbf{X}_{h}^{n})&\mathsf{0}\end{bmatrix} \mathcal{A}_{22}=\begin{bmatrix}\mathsf{A}_{s}&-\mathsf{C}_{s}^{\top}\\ -\frac{1}{\Delta t}\mathsf{C}_{s}&\mathsf{0}\end{bmatrix}\]
where \(\mathcal{A}_{11}\) is related to the fluid dynamics, \(\mathcal{A}_{22}\) to the solid evolution, while \(\mathcal{A}_{12}\) and \(\mathcal{A}_{21}\) contain the coupling terms.
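As an illustration, the block system (15) could be assembled as in the following sketch, assuming the individual finite element blocks are already available as sparse matrices. This is a compact Python analogue, not the Fortran90/PETSc implementation used here.

```python
import numpy as np
import scipy.sparse as sp

def assemble_fsi_system(Mf, Kf, B, As, Cf, Cs, dt, rho_f, u_n, X_n):
    """Assemble the saddle point system (15). All blocks are assumed to
    be given as scipy.sparse matrices, with Cf already evaluated at the
    current solid position X^n (illustrative sketch only)."""
    Af = (rho_f / dt) * Mf + Kf
    A = sp.bmat([[Af,    -B.T,  None,       Cf.T],
                 [-B,    None,  None,       None],
                 [None,  None,  As,        -Cs.T],
                 [Cf,    None,  -Cs / dt,   None]], format="csr")
    g1 = (rho_f / dt) * (Mf @ u_n)   # right-hand side, velocity block
    g2 = -(Cs @ X_n) / dt            # right-hand side, multiplier block
    rhs = np.concatenate([g1, np.zeros(B.shape[0]),
                          np.zeros(As.shape[0]), g2])
    return A, rhs
```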
Particular attention has to be paid to the assembly of the coupling matrix \(\mathsf{C}_{f}(\mathbf{X}_{h}^{n})\), since it involves the integration over \(\mathcal{B}\) of solid and fluid basis functions, taking into account the position of \(\Omega_{t}^{s}\). For more details about the procedure in a similar situation, we refer to [6]. In particular, we implement an exact quadrature rule by computing, at each time step, the intersection between the fluid mesh \(\mathcal{T}_{h}^{\Omega}\) and the mapped solid mesh \(\mathbf{X}_{h}^{n}(\mathcal{T}_{h}^{\mathcal{B}})\).
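The intersection detection itself can be pictured with the following sketch of the two nested loops described in Section 5. The bounding-box test is a stand-in for the exact polygon intersection of [6], and the round-robin distribution of the outer loop over processors is our assumption.

```python
def coupling_pairs(solid_cells, fluid_cells, my_rank, n_procs):
    """Detect candidate solid/fluid element intersections. Only the
    outer (solid) loop is distributed over the processors; the inner
    (fluid) loop runs serially on each processor."""
    def bbox(cell):  # cells are lists of 2D vertex coordinates
        xs = [p[0] for p in cell]
        ys = [p[1] for p in cell]
        return min(xs), max(xs), min(ys), max(ys)

    pairs = []
    for s_id in range(my_rank, len(solid_cells), n_procs):   # distributed
        sx0, sx1, sy0, sy1 = bbox(solid_cells[s_id])
        for f_id, f_cell in enumerate(fluid_cells):          # serial
            fx0, fx1, fy0, fy1 = bbox(f_cell)
            if sx0 <= fx1 and fx0 <= sx1 and sy0 <= fy1 and fy0 <= sy1:
                pairs.append((s_id, f_id))   # quadrature over the overlap
    return pairs
```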
In general, when \(\mathbb{P}(\mathbb{F})\) is nonlinear, the system is solved by means of Newton's method.
## 4. Parallel preconditioners
The design of an efficient parallel solver involves two aspects of the numerical method: firstly, the finite element matrices need to be assembled in parallel on each processor and, secondly, the saddle point system arising from the discretization has to be solved while saving computational resources, in terms of both memory and execution time. For this purpose, we implemented a Fortran90 code based on the PETSc library [4, 3].
We introduce two possible choices of preconditioner:
* _block-diagonal preconditioner_ \[\begin{bmatrix}\mathcal{A}_{11}&\mathbf{0}\\ \mathbf{0}&\mathcal{A}_{22}\end{bmatrix}\]
* _block-triangular preconditioner_ \[\begin{bmatrix}\mathcal{A}_{11}&\mathbf{0}\\ \mathcal{A}_{21}&\mathcal{A}_{22}\end{bmatrix}.\]
We solve the linear system making use of the parallel GMRES method combined with the action of our preconditioners, which consists of the exact inversion of the diagonal blocks, performed by the parallel direct solver Mumps [1, 2].
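For illustration, the action of the block-triangular preconditioner can be sketched as follows, with a serial sparse LU factorization standing in for Mumps; this is not the parallel implementation, only a minimal analogue.

```python
import numpy as np
from scipy.sparse.linalg import splu

def block_tri_preconditioner(A11, A21, A22):
    """Action of the block-triangular preconditioner: exact inversion
    of the diagonal blocks (sparse LU here stands in for Mumps)."""
    lu11 = splu(A11.tocsc())
    lu22 = splu(A22.tocsc())
    n1 = A11.shape[0]

    def apply(r):
        y1 = lu11.solve(r[:n1])              # fluid block
        y2 = lu22.solve(r[n1:] - A21 @ y1)   # solid block, coupling update
        return np.concatenate([y1, y2])
    return apply
```

Setting `A21` to zero recovers the block-diagonal variant; wrapped in a `scipy.sparse.linalg.LinearOperator`, `apply` can be passed as the preconditioner argument `M` of `scipy.sparse.linalg.gmres`.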
## 5. Numerical tests
The proposed preconditioners have been widely studied in [7] in terms of robustness with respect to mesh refinement, strong scalability and refinement of the time step. In this work, after reporting new results in terms of optimality, we analyze the weak scalability of our solver. We focus on both linear and nonlinear models describing the solid material.
The simulations have been run on the Shaheen cluster at King Abdullah University of Science and Technology (KAUST, Saudi Arabia). It is a Cray XC40 cluster consisting of 6,174 dual-socket compute nodes based on 16-core Intel Haswell processors running at 2.3 GHz. Each node has 128 GB of DDR4 memory running at 2300 MHz.
### Linear solid model
We consider the quarter of the elastic annulus \(\{\mathbf{x}\in\mathbb{R}^{2}:0.3\leq|\mathbf{x}|\leq 0.5\}\) included in \(\Omega=[0,1]^{2}\): in particular, the solid reference domain corresponds to the resting configuration of the body, that is
\[\mathcal{B}=\{\mathbf{s}=(s_{1},s_{2})\in\mathbb{R}^{2}:\,s_{1},s_{2}\geq 0, \,0.3\leq|\mathbf{s}|\leq 0.5\}.\]
The dynamics of the system is generated by stretching the annulus and observing how internal forces bring it back to the resting condition. In this case, \(\Omega_{0}^{s}\) coincides with the stretched annulus. Four snapshots of the evolution are shown in Figure 1.
The solid behavior is governed by a linear model, therefore \(\mathbb{P}(\mathbb{F})=\kappa\,\mathbb{F}\), with \(\kappa=10\). We choose fluid and solid materials with the same density \(\rho_{f}=\rho_{s}=1\) and the same viscosity \(\nu_{f}=\nu_{s}=0.1\). We impose no-slip conditions for the velocity on the upper and right edges of \(\Omega\), while on the other two
edges, we allow the motion of both fluid and structure along the tangential direction. Finally, the following initial conditions are considered
\[\mathbf{u}(\mathbf{x},0)=0,\qquad\mathbf{X}(\mathbf{s},0)=\bigg{(}\frac{s_{1}}{1.4},1.4\,s_{2}\bigg{)}.\]
In Table 1, we report the results for the optimality test, where the robustness of the solver is studied by refining the mesh while keeping the number of processors fixed. In particular, we set the time step \(\Delta t=0.01\) and the final time of simulation \(T=2\). The number of processors used for the simulation is \(32\). The time \(T_{ass}\) needed to assemble the matrix of the problem increases moderately, while the time \(T_{coup}\), needed for the assembly of the coupling matrix by computing the intersection between the involved meshes, exhibits a superlinear growth. In terms of preconditioners, we can see that block-diag is not robust with respect to mesh refinement since the number of GMRES iterations grows from \(13\) to \(430\); clearly, this phenomenon also affects the time \(T_{sol}\) we need to solve the system. On the other hand, block-tri is robust since the number of GMRES iterations remains bounded by \(14\) when the mesh is refined. Therefore, \(T_{sol}\) presents only a moderate growth and, for \(1074054\) dofs, it is \(30\) times smaller than the value we get for the block-diag preconditioner.
The weak scalability of the proposed parallel solver is analyzed in Table 2. Again, we choose \(T=2\) and \(\Delta t=0.01\). We perform six tests by doubling both the global number of dofs and the number of processors. Thanks to the resources provided by PETSc, the time \(T_{ass}\) to assemble stiffness and mass matrices is perfectly scalable. On the other hand, the assembly procedure for the coupling matrix is much more complicated: to detect all the intersections between solid and fluid elements, the algorithm consists of two nested loops. For each solid element (outer loop), we check its position with respect to all the fluid elements (inner loop). In particular, only the outer loop is distributed over all the processors. Consequently, \(T_{coup}\) is not scalable since the number of fluid dofs, analyzed in serial, increases at each test. We now discuss the behavior of the two proposed preconditioners. It is evident that block-diag is not scalable since the number of GMRES iterations drastically increases as we increase dofs and procs, clearly affecting \(T_{sol}\) and \(T_{tot}\). On the other hand, block-tri behaves well: even if it is not perfectly scalable, the number of iterations only slightly increases from \(8\) to \(18\) and \(T_{sol}\) ranges from \(2.24\cdot 10^{-1}\,s\) to \(11.43\,s\).
### Nonlinear solid model
For this test, we set again the fluid domain \(\Omega\) to be the unit square. On the other hand, the immersed solid body is a bar represented, at resting configuration, by the
Figure 1. Four snapshots of the evolution of the structure with linear constitutive law.
rectangle \(\mathcal{B}=\Omega_{0}^{s}=[0,0.4]\times[0.45,0.55]\). During the time interval \([0,1]\), the structure is pulled down by a force applied at the middle point of the right edge. Therefore, when it is released, the solid body returns to its resting configuration by the action of internal forces. Four snapshots of the evolution are shown in Figure 2.
The energy density of the solid material is given by the potential strain energy function of an isotropic hyperelastic material; in particular, we have
\[W(\mathbb{F})=(\gamma/2\eta)\exp\big{(}\eta\,\mathrm{tr}(\mathbb{F}^{\top} \mathbb{F})-2\big{)},\]
where \(\mathrm{tr}(\mathbb{F}^{\top}\mathbb{F})\) denotes the trace of \(\mathbb{F}^{\top}\mathbb{F}\), while \(\gamma=1.333\) and \(\eta=9.242\).
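With the parenthesization given above, the Piola-Kirchhoff stress used by the solver follows directly from \(\mathbb{P}(\mathbb{F})=\partial W/\partial\mathbb{F}\): since \(\partial\,\mathrm{tr}(\mathbb{F}^{\top}\mathbb{F})/\partial\mathbb{F}=2\,\mathbb{F}\), the chain rule gives
\[\mathbb{P}(\mathbb{F})=\frac{\gamma}{2\eta}\,\exp\big{(}\eta\,\mathrm{tr}(\mathbb{F}^{\top}\mathbb{F})-2\big{)}\cdot 2\eta\,\mathbb{F}=\gamma\,\exp\big{(}\eta\,\mathrm{tr}(\mathbb{F}^{\top}\mathbb{F})-2\big{)}\,\mathbb{F},\]
so the nonlinearity enters only through the scalar factor multiplying \(\mathbb{F}\).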
\begin{table}
\begin{tabular}{r|r|r|r|r|r|r|r|r} \hline \multicolumn{8}{c}{**Linear solid model – Mesh refinement test**} \\ \hline \multicolumn{8}{c}{procs = 32, T = 2, \(\Delta t\) = 0.01} \\ \hline dofs & \(T_{ass}(s)\) & \(T_{coup}(s)\) & \multicolumn{3}{c|}{block-diag} & \multicolumn{3}{c}{block-tri} \\ & & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) \\ \hline
30534 & 1.02e-2 & 9.98e-2 & 13 & 1.14e-1 & 42.01 & 7 & 6.93e-2 & 33.83 \\
120454 & 2.12e-2 & 1.09 & 31 & 8.30e-1 & 390.90 & 9 & 2.40e-1 & 266.17 \\
269766 & 9.20e-2 & 7.60 & 97 & 5.41 & 2.55e+3 & 11 & 6.47e-1 & 1.65e+3 \\
478470 & 1.31e-1 & 25.04 & 192 & 18.75 & 9.07e+3 & 12 & 1.14 & 5.24e+3 \\
746566 & 1.23e-1 & 85.32 & 422 & 67.92 & 3.07e+4 & 13 & 2.17 & 1.75e+4 \\
1074054 & 1.81e-1 & 196.88 & 430 & 97.19 & 5.90e+4 & 14 & 3.21 & 4.00e+4 \\ \hline \end{tabular}
\end{table}
Table 1. Refining the mesh in the linear solid model. The simulations are run on the Shaheen cluster. procs = number of processors; dofs = degrees of freedom; \(T_{ass}\) = CPU time to assemble the stiffness and mass matrices; \(T_{coup}\) = CPU time to assemble the coupling term; its = GMRES iterations; \(T_{sol}\) = CPU time to solve the linear system; \(T_{tot}\) = total simulation CPU time. The quantities \(T_{coup}\), its and \(T_{sol}\) are averaged over the time steps. All CPU times are reported in seconds.
\begin{table}
\begin{tabular}{r|r|r|r|r|r|r|r|r} \hline \multicolumn{8}{c}{**Linear solid model – Weak scalability test**} \\ \hline \multicolumn{8}{c}{T = 2, \(\Delta t\) = 0.01} \\ \hline procs & dofs & \(T_{ass}(s)\) & \(T_{coup}(s)\) & \multicolumn{3}{c|}{block-diag} & \multicolumn{3}{c}{block-tri} \\ & & & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) \\ \hline
4 & 68070 & 8.55e-2 & 3.95 & 22 & 6.25e-1 & 933.43 & 8 & 2.24e-1 & 833.44 \\
8 & 135870 & 1.00e-1 & 5.23 & 38 & 2.16 & 1.48e+3 & 9 & 4.41e-1 & 1.13e+3 \\
16 & 269766 & 1.01e-1 & 8.77 & 111 & 10.23 & 3.80e+3 & 11 & 9.70e-1 & 1.95e+3 \\
32 & 539926 & 9.24e-2 & 59.27 & 706 & 108.05 & 2.50e+4 & 18 & 2.91 & 1.24e+4 \\
64 & 1074054 & 1.90e-1 & 48.00 & 429 & 113.59 & 3.24e+4 & 14 & 3.90 & 1.04e+4 \\
128 & 2152614 & 1.90e-1 & 98.63 & - & - & - & 18 & 11.43 & 2.20e+4 \\ \hline \end{tabular}
\end{table}
Table 2. Weak scalability for the linear solid model. The simulations are run on the Shaheen cluster. Same format as Table 1.
Also for this test we assume that fluid and solid materials share the same density, equal to \(1\), and the same viscosity, equal to \(0.2\). The velocity is imposed to be zero at the boundary of \(\Omega\), while the following initial conditions are imposed
\[\mathbf{u}(\mathbf{x},0)=0,\quad\mathbf{X}(\mathbf{s},0)=\mathbf{s}.\]
Results for the mesh refinement test are reported in Table 3: we consider the evolution of the system during the time interval \([0,2]\), with time step \(\Delta t=0.002\). The number of processors for the simulations is set to be \(64\), while the number of dofs increases from \(21222\) to \(741702\). As for the linear case, \(T_{ass}\) increases moderately and \(T_{coup}\) follows a superlinear growth. Both preconditioners are robust with respect to mesh refinement: the number of Newton iterations is \(2\) for each test and the average number of GMRES iterations per nonlinear iteration is bounded by \(15\) for block-diag and by \(10\) for block-tri. This behavior of block-diag is in contrast with the results we obtained for the linear solid model: this is due to the finer time step chosen for this simulation.
To study the weak scalability, we choose \(T=0.1\) and \(\Delta t=0.002\). The results, reported in Table 4, are similar to the results obtained for the linear case. As before, \(T_{coup}\) is not scalable due to the algorithm we implemented for assembling the coupling term. Even if it is not perfectly scalable, block-tri performs pretty well since the average number of linear iterations per nonlinear iteration increases only from \(15\) to \(19\). On the other hand, the good behavior of block-diag registered in Table 3 is not confirmed: the average number of linear iterations reaches \(101\), showing a lack of weak scalability as already seen in Table 2.
## 6. Conclusions
We analyzed two preconditioners, block-diagonal and block-triangular, for saddle point systems originating from the finite element discretization of fluid-structure interaction problems with fictitious domain approach. In particular, the analysis has been done by studying the robustness with respect to mesh refinement and weak scalability, applying the parallel solver to both linear and nonlinear problems.
Only block-triangular appears to be robust in terms of mesh refinement for linear and nonlinear problems; on the other hand, block-diagonal works well when the time step is very small.
Moreover, by studying the weak scalability, we can notice two further limitations of the proposed method, which will be the subject of future studies. First, the time to assemble the coupling matrix
Figure 2. Four snapshots of the evolution of the structure with nonlinear constitutive law.
is not scalable: it is based on two nested loops, on solid and fluid elements respectively; but only the external one is done in parallel over the processors. Second, since the action of the preconditioners consists of the exact inversion of two matrices, the time for solving the linear system slightly increases when the mesh is refined.
## Acknowledgments
The authors are members of the INdAM Research group GNCS. D. Boffi, F. Credali and L. Gastaldi are partially supported by IMATI/CNR. Moreover, D. Boffi and L. Gastaldi are partially supported by PRIN/MIUR.
\begin{table}
\begin{tabular}{r|r|r|r|r|r|r|r|r|r|r|r} \hline \multicolumn{12}{c}{**Nonlinear solid model – Weak scalability test**} \\ \hline \multicolumn{12}{c}{T = 0.1, \(\Delta t\) = 0.002} \\ \hline procs & dofs & \(T_{ass}(s)\) & \(T_{coup}(s)\) & \multicolumn{4}{c|}{block-diag} & \multicolumn{4}{c}{block-tri} \\ & & & & it & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) & it & its & \(T_{sol}(s)\) & \(T_{tot}(s)\) \\ \hline
4 & 83398 & 1.01e-1 & 2.21 & 3 & 23 & 6.76 & 448.65 & 3 & 15 & 5.50 & 386.13 \\
8 & 156910 & 1.59e-1 & 3.77 & 3 & 38 & 15.49 & 963.03 & 3 & 16 & 8.87 & 627.84 \\
16 & 330630 & 1.62e-1 & 8.92 & 3 & 49 & 36.58 & 2.28e+3 & 3 & 17 & 17.84 & 1.34e+3 \\
32 & 741702 & 2.60e-1 & 25.18 & 3 & 67 & 123.99 & 7.46e+3 & 3 & 18 & 48.85 & 3.70e+3 \\
64 & 1316614 & 2.61e-1 & 69.12 & 3 & 101 & 328.18 & 1.99e+4 & 3 & 19 & 97.28 & 8.38e+3 \\ \hline \end{tabular}
\end{table}
Table 4. Weak scalability for the nonlinear solid model. The simulations are run on the Shaheen cluster. Same format as Table 3. |
2306.12829 | Relevance-Based Compression of Cataract Surgery Videos | In the last decade, the need for storing videos from cataract surgery has
increased significantly. Hospitals continue to improve their imaging and
recording devices (e.g., microscopes and cameras used in microscopic surgery,
such as ophthalmology) to enhance their post-surgical processing efficiency.
The video recordings enable a lot of user-cases after the actual surgery, for
example, teaching, documentation, and forensics. However, videos recorded from
operations are typically stored in the internal archive without any
domain-specific compression, leading to a massive storage space consumption. In
this work, we propose a relevance-based compression scheme for videos from
cataract surgery, which is based on content specifics of particular cataract
surgery phases. We evaluate our compression scheme with three state-of-the-art
video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to
evaluate the visual quality of encoded videos. Our results show significant
savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when
using H.265/HEVC, and up to 98.82% when using AV1. | Natalia Mathá, Klaus Schoeffmann, Konstantin Schekotihin, Stephanie Sarny, Doris Putzgruber-Adamitsch, Yosuf El-Shabrawi | 2023-06-22T12:04:37Z | http://arxiv.org/abs/2306.12829v1 | Relevance-Based Compression of Cataract Surgery Videos
## Abstract
In the last decade, the need for storing videos from cataract surgery has increased significantly. Hospitals continue to improve their imaging and recording devices (e.g., microscopes and cameras used in microscopic surgery, such as ophthalmology) to enhance their post-surgical processing efficiency. The video recordings enable a lot of user-cases after the actual surgery, for example, teaching, documentation, and forensics. However, videos recorded from operations are typically stored in the internal archive without any domain-specific compression, leading to a massive storage space consumption. In this work, we propose a relevance-based compression scheme for videos from cataract surgery, which is based on content specifics of particular cataract surgery phases. We evaluate our compression scheme with three state-of-the-art video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to evaluate the visual quality of encoded videos. Our results show significant savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when using H.265/HEVC, and up to 98.82% when using AV1.
## 1 Introduction
_Cataract_ is a severe clouding disease of the natural eye lens, which often comes with aging and can eventually lead to blindness if not treated. Cataract surgery is a viable treatment option, where an ophthalmic surgeon replaces the natural lens with an artificial one, by using a microscope and tiny instruments. Cataract surgery is the most frequent surgery in the world, and it follows a standardized procedure consisting of 12 phases.
However, cataract surgery is also an incredibly challenging operation that requires specialized and intensive training. It is crucial to train aspiring surgeons to handle the various aspects and potential complications in the surgical workflow. The microscope that is used for cataract surgery is typically equipped with two eyepieces and allows only one additional person to closely follow the operation in real-time. Although it is possible for a limited number of additional trainees to be present in the operation room (OR), they can only follow the surgery indirectly via the video display that shows a live image of the microscope. The display is typically used by clinical personnel to follow the surgical workflow and prepare operation equipment. Due to the limited space in the OR, the indirect viewing angle to the display, and the passive/non-interactive nature of the live image, it is not a good source for teaching either.
Therefore, more and more clinicians record videos of entire surgeries, and use them later for teaching and training via interactive video demonstrations. The video recordings are also suitable for video documentation, which is often a necessity for surgeries and a good alternative to traditional textual reports. The videos can communicate every little detail of a surgery, and even reveal causes for complications, much better than a text report would do. Also, they can be used for forensics and post-operative studies that are performed over a large number of patients.
Since cataract surgery is the most frequent operation, full video recording and archival can quickly lead to an immense amount of data volume, which causes data management problems for the hospital information systems. Public cloud storage is no solution because clinicians want to keep surgical videos local for various reasons, for example, privacy and existing safety policies. This leads to the very unfortunate situation, that nowadays recorded cataract videos are often deleted after a specific amount of time and the invaluable information is lost and cannot be used for any further purpose.
The research aim of this work, which is an extension of our previous work [12], is to optimize the required storage space of cataract surgery videos, and thereby allow for a long-term archival strategy. We focus on regular cataract surgery videos that consist of the typical 12 phases with no complications or extra phases. Since this is the most frequent situation, our results are applicable to the majority of cataract surgery videos.
In [12], we have already defined the relevance of each phase in a regular cataract surgery by a user study with 30 clinicians and proposed compression parameters for H.264/AVC [11] and AV1 [2]. In this work we extend this study to H.265/HEVC [14] and perform additional runtime evaluations. These are the most commonly used codecs and are supported by a wide set of web browsers and operating systems. We also evaluate two different compression approaches: (1) considering idle phases as a part of the subsequent phase, and (2) removing them entirely as irrelevant content, as agreed with medical experts. We finally perform a qualitative study with cataract experts, who are requested to inspect and rate the encoded video segments, in order to verify if the produced visual quality is still acceptable for all surgical phases. Our results show that regular cataract surgery phases can be significantly compressed without loss of medically relevant content.
## 2 Related Work
Many papers in the literature focus on improving video compression. Most of them are specific video coding schemes, or concepts to improve the general compression of video data, for example [7] and [16]. Some recent works also utilize neural networks for the purpose of general video coding, e.g., [1], [8], and [18]. Another research area is the utilization of intra-frame information in videos. For example, the papers [9], [10], [17] compress videos by encoding less relevant content in every frame with lower quality, where the relevance is defined using attention mechanisms.
However, little research has been conducted on relevance-based video compression in specific domains, such as medicine. The challenge here is the automatic assessment of relevance in the content, following the same rules a medical expert would apply. Hence, works that consider clinicians' subjective evaluations are essential, as they can identify the compression parameters that match end-user expectations.
In [4], the authors evaluate the compression efficiency of bronchoscopic surgery videos. Their method encodes bronchoscopic recordings using different compression rates and video codecs, such as M-JPEG 2000 [3] and H.264/AVC [11]. A hybrid vector measure metric [15] and Hosaka plots [6] are used to evaluate each configuration. The
results achieve a compression rate of 96 relative to the uncompressed content.
In [13], the authors perform a user study to assess the possible trade-off between low bitrate and medically satisfying visual quality of laparoscopy videos. For this study, they encode laparoscopic videos with the H.264/AVC [11] codec and different encoding parameters, such as resolution and constant rate factor. As a result of the user study, the authors identify an acceptable compression rate and finally propose three configurations for _visually lossless_ quality, _good quality_, and _acceptable quality_. Using the proposed recommendations, the authors can save an additional 60%, 87.5%, and 92.86% of required storage space, respectively (when compared to common H.264/AVC encoding).
For the field of cataract surgery, [5] proposes a relevance-based compression scheme using intra-frame relevance. First, the authors encode idle phases in cataract surgery videos (with no surgical tools inside the eye) with low quality, because such idle phases are considered irrelevant. Next, they apply region-of-interest (ROI) detection to the _cornea_ area as well as instruments, and give less priority to the other spatial content in the frame. As a result, the proposed method allows to save up to 68% of storage space using content-specific compression and removal of irrelevant content.
Although some other research works have focused on surgical video compression (as described above), to the best of our knowledge, there is no work that considers the medical content relevance in cataract surgery videos, as defined by clinicians. We propose a novel approach to compress cataract surgery videos based on the content's relevance, as defined by surgeons. We manually extract the phase samples for each relevance category and compress them with different visual parameters. Finally, we let clinicians evaluate the visual quality of the resulting segments in a user study, to find the acceptable compression rate.
## 3 Clinical Relevance of Phases in Cataract Surgery
The procedure of cataract surgery is highly standardized. According to medical experts [19], each common cataract surgery consists of 12 phases, distinguishable by the instruments used. During these phases, the surgeon performs tasks of varying difficulty. For example, filling up the patient's eye with antibiotics or a viscoelastic substance is rather simple, while removing the natural lens or inserting an artificial one requires very specific skills. Each surgical phase is followed by an idle phase, where the surgical instruments are changed. Different phases of cataract surgery have different relevance, as rated by the clinicians.
In [12] we have presented results of a user study conducted with 30 medical experts who defined the clinical relevance of different content segments in regular cataract surgery videos for different purpose (teaching, documentation, and research). They rated the relevance with different levels: (N: _not relevant at all_, SR: _somewhat relevant_, R: _relevant_, and HR: _highly relevant_). The results are summarized again in Table 1. According to the findings, teaching is considered the most crucial aspect for clinicians, as it received the highest relevance score among all purposes surveyed for each phase. For relevance-based video compression this means that a higher video quality/bitrate is needed for teaching (with less quality degradation), while for documentation and research purpose the content can be compressed more efficiently (i.e., more strongly quantized, with less remaining visual quality).
Fig. 1 depicts an illustration of an exemplary cataract surgery video, along with all phases and the corresponding relevance rates obtained for teaching purpose. A relatively long part of cataract surgery (e.g., in this case 75.48%) has _relevant_ and _highly relevant_ content.
## 4 Relevance-Based Cataract Video Compression
To evaluate the achievable compression gain, we use the relevance-rates defined in the previous section and encode videos of different phases in cataract surgery with different encoding settings (codecs, bitrates, resolutions).
The underlying video dataset, out of which clips are selected, was recorded at Klinikum Klagenfurt and uses a resolution of \(1024x768\) pixels. Example frames are presented in Fig. 2 and Fig 3 for the phases _capsularhexis_ and _irrigation/aspiration_. It is important to understand that with the current common settings in the hospital (H.264/AVC codec, 60 fps, a constant rate factor (CRF) of 14-16) a typical cataract surgery video would result in a file size of 506 MiB. This is the basis to which we compare our relevance-based compression method that uses different visual quality for different surgery phases, according to the relevance levels defined by clinicians (see Section 3).
### Compression Setup
We assign the highest possible relevance level based on the ratings obtained from the relevance detection survey for each cataract surgery phase (see Table 1). If a phase is _highly relevant_ for teaching and _relevant_ for research purposes, we categorize it as _highly relevant_. Furthermore, we select the following compression setups (using ffmpeg [20]) for clips from cataract surgery videos:
\begin{table}
\begin{tabular}{|c|c c c|} \hline
**Surgery Phase** & **Teaching** & **Documentation** & **Research** \\ \hline \hline Incision & R & SR & SR \\ \hline Viscoelastic I & SR & SR & SR \\ \hline Capsularhexis & HR & R & R \\ \hline Hydrodissection & HR & SR & SR \\ \hline Phaco & HR & R & R \\ \hline Irrigation/aspiration & R & SR & SR \\ \hline Capsule polishing & SR & SR & SR \\ \hline Viscoelastic II & SR & SR & SR \\ \hline Implantation & R & R & R \\ \hline Viscoelastic aspiration & R & SR & SR \\ \hline Sealing of & R & SR & SR \\ \hline Antibiotic injection & R & R & R \\ \hline \end{tabular}
\end{table}
Table 1: Relevance rates of regular cataract surgery phases for different purpose: teaching, documentation, and research, as determined by medical experts (median value). HR, R, and SR are abbreviations for _Highly Relevant_, _Relevant_, and _Somewhat Relevant_.
Figure 1: Example of a cataract surgery video: temporal relevance distribution for teaching purpose. The content relevance is given by the block color: idle phases (_not relevant_) are gray, _somewhat relevant_ content is yellow, _relevant_ segments are green, and _highly relevant_ content is orange.
* H.264/AVC, 23-47 CRF (with a step of 2);
* H.265/HEVC, 23-47 CRF (with a step of 2);
* AV1, 27-63 CRF (with a step of 3);
The constant-rate-factor (CRF) is ffmpeg's way to control the visual quality (i.e., encoding bitrate) in an inverse setting (a CRF value of 0 produces the best quality). For H.264/AVC and H.265/HEVC the lower value (47) is chosen because otherwise the resulting videos have unacceptable visual quality, i.e., very pixelated, while the upper CRF is a default value (23). For AV1, the lower CRF value (63) is the lowest possible number provided by ffmpeg. To achieve the same number of setups for each codec, we set the upper CRF value (27) three points higher than the standard value (23). We also select different resolution settings, namely 1024x768 (original resolution), 800x600, and 640x480 pixels.
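The exact ffmpeg invocations are not part of this paper; the following sketch is one plausible way to produce the encoding grid described above (libx264, libx265, and libaom-av1 are the ffmpeg encoders for the three codecs; the clip path and output file naming are hypothetical).

```python
import itertools
import subprocess

# Codec/CRF ranges and resolutions follow the setups listed above.
CODECS = {
    "libx264":    range(23, 48, 2),   # H.264/AVC, CRF 23-47, step 2
    "libx265":    range(23, 48, 2),   # H.265/HEVC, CRF 23-47, step 2
    "libaom-av1": range(27, 64, 3),   # AV1, CRF 27-63, step 3
}
RESOLUTIONS = ["1024x768", "800x600", "640x480"]

def encode(src, codec, crf, res):
    out = f"{src.rsplit('.', 1)[0]}_{codec}_crf{crf}_{res}.mp4"
    cmd = ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-crf", str(crf),
           "-vf", "scale=" + res.replace("x", ":"), "-an"]
    if codec == "libaom-av1":
        cmd += ["-b:v", "0"]   # constant-quality mode for libaom
    subprocess.run(cmd + [out], check=True)
    return out

for codec, crfs in CODECS.items():
    for crf, res in itertools.product(crfs, RESOLUTIONS):
        encode("capsulorhexis_clip.mp4", codec, crf, res)
```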
It becomes clear that due to the high number of parameters (13 bitrate settings, 12 phases, three resolutions), we would end up with \(13x12x3=468\) different clips/settings to test for each codec in the qualitative study, which is practically impossible. Hence, to maintain the participants' focus while still evaluating a large number of different
Figure 3: Exemplary video frame of the _irrigation/aspiration_ phase.
Figure 2: Exemplary video frame of the _capsulorhexis_ phase.
encoding configurations, we select one representative clip from each group of relevance categories (HR, R, and SR) only. More specifically, to reveal the maximum compression rate for cataract surgery phases and to find the optimal compression parameters, we consider _capsulorhexis_ as _highly relevant_ (original bitrate 12278 kbps),
_irrigation/aspiration_ as _relevant_ (12416 kbps), and _viscoelastic I_ and _viscoelastic II_ as _somewhat relevant_ (13074 kbps). Also, we test only meaningful resolution settings and exclude those that we could rule out in a pre-study already. With this configuration, we end up with 78 setups for H.264/AVC and AV1, and 39 setups for H.265/HEVC.
### User Study
We evaluate the achievable compression settings in a qualitative study with eight clinicians (cataract surgery experts). The breakdown of the subjects' experience in carrying out cataract surgery is as follows: two experts have 10 or more years of surgical experience, one surgeon has been operating for nine years, two clinicians have three years of experience, one has two years of practice, and two experts have been performing the surgery for one year or less. Half of the participants exclusively work in private hospitals, while the other half work in both private and public sectors. Of all the survey respondents, six have experience in teaching, seven have performed patient documentation, and six have published scientific papers in this field.
To compare the visual quality of the encoded videos, we utilize the SSIM metric [21], which evaluates the similarity between frames before and after encoding. The highest quality is represented by a value of 1, indicating that both frames are identical. In essence, the greater the number of artifacts present in the encoded frame, the lower its SSIM value will be.
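As an illustration, the per-frame SSIM averaging could be computed as in the following sketch using OpenCV and scikit-image. The grayscale comparison and the rescaling step are our assumptions for the sketch, not necessarily the exact pipeline used.

```python
import cv2
from skimage.metrics import structural_similarity

def mean_ssim(original_path, encoded_path):
    """Average per-frame SSIM between a source clip and its encoded
    version; frames are compared on the grayscale channel."""
    ref = cv2.VideoCapture(original_path)
    enc = cv2.VideoCapture(encoded_path)
    total, n = 0.0, 0
    while True:
        ok_r, fr_r = ref.read()
        ok_e, fr_e = enc.read()
        if not (ok_r and ok_e):
            break
        g_r = cv2.cvtColor(fr_r, cv2.COLOR_BGR2GRAY)
        g_e = cv2.cvtColor(fr_e, cv2.COLOR_BGR2GRAY)
        if g_e.shape != g_r.shape:   # rescale if resolutions differ
            g_e = cv2.resize(g_e, (g_r.shape[1], g_r.shape[0]))
        total += structural_similarity(g_r, g_e)
        n += 1
    return total / max(n, 1)
```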
To conduct the user study, we arrange the videos for each phase based on the average of all per-frame SSIM values. In the following we refer to all the configurations by their _setup number_, where 1 has the best quality, and 78 has the worst quality for H.264/AVC and AV1. For H.265/HEVC, the best and worst quality values are 1 and 39, respectively.
The videos are presented to the users following a Dichotomous Search paradigm. More specifically, the subjects first see the video with the middle quality (e.g., [39]) from an initial interval (e.g., [1, 78]) and decide if this quality is sufficient. If so, the upper boundary is updated with this setup number, otherwise the lower boundary is updated. For each phase, this method converges within around six to seven steps, which helps the participants stay focused and allows rating in logarithmic time.
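A minimal sketch of this search loop is given below; the `is_sufficient` callable stands for the participant's yes/no judgement for one shown clip, and we assume the best-quality setup is always acceptable.

```python
def dichotomous_search(setups, is_sufficient):
    """Find the lowest-quality setup a participant still accepts.
    `setups` are ordered from best (index 0) to worst quality."""
    lo, hi = 0, len(setups) - 1        # invariant: setups[lo] is acceptable
    while lo < hi:
        mid = (lo + hi + 1) // 2       # e.g. setup 39 for the interval [1, 78]
        if is_sufficient(setups[mid]):
            lo = mid                   # quality still fine: move toward worse
        else:
            hi = mid - 1               # too degraded: move toward better
    return setups[lo]                  # converges in about log2(78) ~ 6 steps
```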
### Relevance-Based Compression Results
Fig. 4 and Fig. 5 present the study results. Please note that the visual quality improves with increasing SSIM. We observe that although the clinicians' ratings are diverse, the experts require better visual quality for _highly relevant_ phases and accept slightly lower visual quality for _relevant_ ones.
To determine the ideal compression parameters, we utilize the median value rounded down to an integer for each relevance category (Fig. 4) as a threshold value for visual quality. In particular, these median integer values are 36 (0.9250 SSIM) for _highly relevant_ content, 59 (0.9048 SSIM) for _relevant_, and 53 (0.9157 SSIM) for _somewhat relevant_ content encoded with H.264/AVC and AV1. For H.265/HEVC, these values are 17 (0.9160 SSIM), 22 (0.9057 SSIM), and 17 (0.9185 SSIM). To select appropriate compression parameters for each relevance group, we drop the options where the SSIM value is lower than the defined threshold. Afterward, we select the configurations with the lowest bitrate for each codec. Table 2 shows the resulting setups.
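In code, this selection step amounts to the following small sketch (the dictionary keys are illustrative, not an interface from the study):

```python
def pick_optimal(configs, threshold):
    """From all encoded configurations of one relevance category, drop
    those below the SSIM threshold and keep the cheapest survivor."""
    admissible = [c for c in configs if c["ssim"] >= threshold]
    return min(admissible, key=lambda c: c["kbps"])

# e.g. pick_optimal(h264_configs, threshold=0.9250) for highly relevant content
```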
We can see that the bitrates of these compression configurations are relatively low, at most 653.39 kbps for H.264/AVC, 207.12 kbps for H.265/HEVC, and 190.38 kbps for AV1. Besides, we discover a bitrate of just 68.32 kbps with AV1 as the optimal compression parameter set for _relevant_ content. Using the proposed settings allows reducing the required storage space by at least 94.68% for H.264/AVC, at least 98.31% for H.265/HEVC, and at least 98.45% for AV1 for the entire surgery when considering highly relevant content.
Figure 4: Evaluation with H.264/AVC and AV1. Experts’ rates distribution with the median values on the acceptable quality for different relevance groups, where _HR_, \(R\), and _SR_ stand for _Highly Relevant_, _Relevant_, and _somewhat relevant_, respectively.
Figure 5: Evaluation with H.265/HEVC. Experts’ rates distribution with the median values on the acceptable quality for different relevance groups, where _HR_, \(R\), and _SR_ stand for _Highly Relevant_, _Relevant_, and _somewhat relevant_, respectively.
### Idle Phases
According to clinicians, idle phases, where no instruments are being used, typically do not include any relevant content. However, there may be some unforeseen exceptions, which is why we explore two methods of processing idle phases. Each idle phase can either be deemed part of the preceding phase and compressed accordingly, or it can be considered a separate video segment. Medical professionals classify idle phases as completely irrelevant content if they are treated as independent segments. As a result, such segments can be entirely eliminated from cataract surgery videos. Thus, we present an additional assessment of the potential storage savings that could be achieved by removing idle phases. To this end, we gather and annotate 20 typical cataract surgery videos manually in terms of idle and non-idle phases, frame by frame (for a total of \(492,425\) frames). The annotations are carried out by a skilled technician under the guidance of medical experts. It turns out that in all of these videos, idle phases account for around \(23.75\%\) of the frames. In other words, considering idle phases as irrelevant content allows saving at least \(95.94\%\), \(98.71\%\), and \(98.82\%\) of space with H.264/AVC, H.265/HEVC, and AV1, respectively, by removing them from cataract surgery videos before compressing the rest of the video files.
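These combined numbers follow from removing the idle share and applying the per-codec rates of Table 2 to the remaining content, as the following check illustrates:

```python
# Combined savings: drop the idle 23.75% of frames and compress the
# remaining 76.25% with the highly-relevant rates of Table 2.
idle_share = 0.2375
for codec, saving in [("H.264/AVC", 0.9468), ("H.265/HEVC", 0.9831),
                      ("AV1", 0.9845)]:
    combined = 1 - (1 - idle_share) * (1 - saving)
    print(f"{codec}: {combined:.2%}")   # 95.94%, 98.71%, 98.82%
```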
### Impact of Experience
We have calculated the correlation between participants' practicing experience and their preferences for encoded video quality. Our findings reveal a moderate negative correlation between participants' experience and their expectations for the visual quality of _highly relevant_ and _somewhat relevant_ content, specifically \(70.20\%\) and \(63.47\%\), respectively. At the same time, for the _relevant_ phases, it is lower, at \(40.96\%\). We conclude that, with increasing experience, clinicians' requirements on the visual quality of the recorded surgery decrease.
### Run-Time Requirements
Finally, we also evaluate the encoding time for each codec with the defined best configurations (see Table 2), which we measure for two different devices, namely:
1. A desktop computer with the following parameters: * Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz 2.11 GHz; * 8 GB RAM; * Windows 10;
| **Codec** | **Relevance** | **Setup** | **Resolution** | **CRF** | **SSIM** | **kbps** | **Compr.** |
|---|---|---|---|---|---|---|---|
| H.264/AVC | highly relevant | 34 | 640x480 | 25 | 0.9260 | 653.39 | 94.68% |
| H.264/AVC | relevant | 56 | 640x480 | 33 | 0.9079 | 252.11 | 97.97% |
| H.264/AVC | somewhat relevant | 52 | 640x480 | 31 | 0.9162 | 248.49 | 98.10% |
| H.265/HEVC | highly relevant | 16 | 640x480 | 31 | 0.9174 | 207.12 | 98.31% |
| H.265/HEVC | relevant | 22 | 640x480 | 35 | 0.9057 | 130.10 | 98.95% |
| H.265/HEVC | somewhat relevant | 16 | 640x480 | 31 | 0.9202 | 173.66 | 98.67% |
| AV1 | highly relevant | 36 | 1024x768 | 57 | 0.9250 | **190.38** | **98.45%** |
| AV1 | relevant | 57 | 800x600 | 63 | 0.9078 | **68.32** | **99.45%** |
| AV1 | somewhat relevant | 51 | 640x480 | 60 | 0.9179 | **75.46** | **99.42%** |

Table 2: The optimal compression parameters for H.264/AVC, H.265/HEVC, and AV1 with SSIM and bitrate for each relevance category.
* ffmpeg version 5.1;
2. An AWS instance with the following parameters:
   * c5.2xlarge instance type;
   * 8 vCPUs;
   * 16 GB RAM;
   * Intel Xeon Platinum 8124M physical processor;
   * 3 GHz clock speed;
* Windows Server Base 2022;
* ffmpeg version 5.1;
* 0.756 USD per hour.
Table 3 shows the obtained results. The values are averages over several measurements. As expected, the fastest encoding is achieved with the H.264/AVC codec. The AV1 codec requires more than 10 times the encoding time and saves only 1.32%-3.77% of additional storage space compared to H.264/AVC. At the same time, H.265/HEVC requires only twice as much time as H.264/AVC while saving an additional 0.57%-3.63%. We can conclude that H.265/HEVC provides the best trade-off between encoding speed and storage space.
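To make the trade-off concrete, the sketch below (ours) combines the AWS timings from Table 3 with the instance price quoted above. The per-segment costs are illustrative only, since actual segment durations vary between surgeries.

```python
# Rough per-recording AWS encoding cost from Table 3 timings.
AWS_USD_PER_HOUR = 0.756  # c5.2xlarge price quoted above

encode_seconds = {  # AWS instance rows of Table 3: (HR, R, SR)
    "H.264/AVC": (7.7, 2.9, 1.2),
    "H.265/HEVC": (15.5, 5.2, 2.1),
    "AV1": (103.5, 27.5, 8.1),
}
for codec, times in encode_seconds.items():
    total = sum(times)
    cost = total / 3600 * AWS_USD_PER_HOUR
    print(f"{codec:>10}: {total:6.1f}s -> ${cost:.5f} per measured segment set")
```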
## 5 Conclusion
In this paper, we performed several evaluations related to the storage space requirements of videos from cataract surgery. The aim was to optimize the storage space by using stronger compression (more quantization) for less relevant content, while keeping the clinically necessary visual quality. Our evaluations show that it is possible to save 94.68% of storage space for _highly relevant_ content when encoded with H.264/AVC (98.31% and 98.45% when using H.265/HEVC and AV1). Also, for _relevant_ content (which is the vast majority in a video of cataract surgery) we can save 97.97% with H.264/AVC (98.95% and 99.45% with H.265/HEVC and AV1). Finally, for _somewhat relevant_ content we can save 98.10% with H.264/AVC (98.67% and 99.42% with H.265/HEVC and AV1). These savings can be achieved while keeping most of the original video quality (more than 0.9057 SSIM), which is sufficient at least for teaching purposes. While the saved storage space is slightly higher with H.265/HEVC and AV1, our evaluations also show that both codecs require significantly more encoding time (H.265/HEVC about 2x and AV1 about 10x). This may be a problem for hospitals with many procedures, because either more efficient hardware is needed, or a continuously increasing backlog of video encoding jobs
| **Device** | **Codec** | **HR** | **R** | **SR** |
|---|---|---|---|---|
| Desktop PC | H.264/AVC | **12.1s** | **4.3s** | **1.2s** |
| Desktop PC | H.265/HEVC | 25.6s | 8.4s | 2.3s |
| Desktop PC | AV1 | 140.3s | 29.4s | 7.1s |
| AWS Instance | H.264/AVC | **7.7s** | **2.9s** | **1.2s** |
| AWS Instance | H.265/HEVC | 15.5s | 5.2s | 2.1s |
| AWS Instance | AV1 | 103.5s | 27.5s | 8.1s |

Table 3: The encoding speed measurements for H.264/AVC, H.265/HEVC, and AV1 for each relevance category. _HR_, _R_, and _SR_ stand for _Highly Relevant_, _Relevant_, and _Somewhat Relevant_, respectively.
would build up. Hence, we conclude that the H.265/HEVC codec offers the best compromise in terms of fast encoding speed and high storage space savings.
Our findings also showed that clinicians prioritize superior visual quality for _highly relevant_ content and that there is a moderate negative correlation between age/experience and the need for visual quality for _highly relevant_ and _somewhat relevant_ content. From this, we can infer that more experienced clinicians tend to have less stringent demands for visual quality.
## Acknowledgment
This work was funded by the FWF Austrian Science Fund under grant P 31486-N31 and by the EFRE, REACT-EU and Carinthian Economic Promotion Fund Programme (Project ONTIS, Contract No. KWF-3520-34826-50900).
|
2306.14280 | CP asymmetries in $B$ meson two-body baryonic decays | We study the CP-odd and CP-even observables of the $B$ mesons decaying into a
baryon and antibaryon. We estimate these observables through the $^3P_0$ model
and chiral selection rule. The decay branching ratios of $ B^+ \to p
\overline{\Lambda}$ and $ B^0 \to p \overline{p}$ are calculated to be $2.31
\times 10^{-7}$ and $1.27 \times 10^{ -8} $, which are consistent with the
current experiments, respectively. The effects of the $B-\overline{B}$
oscillations are considered, which largely suppress the direct CP asymmetries
in the $B_s^0$ decays. We suggest the experiments to visit $B_s^0 \to
\Lambda(\to p \pi^-) \overline{\Lambda} (\to \overline{ p} \pi^+) $, where the
time-averaged CP-odd observables are estimated to be large. The direct CP
asymmetries of $B^+ \to p \overline{\Lambda}$ and $B^0 \to p\overline{p}$ are
found to be $26.2\%$ and $-3.1\%$ for a positive strong phase and $-36.9\%$ and
$4.2\%$ for a negative strong phase, respectively. | Chao-Qiang Geng, Xiang-Nan Jin, Chia-Wei Liu | 2023-06-25T16:10:26Z | http://arxiv.org/abs/2306.14280v1 | # CP asymmetries in \(B\) meson two-body baryonic decays
###### Abstract
We study the CP-odd and CP-even observables of the \(B\) mesons decaying into a baryon and an antibaryon. We estimate these observables through the \({}^{3}P_{0}\) model and the chiral selection rule. The decay branching ratios of \(B^{+}\to p\overline{\Lambda}\) and \(B^{0}\to p\overline{p}\) are calculated to be \(2.31\times 10^{-7}\) and \(1.27\times 10^{-8}\), respectively, which are consistent with the current experiments. The effects of the \(B-\overline{B}\) oscillations are considered, which largely suppress the direct CP asymmetries in the \(B^{0}_{s}\) decays. We suggest that experiments visit \(B^{0}_{s}\to\Lambda(\to p\pi^{-})\overline{\Lambda}(\to\overline{p}\pi^{+})\), where the time-averaged CP-odd observables are estimated to be large. The direct CP asymmetries of \(B^{+}\to p\overline{\Lambda}\) and \(B^{0}\to p\overline{p}\) are found to be \(26.2\%\) and \(-3.1\%\) for a positive strong phase and \(-36.9\%\) and \(4.2\%\) for a negative strong phase, respectively.
## I Introduction
CP asymmetries play an important role in the study of particle physics [1]. In particular, CP violation is not only the key to understanding the matter-antimatter asymmetry in the universe but also an important way to probe the effects of new physics (NP). As the CP symmetry is respected by the strong interaction, the study of CP violation allows us to extract the complex phases in the weak interaction of the standard model (SM) [2; 3; 4; 5], even though theoretical calculations are clouded by hadronic uncertainties.
Since CP violation in the \(B\) meson system was first observed in 2001 [6], various CP-odd quantities have been measured, mainly in \(B\) meson decays [7]. Remarkably, the LHCb collaboration has recently found evidence of CP violation in \(D\) meson decays [8], which was expected to be small. The finding has stimulated great interest among theorists, and a debate on whether the observed values require NP is still ongoing [9].
Despite the great progress in the mesonic sector, a nonzero signal of CP violation at the \(5\sigma\) confidence level is still absent in the baryonic sector. In particular, the direct CP violation of \(\Lambda_{b}\to\Lambda K^{-}\pi^{+}\) is found to be \(-0.53\pm 0.25\)[10], which has a large central value but also a large uncertainty. In addition, several theoretical works [11] have been triggered by the measurements of \(A_{CP}(\Lambda_{b}\to p\pi^{-}/K^{-})\), which are consistent with the results in the perturbative QCD (pQCD) approach [12]. Undoubtedly, more experiments are expected in the future.
To probe CP violation involving baryons, we study the CP asymmetries exhibited in \(B\to{\bf B}_{1}\overline{\bf B}_{2}\), with \({\bf B}_{1}\) and \(\overline{\bf B}_{2}\) a baryon and an antibaryon, respectively. On the one hand, such studies benefit from the simplicity of two-body decays. On the other hand, the spins of \({\bf B}_{1}\) and \(\overline{\bf B}_{2}\) provide a rich set of physical observables in experiments. Additionally, the production rate of \(B\) is roughly three times larger than that of the bottom baryons [13], making \(B\to{\bf B}_{1}\overline{\bf B}_{2}\) an ideal place to probe CP violation with baryons. We note that the direct CP asymmetries in \(B\to{\bf B}_{1}\overline{\bf B}_{2}\) have been systematically estimated in Ref. [14], but the \(B\) meson oscillations and the CP asymmetries in angular distributions have not been considered yet.
The LHCb collaboration has measured the branching ratios [15; 16],
\[{\cal B}(B^{+}\to p\overline{\Lambda})=\left(2.4^{+1.0}_{-0.8}\pm 0.3 \right)\times 10^{-7}\,,\quad{\cal B}(B^{0}\to p\overline{p})=(1.27\pm 0.14) \times 10^{-8}\,, \tag{1}\]
and obtained an upper limit of [16],
\[{\cal B}(B^{0}_{s}\to p\overline{p})<5.1\times 10^{-9}\,, \tag{2}\]
at 95% confidence level, which is consistent with \({\cal B}(B_{s}^{0}\to p\overline{p})<1\times 10^{-10}\) predicted by the chiral selection rule [14; 17]. In theoretical studies, the calculated amplitudes depend heavily on QCD models, such as the pole model [18] and the sum rule [19]. Nevertheless, in the literature \({\cal B}(B^{0}\to p\overline{p})\) is overestimated by an order of magnitude. The reason behind this is closely related to the Fierz identity [20].
This paper is organized as follows. In Sec. II, we define the decay parameters associated with spins and classify them in terms of CP-odd or CP-even parts. In Sec. III, we analyze the decay distributions of \(B\to{\bf B}_{1}\overline{\bf B}_{2}\). In Sec. IV, we estimate the decay parameters through the \({}^{3}P_{0}\) model and chiral selection rule. Our numerical results are shown in Sec. V. Sec. VI is devoted to conclusions.
## II Decay parameters
In general, with \(\hat{A}\) an arbitrary Hermitian operator, we can define a corresponding asymmetry as
\[A\equiv\frac{\Gamma(\lambda_{A}>0)-\Gamma(\lambda_{A}<0)}{\Gamma(\lambda_{A} >0)+\Gamma(\lambda_{A}<0)}\,, \tag{3}\]
where \(\Gamma\) represents the decay width, and \(\lambda_{A}\) stands for the eigenvalue of \(\hat{A}\). In decay final states, it is convenient to choose \(\hat{A}\) as an \(SO(3)\) rotational scalar, since the angular momenta are always constrained by the spins of the parent particles. In \(B\to{\bf B}_{1}\overline{\bf B}_{2}\), we simply have \(J=0\), and the simplest operators are
\[\hat{\alpha}=\vec{s}_{1}\cdot\hat{p}\,,\ \ \ \ \hat{\beta}=(\vec{s}_{1}\times \vec{s}_{2})\cdot\hat{p}\,,\ \ \ \ \hat{\gamma}=2\,\vec{s}_{1}\cdot\vec{s}_{2}\,, \tag{4}\]
where \(\vec{s}_{1}\) (\(\vec{s}_{2}\)) is the spin operator of \({\bf B}_{1}\) (\(\overline{\bf B}_{2}\)), and \(\hat{p}\) is the unit vector along the 3-momentum of \({\bf B}_{1}\). From Eqs. (3) and (4), we define
\[\alpha\equiv\frac{\Gamma(\lambda_{\alpha}>0)-\Gamma(\lambda_{\alpha}<0)}{ \Gamma(\lambda_{\alpha}>0)+\Gamma(\lambda_{\alpha}<0)}\,,\ \ \beta\equiv\frac{\Gamma(\lambda_{\beta}>0)-\Gamma(\lambda_{\beta}<0)}{\Gamma( \lambda_{\beta}>0)+\Gamma(\lambda_{\beta}<0)}\,,\ \ \ \gamma\equiv\frac{\Gamma(\lambda_{\gamma}>0)-\Gamma(\lambda_{\gamma}<0)}{ \Gamma(\lambda_{\gamma}>0)+\Gamma(\lambda_{\gamma}<0)}\,, \tag{5}\]
which affect the cascade decay distributions as we will demonstrate in the next section. Clearly, \(\hat{\alpha}\) is a helicity operator and \(\hat{\beta}\) describes the T-odd spin correlation.
Since \(\hat{\alpha}\) and \(\hat{\beta}\) are both P-odd, \(\lambda_{\alpha}\) and \(\lambda_{\beta}\) would flip signs under the parity transformation. On the other hand, \(\hat{\beta}\) is T-odd and \(\beta\) is a naively T-odd observable1. In general, a nonzero
value of \(\beta\) can be generated by the oscillations between \(|\pm\lambda_{\beta}\rangle\) through final state interactions. We utilize the CPT symmetry and define the true T-odd observables as
\[\beta_{w}=(\beta\Gamma+\overline{\beta}\,\overline{\Gamma})/(\Gamma+\overline{ \Gamma})\,, \tag{6}\]
where overlines denote the charge conjugates. We emphasize that \(\beta_{w}\) is clearly also a CP violating observable. The other CP violating observables can be defined to be
\[{\cal A}_{\rm dir}=(\Gamma-\overline{\Gamma})/(\Gamma+\overline{\Gamma})\,, \hskip 14.226378pt\alpha_{w}=(\alpha\Gamma+\overline{\alpha}\,\overline{ \Gamma})/(\Gamma+\overline{\Gamma})\,,\hskip 14.226378pt\gamma_{w}=(\gamma \Gamma-\overline{\gamma}\,\overline{\Gamma})/(\Gamma+\overline{\Gamma})\,, \tag{7}\]
where the signs in Eqs. (6) and (7) are chosen according to the parities of the corresponding operators. To be explicit, \(\hat{\alpha}\) and \(\hat{\beta}\) are both P-odd, leading to the plus signs, whereas \(\hat{\gamma}\) is P-even, resulting in the minus sign. On the other hand, the CP-even observables are
\[\beta_{s}=(\beta\Gamma-\overline{\beta}\,\overline{\Gamma})/(\Gamma+\overline{ \Gamma})\,,\hskip 14.226378pt\alpha_{s}=(\alpha\Gamma-\overline{\alpha}\, \overline{\Gamma})/(\Gamma+\overline{\Gamma})\,,\hskip 14.226378pt\gamma_{s}=( \gamma\Gamma+\overline{\gamma}\,\overline{\Gamma})/(\Gamma+\overline{\Gamma})\,, \tag{8}\]
where the subscripts of \(w\) and \(s\) denote CP-odd and CP-even, respectively.
Clearly, to calculate the asymmetries in Eq. (3), it is necessary to obtain the eigenstates of \(\hat{\alpha}\), \(\hat{\beta}\) and \(\hat{\gamma}\). To this end, we start with \(\hat{\alpha}\) and express the others as linear combinations of \(|\lambda_{\alpha}\rangle\). With \(J=0\), the helicity states are given as
\[|\lambda_{\alpha}=\pm 1/2\rangle=\int_{0}^{2\pi}\int_{-1}^{1}R_{z}(\phi)R_{y}( \theta)\,|\hat{p}=\hat{z},\lambda_{1}=\lambda_{2}=\pm 1/2\rangle\,d\cos\theta d \phi\,, \tag{9}\]
where \(\lambda_{1(2)}\) is the helicity of \({\bf B}_{1}\) (\(\overline{{\bf B}}_{2}\)), and \(R_{i}\) is the rotation operator about the \(i\)th axis. Note that the operators in Eq. (4) commute with \(R_{i}\) and [21]
\[(s_{j})_{\pm}|\hat{p}=\hat{z},\lambda_{j}=\mp 1/2\rangle=|\hat{p}=\hat{z}, \lambda_{j}=\pm 1/2\rangle\hskip 14.226378pt\mbox{for}\hskip 5.690551ptj\in\{1,2 \}\,, \tag{10}\]
with \((s_{j})_{\pm}=(s_{j})_{x}\pm i(s_{j})_{y}\). Together with Eqs. (4), (9) and (10), the eigenstates of \(\hat{\beta}\) and \(\hat{\gamma}\) are then given as
\[|\lambda_{\beta}=\pm 1/2\rangle = \frac{1}{\sqrt{2}}\left(|\lambda_{\alpha}=1/2\rangle\mp i| \lambda_{\alpha}=-1/2\rangle\right)\,,\] \[|\lambda_{\gamma}=\pm 1/2\rangle = \frac{1}{\sqrt{2}}\left(|\lambda_{\alpha}=1/2\rangle\pm|\lambda_ {\alpha}=-1/2\rangle\right)\,. \tag{11}\]
The asymmetry parameters defined in Eq. (3) are then given as
\[\alpha=\frac{|H_{+}|^{2}-|H_{-}|^{2}}{|H_{+}|^{2}+|H_{-}|^{2}}\,,\hskip 14.226378pt \beta=\frac{2\Im(H_{-}^{*}H_{+})}{|H_{+}|^{2}+|H_{-}|^{2}}\,,\hskip 14.226378pt \gamma=\frac{2\Re(H_{-}^{*}H_{+})}{|H_{+}|^{2}+|H_{-}|^{2}}\,, \tag{12}\]
with the identity
\[1-\left(\alpha^{2}+\beta^{2}+\gamma^{2}\right)=0\,. \tag{13}\]
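As a quick cross-check (ours, not part of the original derivation), Eq. (13) follows directly from Eq. (12): writing \(a=|H_{+}|^{2}\) and \(b=|H_{-}|^{2}\),

\[\alpha^{2}+\beta^{2}+\gamma^{2}=\frac{(a-b)^{2}+4\,\Im^{2}(H_{-}^{*}H_{+})+4\,\Re^{2}(H_{-}^{*}H_{+})}{(a+b)^{2}}=\frac{(a-b)^{2}+4ab}{(a+b)^{2}}=1\,.\]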
Here, \(H_{\pm}\) stand for the helicity amplitudes, defined by
\[H_{\pm}=\langle\lambda_{\alpha}=\pm 1/2|i{\cal H}_{eff}|B\rangle=\sum_{j}H_{j}^{ \pm}\exp\left(i\phi_{j_{s}}^{\pm}+i\phi_{j_{w}}^{\pm}\right)\,, \tag{14}\]
where \({\cal H}_{eff}\) is the effective Hamiltonian, \(H_{j}^{\pm}\) are real, and \(\phi_{j_{s}}^{\pm}\) and \(\phi_{j_{w}}^{\pm}\) are the strong and weak CP phases, respectively. The charge conjugate ones can be obtained by taking the CP transformation
\[\overline{H}_{\mp}=\langle\lambda_{\alpha}=\mp 1/2|i{\cal H}_{eff}|\overline{B} \rangle=\sum_{j}H_{j}^{\pm}\exp\left(i\phi_{j_{s}}^{\pm}-i\phi_{j_{w}}^{\pm} \right)\,. \tag{15}\]
Notice that \(\lambda_{\alpha}\) flips sign after the CP transformation.
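To make the bookkeeping explicit, the following sketch (ours; the amplitude values are illustrative, not fitted) builds the \(t=0\) observables of Eqs. (6)-(8) from helicity amplitudes of the form in Eqs. (14)-(15), with two interfering contributions per helicity so that a direct CP asymmetry can arise.

```python
# Sketch: t=0 decay parameters and CP observables from helicity amplitudes.
import numpy as np

def amp(c1, c2, ds, dw):
    """Two interfering contributions with strong phases ds and weak phases dw,
    as in Eq. (14)."""
    return c1 * np.exp(1j * (ds[0] + dw[0])) + c2 * np.exp(1j * (ds[1] + dw[1]))

def decay_params(Hp, Hm):
    """alpha, beta, gamma of Eq. (12) and the rate factor |H+|^2 + |H-|^2."""
    norm = abs(Hp)**2 + abs(Hm)**2
    alpha = (abs(Hp)**2 - abs(Hm)**2) / norm
    beta = 2 * np.imag(np.conj(Hm) * Hp) / norm
    gamma = 2 * np.real(np.conj(Hm) * Hp) / norm
    return alpha, beta, gamma, norm

ds, dw = (0.0, 0.8), (0.0, 1.2)      # illustrative strong / weak phases
Hp, Hm = amp(1.0, 0.4, ds, dw), amp(0.6, 0.3, ds, dw)
# CP conjugation (Eq. (15)): weak phases flip sign, helicity labels swap.
neg = (-dw[0], -dw[1])
Hm_bar, Hp_bar = amp(1.0, 0.4, ds, neg), amp(0.6, 0.3, ds, neg)

a, b, g, G = decay_params(Hp, Hm)
ab, bb, gb, Gb = decay_params(Hp_bar, Hm_bar)
assert abs(a**2 + b**2 + g**2 - 1) < 1e-12       # identity of Eq. (13)

A_dir = (G - Gb) / (G + Gb)                      # Eq. (7)
alpha_w = (a * G + ab * Gb) / (G + Gb)           # CP-odd, Eq. (7)
beta_w = (b * G + bb * Gb) / (G + Gb)            # CP-odd, Eq. (6)
gamma_w = (g * G - gb * Gb) / (G + Gb)           # CP-odd, Eq. (7)
print(A_dir, alpha_w, beta_w, gamma_w)
```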
For \(B_{(s)}^{0}\), the decay parameters in Eq. (12) depend on time due to the \(B-\overline{B}\) mixing, given by
\[|B_{(s)}^{0}(t)\rangle=g_{+}(t)|B_{(s)}^{0}(t=0)\rangle+\frac{q}{ p}g_{-}(t)|\overline{B}_{(s)}^{0}(t=0)\rangle\,,\] \[|\overline{B}_{(s)}^{0}(t)\rangle=\frac{p}{q}g_{-}(t)|B_{(s)}^{0 }(t=0)\rangle+g_{+}(t)|\overline{B}_{(s)}^{0}(t=0)\rangle\,, \tag{16}\]
where \(p\) and \(q\) are the mixing parameters, and
\[g_{+}(t)\pm g_{-}(t)=e^{(-\Gamma\mp\Gamma_{\Delta}/2)t/2}e^{-i(M\pm M_{\Delta}/2)t}\,. \tag{17}\]
The parameters associated with the masses and the decay widths are defined as
\[M=(M_{H}+M_{L})/2\,,\quad\Gamma=(\Gamma_{H}+\Gamma_{L})/2\,,\] \[M_{\Delta}=M_{H}-M_{L}\,,\quad\Gamma_{\Delta}=\Gamma_{L}-\Gamma _{H}\,, \tag{18}\]
where \(M_{L,H}\) and \(\Gamma_{L,H}\) are the masses and total decay widths of the light and heavy neutral \(B\) mesons, respectively. Clearly, \(g_{\pm}(t)\), \(M_{H,L}\) and \(\Gamma_{H,L}\) depend on whether \(B^{0}\) or \(B_{s}^{0}\) is in question. We do not explicitly show this dependence as long as it does not cause confusion. In this work, we take \(q=p\), corresponding to CP conservation in the oscillations, which introduces errors at the \(O(10^{-3})\) level. In future studies, this approximation can be dropped if higher precision is desired.
Using Eqs. (14)-(16), we arrive at
\[\langle\lambda_{\alpha}=\pm 1/2|i{\cal H}_{eff}|B^{0}_{(s)}(t)\rangle = g_{+}(t)H_{\pm}+g_{-}(t)\overline{H}_{\pm}\,,\] \[\langle\lambda_{\alpha}=\pm 1/2|i{\cal H}_{eff}|\overline{B}^{0}_{(s) }(t)\rangle = g_{+}(t)\overline{H}_{\pm}+g_{-}(t)H_{\pm}\,, \tag{19}\]
leading to
\[D(t) = (|g_{+}|^{2}+|g_{-}|^{2})D+4\Re(g_{+}g_{-}^{*})\left(\Re(H_{+}\overline{H}_{+}^{*}+H_{-}\overline{H}_{-}^{*})\right)\,,\] \[{\cal A}_{\rm dir}(t) = \left[(|g_{+}|^{2}-|g_{-}|^{2}){\cal A}_{\rm dir}(0)D+4\Im(g_{+}g_{-}^{*})\left(\Im(H_{+}\overline{H}_{+}^{*}+H_{-}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\alpha_{s}(t) = \left[(|g_{+}|^{2}-|g_{-}|^{2})\alpha_{s}(0)D+4\Im(g_{-}g_{+}^{*})\left(\Im(H_{+}\overline{H}_{+}^{*}-H_{-}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\alpha_{w}(t) = \left[(|g_{+}|^{2}+|g_{-}|^{2})\alpha_{w}(0)D+4\Re(g_{+}g_{-}^{*})\left(\Re(H_{+}\overline{H}_{+}^{*}-H_{-}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\beta_{s}(t) = \left[(|g_{+}|^{2}-|g_{-}|^{2})\beta_{s}(0)D+4\Im(g_{+}^{*}g_{-})\left(\Re(H_{-}^{*}\overline{H}_{+}-H_{+}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\beta_{w}(t) = \left[(|g_{+}|^{2}+|g_{-}|^{2})\beta_{w}(0)D+4\Re(g_{+}^{*}g_{-})\left(\Im(H_{-}^{*}\overline{H}_{+}+H_{+}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\gamma_{s}(t) = \left[(|g_{+}|^{2}+|g_{-}|^{2})\gamma_{s}(0)D+4\Re(g_{-}g_{+}^{*})\left(\Re(H_{-}^{*}\overline{H}_{+}+H_{+}\overline{H}_{-}^{*})\right)\right]/D(t)\,,\] \[\gamma_{w}(t) = \left[(|g_{+}|^{2}-|g_{-}|^{2})\gamma_{w}(0)D-4\Im(g_{-}g_{+}^{*})\left(\Im(H_{-}^{*}\overline{H}_{+}-H_{+}\overline{H}_{-}^{*})\right)\right]/D(t)\,, \tag{20}\]
where the denominator is \(D=|H_{+}|^{2}+|H_{-}|^{2}+|\overline{H}_{+}|^{2}+|\overline{H}_{-}|^{2}\). In the experiments, the measured quantities correspond to the ones averaged from \(t_{1}\) to \(t_{2}\). By taking \(t_{1}=0\) and \(t_{2}\to\infty\), we find that
\[\langle|g_{+}|^{2}+|g_{-}|^{2}\rangle=\frac{1}{\Gamma}\frac{4}{4-x^{2}}\,,\qquad\langle|g_{+}|^{2}-|g_{-}|^{2}\rangle=\frac{1}{\Gamma}\frac{1}{1+y^{2}}\,,\] \[\langle g_{+}(t)g_{-}^{*}(t)\rangle=\frac{1}{\Gamma}\left(\frac{-x}{4-x^{2}}-\frac{iy}{1+y^{2}}\right)\,, \tag{21}\]
with \((x,y)=(\Gamma_{\Delta}/\Gamma,M_{\Delta}/\Gamma)\). In this work, the values of the oscillation parameters are taken from the Particle Data Group (PDG) [7], given by
\[(x,y)_{B^{0}}=(0.001,0.77)\,,\quad(x,y)_{B_{s}^{0}}=(0.128,27)\,. \tag{22}\]
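The time averages in Eq. (21) can be checked numerically; the sketch below (ours) integrates the \(g_{\pm}(t)\) of Eq. (17) on a fine grid and compares the two decay-rate-weighted averages against their closed forms for the PDG values of Eq. (22).

```python
# Numerical cross-check of the first two averages in Eq. (21).
import numpy as np

def averaged_factors(x, y, Gamma=1.0):
    G_D, M_D = x * Gamma, y * Gamma                 # Gamma_Delta and M_Delta
    dt = 1e-4 / Gamma                               # resolves the fast B_s oscillation
    t = np.arange(0.0, 60.0 / Gamma, dt)
    # g_+ +/- g_- from Eq. (17); the overall e^{-iMt} phase cancels in |g|^2.
    a = np.exp((-Gamma - G_D / 2) * t / 2 - 1j * (M_D / 2) * t)
    b = np.exp((-Gamma + G_D / 2) * t / 2 + 1j * (M_D / 2) * t)
    gp, gm = (a + b) / 2, (a - b) / 2

    def integral(f):                                # trapezoidal rule
        return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

    plus = integral(np.abs(gp)**2 + np.abs(gm)**2)
    minus = integral(np.abs(gp)**2 - np.abs(gm)**2)
    return Gamma * plus, Gamma * minus

for label, (x, y) in {"B0": (0.001, 0.77), "Bs0": (0.128, 27.0)}.items():
    p, m = averaged_factors(x, y)
    print(f"{label}: {p:.4f} vs {4/(4 - x**2):.4f},  {m:.4f} vs {1/(1 + y**2):.4f}")
```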
## III Angular distributions
From Eq. (9), the decay distributions of \(B\to{\bf B}_{1}\overline{\bf B}_{2}\) are trivial since \(B\) is spinless. Hence, the asymmetry parameters defined in Eq. (12) are essentially described by the spin correlations between \({\bf B}_{1}\) and \(\overline{\bf B}_{2}\). In the LHCb experiments, spins cannot be measured directly, and we therefore seek the spin effects in their cascade decays.
If \({\bf B}_{1}\in\{\Xi^{0,-},\Sigma^{\pm},\Lambda\}\), it subsequently decays to an octet baryon (\({\bf B}^{\prime}_{1}\)) and a pion, while \(H_{\pm}\) interfere via the cascade decays. The partial decay widths are proportional to
\[\frac{\partial\Gamma}{\partial\cos\theta_{1}}\propto\sum_{\lambda_{1}}\left| \sum_{\lambda}H_{\lambda}A_{1,\lambda_{1}}d^{\frac{1}{2}}(\theta_{1})^{\lambda }\,_{\lambda_{1}}\right|^{2}\,, \tag{23}\]
with \(\theta_{1}\) defined as the polar angle in the \({\bf B}_{1}\) helicity frame (see FIG. 1), resulting in
\[{\cal D}_{1}(\theta_{1})\equiv\frac{1}{\Gamma}\frac{\partial\Gamma}{\partial \cos\theta_{1}}=\frac{1}{2}\left(1+\alpha\alpha_{1}\cos\theta_{1}\right)\,, \tag{24}\]
where we have used \(|A_{1,\pm}|^{2}=(1\pm\alpha_{1})/2\) in the last equation with \(\alpha_{1}\) the up-down asymmetry parameter for \({\bf B}_{1}\to{\bf B}^{\prime}_{1}\pi\), and \(d^{J}(\theta)^{M}\,_{N}\) is the Wigner-\(d\) matrix, defined by \(d^{J}(\theta)^{M}\,_{N}\equiv\langle J,M|R_{y}(\theta)|J,N\rangle\). On the other hand, if \(\overline{\bf B}_{2}\in\{\overline{\Xi}^{0,+},\overline{\Sigma}^{\pm}, \overline{\Lambda}\}\), it would sequentially decay to \(\overline{\bf B}^{\prime}_{2}\pi\), and the angular distributions are given as
\[{\cal D}_{2}(\theta_{2})\equiv\frac{1}{\Gamma}\frac{\partial^{2}\Gamma}{ \partial\cos\theta_{2}\partial\phi_{2}}=\frac{1}{4\pi}\left(1+\alpha\overline {\alpha}_{2}\cos\theta_{2}\right)\,, \tag{25}\]
where \(\overline{\alpha}_{2}\) is the up-down asymmetry parameter for \(\overline{\bf B}_{2}\to\overline{\bf B}^{\prime}_{2}\pi\). Here, \(\theta_{1,2}\) are defined as the angles between \(\vec{p}_{1,2}\) and \(\vec{p}^{\,\prime}_{1,2}\) with \(\vec{p}^{\,(\prime)}_{1,2}\) the 3-momentum of \({\bf B}^{(\prime)}_{1,2}\); see FIG. 1 with \(B^{0}_{s}\to\Lambda\overline{\Lambda}\) for illustration.
When \({\bf B}_{1}\) and \(\overline{\bf B}_{2}\) both decay subsequently, there would be three independent 3-momenta in the final states and it is possible to define an azimuthal angle. The angular distributions for \(B\to{\bf B}_{1}(\to{\bf B}^{\prime}_{1}\pi)\overline{\bf B}_{2}(\to\overline{ \bf B}^{\prime}_{2}\pi)\) are given as
\[{\cal D}(\vec{\Omega}) \equiv \frac{1}{\Gamma}\frac{\partial^{3}\Gamma}{\partial\Phi\,\partial\cos\theta_{1}\,\partial\cos\theta_{2}} \tag{26}\] \[= \frac{1}{8\pi}\left[1+\alpha_{1}\overline{\alpha}_{2}\cos\theta_{1}\cos\theta_{2}+\alpha(\alpha_{1}\cos\theta_{1}+\overline{\alpha}_{2}\cos\theta_{2})\right.\] \[\left.+2\alpha_{1}\overline{\alpha}_{2}\sin\theta_{1}\sin\theta_{2}\left(\gamma\cos\Phi-\beta\sin\Phi\right)\right],\]
where \(\theta_{1,2}\) are defined in the same way as before. By integrating over \(\theta_{1,2}\), we find
\[\frac{1}{\Gamma}\frac{\partial\Gamma}{\partial\Phi}\propto 1+\frac{\pi^{2}}{8}\alpha_{1}\overline{\alpha}_{2}(\gamma\cos\Phi-\beta\sin\Phi)\,. \tag{27}\]
We see that \(\gamma\) and \(\beta\) appear in the double cascade decays and can be measured in experiments. It is straightforward to see that, by integrating over \(\phi_{1(2)}\) and \(\theta_{1(2)}\), \(\mathcal{D}(\vec{\Omega})\) reduces to \(\mathcal{D}_{1(2)}(\theta_{1(2)})\) as expected.
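As an illustration of how \(\beta\) and \(\gamma\) could be extracted in practice, the toy Monte Carlo below (ours; all parameter values are illustrative) generates events from Eq. (27) and recovers the inputs from the first azimuthal moments, using \(\langle\cos\Phi\rangle=k\gamma/2\) and \(\langle\sin\Phi\rangle=-k\beta/2\) with \(k=\pi^{2}\alpha_{1}\overline{\alpha}_{2}/8\).

```python
# Toy MC for the azimuthal distribution of Eq. (27).
import numpy as np

rng = np.random.default_rng(1)
alpha1, alpha2bar = 0.75, -0.75        # daughter up-down asymmetries (illustrative)
beta, gamma = 0.30, 0.55               # parameters to be recovered (illustrative)
k = (np.pi**2 / 8) * alpha1 * alpha2bar

# Rejection sampling from pdf(Phi) ~ 1 + k*(gamma*cos(Phi) - beta*sin(Phi))
n = 2_000_000
fmax = 1 + abs(k) * np.hypot(beta, gamma)
phi = rng.uniform(0, 2 * np.pi, n)
keep = rng.uniform(0, fmax, n) < 1 + k * (gamma * np.cos(phi) - beta * np.sin(phi))
phi = phi[keep]

gamma_hat = 2 * np.cos(phi).mean() / k
beta_hat = -2 * np.sin(phi).mean() / k
print(f"gamma: in {gamma}, out {gamma_hat:.3f};  beta: in {beta}, out {beta_hat:.3f}")
```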
## IV Model calculations
In this section, we estimate the decay parameters in the SM through model calculations. In addition to \(B^{0}\to p\overline{p}\) and \(B^{+}\to p\overline{\Lambda}\), we also consider the processes \(B^{0}_{s}\to\Lambda\overline{\Lambda}\) and \(B^{0}_{s}\to\Sigma^{+}\overline{\Sigma}^{-}\), which are promising to be measured in the near future. The considered quark diagrams are shown in FIG. 2, where the left one is factorizable whereas the right one is not, denoted by \(\mathcal{A}_{P}\) and \(\mathcal{A}_{T}\), respectively. For the \(b\to f\) transition at the quark level, we have
\[\mathcal{A}_{P}=\zeta\langle\mathbf{B}_{1};\overline{\mathbf{B}} _{2}|\overline{q}_{B}(1+\gamma_{5})f|0\rangle=\zeta\overline{u}_{1}\left(F(q^ {2})+F_{5}(q^{2})\gamma_{5}\right)v_{2}\,,\] \[\zeta=\frac{G_{F}}{\sqrt{2}}V_{tb}^{*}V_{tf}f_{B}\left(c_{6}+c_{5 }/3\right)\frac{2m_{B}^{2}}{m_{b}} \tag{28}\]
where \(q_{B}=(u,d,s)\) is the light flavor of the \(B\) meson, \(G_{F}\) corresponds to the Fermi constant, \(V\) is the Cabibbo-Kobayashi-Maskawa matrix, \(f_{B}\) (\(m_{B}\)) is the decay constant (mass) of \(B\), \(m_{b}\approx 4.5\) GeV is the \(b\) quark mass, \(u_{1}\) (\(v_{2}\)) is the Dirac spinor of \({\bf B}_{1}\) (\(\overline{\bf B}_{2}\)), and \(F_{(5)}(q^{2})\) is the (pseudo)scalar form factor. In our previous work [17], we used the crossing symmetry and analytical continuation to calculate \(F_{(5)}(q^{2})\), assuming that no singularities exist in the \(q^{2}\) dependence of the form factors.
In this work, we adopt the \({}^{3}P_{0}\) model to calculate the form factors. The model asserts that the creation of valence quarks can be approximated by a scalar operator in the limit of hadrons at rest; see Refs. [22; 23; 24; 25; 26] for instance. To accommodate the fact that \({\bf B}_{1}\) and \(\overline{\bf B}_{2}\) in \(B\) meson decays are far from rest, we calculate the form factors at \(\vec{v}=0\) and take their dependence on \(\vec{v}\) as [27]
\[F_{(5)}(\gamma) = \frac{{\cal F}_{(5)}}{1-5.29\gamma+7.07\gamma^{2}-0.31\gamma^{3} +0.41\gamma^{4}}\,, \tag{29}\]
where \(\gamma=1/\sqrt{1-v^{2}}\), \(\vec{v}\) is the velocity of \({\bf B}_{1}\) in the Breit frame, \({\cal F}_{(5)}\) is a constant to be fitted at \(\gamma=1\) from the \({}^{3}P_{0}\) model, and the coefficients \((-5.29,7.07,-0.31,0.41)\) are extracted from the experiments of \(e^{-}e^{+}\to p\overline{p}\).
The method of calculating \({\cal F}_{(5)}\) is similar to the one used in \(J/\psi\to\Lambda\overline{\Sigma}^{0}\)[28] and we briefly quote the formalism used in the numerical evaluations here. We define the amplitudes
\[A^{{\bf B}_{1}\overline{\bf B}_{2}} \equiv \zeta\langle{\bf B}_{1},\uparrow;\overline{\bf B}_{2},\uparrow\mid\overline{q}_{B}f|0\rangle\,,\] \[A^{{\bf B}_{1}\overline{\bf B}_{2}}_{5} \equiv \zeta\langle{\bf B}_{1},\uparrow;\overline{\bf B}_{2},\uparrow\mid\overline{q}_{B}\gamma_{5}f|0\rangle\,. \tag{30}\]
The factorizable parts of the helicity amplitudes are then given by \(H_{\pm}^{fac}=A^{{\bf B}_{1}\overline{\bf B}_{2}}\pm A^{{\bf B}_{1}\overline {\bf B}_{2}}_{5}\). For \(B^{+}\to p\overline{\Lambda}\) as an example, we find [29]
\[A^{p\overline{\Lambda}}_{(5)} = \gamma\zeta\gamma_{q}^{2}\frac{N^{2}}{2}\int d^{3}\vec{x}_{\Delta }\Gamma^{(5)}_{\uparrow\downarrow}\Big{[}\big{(}2E^{u}_{\uparrow\uparrow}E^{ d}_{\downarrow\downarrow}+E^{u}_{\downarrow\downarrow}E^{d}_{\uparrow \uparrow}-2E^{u}_{\uparrow\downarrow}E^{d}_{\downarrow\uparrow} \tag{31}\] \[-E^{u}_{\downarrow\uparrow}E^{d}_{\uparrow\downarrow}+\Gamma^{(5 )}_{\uparrow\uparrow}\left(E^{u}_{\downarrow\uparrow}E^{d}_{\downarrow \downarrow}-E^{d}_{\downarrow\downarrow}E^{u}_{\downarrow\uparrow}\right)\Big{]}\,,\]
where \(N\) is the normalization constant, the \(\vec{x}_{\Delta}\) dependences of \(E(\vec{x}_{\Delta})\) and \(\Gamma(\vec{x}_{\Delta})\) have not been written out explicitly, and \(\gamma_{q}\approx 0.3\) is the strength of the quark pair production in the \({}^{3}P_{0}\) model. Here, \(\Gamma_{\lambda_{1}\lambda_{2}}\) represents the overlapping of the quarks which participate in the weak interactions, whereas \(E^{q}_{\lambda_{1}\lambda_{2}}\) represents that of the spectator quark. For \(\vec{v}=0\), the formalism reduces to
\[\Gamma_{\uparrow\downarrow}(\vec{x}_{\Delta})=E_{\uparrow\downarrow }(\vec{x}_{\Delta})\,,\quad\Gamma_{\uparrow\uparrow}(\vec{x}_{\Delta})=E_{ \uparrow\uparrow}(\vec{x}_{\Delta})\,,\quad\Gamma^{5}_{\uparrow\uparrow}( \vec{x}_{\Delta})=0\,,\] \[\Gamma^{5}_{\uparrow\downarrow}(\vec{x}_{\Delta})=-2\pi\int\rho d \rho dz\left[u_{+}u_{-}+v_{+}v_{-}(\rho^{2}+z_{+}z_{-})\right]\,, \tag{32}\]
where the explicit forms of \(N\), \(E(\vec{x}_{\Delta})\), \(u_{\pm}\), \(v_{\pm}\) and \(z_{\pm}\) and the computing techniques can be
found in Ref. [28]. Likewise, the expressions for \(B^{0}\to p\overline{p}\), \(B_{s}^{0}\to\Lambda\overline{\Lambda}\) and \(B_{s}^{0}\to\Sigma^{+}\overline{\Sigma}^{-}\) are
\[A_{(5)}^{p\overline{p}} = \zeta\gamma_{q}^{2}\frac{N^{2}}{2}\int d^{3}\vec{x}_{\Delta}(4E_{ \downarrow\uparrow}^{u}E_{\uparrow\uparrow}^{u}\Gamma_{\downarrow\downarrow}^ {(5)}+4E_{\downarrow\uparrow}^{u}E_{\downarrow\downarrow}^{u}\Gamma_{ \uparrow\uparrow}^{(5)}\] \[-4E_{\downarrow\uparrow}^{u}E_{\downarrow\uparrow}^{u}\Gamma_{ \uparrow\downarrow}^{(5)}-2\Gamma_{\downarrow\uparrow}^{(5)}(E_{\downarrow \uparrow}^{u}E_{\uparrow\downarrow}^{u}+E_{\uparrow\uparrow}^{u}E_{ \downarrow\downarrow}^{u}))\,,\] \[A_{(5)}^{\Lambda\overline{\Lambda}} = \zeta\gamma_{q}^{2}\frac{N^{2}}{2}\int d^{3}\vec{x}_{\Delta} \Gamma_{\downarrow\uparrow}^{(5)}\big{(}E_{\uparrow\uparrow}^{u}E_{\downarrow \downarrow}^{d}+E_{\downarrow\downarrow}^{u}E_{\uparrow\uparrow}^{d}\big{)}\,,\] \[A_{(5)}^{\Sigma^{+}\overline{\Sigma}^{-}} = \zeta\gamma_{q}^{2}\frac{N^{2}}{2}\int d^{3}\vec{x}_{\Delta}(4E_{ \downarrow\uparrow}^{u}E_{\uparrow\uparrow}^{u}\Gamma_{\downarrow\downarrow}^ {(5)}+4E_{\downarrow\uparrow}^{u}E_{\downarrow\downarrow}^{u}\Gamma_{ \uparrow\uparrow}^{(5)} \tag{33}\] \[-4E_{\downarrow\uparrow}^{u}E_{\downarrow\uparrow}^{u}\Gamma_{ \uparrow\downarrow}^{(5)}-2\Gamma_{\downarrow\uparrow}^{(5)}(E_{\downarrow \uparrow}^{u}E_{\uparrow\downarrow}^{u}+E_{\uparrow\uparrow}^{u}E_{ \downarrow\downarrow}^{u}))\,.\]
We note that we have assumed the \(SU(3)_{F}\) symmetry to simplify the formalism.
With the factorizable amplitudes, we find2
Footnote 2: We note that the results are consistent with the use of the crossing symmetry, where we found that \({\cal B}_{fac}(B^{+}\to p\overline{\Lambda})=(1.3\pm 0.1)\times 10^{-7}\), \({\cal B}_{fac}(B_{s}^{0}\to\Lambda\overline{\Lambda})=(1.6\pm 0.1)\times 10^{-7}\), \({\cal B}_{fac}(B_{s}^{0}\to\Sigma^{+}\overline{\Sigma}^{-})=(2.9\pm 0.2)\times 10^{-7}\) and \({\cal B}_{fac}(B^{0}\to p\overline{p})=0.2\times 10^{-8}\)[17]. The numerical results in this work serve as an illustration for the CP violating quantities without the discussions of the error analyses.
\[{\cal B}_{fac}(B^{+}\to p\overline{\Lambda})=1.57\times 10^{-7}\,, \ \ {\cal B}_{fac}(B_{s}^{0}\to\Lambda\overline{\Lambda})=2.6\times 10^{-7}\,,\] \[{\cal B}_{fac}(B_{s}^{0}\to\Sigma^{+}\overline{\Sigma}^{-})=2.17 \times 10^{-7}\,, \ \ {\cal B}_{fac}(B^{0}\to p\overline{p})=0.21\times 10^{-8}\,, \tag{34}\]
where the subscript in \({\cal B}_{fac}\) indicates that only the factorizable part of the amplitude is considered. In Eq. (34), \({\cal B}_{fac}(B^{+}\to p\overline{\Lambda})\) is compatible with the experimental data in Eq. (1), but \({\cal B}_{fac}(B^{0}\to p\overline{p})\) is 6 times smaller. The reason can be traced back to the fact that \({\cal A}_{T}\) has not been considered yet. For \(|V_{ub}^{*}V_{ud}|\gg|V_{ub}^{*}V_{us}|\) and \(|V_{tb}^{*}V_{td}|\ll|V_{tb}^{*}V_{ts}|\), \({\cal A}_{P}\) and \({\cal A}_{T}\) play the leading role in the \(b\to s\) and \(b\to d\) transitions, respectively.
For the nonfactorizable diagram on the right hand side of FIG. 2, we fit its amplitude from the experimental branching ratios in Eq. (1). For an estimation, we assume the relative complex phase between the two diagrams to be maximal3. The ratios of the nonfactorizable amplitudes among the channels are determined by the chiral selection rule described in Ref. [17]. The ones relevant to this work read
Footnote 3: The cases with vanishing relative strong phases are studied in Ref. [17], which would lead to zero \(A_{dir}\).
\[{\cal A}_{T}(B^{0}\to p\overline{p}):{\cal A}_{T}(B^{+}\to p \overline{\Lambda}):{\cal A}_{T}(B_{s}^{0}\to\Sigma^{+}\overline{\Sigma}^{-}): {\cal A}_{T}(B_{s}^{0}\to\Lambda\overline{\Lambda})=2:-\sqrt{6}:2:-3\,. \tag{35}\]
## V Numerical results

The numerical results at \(t=0\) and the time-averaged ones are given in Tables 1 and 2, respectively, where \({\cal B}_{av}\) stands for the CP-even part of the branching ratio and is unaffected by the \(B\) meson oscillations.
We note that there remain ambiguities in the strong phases, related by complex conjugation. The two possibilities would lead to an opposite sign in \(A_{dir}\) and may be determined by future experiments. In the tables, the two scenarios are presented in the upper and lower columns, respectively. For the \(b\to d\) transition, namely \(B^{0}\to p\overline{p}\), \(A_{dir}\) at \(t=0\) is found to be as large as \(-40.3\%\) or \(29.1\%\), whereas \(|A_{dir}|<7\%\) in general for the \(b\to s\) transitions, namely
\(B^{+}\to p\overline{\Lambda}\) and \(B^{0}_{s}\to\Sigma^{+}\overline{\Sigma}^{-},\Lambda\overline{\Lambda}\). More importantly, due to the violent oscillation (see Eq. (22)) between \(B^{0}_{s}\) and \(\overline{B}^{0}_{s}\), the differences between the \(B^{0}_{s}\) and \(\overline{B}^{0}_{s}\) decays in the branching ratios are washed out quickly, leading to tiny \(\langle A_{dir}\rangle\). In contrast, the \(B^{0}\) oscillation is much gentler, and we have \(A_{dir}(t=0)\approx\langle A_{dir}\rangle\). This argument is supported by the explicit calculations as well as Eqs. (20) and (21). The same suppression due to the oscillation is found also in \(\langle\gamma_{w}\rangle\) but not in \(\langle\alpha_{w}\rangle\) and \(\langle\beta_{w}\rangle\). We conclude that \(\langle\alpha_{w}\rangle\) and \(\langle\beta_{w}\rangle\) are ideal observables to be probed in the experiments. In particular, \((\langle\alpha_{w}\rangle,\langle\beta_{w}\rangle)\) for \(B^{0}_{s}\to\Lambda\overline{\Lambda}\) is estimated to be \((-2.2\%,0.5\%)\) or \((3.1\%,-0.7\%)\), and the relatively large branching ratio of \(B^{0}_{s}\to\Lambda\overline{\Lambda}\) would benefit the experimental measurement.
## VI Conclusions
We have studied the decay observables in \(B^{0}\to p\overline{p}\), \(B^{+}\to p\overline{\Lambda}(\to\overline{p}\pi^{+})\), \(B^{0}_{s}\to\Sigma^{+}(\to p\pi^{0})\overline{\Sigma}^{-}(\to\overline{p}\pi^{0})\) and \(B^{0}_{s}\to\Lambda(\to p\pi^{-})\overline{\Lambda}(\to\overline{p}\pi^{+})\). The spin-related CP-odd and CP-even quantities in \(B\to{\bf B}_{1}\overline{\bf B}_{2}\) have been examined. Though it is not possible to measure the spins directly at the current stage, we have shown that several quantities can be probed through the decay distributions of the cascade decays. In particular, \(B^{0}_{s}\to\Lambda(\to p\pi^{-})\overline{\Lambda}(\to\overline{p}\pi^{+})\) provides an excellent opportunity, as its final state particles are all charged.
We have estimated these quantities through the \({}^{3}P_{0}\) model and the chiral selection rule within the SM. The decay branching ratios are found to be \({\cal B}(B^{0}\to p\overline{p})=1.27\times 10^{-8}\) and \({\cal B}(B^{+}\to p\overline{\Lambda})=2.31\times 10^{-7}\), which are consistent with the current experimental data. On the other hand, \({\cal B}(B^{0}_{s}\to\Lambda\overline{\Lambda})\) is estimated to be around \(7.4\times 10^{-7}\), which is about 60 times larger than \({\cal B}(B^{0}\to p\overline{p})\), making it promising to be observed in the near future. In addition, we have shown that \(\langle A_{dir}\rangle\) is tiny for \(B^{0}_{s}\) due to the violent oscillation between \(B^{0}_{s}\) and \(\overline{B}^{0}_{s}\).
We suggest that future experiments visit \(B^{0}_{s}\to\Lambda(\to p\pi^{-})\overline{\Lambda}(\to\overline{p}\pi^{+})\), where the time-averaged CP-odd observables are estimated to be \((\langle\alpha_{w}\rangle,\langle\beta_{w}\rangle)=(-2.2\%,0.5\%)\) or \((\langle\alpha_{w}\rangle,\langle\beta_{w}\rangle)=(3.1\%,-0.7\%)\), depending on the sign of the strong phases.
###### Acknowledgements.
This work is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201501 and the National Natural Science Foundation
of China (NSFC) under Grant No. 12147103 and 12205063.
|
2301.09510 | First Light and Reionisation Epoch Simulations (FLARES) X: Environmental
Galaxy Bias and Survey Variance at High Redshift | Upcoming deep galaxy surveys with JWST will probe galaxy evolution during the
epoch of reionisation (EoR, $5\leq z\leq10$) over relatively compact areas
(e.g. $\sim$ 300\,arcmin$^2$ for the JADES GTO survey). It is therefore
imperative that we understand the degree of survey variance, to evaluate how
representative the galaxy populations in these studies will be. We use the
First Light And Reionisation Epoch Simulations (FLARES) to measure the galaxy
bias of various tracers over an unprecedentedly large range in overdensity for
a hydrodynamic simulation, and use these relations to assess the impact of bias
and clustering on survey variance in the EoR. Star formation is highly biased
relative to the underlying dark matter distribution, with the mean ratio of the
stellar to dark matter density varying by a factor of 100 between regions of
low and high matter overdensity (smoothed on a scale of 14$\,h^{-1}$cMpc). This
is reflected in the galaxy distribution --the most massive galaxies are found
solely in regions of high overdensity. As a consequence of the above, galaxies
in the EoR are highly clustered, which can lead to large variance in survey
number counts. For mean number counts $N\lesssim 100$ (1000), in a unit
redshift slice of angular area 300\,arcmin$^2$ (1.4\,deg$^2$), the 2-sigma
range in $N$ is roughly a factor of four (two). We present relations between
the expected variance and survey area for different survey geometries; these
relations will be of use to observers wishing to understand the impact of
survey variance on their results. | Peter A. Thomas, Christopher C. Lovell, Maxwell G. A. Maltz, Aswin P. Vijayan, Stephen M. Wilkins, Dimitrios Irodotou, William J. Roper, Louise Seeyave | 2023-01-23T16:02:06Z | http://arxiv.org/abs/2301.09510v2 | First Light and Reionisation Epoch Simulations (FLARES) X: Environmental Galaxy Bias and Survey Variance at High Redshift
###### Abstract
Upcoming deep galaxy surveys with _JWST_ will probe galaxy evolution during the epoch of reionisation (EoR, \(5\leq z\leq 10\)) over relatively compact areas (e.g. \(\sim 300\,\mathrm{arcmin}^{2}\) for the JADES GTO survey). It is therefore imperative that we understand the degree of survey variance, to evaluate how representative the galaxy populations in these studies will be. We use the First Light And Reionisation Epoch Simulations (Flares) to measure the galaxy bias of various tracers over an unprecedentedly large range in overdensity for a hydrodynamic simulation, and use these relations to assess the impact of bias and clustering on survey variance in the EoR. Star formation is highly biased relative to the underlying dark matter distribution, with the mean ratio of the stellar to dark matter density varying by a factor of 100 between regions of low and high matter overdensity (smoothed on a scale of \(14\,h^{-1}\mathrm{cMpc}\)). This is reflected in the galaxy distribution - the most massive galaxies are found solely in regions of high overdensity. As a consequence of the above, galaxies in the EoR are highly clustered, which can lead to large variance in survey number counts. For mean number counts \(N\lesssim 100\) (1000), in a unit redshift slice of angular area \(300\,\mathrm{arcmin}^{2}\) (\(1.4\,\mathrm{deg}^{2}\)), the 2-sigma range in \(N\) is roughly a factor of four (two). We present relations between the expected variance and survey area for different survey geometries; these relations will be of use to observers wishing to understand the impact of survey variance on their results.
keywords: galaxies: high-redshift - galaxies: luminosity function, mass function
## 1 Introduction
This paper investigates the clustering and bias of galaxies in the Epoch of Reionisation (EoR), \(5\leq z\leq 10\), using the First Light and Reionisation Epoch Simulations (Lovell et al., 2021, hereafter Flares-I). We show that these can lead to variations in the number counts of upcoming galaxy surveys in the EoR (95 percentile range) of factors of around \(2-4\).
Galaxies form within dark matter haloes, which themselves form at the peaks of the density field (smoothed on the halo mass scale) and which are overdense with respect to the background (Zeldovich et al., 1982; Kaiser, 1984). In the early Universe especially, those peaks rely on contributions from a wide range of scales (e.g. Bardeen et al., 1986) and can therefore only be properly represented in a region of very large extent. The non-linear relationship between galaxies and the underlying matter distribution is known as _galaxy bias_, a term which is also used more generally to describe the relation between a range of different galaxy tracers and the underlying matter distribution (see a review by Desjacques et al., 2018).
_Survey variance1_ describes the uncertainty in observed estimates of galaxy number densities that arises from spatial variation within different survey volumes: both clustering of dark matter and galaxy bias contribute to this effect. The choice of survey area and geometry is closely linked to the amplitude of these variations and can give rise to significant variation in the measured number counts.
Footnote 1: We avoid the oft-used term _cosmic variance_, which more accurately describes the uncertainty arising from having a single observable universe.
A combination of dark matter only (DMO) simulations and analytic models is a computationally efficient means of assessing the variance over large volumes. These tend to connect haloes to galaxies given some mass-luminosity relation, or using some abundance matching prescription (e.g. Newman and Davis, 2002; Somerville et al., 2004; Stark et al., 2007; Trenti and Stiavelli, 2008; Moster et al., 2011). Ideally, however, one would use a more astrophysical, semi-analytic model (SAM, for a comparative review see Knebe et al., 2018), for which simulations with sufficient resolution are limited in size. The most well-known and well-used of these is the Millennium Simulation (Springel et al., 2005), which has a volume of just \((500\,h^{-1}\mathrm{cMpc})^{3}\), for which estimates of survey variance out to \(z=5\) were undertaken by Kitzbichler and White (2007). Larger volumes are available at lower resolution (see, e.g., Kim et al., 2009; Angulo et al., 2012; Maksimova et al., 2021), more suitable for use with (sub)halo abundance matching.
To model galaxies more accurately, hydrodynamic simulations are required, and these have even more limited extent. The first to be widely used were Illustris and Eagle (Genel et al., 2014; Schaye et al., 2015, respectively), both of order \((70\,h^{-1}\mathrm{cMpc})^{3}\), followed by Simba (Davé et al., 2019) at \((100\,h^{-1}\mathrm{cMpc})^{3}\) and Illustris-TNG (Nelson et al., 2017; Pillepich et al., 2017) at \((200\,h^{-1}\mathrm{cMpc})^{3}\). Perhaps the most ambitious in this respect is the large-scale simulation BlueTides (Feng et al., 2015), which simulated \((400\,h^{-1}\mathrm{cMpc})^{3}\) (still less than the Millennium Simulation) but only down to \(z\approx 8\).
To overcome this limitation requires new approaches. One such approach is zoom simulations, which run hydrodynamics at high resolution in selected regions of very large, low resolution, DMO simulations (e.g. Katz and White, 1993; Bahé et al., 2017; Barnes et al., 2017, which all concentrated on massive clusters). Flares built on this approach to simulate galaxy formation in a wide range of environments (following the approach adopted in Crain et al., 2009) within a \((2.2\,h^{-1}\mathrm{cGpc})^{3}\) box. It resimulates 40 regions with a wide range of overdensities, allowing us both to capture the very high overdensity environments within which the first galaxies will form, and also to investigate in detail the dependence of galaxy formation upon environment.
In recent years a number of multiwavelength surveys have measured the abundances and properties of galaxies at high redshift (e.g. Gonzalez et al., 2011; Duncan et al., 2014; Song et al., 2016; Stefanon et al., 2017; Bhatawdekar et al., 2019). Deep galaxy surveys using _JWST_ over the coming years will measure many of these functions to much greater depth, increasing the redshift and dynamic range probed, e.g.: CEERS (Bagley et al., 2022), COSMOS-Web (Casey et al., 2022), JADES (Rieke, 2020) and PRIMER (Dunlop et al., 2021). These surveys will cover areas in the range \(100-2000\,\mathrm{arcmin}^{2}\), and one of the purposes of this paper is to estimate the effect of survey variance on the expected number counts. This is particularly pertinent given the recent discovery of massive galaxy candidates at very high redshifts in relatively small early fields (e.g. Donnan et al., 2022; Labbé et al., 2022; Adams et al., 2022; Harikane et al., 2022; Rodighiero et al., 2022; Naidu et al., 2022).
A number of studies have used analytic methods to estimate survey variance at these high redshifts (Trapp and Furlanetto, 2020; Trapp et al., 2022; Einasto et al., 2023), and have shown that the normalisation and slope of measured luminosity functions can be significantly affected. However, these studies use simplified models to map galaxies onto dark matter haloes, which have not been tested in this regime. Also, they are presented in a way that is hard to relate to the population of galaxies likely to be observed in deep surveys. One purpose of this paper is to investigate the relationship between galaxies and the underlying dark matter distribution in a much more direct way, using a hydrodynamic method (Eagle, Schaye et al., 2015), which has been shown to reproduce the galaxy population extremely well in the present-day Universe and which provides a good match to the observed luminosity functions of galaxies in the EoR (Vijayan et al., 2021). We measure the galaxy bias of various components over an unprecedented range in overdensity for a hydrodynamic simulation, and provide new estimates of the effect of survey variance on high redshift galaxy number counts.
Section 2 briefly describes Flares, the method that we use to define large-scale overdensity, and to map stars and galaxies onto the dark matter distribution. Section 3 presents results for the biasing of the smooth stellar distribution and of galaxies relative to that of the dark matter. Section 4 then explores the clustering of those galaxies in areas typical of those of deep surveys. Finally, Section 5 summarises our conclusions.
## 2 Method
### 2.1 Flares
The First Light And Reionisation Epoch Simulations (Flares; Lovell et al., 2021; Vijayan et al., 2021) are a series of 40 large zoom simulations selected at \(z=4.69\). Flares uses the same hydrodynamics code, Anarchy, as the Eagle simulation, described in detail in Schaye et al. (2015) and Schaller et al. (2015). It employs the AGNdT9 parameter configuration, which leads to a closer match with observational constraints on the hot gas properties in groups and clusters (Barnes et al., 2017) than does the standard configuration, although in Flares these changes should have little effect, since the number of such massive haloes is very low at \(z=5\).
Flares uses an identical resolution to the fiducial Eagle simulation, with gas particle mass \(m_{B}=1.8\times 10^{6}\,\mathrm{M}_{\odot}\), and a softening length of \(2.66\,\mathrm{ckpc}\). Resimulation regions are selected from the same \((3.2\,\mathrm{cGpc})^{3}\) dark matter-only parent simulation as that used in the C-Eagle simulations (Barnes et al., 2017). The highest redshift snapshot available for this simulation is at \(z=4.69\), which was used to select spherical volumes that sample a range of overdensities. The size of the resimulation regions (radius \(14\,h^{-1}\,\mathrm{cMpc}\)) was chosen such that density fluctuations averaged on that scale are linear: then the distortion in the shape of the Lagrangian volume during the simulation is relatively small, and the ordering of the density fluctuations is preserved. Full details of the 40 selected regions and their overdensities are provided in Flares-I.
As shown in Flares-I, the galactic stellar mass functions from the Flares simulations agree with those from Eagle at \(z=5-10\) in the mass range within which they overlap, but with those from Flares extending to higher masses that are not accessible within the limited Eagle volume. As Eagle has been shown to agree well with observations of galaxies in the low-redshift universe (Schaye et al., 2015) that gives us confidence that our galaxies will also provide a reasonable match to the real galaxy population in the EoR.
### 2.2 Determination of overdensities
It is useful to be able to relate the density of stars, galaxies, or other observable quantities, to the overdensity of matter smoothed on a scale for which fluctuations are still linear and hence deducible from the initial density field. In Flares, we do this at a redshift of \(4.69\), shortly after the end of reionisation.2
Footnote 2: With that particular redshift being chosen because it was the highest snapshot available for the underlying dark matter simulation.
The _parent simulation_ has a volume of \((3.2\,\mathrm{cGpc})^{3}\). We divide
that up into \(1200^{3}\)_grid cells_ each of side \(2.67\,\mathrm{cMpc}\). We use nearest grid point assignment to associate simulation particles with grid cells. We then determine the mean overdensity of those regions, smoothed using top hat filters of three radii: 10, 14 and \(20\,h^{-1}\mathrm{cMpc}\): we will call these \(\delta_{10}\), \(\delta_{14}\) and \(\delta_{20}\), where \(1+\delta\) is the ratio of the density of matter within a smoothing sphere to the mean density of matter within the simulation.
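A minimal sketch of this gridding and smoothing step follows (ours, not Flares production code): nearest-grid-point counts on a toy \(64^{3}\) grid, followed by a periodic spherical top-hat average implemented with FFTs. Box size, particle number and units are illustrative stand-ins for the \((3.2\,\mathrm{cGpc})^{3}\), \(1200^{3}\)-cell parent volume.

```python
# Toy version of the overdensity measurement: NGP assignment + top-hat smoothing.
import numpy as np

BOX, NGRID = 100.0, 64                 # toy box size and grid (illustrative units)
cell = BOX / NGRID

pos = np.random.default_rng(0).uniform(0, BOX, size=(100_000, 3))
idx = (pos // cell).astype(int) % NGRID                   # nearest grid point
rho = np.zeros((NGRID,) * 3)
np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)    # counts per grid cell
delta_grid = rho / rho.mean() - 1.0                       # unsmoothed overdensity

# Spherical top-hat kernel of radius R (in cells), normalised to unit sum
R = 14.0 / cell
r = np.arange(NGRID) - NGRID // 2
r2 = r[:, None, None]**2 + r[None, :, None]**2 + r[None, None, :]**2
kernel = (r2 <= R**2).astype(float)
kernel /= kernel.sum()

# Periodic convolution via FFT (kernel re-centred on the origin first)
delta_R = np.fft.ifftn(np.fft.fftn(delta_grid) *
                       np.fft.fftn(np.fft.ifftshift(kernel))).real
```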
Figure 1 shows the probability density functions (PDFs) for these three different definitions of overdensity. Blue shows the PDF for the parent simulation; orange shows the overdensities within the regions that we resimulate. One can see that we have deliberately chosen to over-sample regions of high density in order to get a significant population of massive galaxies.
To determine the mean (i.e. universal average) of a given quantity, we need to know how to weight the contributions from individual grid cells. We do this using the procedure described in Section 2.4 of FLARES-I. Essentially, we count the number of grid cells in bins of overdensity, both in the resimulated regions and within the parent simulation as a whole. The ratio of the latter to the former then gives us the relative weighting that needs to be applied to each resimulated grid cell.
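In code, this weighting amounts to a histogram ratio; the sketch below (ours) computes per-cell weights from the two sets of smoothed overdensities and shows how a universal mean would then be formed.

```python
# Sketch of the cell weighting: parent-to-resimulated count ratio per delta bin.
import numpy as np

def cell_weights(delta_parent, delta_resim, nbins=50):
    """Weights for resimulated cells so that weighted averages over them
    estimate means over the whole parent box."""
    lo = min(delta_parent.min(), delta_resim.min())
    hi = max(delta_parent.max(), delta_resim.max())
    edges = np.linspace(lo, hi, nbins + 1)
    n_parent, _ = np.histogram(delta_parent, bins=edges)
    n_resim, _ = np.histogram(delta_resim, bins=edges)
    ratio = np.where(n_resim > 0, n_parent / np.maximum(n_resim, 1), 0.0)
    which = np.clip(np.digitize(delta_resim, edges) - 1, 0, nbins - 1)
    return ratio[which]

# The universal mean of a per-cell quantity q over the resimulated cells:
#   q_mean = np.average(q, weights=cell_weights(delta_parent, delta_resim))
```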
Figure 2 shows the relationship between \(\delta_{10}\) and \(\delta_{20}\). While clearly there is a strong correlation between the two, there is also significant scatter. We have found that all three smoothing radii give very similar results for the quantities that we investigate in this paper and so we stick with the original choice of \(\delta_{14}\) used in FLARES-I below.
### 2.3 Mapping stars and galaxies to dark matter
Although we resimulate only a small fraction of the parent volume, we sample a wide range of environments that span the whole range of overdensities. We use this to populate the parent simulation with galaxies in order to create large mock surveys. To do this, we tabulate galaxy properties3 within overdensity bins, and then use this as a lookup table to populate grid cells that we have not resimulated.
Footnote 3: the galaxy stellar mass function (GSMF), or the star formation rate function (SFRF).
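A hedged sketch of the lookup-table idea follows; `expected_counts` is a placeholder for the tabulated Flares number densities, and the Poisson sampling is our illustrative choice for drawing integer counts per cell.

```python
# Populate un-resimulated parent cells from a per-overdensity-bin lookup table.
import numpy as np

rng = np.random.default_rng(42)
nbins_delta, nbins_mass = 8, 20
# Placeholder: expected galaxy counts per cell, per stellar-mass bin, for each
# delta_14 bin (in practice tabulated from the resimulated FLARES regions).
expected_counts = rng.gamma(2.0, 0.5, size=(nbins_delta, nbins_mass))

def populate_cell(delta_bin):
    """Poisson-sample galaxy counts per stellar-mass bin for one grid cell."""
    return rng.poisson(expected_counts[delta_bin])

print(populate_cell(delta_bin=6))   # a fairly overdense cell
```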
We map individual particles within the simulation (dark matter, gas, stars or black holes) to the grid cell that they occupy at \(z=4.69\). This mapping can then be recovered at higher redshifts using the particle IDs that are preserved during the simulation and when particles transform from gas into stars.4
Footnote 4: The exception is merging of black hole particles for which only the ID of the most massive progenitor is stored – hence we trace the main branch.
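The ID-based bookkeeping is simple in practice; the sketch below (ours, with made-up arrays standing in for snapshot data) recovers the \(z=4.69\) cell membership of particles found in an earlier snapshot.

```python
# Recover z=4.69 grid-cell membership at another snapshot via particle IDs.
import numpy as np

ids_z469 = np.array([11, 42, 7, 99, 23])      # particle IDs at z = 4.69
cells_z469 = np.array([0, 3, 1, 3, 2])        # grid cell of each particle

lookup = dict(zip(ids_z469.tolist(), cells_z469.tolist()))

ids_highz = np.array([99, 7, 11])             # same particles, earlier snapshot
cells_highz = np.array([lookup[i] for i in ids_highz])
print(cells_highz)                            # -> [3 1 0]
```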
We have tried using each of \(\delta_{10}\), \(\delta_{14}\), \(\delta_{20}\), the grid cell overdensity without smoothing (\(\delta_{\mathrm{grid}}\)), and the velocity divergence within a grid cell, both alone and in combination. Although there is some reduction in residual scatter when combining two or more diagnostics, the gain is marginal, and we choose in this paper to stick to the single input of \(\delta_{14}\) that was used in Flares-I.
## 3 Bias
Bias is a measure of how much a quantity is clustered relative to the overall mass density. Section 3.1 looks at bias in the distribution of stars and other smoothed quantities within grid cells, and Section 3.2 in that of the galactic population. Unless otherwise stated, all results shown here are for a redshift \(z=4.69\).
### 3.1 Bias in the matter distribution
We first look at the bias in the distribution of different types of matter within grid cells, compared to that of the dark matter. We plot our results as a function of \(\delta_{14}\) in order to investigate how the bias changes with overdensity.
#### 3.1.1 Dark matter
Figure 3 shows the measured density of dark matter within each grid cell in resimulated regions, compared to the smoothed matter density, \(1+\delta_{14}\) at that location. The solid black line shows the 1-to-1 relation, i.e. \(y=1+\delta_{14}\); the blue dashed and magenta dotted lines the mean and median, respectively, averaged in bins of \(\delta_{14}\). The mean passes through the point (1,1), as is to be expected, but has a slope greater than that of the 1-1 relation: this is because the density averaged within a sphere of radius \(14\,h^{-1}\,\mathrm{cMpc}\) will tend to be closer to 1 than when averaged within grid cells.
Note that the horizontal variation in the density of points simply reflects the overdensity of the regions that we have chosen to resimulate: the excess near \(\delta_{14}=0\) comes from the fact that this is the peak of the overall density field; that at high values of \(\delta_{14}\) because we have chosen to simulate a large number of regions of high overdensity. The vertical variation in the density of points does, however, show the true variation of \(\rho_{\mathrm{DM}}/\bar{\rho}_{\mathrm{DM}}\) at a given overdensity.
The scatter in \(\rho_{\mathrm{DM}}/\bar{\rho}_{\mathrm{DM}}\), that is, the dark matter density within grid cells measured in units of the universal mean, is very large and roughly symmetrical in the log, i.e. skewed to high values in real space. This skewness is caused by the non-linear growth of density fluctuations within grid cells whose overdensity approaches or exceeds unity. This leads to a huge bias in star formation, as we will see in the next section.
#### 3.1.2 Stellar to dark matter mass ratio
Figure 4 shows, at \(z=4.69\), the ratio of the stellar mass to the dark matter mass in individual grid cells, as a function of the smoothed matter density \(1+\delta_{14}\). The points coloured in red correspond to cells that have zero stars, but which have been given a nominal value so that they appear on the plot. The blue dashed and dotted lines correspond to the mean and median value, respectively, within overdensity bins. In magenta, we show the equivalent ratios for the standard Eagle \(100\,\mathrm{cMpc}\) box, which correspond quite closely to the relation seen in Flares. However, Eagle does not extend to the higher or lower values of overdensity sampled by Flares.
One thing that is immediately apparent is that the mean stellar to dark matter mass ratio varies by a factor of 100 between the highest and lowest values of \(\delta_{14}\): star formation is thus highly biased towards regions of high overdensity. Moreover, even at a fixed value of \(\delta_{14}\), the scatter is enormous and the distribution is highly skewed such that the mean is 10 times the median.
The variation of the mean stellar to dark matter density ratio as a function of redshift is shown in Figure 5. Note that the value of \(\delta_{14}\) used here is that measured at \(z=4.69\) (using the ability to track particles over time), so that the same grid cells contribute to the \(x\)-axis bins at all redshifts. The relative bias as a function of \(\delta_{14}\) steepens slightly over time, with the overall normalisation rising steadily with decreasing redshift.
We show the dispersion of values about the mean in Figure 6. As can be seen, there is a huge variation in stellar density, even within grid cells with the same value of the smoothed matter density \(\delta_{14}\). For the lowest values of \(\delta_{14}\lesssim-0.25\) more than half the grid cells
Figure 1: The PDF of overdensities smoothed within top-hat windows of different radii: 10, 14 and 20 \(h^{-1}\) cMpc for the left, centre and right panels, respectively. Blue is the entire simulation box; orange is the regions that we resimulate.
Figure 4: The ratio of stellar to dark matter density within grid cells plotted as a function of the mean matter density at that location, smoothed with a top-hat window of radius 14 \(h^{-1}\) cMpc. Only grid cells within the resimulated regions are plotted and used to calculate the mean and median within bins of \(\delta_{14}\).
Figure 5: The mean stellar to dark matter density ratio within grid cells, as a function of redshift.
Figure 3: The dark matter density within grid cells plotted as a function of the mean matter density at that location, smoothed with a top-hat window of radius 14 \(h^{-1}\) cMpc. Only grid cells within the resimulated regions are plotted and used to calculate the mean and median within bins of \(\delta_{14}\).
Figure 2: For a subsample of grid cell locations, the relationship between overdensity smoothed with top-hat filters of radii 10 \(h^{-1}\) cMpc and 20 \(h^{-1}\) cMpc.
have no stars whatsoever within them. By contrast, the highest stellar density, within a grid cell with \(\delta_{14}=0.76\), is \(2.3\times 10^{9}\,\mathrm{M}_{\odot}\,\mathrm{Mpc}^{-3}\), almost 4 times the universal baryon density: within that grid cell approximately 10 per cent of the baryons have been turned into stars.
#### 3.1.3 Other properties
Figure 7 contrasts the density variation of different particle types within grid cells at \(z=4.69\). The dotted lines show the expected relations if the particles traced the smooth matter distribution. The bias for both dark matter and non star-forming gas is minimal. However, that of star-forming gas, stars themselves, and the mass of metals produced is significant, varying by more than an order of magnitude above and below the mean in the highest and lowest density regions, respectively. A similar effect is seen in the distribution of black hole mass.
Finally, Figure 8 shows the star formation and black hole accretion rate densities, which roughly track those of the stellar and black hole mass density, respectively.
### Bias in galaxy properties
We now look at the bias in the distribution of integrated galaxy properties, as a function of the matter overdensity.
Figure 9 shows the galactic stellar mass function (GSMF) as a function of \(1+\delta_{14}\). This can be directly compared to Fig. 9 in Flares-I, which showed a similar plot with each galaxy being associated with the whole range of overdensities within its resimulation volume, rather than that specific to its individual grid cell. The two plots are very similar except that the new one better captures the true overdensity local to each galaxy, and so has a slightly larger difference between overdensity bins.
The mean GSMF follows a similar form to that for the grid cells of mean matter density (\(\delta_{14}\approx 0\)) in the mass range where they overlap, but has a slightly higher normalisation due to the strong bias towards extra star formation in overdense regions. An important thing to note, however, is that, at the high mass end, only the highest overdensity regions contribute to the mass function, increasingly so at higher redshift. These regions are very rare, which gives rise to the exponential decline in the GSMF at the high mass end. High mass galaxies are strongly clustered in these high density regions, leading to a large sample variance in observational surveys: this is discussed further in Section 4.3.
The star formation rate function (SFRF) for galaxies is shown in Figure 10, again split by matter overdensity. It shows a similar behaviour to the GSMF, with the largest star formation rates being dominated by galaxies in the highest overdensity bins, especially at high redshift. This reflects the strong positive correlation between stellar mass and star formation at high redshift.
## 4 Survey variance
In this section, we investigate the clustering of galaxies on the sky and discuss the implications for survey design. This is very much a first look and we make a number of simplifying assumptions. We show that compact surveys such as those that are expected for deep
Figure 8: The mean star formation and black hole accretion rate densities, as a function of the smoothed matter density. The dotted lines show the expected relations if the rates traced the smooth matter distribution.
Figure 6: The distribution of stellar mass densities within grid cells split by overdensity. The large bin on the left captures grid cells that have no stars within them. The legend shows the range of \(1+\delta_{14}\) within each colour bin: the peak stellar density shifts to the right as \(\delta_{14}\) increases.
Figure 7: The mean density of various particle types within grid cells, as a function of the smoothed matter density. The dotted lines show the expected relations if the particles traced the smooth matter distribution.
fields are subject to large variance and we will return to a more detailed study of this in future work.
### Populating grid cells with galaxies
We use the information that we have gathered from our high-resolution hydrodynamic simulations to populate the underlying dark-matter-only (DMO) simulation with galaxies. The mass of the DMO particles is \(8.01\times 10^{10}\,\mathrm{M}_{\odot}\), meaning that Milky Way-sized halos would be barely resolvable; hence, for the purposes of this paper, we choose instead to use as input the average properties of dark matter within grid cells. We have investigated a number of different ingredients: as well as the average densities on different smoothing scales, \(\delta_{10}\), \(\delta_{14}\) and \(\delta_{20}\), described above, we have also tried the unsmoothed density within an individual grid cell, \(\rho_{\mathrm{grid}}\), and the divergence of the local velocity field within each cell. We find that all are highly correlated and have similar predictive power for the determination of the bias, \(\rho_{\mathrm{star}}/\rho_{\mathrm{DM}}\), within each grid cell. We have also checked that combining two or more of these inputs
Figure 10: The galactic star formation rate split by overdensity \(1+\delta_{14}\) at three different redshifts. The legend shows bins of \(1+\delta_{14}\). The universal mean is shown by the solid, black line.
Figure 9: The galactic stellar mass function split by overdensity \(1+\delta_{14}\) at three different redshifts. The legend shows bins of \(1+\delta_{14}\). The universal mean is shown by the solid, black line.
provides only a marginal improvement in predictive accuracy. For that reason, we stick here with the quantity that we have used both in the design of Flares and throughout most of this paper, \(\delta_{14}\).
We tabulate the galactic stellar mass function (GSMF) as a function of overdensity and redshift. Then, within each grid cell, we simply assign galaxies according to the corresponding mean GSMF. We do not seek to reproduce or investigate the effects of including the variance in the GSMF within each overdensity and redshift bin. Note that we determine the expected number of galaxies in each grid cell, which will be a fractional number; we do not sample from that distribution to generate an actual realisation of a possible survey.
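The following minimal sketch illustrates this assignment step. The bin edges, number densities, and function names are illustrative placeholders rather than values taken from the simulations; only the logic (look up the mean GSMF for a cell's overdensity bin and convert the implied number density into a fractional expected count per cell) follows the procedure described above.

```python
import numpy as np

# Hypothetical tabulated mean GSMF at one redshift: expected comoving number
# density of galaxies above a mass threshold, per overdensity bin.
# Units: galaxies per cMpc^3; these values are placeholders, not Flares data.
delta_bins = np.array([-0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8])  # edges in delta_14
n_density = np.array([1e-6, 1e-5, 5e-5, 2e-4, 8e-4, 3e-3])    # one value per bin

CELL_VOLUME = 2.67**3  # cMpc^3, grid-cell edge length quoted in the text

def expected_counts(delta_14_grid):
    """Expected (fractional) galaxy count in each grid cell.

    Assigns every cell the mean GSMF of its overdensity bin, exactly as
    described in the text: no sampling, so counts are fractional means.
    """
    idx = np.clip(np.digitize(delta_14_grid, delta_bins) - 1,
                  0, len(n_density) - 1)
    return n_density[idx] * CELL_VOLUME

# Example: a toy 4x4x4 block of overdensities.
rng = np.random.default_rng(0)
delta = rng.normal(0.0, 0.2, size=(4, 4, 4))
print(expected_counts(delta).sum())
```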
### Generating maps
To generate maps, we project grid cells along one axis of the simulation through a depth corresponding to a unit redshift interval. Strictly speaking, we should project the simulation box onto a cone centred on an observer at \(z=0\). However, the angular diameter of a grid cell varies by only a small amount within the depth of each map: for example, between 1.20 and 1.12 arcmin across the redshift interval \(4.5<z<5.5\), and between 0.96 and 0.94 arcmin across the redshift interval \(9.5<z<10.5\). Hence to a good approximation, we can simply project parallel to the grid edges within any unit redshift interval: that avoids having to smooth over grid cells. In this paper, we are interested in only a rough estimate of the clustering of sources; hence this is sufficient for our purposes.
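The quoted angular sizes are easy to cross-check. The short script below assumes astropy's built-in Planck18 cosmology, which may differ slightly from the cosmology actually used in the simulations, and computes the angular size of a 2.67 cMpc grid cell at the front and back of each redshift slice.

```python
from astropy.cosmology import Planck18
import astropy.units as u

cell = 2.67 * u.Mpc  # comoving edge length of a grid cell

# Angular size of one grid cell at the boundaries of each unit redshift slice.
for z in (4.5, 5.5, 9.5, 10.5):
    theta = (cell * Planck18.arcsec_per_kpc_comoving(z)).to(u.arcmin)
    print(f"z = {z:4.1f}: {theta:.2f}")
```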
### Results
In this section, we will present results for the number of galaxies that exceed a certain mass limit, in different survey areas and redshift slices. Very similar results are found for galaxies that exceed particular star formation rates, and these are shown in Appendix A.
Figure 11 shows the expected number of galaxies per projected grid cell in two redshift slices: \(M_{*}>10^{10}\,\mathrm{M}_{\odot}\), \(4.5<z\leq 5.5\) in the top panel; and \(M_{*}>10^{9}\,\mathrm{M}_{\odot}\), \(9.5<z\leq 10.5\) in the lower panel5. These have been chosen, somewhat arbitrarily, to represent relatively abundant and sparse sources, respectively. In the upper panel, it can be seen quite clearly that there is significant clustering of the galaxies at \(z\sim 5\); this is also true, but less obvious, in the lower panel at \(z\sim 10\).
Footnote 5: These have a depth of 203 and 82 grid cells, respectively.
To show what effect this might have on the variance of galaxy numbers detected in surveys, we plot in Figure 12 the galaxy counts in an area of approximately \(\sim 300\,\mathrm{arcmin}^{2}\), corresponding to 256 projected grid cells, for 3 different survey designs: the upper row is a square survey region of \(16\,\mathrm{x}\,16\) grid cells, approximately \((17\,\mathrm{arcmin})^{2}\); the middle row is a long strip of \(256\,\mathrm{x}\,1\) grid cells, approximately \(5\,\mathrm{deg}\,\mathrm{x}\,1\) arcmin; and the lower row is 256 separate, widely-spaced and hence uncorrelated grid cells. There are 5625 separate samples in the upper and lower rows; slightly fewer in the middle row because of the shape of the region and a desire not to sample the same grid cell twice.
From the bottom row, we can see that we have sampled a sufficient number of independent regions that the expected number of galaxies in each mock survey lies close to the mean. Note that we have not attempted to model the scatter about this expected value, but that is likely to follow Poisson counting statistics. For the square survey regions, however, there is a large variation in the expected number of detected galaxies, by a factor of 8 (top-left) for the most abundant sources at \(z\sim 5\), to 60 (top-right) for the rarer sources at \(z\sim 10\); the 2-sigma ranges for these are factors of 3.3 and 5.2, respectively. The long, thin surveys shown in the middle row, unsurprisingly, lie between these two extremes.
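To illustrate how such histograms can be assembled, the sketch below draws square apertures from a 2-D map of expected counts per projected grid cell and reports the median and 2-sigma percentile range of the aperture sums. The toy lognormal map stands in for the real count maps, and the periodic wrapping mirrors the periodic simulation box; both are assumptions for illustration only.

```python
import numpy as np

def aperture_count_percentiles(count_map, size=16, n_samples=5000, rng=None):
    """Sample square `size` x `size` apertures from a 2-D map of expected
    galaxy counts per projected grid cell, and return the 2.3, 50 and 97.7
    percentiles of the summed counts (median and 2-sigma range).

    Apertures wrap around the edges, assuming a periodic map.
    """
    rng = np.random.default_rng(rng)
    n = count_map.shape[0]
    totals = np.empty(n_samples)
    for k in range(n_samples):
        i, j = rng.integers(0, n, size=2)
        rows = np.arange(i, i + size) % n
        cols = np.arange(j, j + size) % n
        totals[k] = count_map[np.ix_(rows, cols)].sum()
    return np.percentile(totals, [2.3, 50.0, 97.7])

# Toy example: a clustered (lognormal) map gives a wide spread of counts.
rng = np.random.default_rng(1)
toy = rng.lognormal(mean=-3.0, sigma=1.5, size=(512, 512))
print(aperture_count_percentiles(toy, size=16))
```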
Although we have presented here results for galactic stellar mass, those for star formation rate, shown in Appendix A, are similar. Moreover, we would expect the same to hold for flux-limited surveys also, as we expect a strong correlation between mass/SFR and observable fluxes. That is not to say that there won't be some environmental dependence in that correlation. We will explore this in future work, where we generate mock surveys in different bands.
Figure 13 shows the mean and 2-sigma spread for the number counts as a function of the survey area, for both high and low number counts. The variances are reduced as the survey area is increased. We show only our two mass selections here; results for the star formation selection are very similar. Measured values are taken for survey areas corresponding to 256, 1024 and 4096 pixels and results interpolated between these points; in Appendix B we show histograms of number counts for the largest survey area of 4096 pixels, or approximately \(1.4\,\mathrm{deg}^{2}\).
Figure 11: A map of the expected number of galaxies in some projected redshift slice per projected grid cell. _Top_: galaxies with mass \(M_{*}>10^{10}\,\mathrm{M}_{\odot}\) between \(4.5<z\leq 5.5\). _Bottom_: galaxies with mass \(M_{*}>10^{9}\,\mathrm{M}_{\odot}\) between \(9.5<z\leq 10.5\)
### Application to existing and proposed surveys
The implications for the interpretation of galaxy surveys at these redshifts are clear: in any survey of limited spatial extent, the variance in the number of detected galaxies is likely to be large, and one should take that into account when making any measurement of the number density of sources.
Figure 8 of Flares-I showed galactic stellar mass functions at \(z=5\) from a range of observations (Gonzalez et al., 2011; Duncan et al., 2014; Song et al., 2016; Stefanon et al., 2017), varying in survey area from 50 arcmin\({}^{2}\) to 1 deg\({}^{2}\). These show a variation in normalisation of a factor of 3 at low masses to 10 at high masses. Now, while some of this difference will be due to the different observational bands and analysis, a significant fraction may be due to sampling variation across different survey areas.
Figure 12: Histograms of the number of galaxies within a 256 grid cell (\(\sim 300\) arcmin\({}^{2}\)) survey region, above a particular mass and in a given redshift slice, according to the geometry of the survey: left column \(-M_{*}>10^{10}\) M\({}_{\odot}\), \(4.5<z\leq 5.5\); right column \(-M_{*}>10^{9}\) M\({}_{\odot}\), \(9.5<z\leq 10.5\); upper row \(-\) 16 x 16; middle row \(-\) 256 x 1; lower row \(-\) 256 widely spaced grid cells. The dot-dashed, dashed and dotted lines show the median, one-sigma and two-sigma ranges, respectively; the box-plot shows the full extent of the data, plus the one and two-sigma ranges. In the top, right-hand panel a single point with \(N=12.3\) has been omitted, for clarity.
Cycle 1 of _JWST_ has a number of large area survey programs. One of the many aims of these surveys is to investigate galaxies in the EoR. JADES (Rieke, 2020) imaging will cover 2 fields, each roughly square and 100 arcmin\({}^{2}\) in area. The predicted galaxy numbers per unit redshift interval (Williams et al., 2018) vary from many thousand at \(z=5\) to a few hundred at \(z=10\): the survey variance will therefore roughly correspond to that in Figure 12, i.e. a 2-sigma range of about a factor of just over 3. Robertson et al. (2022) report first results for galaxies at \(z>10\) in JADES (spectroscopically confirmed by Curtis-Lake et al., 2022), finding 4 galaxies in a survey area of 65 arcmin\({}^{2}\).
CEERS (Bagley et al., 2022) is undertaking imaging and spectroscopy of the EGS HST legacy field, in an area of approximately 100 (20 x 5) arcmin\({}^{2}\), and PRIMER (Dunlop et al., 2021) is providing imaging of the CANDELS/COSMOS and CANDELS/UDS fields, each of order 100 arcmin\({}^{2}\). They too will suffer survey variance similar to that shown in Figure 12.
On a larger scale, the COSMOS-Web imaging survey (Casey et al., 2022) will have an area of 0.54 deg\({}^{2}\) with an estimated galaxy count of several thousand per unit redshift interval at \(z=6\) and 30-70 at \(z=10\). The survey area is approximately square in shape and so the 2-sigma range for galaxy counts will be around a factor of 2.5, as seen in Figure 13.
### Future work: more realistic mocks
There are a number of enhancements that we intend to make to this study in order to make more accurate predictions of survey variance:
* The use of the mean density field in dark matter grid cells to predict star and galaxy formation rates is fairly crude and leaves a lot of residual scatter, as seen in Figure 4, that we have struggled to reduce. Future work will use a higher resolution background dark matter simulation that will resolve halos and allow a better mapping from dark matter to galaxies.
* By resolving halos, we will be able to project onto light-cones centred on an observer without the need for smoothing, rather than projecting parallel to the simulation grid.
* Instead of predictions based solely on the mean number of galaxies expected in each sky pixel, we will make several realisations, drawn from a Poisson distribution, to properly sample the variance in number counts.
* We will use galaxies from the high-resolution, hydrodynamic simulations to make mock images of the sky in various bands, utilising the known star formation and metal enrichment histories, and applying realistic dust absorption.
* We will then make mock observations of those images, reproducing the selection criteria of the different surveys.
This is a substantial undertaking that will take some time to come to fruition, which is why we have given in this paper crude estimates of the magnitude of the survey variance that we expect to see; these should still be of significant value as a qualitative guide to the effect of survey variance for a given survey geometry.
## 5 Conclusions
In this paper we investigate the variation of star formation and galaxy properties with environment in the FLARES simulations of galaxy formation in the early Universe. Those simulations are designed to sample the full range of overdensities, averaged on a scale of \(14\,h^{-1}\,\mathrm{cMpc}\), within a 3.2 Gpc box. For the most part we look at properties averaged within cubical grid cells of edge 2.67 cMpc as a function of the overdensity averaged within a sphere of radius \(14\,h^{-1}\,\mathrm{cMpc}\), \(\delta_{14}\). We reach the following conclusions:
* The ratio of stellar density, \(\rho_{\mathrm{star}}\), to dark matter density, \(\rho_{\mathrm{dm}}\), within each grid cell is highly biased, varying by a factor of 100 between the lowest and highest values of \(\delta_{14}\) (Figure 4).
* Moreover, even at a fixed value of \(\delta_{14}\), the scatter in \(\rho_{\mathrm{star}}/\rho_{\mathrm{dm}}\) is enormous and the distribution is highly skewed such that the mean is 10 times the median.
* The shape of this bias is almost constant across redshifts between \(z=5\) and \(z=10\), with the overall normalisation rising steadily towards lower redshift (Figure 5).
* For the lowest overdensities, \(\delta_{14}\lesssim-0.25\), more than half the grid cells have no stars whatsoever within them; whereas in the highest overdensity cell roughly 10 per cent of the baryons have been turned into stars (Figure 6).
* The bias seen in the stellar distribution is replicated in star-forming gas, metals and black holes, and in the star formation and black hole accretion rates; that in non-star-forming gas is, however, much lower and similar to that of the dark matter (Figures 7 and 8).
* The mean galactic stellar mass function (GSMF) follows a similar form to that for the grid cells of mean matter density (\(\delta_{14}\approx 0\)) in the mass range where they overlap, but has a slightly higher normalisation due to the strong bias towards extra star formation in overdense regions (Figure 9). Only the highest overdensity regions contribute to the high-mass end of the GSMF
Figure 13: The mean and 2-sigma (2.3–97.7 percentile) range for expected galaxy counts as a function of survey area and shape. Measured values are taken for survey areas corresponding to 256, 1024 and 4096 pixels and results interpolated between these points. We report results for stellar mass: the upper/lower panels are for a large/low number count; similar results are found for star formation rates.
and these are very rare, which gives rise to the exponential decline in the GSMF.
* Because the highest mass galaxies are only found in the most overdense regions, we note that resimulation of such regions within large volumes, such as undertaken in Flares, is the only way to capture them in simulations.
* The star formation rate function (SFRF) shows a similar behaviour to the GSMF, with the largest star formation rates being dominated by galaxies in the highest overdensity bins (Figure 10).
* Maps of unit redshift slices show significant clustering of galaxies at all redshifts and at both high and low number densities (Figure 11).
* Figure 12 illustrates the effect of clustering by looking at the variation in number counts in a region consisting of 256 grid cells (approximately \(\sim 300\) arcmin\({}^{2}\)) in different configurations. If the cells are widely-spaced then the variance is small, as would be expected. However, for a square survey area of 16x16 grid cells, then the 2-sigma variation in number counts is more like a factor of 4 (slightly less for high number counts and higher for low number counts).
* Very similar results hold for maps of galaxies exceeding a particular star formation rate (Figures A1 and A2).
* For larger survey areas, the variance is reduced, dropping to a factor of about 2 for an area of 1.4 deg\({}^{2}\) (Figures B1 and B2).
Although we have presented results for physical rather than observable properties of galaxies, we would expect similar results to hold for flux-limited surveys also, as we expect a strong correlation between stellar mass / star formation rate and observable fluxes. We will explore this in future work, where we generate mock surveys in different bands.
The implications for the interpretation of galaxy surveys at these redshifts are clear: in any flux-limited survey of limited spatial extent, the variance in the number of detected galaxies is likely to be large. It should not be surprising to find number densities from different survey areas that differ by a factor of 2-4. Multiple widely-spaced regions will need to be combined to beat down sample variance. Number densities obtained from (a large number of) background regions in targeted observations of unrelated, compact, low-redshift sources would be one way to do that.
## Acknowledgements
We thank the Eagle team for their efforts in developing the Eagle simulation code. We also wish to acknowledge the following open source software packages used in the analysis: Scipy (Virtanen et al., 2020), Astropy (Robitaille et al., 2013), and Matplotlib (Hunter, 2007).
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. The Eagle simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyeres-le-Chatel.
CCL acknowledges support from a Dennis Sciama fellowship funded by the University of Portsmouth for the Institute of Cosmology and Gravitation. DI acknowledges support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). APV acknowledges support from the Carlsberg Foundation (grant no CF20-0534). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140.
We list here the roles and contributions of the authors according to the Contributor Roles Taxonomy (CRediT)6. **Peter Thomas**: Conceptualization, Data curation, Methodology, Investigation, Formal Analysis, Visualization, Writing - original draft. **Christopher C. Lovell, Aswin P. Vijayan**: Data curation, Writing - review & editing. **Maxwell Maltz**: Methodology, Writing - review & editing. **Stephen M. Wilkins**: Conceptualization, Writing - review & editing. **Dimitrios Irodotou, Louise Seeyave, Will Roper**: Writing - review & editing.
Footnote 6: [https://credit.niso.org/](https://credit.niso.org/)
## Data Availability
A portion of the data used to produce this work can be found online: flaresimulations.github.io/#data. Much of the analysis used the raw data produced by the simulation which can be made available upon request.
|
2304.03185 | Pairwise Ranking with Gaussian Kernels | Regularized pairwise ranking with Gaussian kernels is one of the cutting-edge
learning algorithms. Despite a wide range of applications, a rigorous
theoretical demonstration is still lacking to support the performance of such
ranking estimators. This work aims to fill this gap by developing novel oracle
inequalities for regularized pairwise ranking. With the help of these oracle
inequalities, we derive fast learning rates of Gaussian ranking estimators
under a general box-counting dimension assumption on the input domain combined
with the noise conditions or the standard smoothness condition. Our theoretical
analysis improves the existing estimates and shows that a low intrinsic
dimension of input space can help the rates circumvent the curse of
dimensionality. | Guanhang Lei, Lei Shi | 2023-04-06T16:10:14Z | http://arxiv.org/abs/2304.03185v1 | # Pairwise Ranking with Gaussian Kernels+
###### Abstract
Regularized pairwise ranking with Gaussian kernels is one of the cutting-edge learning algorithms. Despite a wide range of applications, a rigorous theoretical demonstration is still lacking to support the performance of such ranking estimators. This work aims to fill this gap by developing novel oracle inequalities for regularized pairwise ranking. With the help of these oracle inequalities, we derive fast learning rates of Gaussian ranking estimators under a general box-counting dimension assumption on the input domain combined with the noise conditions or the standard smoothness condition. Our theoretical analysis improves the existing estimates and shows that a low intrinsic dimension of input space can help the rates circumvent the curse of dimensionality.
**Keywords and phrases:** Pairwise ranking, Oracle inequality, Pairwise Gaussian kernel, Box-counting dimension, Learning rates
## 1 Introduction
Ranking a set of objects based on their underlying utility, relevance, or quality is an important topic in statistical inference. It has been the subject of intense recent study in fields as diverse as economics, information retrieval, advertising, and collaborative filtering; see, e.g., [45, 20, 8]. Ranking is closely related to classification, which can also be formulated as a supervised learning problem. However, they are essentially different because ranking aims at the correct ordering of objects rather than the correct prediction of their categories. This paper considers the _pairwise ranking_ problem, which involves comparing two objects. Generally, we denote the data relevant to one single object as \((x,y)\), where \(x\) is a \(d\)-dimensional feature vector belonging to a compact set \(\mathcal{X}\subset\mathbb{R}^{d}\) and \(y\in\mathcal{Y}\) is the label of the object. Here \(\mathcal{Y}\subset\mathbb{R}\) consists of either discrete or continuous values. For any two distinct objects, described by \(z=(x,y)\) and \(z^{\prime}=(x^{\prime},y^{\prime})\), we assign a relation of order according to their labels: \(z\) ranking higher than \(z^{\prime}\) is equivalent to \(y>y^{\prime}\). In a pairwise ranking task, we predict the ordering between objects based on their features \(x\) and \(x^{\prime}\). To this end, a bivariate function \(f:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), called a
_ranking rule_, is introduced to make predictions. Namely, if \(f(x,x^{\prime})\geq 0\) then we predict that \(y>y^{\prime}\), implying that \(f\) ranks \(z\) higher than \(z^{\prime}\) (here we break the tie \(f(x,x^{\prime})=0\) in favor of \(y>y^{\prime}\)).
\[\begin{split}\mathcal{R}(f):=& P\left(\left\{z=(x, y)\in\mathcal{Z},z^{\prime}=(x^{\prime},y^{\prime})\in\mathcal{Z}\mid y>y^{ \prime},f(x,x^{\prime})<0\right\}\right)\\ &+P\left(\left\{z=(x,y)\in\mathcal{Z},z^{\prime}=(x^{\prime},y^{ \prime})\in\mathcal{Z}\mid y<y^{\prime},f(x,x^{\prime})\geq 0\right\}\right), \end{split} \tag{1.1}\]
or equivalently, the _excess ranking risk_
\[\mathcal{E}(f):=\mathcal{R}(f)-\inf\left\{\mathcal{R}(g)\mid g:\mathcal{X} \times\mathcal{X}\rightarrow\mathbb{R}\text{ is Borel measurable}\right\}. \tag{1.2}\]
In this paper, we construct ranking rules through a regularized pairwise learning scheme with a loss \(\phi:\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\rightarrow[0,\infty)\) and Gaussian kernels. We call \(\phi\) a margin-based loss if \(\phi\) is defined through a margin-based loss for classification, namely, \(\phi(y,y^{\prime},t)=\psi(\text{sgn}(y-y^{\prime})t)\) where \(\psi:\mathbb{R}\rightarrow[0,\infty)\) is taken to be a margin-based classification loss (cf. [51, 26, 5]). Here for \(t\in\mathbb{R}\), \(\text{sgn}(t)=1\) if \(t>0\), \(\text{sgn}(t)=-1\) if \(t<0\), and \(\text{sgn}(t)=0\) if \(t=0\). Typical choices of \(\psi\), which will be the prime consideration in this work, are the hinge loss \(\psi_{\text{hinge}}(t):=\max\left\{1-t,0\right\}\) and the square loss \(\psi_{\text{square}}(t):=(1-t)^{2}\). Let \(\mathcal{X}^{2}:=\mathcal{X}\times\mathcal{X}\). The _pairwise Gaussian kernel_ with variance \(\sigma>0\) is a function on \(\mathcal{X}^{2}\times\mathcal{X}^{2}\) given by
\[\begin{split} K^{\sigma}((x,x^{\prime}),(u,u^{\prime})):=& \;\frac{1}{2}\exp\left(-\frac{\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{ \sigma^{2}}\right)\\ &\;-\frac{1}{2}\exp\left(-\frac{\|(x^{\prime},x)-(u,u^{\prime})\|_ {2}^{2}}{\sigma^{2}}\right),\end{split} \tag{1.3}\]
where \(\|\cdot\|_{2}\) denotes the usual Euclidean norm. One can verify that \(K^{\sigma}\) is continuous and positive semi-definite (thus is symmetric) on \(\mathcal{X}^{2}\times\mathcal{X}^{2}\). According to [4], \(K^{\sigma}\) uniquely defines a reproducing kernel Hilbert space (RKHS) \(\mathcal{H}_{K^{\sigma}}\). Concretely, for any \((x,x^{\prime})\in\mathcal{X}^{2}\), let \(K^{\sigma}_{(x,x^{\prime})}:\mathcal{X}^{2}\rightarrow\mathbb{R}\) be the function defined by \(K^{\sigma}_{(x,x^{\prime})}(\cdot,\cdot):=K^{\sigma}((x,x^{\prime}),(\cdot,\cdot))\). The RKHS \(\mathcal{H}_{K^{\sigma}}\) induced by \(K^{\sigma}\) is the completion of the linear span of \(\left\{K^{\sigma}_{(x,x^{\prime})}\mid(x,x^{\prime})\in\mathcal{X}^{2}\right\}\) with inner product denoted by \(\langle\cdot,\cdot\rangle_{K^{\sigma}}\) satisfying the reproducing property, that is, \(\langle K^{\sigma}_{(x,x^{\prime})},f\rangle_{K^{\sigma}}=f(x,x^{\prime})\) for any \((x,x^{\prime})\in\mathcal{X}^{2}\) and all \(f\in\mathcal{H}_{K^{\sigma}}\). Moreover, \(\mathcal{H}_{K^{\sigma}}\) consists of skew-symmetric continuous functions on \(\mathcal{X}^{2}\), i.e., \(f(x,x^{\prime})=-f(x^{\prime},x),\forall f\in\mathcal{H}_{K^{\sigma}}\). The construction of \(K^{\sigma}\) follows the idea of [32], which introduces the so-called intransitive kernels on pairs of data to characterize the pairwise skew-symmetric relation. Let \(\|\cdot\|_{K^{\sigma}}\) denote the norm of \(\mathcal{H}_{K^{\sigma}}\) induced by the inner product. Given an i.i.d. sample \(\mathbf{z}:=\left\{(X_{i},Y_{i})\right\}_{i=1}^{n}\) of \(P\), the pairwise learning algorithm seeks ranking rules in \(\mathcal{H}_{K^{\sigma}}\) through solving an optimization problem of the form
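As a quick numerical sanity check on the definition (1.3), the minimal sketch below evaluates \(K^{\sigma}\) directly and verifies the identity \(K^{\sigma}((x,x^{\prime}),(u,u^{\prime}))=-K^{\sigma}((x^{\prime},x),(u,u^{\prime}))\); by linearity, this identity is exactly what makes every function in the span of the \(K^{\sigma}_{(u,u^{\prime})}\), and hence every \(f\in\mathcal{H}_{K^{\sigma}}\), skew-symmetric. The test points are arbitrary.

```python
import numpy as np

def pairwise_gaussian_kernel(xxp, uup, sigma):
    """The pairwise Gaussian kernel K^sigma of equation (1.3).

    `xxp` and `uup` have shape (2*d,), each the concatenation of a pair
    (x, x') of d-dimensional feature vectors.
    """
    d = xxp.size // 2
    swapped = np.concatenate([xxp[d:], xxp[:d]])  # the pair (x', x)
    k1 = np.exp(-np.sum((xxp - uup) ** 2) / sigma**2)
    k2 = np.exp(-np.sum((swapped - uup) ** 2) / sigma**2)
    return 0.5 * (k1 - k2)

# Skew-symmetry in the first pair argument: the two printed values agree.
rng = np.random.default_rng(0)
x, xp, u, up = rng.normal(size=(4, 3))
pair = np.concatenate([x, xp])
pair_swapped = np.concatenate([xp, x])
anchor = np.concatenate([u, up])
print(pairwise_gaussian_kernel(pair, anchor, 1.0),
      -pairwise_gaussian_kernel(pair_swapped, anchor, 1.0))
```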
\[f_{\mathbf{z}}:=f^{\phi}_{\mathbf{z},\sigma,\lambda}\in\operatorname*{arg\,min} _{f\in\mathcal{H}_{K^{\sigma}}}\left\{\mathcal{R}^{\phi}_{\mathbf{z}}(f)+ \lambda\|f\|_{K^{\sigma}}^{2}\right\}, \tag{1.4}\]
where \(\lambda>0\) is a tuning parameter and \(\mathcal{R}^{\phi}_{\mathbf{z}}(f)\) denotes the _empirical \(\phi\)-ranking risk_ given by
\[\begin{split}\mathcal{R}^{\phi}_{\mathbf{z}}(f)&:= \mathbb{E}_{\mathbf{z}}\phi(Y,Y^{\prime},f(X,X^{\prime}))\\ &=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\phi\left(Y_{i},Y_{ j},f(X_{i},X_{j})\right).\end{split} \tag{1.5}\]
When using a margin-based loss, \(\mathcal{R}^{\phi}_{\mathbf{z}}(f)\) takes the following form
\[\begin{split}\mathcal{R}^{\phi}_{\mathbf{z}}(f)&= \mathbb{E}_{\mathbf{z}}\psi(\operatorname{sgn}(Y-Y^{\prime})f(X,X^{\prime}))\\ &=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\psi\left( \operatorname{sgn}(Y_{i}-Y_{j})f(X_{i},X_{j})\right).\end{split} \tag{1.6}\]
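As a concrete illustration of the U-statistic structure in (1.6), the following minimal sketch computes the empirical hinge ranking risk from a precomputed matrix of pairwise evaluations \(f(X_{i},X_{j})\); the toy score \(f(x,x^{\prime})=x-x^{\prime}\) at the end is purely illustrative.

```python
import numpy as np

def empirical_hinge_ranking_risk(f_vals, y):
    """Empirical phi-ranking risk (1.6) for the hinge loss.

    `f_vals[i, j]` holds the pairwise evaluation f(X_i, X_j) and `y` the
    labels Y_i. The double sum runs over all ordered pairs with i != j.
    """
    n = y.size
    s = np.sign(y[:, None] - y[None, :])      # sgn(Y_i - Y_j)
    loss = np.maximum(1.0 - s * f_vals, 0.0)  # hinge loss on each pair
    np.fill_diagonal(loss, 0.0)               # drop the i == j terms
    return loss.sum() / (n * (n - 1))

# Toy usage with the skew-symmetric score f(x, x') = x - x'.
x = np.array([0.1, 0.5, 0.9])
y = np.array([0.0, 1.0, 2.0])
f_vals = x[:, None] - x[None, :]
print(empirical_hinge_ranking_risk(f_vals, y))
```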
Choosing \(\mathcal{H}_{K^{\sigma}}\) as the family of candidate ranking rules naturally imposes the skew-symmetry restriction on \(f_{\mathbf{z}}\), which will lead to ordering predictions capable of guaranteeing the reciprocal relations from data. Note that \(\mathcal{R}^{\phi}_{\mathbf{z}}(f)\) is the empirical analogy of \(\phi\)_-ranking risk_
\[\begin{split}\mathcal{R}^{\phi}(f):&=\mathbb{E} \phi\left(Y,Y^{\prime},f(X,X^{\prime})\right)\\ &=\int_{\mathcal{Z}}\int_{\mathcal{Z}}\phi\left(y,y^{\prime},f(x, x^{\prime})\right)dP(x,y)dP(x^{\prime},y^{\prime}).\end{split} \tag{1.7}\]
Thus, the _excess \(\phi\)-ranking risk_ is defined by
\[\mathcal{E}^{\phi}(f):=\mathcal{R}^{\phi}(f)-\inf\left\{\mathcal{R}^{\phi}(g) \mid g:\mathcal{X}^{2}\to\mathbb{R}\text{ is Borel measurable}\right\}. \tag{1.8}\]
To a certain extent, it also reflects the generalization performance of the ranking rule \(f\).
The algorithm (1.4), or more precisely, the kernel-based regularized empirical risk minimization, is one of the learning schemes that have recently drawn much theoretical attention. Its cutting-edge empirical performance in applications, relatively simple implementation, and, last but not least, its flexibility all contribute to the growing interest. The flexibility of algorithm (1.4) is made possible by its two main ingredients, namely the RKHS \(\mathcal{H}_{K^{\sigma}}\) and the loss function \(\phi\). To be more specific, the loss function can be used to model the learning target, while \(\mathcal{H}_{K^{\sigma}}\) with varying width \(\sigma\) adapts to the smoothness of the target function, which incorporates the information of the distribution and the input domain. It is worthwhile to note that kernels can be defined on arbitrary input domains, allowing them to handle various types of data in addition to standard \(\mathbb{R}^{d}\)-valued data. For example, using a bivariate kernel on \(\mathcal{X}\times\mathcal{X}\), we can define a general pairwise kernel \(K\) to model reciprocal or antisymmetric relations. Concretely, let \(G:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) be a positive semi-definite kernel and \((\mathcal{H}_{G},\langle\cdot,\cdot\rangle_{G})\) be its associated RKHS. A pairwise kernel \(K:\mathcal{X}^{2}\times\mathcal{X}^{2}\to\mathbb{R}\) can be defined by
\[K((x,x^{\prime}),(u,u^{\prime}))=\langle G_{x}-G_{x^{\prime}},G_{u}-G_{u^{ \prime}}\rangle_{G},\]
where \(G_{x}:=G(x,\cdot)\), \(x\in\mathcal{X}.\) One may refer to [32, 29] for more details about these specific pairwise kernels. Moreover, the optimization problem related to algorithm (1.4), which can reduce to a convex quadratic optimization problem, are well understood for widely used loss functions such as the hinge loss and the square loss, see, e.g., [15, 39].
One of the main topics in theoretical studies on nonparametric ranking methods is _learning rates_, i.e., the convergence rates of the excess ranking risk or the excess \(\phi\)-ranking risk, which describe the ranking performance on specific classes of distributions. Therefore, more and more effort has been put into deriving well-established learning rates for various ranking algorithms. One can refer to [12, 33, 10, 50, 25] and references therein for some recent progress. However, little is known about the nontrivial conditions under which fast learning rates can be obtained. In this paper, we introduce suitable noise conditions in pairwise ranking and employ local Rademacher analysis to establish tight learning rates for algorithm (1.4). Our methodology combines local Rademacher averages with the theory of U-statistics,
which incorporates peeling and symmetrization tricks as well as a contraction principle for U-processes. Our work is also motivated by the recent work of [18], in which the authors theoretically analyze the performance of Gaussian support vector machines under an assumption of a low intrinsic dimension of the input domain. As far as we know, our paper is the first to consider the learning behavior of regularized pairwise ranking with Gaussian kernels. This is in contrast with the existing literature, which either only considers ranking with unregularized empirical risk minimization, e.g., [12, 33], or only handles the case where the kernel is fixed during the training process, e.g., [10, 52, 25, 44]. We discuss the findings of these articles and compare them to our results in Section 3. Our main contributions are summarized as follows.
* We establish an oracle inequality to bound the excess \(\phi\)-ranking risk of the estimators produced by regularized empirical risk minimization (2.1) with a general pairwise kernel \(K\) (see Theorem 1). Our conclusion is based on a capacity assumption, which requires the RKHS \(\mathcal{H}_{K}\) to satisfy the empirical entropy condition (see Assumption 2). This well-established inequality deals with the stochastic part of the convergence analysis and provides a framework for bounding the excess \(\phi\)-ranking risk. Directly applying this inequality to the margin-based loss refines the previous analysis of regularized ranking with general pairwise kernels and yields the best learning rates so far (see Theorem 2 and Section 3 for detailed comparisons).
* Inspired by the analysis of support vector machines for classification, see, e.g., [36], we introduce two noise conditions in the pairwise ranking setting. The first condition describes the amount of noise in the labels, which is analogous to Tsybakov's noise condition in binary classification (see Assumption 4). This condition enables us to establish an elegant calibration inequality that bounds the excess ranking risk by the excess \(\phi\)-ranking risk, and a refined variance bound for margin-based losses which further improves the stochastic part of the analysis. The second condition is a geometric assumption for distributions that allows us to estimate the approximation properties of Gaussian kernels (see Assumption 5 and Subsection 5.2). Both of these two conditions play pivotal roles in deriving fast learning rates.
* We obtain the learning rates of the Gaussian regularized ranking algorithm (1.4) under a general box-counting dimension assumption on the input domain combined with the noise conditions or the standard regularity condition (see Theorem 3 and Theorem 4). Specifically, we first estimate the capacity of pairwise Gaussian kernel space \(\mathcal{H}_{K^{\sigma}}\) under the assumption that the marginal of the data generating distribution on the input space \(\mathcal{X}\subset\mathbb{R}^{d}\) is supported on a set of upper box-counting dimension \(\varrho\in(0,d]\). Then, we derive approximation error bounds for the hinge loss under noise conditions and the square loss under the Besov smoothness. Finally, we apply the well-established oracle inequality and calibration inequality to derive fast learning rates for the excess ranking risk. We show that a low intrinsic dimension of input space can help the rates circumvent the curse of dimensionality.
The rest of this paper is organized as follows. In Section 2, we first introduce basic notations and necessary assumptions. Then we present the main results of this paper, including a general oracle inequality for excess \(\phi\)-ranking risk and its application to Gaussian ranking estimators (1.4) with hinge loss and square loss. Section 3 provides an overview of related work and compares our results with other contributions on pairwise ranking. Section 4 presents
a detailed proof of the general oracle inequality established in Theorem 1. In Section 5, we apply the oracle inequality to derive learning rates for Gaussian ranking estimators with hinge loss and square loss, which gives proofs of Theorem 3 and Theorem 4. Section 4 and Section 5 also include some preliminary estimates that deserve attention in their own right.
## 2 Main Results
In this section, we state our main results. Let \((\mathcal{H}_{K},\langle\cdot,\cdot\rangle_{K})\) be an RKHS consisting of skew-symmetric functions on \(\mathcal{X}^{2}:=\mathcal{X}\times\mathcal{X}\) which is induced by a positive semi-definite kernel \(K:\mathcal{X}^{2}\times\mathcal{X}^{2}\to\mathbb{R}\). The first result is an oracle inequality established in Theorem 1, providing upper bounds for excess \(\phi\)-ranking risk of the regularized estimator
\[f_{\mathbf{z}}:=f_{\mathbf{z},\lambda}^{\phi}\in\operatorname*{arg\,min}_{f\in \mathcal{H}_{K}}\left\{\mathcal{R}_{\mathbf{z}}^{\phi}(f)+\lambda\|f\|_{K}^{2 }\right\}, \tag{2.1}\]
where \(\lambda>0\), \(\phi:\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\to[0,\infty)\) is a general loss function and \(\mathcal{R}_{\mathbf{z}}^{\phi}(f)\) is defined by (1.5). Hereinafter, we will use \(f_{\mathbf{z}}\) to denote either \(f_{\mathbf{z},\lambda}^{\phi}\) in (2.1) or \(f_{\mathbf{z},\sigma,\lambda}^{\phi}\) in (1.4), as specified by the context. Oracle inequalities have been extensively studied in the literature of nonparametric statistics (see [21] and references therein). An oracle inequality bounds the risk of a statistical estimator relative to that of a so-called oracle, which has access to an infinite amount of observations and minimizes the population risk over a specific function class. Recall that the \(\phi\)-ranking risk \(\mathcal{R}^{\phi}(f)\) is given by (1.7). The oracle in our setting is the function achieving \(\min_{f\in\mathcal{H}_{K}}\lambda\|f\|_{K}^{2}+\mathcal{R}^{\phi}(f)\). The oracle inequality bounds the excess \(\phi\)-ranking risk and can be used to establish both consistency and learning rates for the ranking algorithm. We give a literature review on the existing oracle inequalities for different ranking algorithms in Section 3.
To state the oracle inequality for \(f_{\mathbf{z}}\) defined by (2.1), we introduce some necessary notations and assumptions. Define the _Bayes \(\phi\)-ranking rule_ as
\[f_{\phi}^{*}:=\arg\min\big{\{}\mathcal{R}^{\phi}(f)\mid f:\mathcal{X}^{2}\to \mathbb{R}\text{ is Borel measurable}\big{\}}. \tag{2.2}\]
In the following, the terminology "measurable" means "Borel measurable" unless otherwise specified. Given a measurable function \(f:\mathcal{X}^{2}\to\mathbb{R}\), define \(\phi_{f}:\mathcal{Z}\times\mathcal{Z}\to\mathbb{R}\) and \(Q\phi_{f}:\mathcal{Z}\to\mathbb{R}\) by
\[\phi_{f}(z,z^{\prime}) :=\phi(y,y^{\prime},f(x,x^{\prime})), \tag{2.3}\] \[Q\phi_{f}(z) :=\mathbb{E}[\phi_{f}(z,Z^{\prime})]=\int_{\mathcal{Z}}\phi\left( y,y^{\prime},f(x,x^{\prime})\right)dP(x^{\prime},y^{\prime}). \tag{2.4}\]
We then introduce the truncation operation, allowing us to derive Bernstein conditions of sup-norm bound and variance bound for the truncated random variables, which is essential to establish tight concentration inequalities and derive fast learning rates. We say that the loss function \(\phi:\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\to[0,\infty)\) can be truncated at \(M>0\), if for all \((y,y^{\prime},t)\in\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\), there holds \(\phi(y,y^{\prime},\pi(t))\leq\phi(y,y^{\prime},t)\), where
\[\pi(t):=\begin{cases}-M&t<-M\\ t&t\in[-M,M]\\ M&t>M,\end{cases}\]
denotes the truncated value of \(t\) at \(\pm M\). For any \(f:\mathcal{X}^{2}\to\mathbb{R}\), \(\pi(f)\) denotes the truncation of \(f\) onto \([-M,M]\) which is given by \(\pi(f)(x,x^{\prime}):=\pi(f(x,x^{\prime})),\forall(x,x^{\prime})\in\mathcal{X}^ {2}\). The idea of truncation has already been used in the literature of binary classification, see e.g., [16, 36]. We first give the following assumption on the general loss \(\phi\).
**Assumption 1**.: _The loss \(\phi:\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\to[0,\infty)\) can be truncated at some \(M>0\) and for all \((y,y^{\prime})\in\mathcal{Y}\times\mathcal{Y}\), \(\phi(y,y^{\prime},\cdot):\mathbb{R}\to[0,\infty)\) is convex and locally \(L\)-Lipschitz on \([-M,M]\). The Bayes \(\phi\)-ranking rule \(f_{\phi}^{*}\) defined by (2.2) is measurable and skew-symmetric. For any skew-symmetric function \(f\), \(\phi_{f}\) defined by (2.3) satisfies_
\[\phi_{f}(z,z^{\prime})=\phi_{f}(z^{\prime},z),\quad\forall(z,z^{\prime})\in \mathcal{Z}\times\mathcal{Z}.\]
_Furthermore, there exist constants \(B>0,\tau\in[0,1]\) and \(V\geq B^{2-\tau}\) such that_
\[\phi(y,y^{\prime},t)\leq B,\quad\forall(y,y^{\prime},t)\in\mathcal{Y}\times \mathcal{Y}\times[-M,M] \tag{2.5}\]
_and for all measurable and skew-symmetric \(f:\mathcal{X}^{2}\to\mathbb{R}\), there holds_
\[\mathbb{E}(Q\phi_{\pi(f)}-Q\phi_{f_{\phi}^{*}})^{2}\leq V\big{(}\mathbb{E}(Q \phi_{\pi(f)}-Q\phi_{f_{\phi}^{*}})\big{)}^{\tau}, \tag{2.6}\]
_where \(Q\phi_{\pi(f)}\) and \(Q\phi_{f_{\phi}^{*}}\) are defined through (2.4)._
Next, we introduce a capacity assumption on \(\mathcal{H}_{K}\) which is described by the covering number or, equivalently, the entropy number of the unit norm ball in \(\mathcal{H}_{K}\).
**Definition 1**.: _Let \((\mathcal{T},\mathrm{d})\) be a pseudo-metric space and \(\varepsilon>0\). We call \(\mathcal{S}\subset\mathcal{T}\) an \(\varepsilon\)-net of \(\mathcal{T}\), if for all \(t\in\mathcal{T}\) there exists an \(s\in\mathcal{S}\) with \(\mathrm{d}(s,t)\leq\varepsilon\). The \(\varepsilon\)-covering number of \((\mathcal{T},\mathrm{d})\) is defined by_
\[\mathcal{N}(\mathcal{T},\mathrm{d},\varepsilon):=\inf\big{\{}|\mathcal{S}|: \mathcal{S}\subset\mathcal{T}\text{ and }\mathcal{S}\text{ is an }\varepsilon\text{-net of }\mathcal{T}\big{\}},\]
_where \(\inf\emptyset=\infty\). For \(i\in\mathbb{N}\), the \(i\)-th entropy number of \((\mathcal{T},\mathrm{d})\) is defined by_
\[e_{i}(\mathcal{T},\mathrm{d}):=\inf\bigg{\{}\varepsilon>0:\exists s_{1},\ldots,s_{2^{i-1}}\in\mathcal{T}\text{ such that }\mathcal{T}\subset\bigcup_{j=1}^{2^{i-1}}\mathcal{B}_{ \mathrm{d}}(s_{j},\varepsilon)\bigg{\}},\]
_where \(\mathcal{B}_{\mathrm{d}}(s,\varepsilon):=\big{\{}t\in\mathcal{T}:\mathrm{d}(t,s)\leq\varepsilon\big{\}}\) denotes the closed ball with center \(s\in\mathcal{T}\) and radius \(\varepsilon\). In particular, if \((\mathcal{T},\mathrm{d})\) is a subspace of a normed space \((\mathcal{F},\|\cdot\|_{\mathcal{F}})\) and the metric \(\mathrm{d}\) is given by \(\mathrm{d}(s,t):=\|s-t\|_{\mathcal{F}},\forall s,t\in\mathcal{T}\), we write \(\mathcal{N}(\mathcal{T},\|\cdot\|_{\mathcal{F}},\varepsilon):=\mathcal{N}( \mathcal{T},\mathrm{d},\varepsilon)\). Moreover, if \(S:\mathcal{F}\to\mathcal{F}^{\prime}\) is a bounded, linear operator between the normed spaces \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\), we write \(e_{i}(S):=e_{i}(S\mathcal{B}_{\mathcal{F}},\|\cdot\|_{\mathcal{F}^{\prime}})\) and \(\mathcal{N}(S,\varepsilon):=\mathcal{N}(S\mathcal{B}_{\mathcal{F}},\|\cdot\|_ {\mathcal{F}^{\prime}},\varepsilon)\), where \(\mathcal{B}_{\mathcal{F}}\) is the closed unit ball of \(\mathcal{F}\)._
Let \(P_{\mathcal{X}}\) denote the marginal distribution of \(P\) on \(\mathcal{X}\). Given i.i.d. sample \(\mathbf{x}=\{X_{i}\}_{i=1}^{n}\) of \(P_{\mathcal{X}}\), define two empirical measures: \(P_{\mathbf{x}}^{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}\) and \(P_{\mathbf{x}^{2}}^{n}:=\frac{1}{n(n-1)}\sum_{i\neq j}\delta_{(X_{i},X_{j})}\), where \(\delta_{(.)}\) is the counting measure. We give the following capacity assumption on the RKHS \(\mathcal{H}_{K}\).
**Assumption 2**.: _For all \(n\in\mathbb{N}\) and i.i.d. sample \(\mathbf{x}=\{X_{i}\}_{i=1}^{n}\) of \(P_{\mathcal{X}}\), there exist constants \(0<p_{1},p_{2}<1/2\), \(a_{1}>0\) and \(a_{2}>0\) such that_
\[e_{i}(\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P _{\mathcal{X}}))\leq a_{1}i^{-\frac{1}{2p_{1}}},\quad\forall i\in\mathbb{N},\]
\[e_{i}(\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2}(P_{\mathbf{x}^{2}}^{n}))\leq a _{2}i^{-\frac{1}{2p_{2}}},\quad\forall i\in\mathbb{N}.\]
_Here \(\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}})\) and \(\mathcal{L}_{2}(P_{\mathbf{x}^{2}}^{n})\) are the Hilbert spaces of square-integrable functions with respect to \(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}}\) and \(P_{\mathbf{x}^{2}}^{n}\) in which the norms are defined for \(f\in\mathcal{H}_{K}\) by_
\[\|f\|_{\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}})} :=\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{P_{\mathcal{X}}} \left[f(X_{i},X)^{2}\right]\bigg{)}^{\frac{1}{2}},\] \[\|f\|_{\mathcal{L}_{2}(P_{\mathbf{x}^{2}}^{n})} :=\bigg{(}\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}f(X_{i},X_{j })^{2}\bigg{)}^{\frac{1}{2}}.\]
Due to the connection between entropy numbers and covering numbers, cf. Lemma 6.21 of [36], the capacity assumption on entropy numbers can be restated by using the metric entropy, namely the logarithm of the covering number, i.e., there exist constants \(0<p_{1},p_{2}<1/2\), \(a_{1}>0\) and \(a_{2}>0\) such that
\[\log\mathcal{N}(\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2}(P_ {\mathbf{x}}^{n}\otimes P_{\mathcal{X}}),\varepsilon) \leq\log(4)a_{1}^{2p_{1}}\varepsilon^{-2p_{1}},\] \[\log\mathcal{N}(\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2}(P_ {\mathbf{x}^{2}}^{n}),\varepsilon) \leq\log(4)a_{2}^{2p_{2}}\varepsilon^{-2p_{2}}.\]
Recall that the \(\phi\)-ranking risk \(\mathcal{R}^{\phi}(\cdot)\) is given by (1.7) and the Bayes \(\phi\)-ranking rule \(f_{\phi}^{*}\) is defined by (2.2). Note that if the loss \(\phi\) can be truncated at some \(M>0\), there holds
\[\mathcal{R}^{\phi}(\pi(f))-\mathcal{R}^{\phi}(f_{\phi}^{*})\leq\mathcal{R}^{ \phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*})\]
for all \(f:\mathcal{X}^{2}\to\mathbb{R}\). In the following, we can always consider truncated estimators because projecting the values of decision functions onto \([-M,M]\) does not increase their excess \(\phi\)-ranking risk. Now we can give our oracle inequality for the estimators generated by algorithm (2.1) with a general loss \(\phi\).
**Theorem 1**.: _Let \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}\). Given a loss function \(\phi:\mathcal{Y}\times\mathcal{Y}\times\mathbb{R}\to[0,\infty)\) and a measurable positive semi-definite kernel \(K:\mathcal{X}^{2}\times\mathcal{X}^{2}\to\mathbb{R}\) with the associated RKHS \(\mathcal{H}_{K}\) consisting of skew-symmetric functions, the estimator \(f_{\mathbf{z}}\) is defined by (2.1) with \(\lambda>0\). Assume that \(\phi\) satisfies Assumption 1 with \(L>0,B>0,\tau\in[0,1]\) and \(V\geq B^{2-\tau}\), and \(\mathcal{H}_{K}\) satisfies Assumption 2 with \(0<p_{1},p_{2}<1/2\), \(a_{1}\geq B\) and \(a_{2}\geq B\). If there exists some \(f_{0}\in\mathcal{H}_{K}\) satisfying \(\|\phi_{f_{0}}\|_{\infty}\leq B_{0}\) for some constant \(B_{0}\geq 0\), then for all \(n\geq 2\) and \(t>0\), with probability at least \(1-(c_{0}+5)\exp(-t)\), there holds_
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*}) \tag{2.7}\] \[\leq 8\big{(}\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\big{)}+6c_{1}\left(\frac{a_{1}^{2p_{1}}V^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}\right)^{\frac{1}{2-\tau-p_{1}+p_{1}\tau}}\] \[\quad+\frac{6c_{2}a_{1}^{2p_{1}}B^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}+\frac{3c_{3}a_{1}^{2p_{1}}L^{2p_{1}}B^{1-p_{1}}t}{\lambda^{p_{1}}n}+\frac{3c_{4}a_{2}^{2p_{2}}L^{2p_{2}}B^{1-p_{2}}t}{\lambda^{p_{2}}n}\] \[\quad+\left(\frac{1872Vt}{n}\right)^{\frac{1}{2-\tau}}+\frac{900Bt}{n}+\frac{456B_{0}t}{n}+\frac{3c_{5}t}{n},\]
_where \(c_{i},i=0,1,2,3,4,5\) are constants independent of \(\lambda\), \(n\) or \(t\) and explicitly given in the proof._
Next, we apply the oracle inequality to the margin-based loss function \(\phi(y,y^{\prime},t)=\psi(\mathrm{sgn}(y-y^{\prime})t)\) with \(\psi:\mathbb{R}\rightarrow[0,\infty)\), where \(\psi\) satisfies the following assumption.
**Assumption 3**.: _The univariate function \(\psi:\mathbb{R}\rightarrow[0,\infty)\) is convex, differentiable at \(t=0\) with \(\psi^{\prime}(0)<0\), and the smallest zero of \(\psi\) is 1._
Examples of \(\psi\) in Assumption 3 include the hinge loss \(\psi_{\mathrm{hinge}}\), the square loss \(\psi_{\mathrm{square}}\), and the \(r\)-norm hinge loss with \(1\leq r<\infty\) defined by \(\psi_{r}(t):=(\psi_{\mathrm{hinge}}(t))^{r}\). Due to Lemma 9 in Subsection 5.3, if \(\psi\) satisfies Assumption 3, the margin-based loss \(\phi(y,y^{\prime},t)=\psi(\mathrm{sgn}(y-y^{\prime})t)\) can be truncated at \(M=1\) and the Bayes \(\phi\)-ranking rule \(f_{\phi}^{*}\) can be taken to be skew-symmetric. Now we can give the oracle inequality for the estimators generated by algorithm (2.1) with a margin-based loss.
**Theorem 2**.: _Let \(\mathbf{z}=\left\{(X_{i},Y_{i})\right\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}\). Given a margin-based loss \(\phi(y,y^{\prime},t)=\psi(\mathrm{sgn}(y-y^{\prime})t)\) with \(\psi:\mathbb{R}\rightarrow[0,\infty)\) satisfying Assumption 3 and a measurable positive semi-definite kernel \(K:\mathcal{X}^{2}\times\mathcal{X}^{2}\rightarrow\mathbb{R}\) with the associated RKHS \(\mathcal{H}_{K}\) consisting of skew-symmetric functions, the estimator \(f_{\mathbf{z}}\) is defined by (2.1) with \(\lambda>0\). Assume that there exist constants \(L>0\), \(B>0,\tau\in[0,1]\) and \(V\geq B^{2-\tau}\) such that \(\psi\) is locally \(L\)-Lipschitz and \(\psi(t)\leq B\) over \(t\in[-1,1]\), and the variance bound (2.6) of Assumption 1 holds with truncation parameter \(M=1\). If there exists some \(f_{0}\in\mathcal{H}_{K}\) satisfying \(\|\phi_{f_{0}}\|_{\infty}\leq B_{0}\) for some constant \(B_{0}\geq 0\) and \(\mathcal{H}_{K}\) satisfies Assumption 2 with \(0<p_{1},p_{2}<1/2\), \(a_{1}\geq B\) and \(a_{2}\geq B\), then the conclusion of Theorem 1 holds true._
**Remark 1**.: _Under the assumptions of Theorem 2, we further assume that \(p_{1}=p_{2}=p\) and there exist constants \(c>0\) and \(\gamma\in(0,1]\) such that, for all \(\lambda>0\), the approximation error \(\mathcal{A}(\lambda)\) is bounded as_
\[\mathcal{A}(\lambda):=\inf_{f\in\mathcal{H}_{K}}\lambda\|f\|_{K}^{2}+\mathcal{ R}^{\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*})\leq c\lambda^{\gamma}.\]
_Then by choosing_
\[\lambda=n^{-b},\ b=\frac{1}{(2-\tau-p+p\tau)\gamma+p},\]
_we have_
\[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*}) \lesssim n^{-\frac{\gamma}{(2-\tau-p+p\tau)\gamma+p}}.\]
_This learning rate significantly improves the existing results for pairwise ranking with margin-based losses. See Section 3 for a detailed comparison._
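To make the exponent in Remark 1 concrete, consider two illustrative parameter choices (not tied to any particular distribution). In the low-noise extreme \(\tau=1\) with \(\gamma=1\), we get
\[b=\frac{1}{(2-\tau-p+p\tau)\gamma+p}=\frac{1}{(1-p+p)+p}=\frac{1}{1+p},\]
so the excess \(\phi\)-ranking risk decays like \(n^{-1/(1+p)}\), approaching the fast rate \(n^{-1}\) as the capacity exponent \(p\to 0\). At the other extreme, \(\tau=0\) with \(\gamma=1\) gives \(b=1/(2-p+p)=1/2\), recovering the slower rate \(n^{-1/2}\).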
We will prove Theorem 1 and Theorem 2 in Section 4. Having derived the oracle inequality, we now focus on the pairwise ranking scheme (1.4) with the pairwise Gaussian kernel \(K^{\sigma}\) and two typical margin-based losses:
\[\phi_{\mathrm{hinge}}(y,y^{\prime},t):=\psi_{\mathrm{hinge}}(\mathrm{sgn}(y-y ^{\prime})t)=\max\{1-\mathrm{sgn}(y-y^{\prime})t,0\}\]
and
\[\phi_{\mathrm{square}}(y,y^{\prime},t):=\psi_{\mathrm{square}}(\mathrm{sgn}(y- y^{\prime})t)=(1-\mathrm{sgn}(y-y^{\prime})t)^{2}.\]
Recall that the excess ranking risk \(\mathcal{E}(f)\) and the excess \(\phi\)-ranking risk \(\mathcal{E}^{\phi}(f)\) are defined by (1.2) and (1.8), respectively. To derive learning rates for excess ranking risk using Theorem 2, the following problems remain to be solved:
* Verify Assumption 1 for \(\phi_{\text{hinge}}\), \(\phi_{\text{square}}\);
* Construct an \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) to bound the approximation error \(\mathcal{A}(\lambda)\);
* Verify the capacity condition (Assumption 2) for the RKHS \(\mathcal{H}_{K^{\sigma}}\);
* Establish calibration inequality for ranking, i.e., bounding the excess ranking risk \(\mathcal{E}(f)\) by the excess \(\phi\)-ranking risk \(\mathcal{E}^{\phi}(f)\).
In Section 5, we will solve these problems and establish the corresponding estimates under proper regularity assumptions. In the rest of this section, we introduce these assumptions, which characterize the distribution, the smoothness of the target function, and the intrinsic dimension of the input domain. Then we present our results on fast learning rates of the Gaussian ranking estimator (1.4) equipped with \(\phi_{\text{hinge}}\) and \(\phi_{\text{square}}\).
We first introduce two noise conditions to describe the distribution. Recall that \(P\) is a probability distribution on \(\mathcal{X}\times\mathcal{Y}\). For distinct \((x,y)\) and \((x^{\prime},y^{\prime})\), define the posterior probabilities of \(P\), denoted by \(\eta_{+},\eta_{-},\eta_{=}:\mathcal{X}^{2}\to[0,1]\), as
\[\eta_{+}(x,x^{\prime}) :=P(y>y^{\prime}|x,x^{\prime}),\] \[\eta_{-}(x,x^{\prime}) :=P(y<y^{\prime}|x,x^{\prime}),\] \[\eta_{=}(x,x^{\prime}) :=P(y=y^{\prime}|x,x^{\prime}).\]
Then partition \(\mathcal{X}^{2}\) into
\[\mathcal{X}^{2}_{+} :=\{(x,x^{\prime})\in\mathcal{X}^{2}:\eta_{+}(x,x^{\prime})>\eta_ {-}(x,x^{\prime})\},\] \[\mathcal{X}^{2}_{-} :=\{(x,x^{\prime})\in\mathcal{X}^{2}:\eta_{+}(x,x^{\prime})<\eta_ {-}(x,x^{\prime})\}, \tag{2.8}\] \[\mathcal{X}^{2}_{=} :=\{(x,x^{\prime})\in\mathcal{X}^{2}:\eta_{+}(x,x^{\prime})=\eta_ {-}(x,x^{\prime})\}.\]
Following Proposition 1 of [12], the _Bayes ranking rule_\(f^{*}_{\text{rank}}:\mathcal{X}^{2}\to\mathbb{R}\) which minimizes the ranking risk \(\mathcal{R}(f)\) in (1.1) over all measurable functions is given by
\[f^{*}_{\text{rank}}(x,x^{\prime})=\text{sgn}\big{(}\eta_{+}(x,x^{\prime})- \eta_{-}(x,x^{\prime})\big{)}=\begin{cases}1&\text{ if }(x,x^{\prime})\in \mathcal{X}^{2}_{+},\\ 0&\text{ if }(x,x^{\prime})\in\mathcal{X}^{2}_{=},\\ -1&\text{ if }(x,x^{\prime})\in\mathcal{X}^{2}_{-}.\end{cases}\]
Then the minimal ranking risk is \(\mathcal{R}(f^{*}_{\text{rank}})=\mathbb{E}\min\left\{\eta_{+}(X,X^{\prime}), \eta_{-}(X,X^{\prime})\right\}\).
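As a simple illustration, consider the bipartite case \(\mathcal{Y}=\{-1,1\}\) and write \(\eta(x):=P(y=1|x)\). Since the two objects are drawn independently,
\[\eta_{+}(x,x^{\prime})=\eta(x)\big{(}1-\eta(x^{\prime})\big{)},\qquad\eta_{-}(x,x^{\prime})=\big{(}1-\eta(x)\big{)}\eta(x^{\prime}),\]
so that \(\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})=\eta(x)-\eta(x^{\prime})\), and the Bayes ranking rule reduces to \(f^{*}_{\text{rank}}(x,x^{\prime})=\text{sgn}\big{(}\eta(x)-\eta(x^{\prime})\big{)}\), i.e., ranking by the posterior probability itself.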
Now we introduce a condition analogous to Tsybakov's noise condition in binary classification, describing the amount of noise in the labels. In order to motivate this condition, we note by
\[\min\left\{\eta_{+}(x,x^{\prime}),\eta_{-}(x,x^{\prime})\right\}=\frac{\eta_ {+}(x,x^{\prime})+\eta_{-}(x,x^{\prime})}{2}-\frac{\left|\eta_{+}(x,x^{\prime })-\eta_{-}(x,x^{\prime})\right|}{2}\]
that the function \(\left|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\right|\) can be used to describe the noise in comparing the labels of a distribution \(P\). Indeed, in regions where this function is close to \(1\), there is only a small amount of noise in comparing \(y\) and \(y^{\prime}\), whereas function values close to \(0\) only occur in regions with a high level of noise in comparing \(y\) and \(y^{\prime}\). The following condition, in which we use the convention \(t^{\infty}:=0\) for \(t\in(0,1)\), describes the size of the latter regions. Recall that \(P_{\mathcal{X}}\) is the marginal distribution of \(P\) on \(\mathcal{X}\). Let \(P_{\mathcal{X}}^{2}:=P_{\mathcal{X}}\otimes P_{\mathcal{X}}\).
**Assumption 4**.: _There exist constants \(C_{*}>0\) and \(q\in[0,\infty]\) such that_
\[P_{\mathcal{X}}^{2}\big{(}\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}:\big{|}\eta_{ +}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\big{|}\leq t\big{\}}\big{)}\leq C_{*}t^{q} \tag{2.9}\]
_for all \(t>0\)._
Obviously, \(P_{\mathcal{X}}^{2}\) has the noise exponent \(q>0\) if and only if \(\big{|}\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\big{|}^{-1}\) belongs to the Lorentz space \(\mathcal{L}_{q,\infty}(P_{\mathcal{X}}^{2})\) (see [27]). It is also easy to see that all distributions satisfy (2.9) with \(C_{*}=1\) and \(q=0\). Furthermore, the extreme case \(q=\infty\) requires that \(\big{|}\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\big{|}\) is lower bounded by some positive constant for almost every \((x,x^{\prime})\in\mathcal{X}^{2}\), including the case of a noise-free distribution in which \(\big{|}\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\big{|}=1\) almost surely. Under Assumption 4, we will demonstrate the variance bound (2.6) for \(\phi_{\mathrm{hinge}}\). We also leverage this assumption to establish a refined calibration inequality to bound \(\mathcal{E}(f)\) by \(\mathcal{E}^{\phi}(f)\). See Proposition 9 and Lemma 11. The work of [12] and [13] introduced two other noise conditions for bipartite ranking problems. Their noise conditions are stronger than ours, which leads to a better variance bound exponent \(\tau\) in (2.6). We will make detailed comparisons of these noise conditions in Subsection 5.5.
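As a quick numerical sanity check of (2.9), consider the toy model (ours) in which \(\eta_{+}-\eta_{-}=x-x^{\prime}\) with \(X,X^{\prime}\) independent uniform on \([0,1]\); then \(P_{\mathcal{X}}^{2}(|\eta_{+}-\eta_{-}|\leq t)=2t-t^{2}\leq 2t\), so Assumption 4 holds with \(q=1\) and \(C_{*}=2\):

```python
import numpy as np

rng = np.random.default_rng(0)
x, xp = rng.random(10**6), rng.random(10**6)
diff = np.abs(x - xp)            # plays the role of |eta_+ - eta_-|

for t in [0.01, 0.05, 0.1, 0.2]:
    est = np.mean(diff <= t)     # Monte Carlo for the left-hand side of (2.9)
    print(t, est, 2 * t - t**2)  # empirical value vs exact 2t - t^2 <= 2t
```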
The second noise condition is a geometric condition for distributions that describes the location of the noise sets and allows us to estimate the approximation error for Gaussian kernels. According to our discussion in Subsection 5.3, the Bayes ranking rule of \(\phi_{\mathrm{hinge}}\) can be defined pointwise, i.e., given \((x,x^{\prime})\in\mathcal{X}^{2}\),
\[f_{\mathrm{hinge}}^{*}(x,x^{\prime}):=\operatorname*{arg\,min}_{t\in\mathbb{R }}\eta_{+}(x,x^{\prime})\psi_{\mathrm{hinge}}(t)+\eta_{-}(x,x^{\prime})\psi_{ \mathrm{hinge}}(-t).\]
Then \(f_{\mathrm{hinge}}^{*}\) can be explicitly given by
\[f_{\mathrm{hinge}}^{*}(x,x^{\prime})=\operatorname*{sgn}\bigl{(}\eta_{+}(x,x ^{\prime})-\eta_{-}(x,x^{\prime})\bigr{)},\ \forall(x,x^{\prime})\in\mathcal{X}^{2}, \tag{2.10}\]
which is exactly the optimal ranking rule \(f_{\mathrm{rank}}^{*}\). Note that \(f_{\mathrm{hinge}}^{*}\) is typically a step function. Therefore, employing a classical smoothness assumption to describe the regularity of \(f_{\mathrm{hinge}}^{*}\) seems rather restrictive. Inspired by the theoretical analysis of binary classification with hinge loss (cf. [36]), we introduce the second noise condition, called the margin-noise condition. It relates the noise to the distance to the decision boundary and is applied to bound the approximation error with respect to the hinge loss and Gaussian kernels. See Proposition 5.
**Assumption 5**.: _There exist constants \(C_{**}>0\) and \(\beta\geq 0\) such that for all \(t\geq 0\),_
\[\int_{\{(x,x^{\prime})\in\mathcal{X}^{2}:\Delta(x,x^{\prime})<t\}}|\eta_{+}(x, x^{\prime})-\eta_{-}(x,x^{\prime})|dP_{\mathcal{X}}(x)dP_{\mathcal{X}}(x^{ \prime})\leq C_{**}t^{\beta}. \tag{2.11}\]
_Here, \(\Delta(x,x^{\prime})\) is the distance to the decision boundary, defined as_
\[\Delta(x,x^{\prime}):=\begin{cases}\operatorname{dist}((x,x^{\prime}),\mathcal{X}_{-}^{2}\cup\mathcal{X}_{=}^{2})&(x,x^{\prime})\in\mathcal{X}_{+}^{2}\\ \operatorname{dist}((x,x^{\prime}),\mathcal{X}_{+}^{2}\cup\mathcal{X}_{=}^{2})&(x,x^{\prime})\in\mathcal{X}_{-}^{2}\\ 0&(x,x^{\prime})\in\mathcal{X}_{=}^{2},\end{cases} \tag{2.12}\]
_where \(\text{dist}(x,A):=\inf_{y\in A}\|x-y\|_{2}\)._
Note that in condition (2.11) we neither impose any kind of smoothness assumption nor require that \(P^{2}_{\mathcal{X}}\) is absolutely continuous with respect to the Lebesgue measure. If \(P^{2}_{\mathcal{X}}\) has only a low concentration near the decision boundary \(\mathcal{X}^{2}_{=}\) or it is particularly noisy in this region, Assumption 5 is satisfied for a large exponent \(\beta\). For instance, we can select arbitrary large values for \(\beta\) in the extreme case where \(\mathcal{X}^{2}_{+}\) and \(\mathcal{X}^{2}_{-}\) have positive distance.
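Continuing the toy model above with \(\eta_{+}-\eta_{-}=x-x^{\prime}\) on \([0,1]\), the set \(\mathcal{X}^{2}_{=}\) is the diagonal and \(\Delta(x,x^{\prime})=|x-x^{\prime}|/\sqrt{2}\); a short Monte Carlo computation (ours) suggests that (2.11) then holds with \(\beta=2\) and \(C_{**}=2\):

```python
import numpy as np

rng = np.random.default_rng(1)
x, xp = rng.random(10**6), rng.random(10**6)

# Toy model: eta_+ - eta_- = x - x', so X^2_= is the diagonal of [0,1]^2 and
# Delta(x, x') = |x - x'| / sqrt(2), the Euclidean distance to the diagonal.
delta = np.abs(x - xp) / np.sqrt(2)
noise = np.abs(x - xp)                  # |eta_+ - eta_-|

for t in [0.05, 0.1, 0.2]:
    lhs = np.mean(noise * (delta < t))  # Monte Carlo for the integral in (2.11)
    print(t, lhs, 2 * t**2)             # lhs stays below 2 t^2 (beta = 2)
```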
Next, we introduce the box-counting dimension condition, which takes full advantage of the low-dimensional intrinsic structure of the input data. Assumption 2 requires upper bounds on the entropy numbers of the RKHS \(\mathcal{H}_{K}\) with respect to the \(\mathcal{L}_{2}\)-seminorms \(\|\cdot\|_{\mathcal{L}_{2}(P^{n}_{\mathbf{x}}\otimes P_{\mathcal{X}})}\) and \(\|\cdot\|_{\mathcal{L}_{2}(P^{n}_{\mathbf{x}^{2}})}\), both of which are dominated by the sup-norm \(\|\cdot\|_{\infty}\). The covering number of the Gaussian RKHS \(\mathcal{H}_{K^{\sigma}}\) with respect to the sup-norm has been intensively studied in the literature, cf. [23, 37], and some of these works relate it to the uniform covering number \(\mathcal{N}(\mathcal{X},\|\cdot\|_{\infty},\sigma)\) of the input space. Since \(\mathcal{N}(\mathcal{X},\|\cdot\|_{\infty},\sigma)\) depends exponentially on the dimension of the input space \(\mathcal{X}\subset\mathbb{R}^{d}\), the learning rate suffers from the curse of dimensionality when \(d\) is large. This phenomenon is usually inevitable for high-dimensional data. However, we can derive much faster rates of convergence when the intrinsic dimension of the data is much smaller than the dimension of its ambient space. We describe the low-dimensional intrinsic structure of the input domain through the box-counting dimension, which was introduced in fractal geometry to describe low-dimensional intrinsic fractal structure, see, e.g., [17]. For a metric space \((\mathcal{T},\mathrm{d})\) and a subset \(\mathcal{S}\subset\mathcal{T}\), if the limit
\[\lim_{\varepsilon\to 0}\frac{\log\mathcal{N}(\mathcal{S},\mathrm{d}, \varepsilon)}{\log\frac{1}{\varepsilon}}\]
exists, it is called the _box-counting dimension_ of \(\mathcal{S}\). If not, we define the upper box-counting dimension
\[\limsup_{\varepsilon\to 0}\frac{\log\mathcal{N}(\mathcal{S},\mathrm{d}, \varepsilon)}{\log\frac{1}{\varepsilon}},\]
and lower box-counting dimension
\[\liminf_{\varepsilon\to 0}\frac{\log\mathcal{N}(\mathcal{S},\mathrm{d}, \varepsilon)}{\log\frac{1}{\varepsilon}}.\]
**Assumption 6**.: _There exists a constant \(\varrho>0\) such that_
\[\limsup_{\varepsilon\to 0}\frac{\log\mathcal{N}(\mathcal{X},\|\cdot\|_{\infty}, \varepsilon)}{\log\frac{1}{\varepsilon}}\leq\varrho, \tag{2.13}\]
_or, equivalently, there exist constants \(C_{\mathcal{X}}\geq 1\) and \(\varrho>0\) such that_
\[\mathcal{N}(\mathcal{X},\|\cdot\|_{\infty},\varepsilon)\leq C_{\mathcal{X}} \varepsilon^{-\varrho}. \tag{2.14}\]
The infimum over all \(\varrho\) satisfying (2.13) coincides with the upper box-counting dimension of \(\mathcal{X}\). This assumption is rather general because it captures the intrinsic dimension of a low-dimensional smooth submanifold \(\mathcal{M}\subset\mathbb{R}^{d}\). When a bounded \(\mathcal{X}\subset\mathbb{R}^{d}\) has a non-empty interior, the assumption is fulfilled precisely for \(\varrho=d\). Under Assumption 6, we will show that the learning rate of the algorithms concerned depends mainly on the intrinsic dimension \(\varrho\) of the input domain.
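The upper box-counting dimension in (2.13) can be estimated numerically by counting occupied dyadic boxes, which approximates the covering number up to constant factors. The sketch below (ours) does this for points sampled from a smooth curve in \(\mathbb{R}^{3}\), where the fitted slope is close to the intrinsic dimension \(\varrho=1\) rather than the ambient dimension \(d=3\):

```python
import numpy as np

def box_count(points, eps):
    """Number of sup-norm boxes of side eps occupied by the points."""
    cells = np.floor(points / eps).astype(np.int64)
    return len(np.unique(cells, axis=0))

rng = np.random.default_rng(2)
t = rng.random(5000)
# A smooth curve in R^3: intrinsic dimension 1, ambient dimension 3.
X = np.stack([t, np.sin(2 * np.pi * t), t**2], axis=1)

eps_list = [2.0**(-k) for k in range(2, 8)]
logN = [np.log(box_count(X, e)) for e in eps_list]
logE = [np.log(1.0 / e) for e in eps_list]
slope = np.polyfit(logE, logN, 1)[0]
print(slope)   # roughly 1, the box-counting dimension of the curve
```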
**Remark 2**.: _In Subsection 5.2, we define a convolution operator \(\mathcal{K}^{\sigma}\ast:\mathcal{L}_{2}(\mathbb{R}^{2d})\to\mathcal{H}_{K^{\sigma}}\) and show that applying the convolution operator to the Bayes \(\phi\)-ranking rule \(f_{\phi}^{\ast}\) yields the desired approximator. Hence a proper extension of \(f_{\phi}^{\ast}:\mathcal{X}^{2}\to\mathbb{R}\) to \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) should be taken into consideration. In many cases, it is enough to use the zero extension. However, if \(\mathcal{X}\) has a low intrinsic dimension \(\varrho<d\), \(\mathcal{X}\) may have zero Lebesgue measure, and the trivial zero extension of \(f_{\phi}^{\ast}\) only yields the zero function in \(\mathcal{L}_{2}(\mathbb{R}^{2d})\). For the case of square loss, we can fix this issue with the help of Whitney's extension theorem (see Theorem 2.3.6 of [19]), in which we directly regard \(f_{\mathrm{square}}^{\ast}\) (defined in (2.16)) as a function in \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) and impose a Besov smoothness assumption on \(f_{\mathrm{square}}^{\ast}\). As is explained in [18]: if \(\mathcal{X}^{2}\) is a compact \(C^{k}\)-manifold, by Whitney's extension theorem, any \(f\in C^{k}(\mathcal{X}^{2})\) has an extension to a function \(\widetilde{f}\in C^{k}(\mathcal{X}^{2}_{+\delta})\), where \(\mathcal{X}^{2}_{+\delta}\) is the \(\delta\)-neighbourhood of \(\mathcal{X}^{2}\) in \(\mathbb{R}^{2d}\). However, for the case of hinge loss, the situation is more challenging because we need to extend \(f_{\mathrm{hinge}}^{\ast}\) to \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) without violating the margin-noise condition (Assumption 5), which is necessary to bound the approximation error. To the best of our knowledge, there is no trivial way to fix this issue. In this work, we will give a construction of this extension in the proof of Proposition 5._
Under the noise conditions, i.e., Assumption 4 and Assumption 5, and Assumption 6, we derive fast learning rates for the Gaussian ranking estimator (1.4) equipped with \(\phi_{\mathrm{hinge}}\). Let \(\mathcal{B}_{\mathbb{R}^{d}}\) denote the unit ball in \(\mathbb{R}^{d}\).
**Theorem 3**.: _Let \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}\). The estimator \(f_{\mathbf{z}}\) is defined by (1.4) with \(\lambda>0,\sigma\in(0,1)\), and \(\phi_{\mathrm{hinge}}(y,y^{\prime},t)=\max\{1-\mathrm{sgn}(y-y^{\prime})t,0\}\). Further, let \(P\) satisfy Assumption 4 for constants \(C_{\ast}\) and \(q\) and Assumption 5 for constants \(C_{\ast\ast}\) and \(\beta\). Assume that there exists some \(r>0\) such that \(\mathcal{X}\subset r\mathcal{B}_{\mathbb{R}^{d}}\) and Assumption 6 is satisfied for constants \(C_{\mathcal{X}}\) and \(\varrho\in(0,d]\). Then for all \(n\geq 2,t>0\), and \(p\in(0,1/4]\), with probability at least \(1-(c_{0}+5)\exp(-t)\), there holds_
\[\begin{split}&\lambda\|f_{\mathbf{z}}\|_{K^{\sigma}}^{2}+\mathcal{R }^{\phi_{\mathrm{hinge}}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{\mathrm{ hinge}}}(f_{\mathrm{hinge}}^{\ast})\\ \leq&\frac{2^{3d+4}r^{2d}}{\Gamma(d)}\lambda\sigma^{ -2d}+\frac{2^{\beta/2+4}C_{\ast\ast}\Gamma(d+\frac{\beta}{2})}{\Gamma(d)} \sigma^{\beta}+36c_{6}\left(\frac{C_{\mathcal{X}}^{\ast}C_{\ast}^{\frac{1-p}{ q+1}}}{\lambda^{p}p^{2d+1}\sigma^{2\varrho}n}\right)^{\frac{q+1}{q-p+2}}\\ &+\frac{12C_{\mathcal{X}}^{\ast}c_{6}(t+1)}{\lambda^{p}p^{2d+1} \sigma^{2\varrho}n}+\left(\frac{11232C_{\ast}^{\frac{1}{q+1}}t}{n}\right)^{ \frac{q+1}{q+2}}+\frac{(2712+3c_{6})t}{n},\end{split} \tag{2.15}\]
_where_
\[C_{\mathcal{X}}^{\ast}:=12C_{\mathcal{X}}^{2}\binom{4e+2d}{2d}\frac{(2d+1)^{2 d+1}}{2^{2d+1}e^{4d+1}}\]
_and \(c_{0},c_{6}\) are constants independent of \(\lambda,\sigma,n,t\) or \(p\) which are explicitly given in the proof. In particular, choose_
\[\begin{split}\sigma&=n^{-a},\quad a=\frac{q+1}{ \beta(q+2)+2\varrho(q+1)};\\ \lambda&=n^{-b},\quad b\geq\frac{(2d+\beta)(q+1)}{ \beta(q+2)+2\varrho(q+1)},\quad p=\frac{\log 2}{4\log n}.\end{split}\]
_Then for all \(n\geq 2\) and \(t\geq 1\), we have_
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\leq\mathcal{E}^{\phi_{\mathrm{hinge}}}(\pi(f_ {\mathbf{z}}))\lesssim tn^{-\frac{\beta(q+1)}{\beta(q+2)+2\varrho(q+1)}}\log^{2 d+1}n\]
_with probability at least \(1-(c_{0}+5)\exp(-t)\)._
For the case of square loss, the Bayes ranking rule of \(\phi_{\rm square}\) is defined pointwise by
\[f^{*}_{\rm square}(x,x^{\prime}):=\operatorname*{arg\,min}_{t\in\mathbb{R}} \eta_{+}(x,x^{\prime})\psi_{\rm square}(t)+\eta_{-}(x,x^{\prime})\psi_{\rm square }(-t)\]
which can be explicitly expressed as
\[f^{*}_{\rm square}(x,x^{\prime}):=\frac{\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{ \prime})}{\eta_{+}(x,x^{\prime})+\eta_{-}(x,x^{\prime})},\ \forall(x,x^{\prime})\in\mathcal{X}^{2}. \tag{2.16}\]
The function \(f^{*}_{\rm square}\) enjoys a smoothness property inherited from \(\eta_{+}(x,x^{\prime})\) and \(\eta_{-}(x,x^{\prime})\), allowing us to derive an approximation error bound more directly. Here we introduce the notion of Besov smoothness, which is also adopted in [22, 40, 41, 18]. Given a function \(f:\mathbb{R}^{2d}\to\mathbb{R},h\in\mathbb{R}^{2d}\) and \(s\in\mathbb{N}\), define the \(s\)-fold application of the difference operator
\[\Delta^{s}_{h}f(x):=\sum_{j=0}^{s}(-1)^{s-j}\binom{s}{j}f(x+jh).\]
Given a measure \(\mu\) on \(\mathcal{X}^{2}\subset\mathbb{R}^{2d}\), define the \(s\)-th modulus of smoothness
\[\omega_{s,\mathcal{L}_{2}(\mu)}(f,t):=\sup_{\|h\|_{2}\leq t}\|\Delta^{s}_{h}f \|_{\mathcal{L}_{2}(\mu)}\,.\]
Finally, given an \(\alpha>0\) and set \(s=\lfloor\alpha\rfloor+1\), define the semi-norm
\[|f|_{\mathcal{B}^{\alpha}_{2,\infty}(\mu)}:=\sup_{t>0}t^{-\alpha}\omega_{s, \mathcal{L}_{2}(\mu)}(f,t).\]
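The following sketch (ours) makes the definitions above concrete for a univariate example: it implements \(\Delta_{h}^{s}f\) and a crude Monte Carlo surrogate of \(\omega_{s,\mathcal{L}_{2}(\mu)}(f,t)\), with \(\mu\) the uniform measure on \([-1,1]\) and \(f(x)=|x|\); the printed ratios \(\omega_{s,\mathcal{L}_{2}(\mu)}(f,t)/t\) stay bounded, consistent with a finite Besov seminorm for \(\alpha=1\):

```python
import numpy as np
from math import comb

def diff_op(f, x, h, s):
    """s-fold difference Delta_h^s f(x) = sum_j (-1)^(s-j) C(s,j) f(x + j h)."""
    return sum((-1)**(s - j) * comb(s, j) * f(x + j * h) for j in range(s + 1))

def modulus(f, x_sample, t, s, n_h=50):
    """Monte Carlo omega_{s,L2}(f,t): sup over random |h| <= t of ||Delta_h^s f||_L2."""
    rng = np.random.default_rng(3)
    best = 0.0
    for _ in range(n_h):
        h = t * (2 * rng.random() - 1)     # random step with |h| <= t
        val = np.sqrt(np.mean(diff_op(f, x_sample, h, s) ** 2))
        best = max(best, val)
    return best

f = np.abs                                 # f(x) = |x|, take alpha = 1
x_sample = np.random.default_rng(4).uniform(-1, 1, 10**5)
s = 2                                      # s = floor(alpha) + 1
for t in [0.1, 0.2, 0.4]:
    print(t, modulus(f, x_sample, t, s) / t)   # bounded ratios
```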
Under Assumption 4, Assumption 6 and the Besov smoothness assumption on \(f^{*}_{\rm square}\), we derive fast learning rates for the Gaussian ranking estimator (1.4) equipped with \(\phi_{\rm square}\).
**Theorem 4**.: _Let \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}\). The estimator \(f_{\mathbf{z}}\) is defined by (1.4) with \(\lambda>0,\sigma\in(0,1)\), and \(\phi_{\rm square}(y,y^{\prime},t)=(1-{\rm sgn}(y-y^{\prime})t)^{2}\). Assume that Assumption 6 is satisfied for constants \(C_{\mathcal{X}}\) and \(\varrho\in(0,d]\). Further assume that \(f^{*}_{\rm square}\in\mathcal{L}_{2}(\mathbb{R}^{2d})\) and \(|f^{*}_{\rm square}|_{\mathcal{B}^{\alpha}_{2,\infty}(P^{2}_{\mathcal{X}})}<\infty\) for some \(\alpha>0\). Then for all \(n\geq 2,t>0\), and \(p\in(0,1/4]\), with probability at least \(1-(c_{0}+5)\exp(-t)\), there holds_
\[\begin{split}&\ \ \lambda\|f_{\mathbf{z}}\|_{K^{\sigma}}^{2}+ \mathcal{R}^{\phi_{\rm square}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{\rm square }}(f^{*}_{\rm square})\\ \leq&\ \frac{2^{2s+3}\|f^{*}_{\rm square}\|_{\mathcal{L}_{2} (\mathbb{R}^{2d})}^{2}}{\pi^{d}}\lambda\sigma^{-2d}+2^{3-\alpha}\left(\frac{ \Gamma\left(d+\frac{\alpha}{2}\right)}{\Gamma(d)}\right)^{2}|f^{*}_{\rm square }|_{\mathcal{B}^{\alpha}_{2,\infty}(P^{2}_{\mathcal{X}})}^{2}\sigma^{2\alpha} \\ &\ \ +\frac{C^{*}_{\mathcal{X}}(96c_{6}+72c_{6}t)}{\lambda^{p}p^{2d+ 1}\sigma^{2\varrho}n}+\frac{(33552+1824\cdot 2^{2s}+3c_{6})t}{n},\end{split} \tag{2.17}\]
_where_
\[C^{*}_{\mathcal{X}}:=12C^{2}_{\mathcal{X}}\binom{4e+2d}{2d}\frac{(2d+1)^{2d+ 1}}{2^{2d+1}e^{4d+1}}\]
_and \(c_{0},c_{6}\) are constants independent of \(\lambda,\sigma,n,t\) or \(p\) which are explicitly given in the proof. In particular, choose_
\[\sigma=n^{-a},\quad a=\frac{1}{2\alpha+2\varrho};\]
\[\lambda=n^{-b},\quad b\geq\frac{\alpha+d}{\alpha+\varrho},\quad p=\frac{\log 2}{4 \log n}.\]
_Then for all \(n\geq 2,t\geq 1\), we have_
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\lesssim\sqrt{\mathcal{E}^{\phi_{\mathrm{square }}}(\pi(f_{\mathbf{z}}))}\lesssim\sqrt{t}n^{-\frac{\alpha}{2(\alpha+\varrho)}} \log^{d+\frac{1}{2}}n\]
_with probability at least \(1-(c_{0}+5)\exp(-t)\). Furthermore, if \(P\) additionally satisfies Assumption 4 with \(q>0\), then with probability at least \(1-(c_{0}+5)\exp(-t)\), there holds_
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\lesssim t^{\frac{q+1}{q+2}}n^{-\frac{(q+1) \alpha}{(q+2)(\alpha+\varrho)}}\log^{\frac{(q+1)(2d+1)}{q+2}}n.\]
Finally, we would like to point out that, in light of Proposition 3.2 in [18], the class of functions \(f\) satisfying \(|f|_{\mathcal{B}^{\alpha}_{2,\infty}(P^{2}_{\mathcal{X}})}<\infty\) is indeed very rich, even when the upper box-counting dimension of \(\mathcal{X}\) is strictly less than \(d\). For instance, if \(f\) is a Hölder \(\alpha\)-continuous function on \(\mathcal{X}^{2}\), then \(|f|_{\mathcal{B}^{\alpha}_{2,\infty}(\mu)}<\infty\) for every measure \(\mu\) with \(\mathrm{supp}\mu\subset\mathcal{X}^{2}\). The proofs of Theorem 3 and Theorem 4 are postponed to Section 5 after establishing some preliminary results.
## 3 Discussions on Related Work
In this section, we compare our convergence analysis with some existing results in the literature. Our paper contributes to several rapidly growing lines of research; we cite only the works of particular relevance. More references can be found within these works.
In this paper, we consider the pairwise ranking model, which aims to learn a bivariate ranking rule \(f:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\). Our model is more general than the score-based ranking model, in which we learn a scoring function \(s:\mathcal{X}\to\mathbb{R}\) and construct the ranking rule as \(f(x,x^{\prime})=s(x)-s(x^{\prime})\). Some popular ranking algorithms including RankSVM in [20], RankNet in [7] and RankRLS in [30, 14, 31] are closely related to the score-based ranking model. For the error analysis of the score-based ranking model, [1] proved generalization bounds via algorithmic stability for the bipartite ranking problem where the label \(Y\) is binary. [11] derived a capacity-independent learning rate for RankRLS from the viewpoint of operator approximation. Subsequently, [52] developed a capacity-dependent generalization analysis for RankRLS by virtue of covering numbers and Hoeffding's decomposition for U-statistics.
In some learning tasks, for example, learning binary relations between two objects, we can assign a vertex to each object and an edge to represent the binary relations between two vertices, which naturally forms a graph structure. Denote by \(\mathcal{X}\) the vertex set and by \(\mathcal{X}^{2}\) the edge set. A positive semi-definite kernel \(G:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) induces an RKHS \(\mathcal{H}_{G}\) in which we can learn a function \(f:\mathcal{X}\to\mathbb{R}\) defined on the vertex set. However, in such graph learning tasks, if we want to figure out the binary relations, pairwise learning is more applicable, in which we need to learn a function \(f:\mathcal{X}^{2}\to\mathbb{R}\) defined on the edge set. In fact, the kernel function \(G\) defined between vertices can induce the so-called Kronecker product pairwise kernel \(K:\mathcal{X}^{2}\times\mathcal{X}^{2}\to\mathbb{R}\) by the Kronecker product \(K((x,x^{\prime}),(u,u^{\prime})):=G(x,u)G(x^{\prime},u^{\prime})\). Kronecker kernel ridge regression (KKRR) is a pairwise learning algorithm based on the Kronecker product pairwise kernel \(K\), where the induced regularized empirical minimization problem is solved in the RKHS \(\mathcal{H}_{K}\). In Subsection 5.1, we see that the RKHS \(\mathcal{H}_{K}\) consists of functions on \(\mathcal{X}^{2}\) and we can decompose any \(f\in\mathcal{H}_{K}\) into a direct sum of its symmetric part and
skew-symmetric part, which corresponds to a direct sum of two subspaces of \(\mathcal{H}_{K}\) and leads to a decomposition of the pairwise kernel \(K\) as well. The pairwise Gaussian kernel \(K^{\sigma}\) is indeed the skew-symmetric part of the traditional Gaussian kernel \(\widetilde{K}^{\sigma}\) on \(\mathcal{X}^{2}\) defined as \(\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime})):=\exp\left(-\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}/\sigma^{2}\right)\), and hence the corresponding RKHS \(\mathcal{H}_{K^{\sigma}}\) is one subspace in the direct sum of \(\mathcal{H}_{\widetilde{K}^{\sigma}}\). Although \(\mathcal{H}_{K^{\sigma}}\) is a smaller hypothesis space, the universality of the kernel can still be maintained when only learning skew-symmetric relations. That is to say, if the Kronecker product pairwise kernel \(K\) can approximate any continuous function on \(\mathcal{X}^{2}\) arbitrarily well, then the skew-symmetric part of the pairwise kernel can also approximate any continuous skew-symmetric function arbitrarily well, cf. Theorem III.4 of [42]. Therefore, when we use pairwise learning algorithms including KKRR to learn skew-symmetric relations between data, we only need to restrict the estimators to the hypothesis space \(\mathcal{H}_{K^{\sigma}}\). A similar observation also holds when using the symmetric part of the kernel to learn symmetric relations. One can refer to [6, 42, 29, 43] for more details.
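To illustrate this discussion, here is a small sketch (ours). It builds the plain Gaussian kernel on \(\mathcal{X}^{2}\) and antisymmetrizes it in one pair argument; for Kronecker product kernels this is one standard way to obtain the skew-symmetric part (the concrete formula below is our assumption, following the construction discussed in [42], and is not a formula quoted from this paper). The resulting reproducing functions satisfy \(f(x,x^{\prime})=-f(x^{\prime},x)\):

```python
import numpy as np

def gauss(a, b, sigma):
    return np.exp(-np.sum((a - b)**2) / sigma**2)

def K_tilde(x, xp, u, up, sigma):
    """Plain Gaussian kernel on X^2 (pairs treated as concatenated vectors)."""
    return gauss(np.concatenate([x, xp]), np.concatenate([u, up]), sigma)

def K_skew(x, xp, u, up, sigma):
    """Skew-symmetric part: antisymmetrize in the second pair (u, u')."""
    return 0.5 * (K_tilde(x, xp, u, up, sigma) - K_tilde(x, xp, up, u, sigma))

rng = np.random.default_rng(5)
x, xp, u, up = (rng.random(2) for _ in range(4))
# Kronecker structure makes this also antisymmetric in the first pair:
print(K_skew(x, xp, u, up, 0.5), -K_skew(xp, x, u, up, 0.5))  # equal
```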
As the pairwise ranking model is more general, some theoretical frameworks have been established to analyze its convergence behaviors. The main distinction in convergence analysis between ranking and classification or regression is that in ranking problems, the stochastic part of the convergence analysis involves a second-order \(U\)-process rather than summations of i.i.d. random variables. [12] and [33] applied Hoeffding's decomposition to the unregularized ranking algorithm and obtained generalization bounds with faster rates than \(\frac{1}{\sqrt{n}}\). Hoeffding's decomposition breaks the sample error of a ranking algorithm into an empirical term based on a sum of i.i.d. random variables and a degenerate U-process. In our work, we make full use of Talagrand's inequality, local Rademacher analysis, and capacity information of the hypothesis space to derive tight bounds on the empirical term as well as the degenerate part (see Section 4). For regularized pairwise ranking, [9] considered \(\ell_{1}\)-norm regularized SVM ranking and established a learning rate under a noise condition similar to Assumption 4. [34] proved an oracle inequality for parametric pairwise ranking with the Lasso penalty in high-dimensional settings. In contrast with our work, [10, 25, 44] considered regularized pairwise ranking with a fixed kernel and derived learning rates by assuming a \(\gamma\)-decaying rate of the approximation error. We further emphasize the differences between our results and the work of [12, 33, 10, 25, 44]. [12] raised the question of whether one can obtain generalization bounds with fast rates for the excess risk in ranking. They gave a positive answer for unregularized ranking estimators with the non-convex \(0-1\) loss, i.e., \(\phi(y,y^{\prime},t)=\psi_{0-1}(\mathrm{sgn}(y-y^{\prime})t)\) where \(\psi_{0-1}(t):=\mathbb{I}_{[0,\infty)}(t)\), and left the case of convex risk minimization to future study. The work of [33] advanced this line of research by establishing generalization bounds with better rates than \(\frac{1}{\sqrt{n}}\) for the excess ranking risk with convex margin-based losses. Both papers constructed ranking estimators in a general hypothesis space and required the optimal ranking rule to reside in this function space; hence they did not need to consider the analysis of the approximation error. Moreover, a noise condition was also proposed by [12]; we postpone the comparison of different noise conditions to Subsection 5.5. The work of [10, 44] also developed capacity-dependent empirical process techniques to analyze regularized ranking estimators, in which the capacity condition is based on the sup-norm \(\|\cdot\|_{\infty}\), which is more restrictive than the empirical \(\mathcal{L}_{2}\)-seminorm adopted in this paper. Moreover, [10] only focused on hinge loss and derived a rate of \(n^{-\frac{\gamma}{(2-\tau+p)\gamma+p}}\), which is slower than the rate of \(n^{-\frac{\gamma}{(2-\tau-p+p\tau)\gamma+p}}\) derived in Remark 1. The work of [25] considered a general loss function similar to ours and derived a rate of \(n^{-\frac{\gamma}{\gamma+1}}\) without any capacity condition on the hypothesis spaces or Bernstein conditions on the bias and variance. We note that even for the case \(\tau=0\)
in which the variance bound (2.6) is trivially satisfied, our rate given by \(n^{-\frac{\gamma}{\gamma+p}}\) is still faster than \(n^{-\frac{\gamma}{\gamma+1}}\) provided \(\gamma<1\). [44] considered the least square loss \(\phi_{\mathrm{ls}}(y,y^{\prime},t):=(y-y^{\prime}-t)^{2}\) in a regression setting. We can apply our analysis developed for the square loss to derive the rate of \(n^{-\frac{\gamma}{\gamma+p}}\) for the case of the \(\phi_{\mathrm{ls}}(y,y^{\prime},t)\) loss, which is much faster than \(n^{-\frac{2\gamma}{(p+1)\gamma+4}}\) obtained by [44]. We also note that the rate \(n^{-\frac{\gamma}{\gamma+p}}\) actually achieves the well-known minimax lower rate derived by [38] when approximating functions on \(\mathbb{R}^{2d}\). From this point of view, the oracle inequalities established in Theorem 1 and Theorem 2 are more general than existing results and at the same time lead to tighter bounds on the excess risk. Besides, [35] shows that the \(\gamma\)-decaying assumption on the approximation error is indeed very restrictive when describing the approximation ability of Gaussian RKHSs, and hence the setting of ranking with varying Gaussian kernels cannot be handled by the approaches developed in [10, 25, 44], which only focused on regularized ranking with fixed pairwise kernels.
The intrinsic dimension of the data can be utilized to improve the dimension dependence of learning rates. To describe the intrinsic dimensional structure, probably the most popular approach is to assume that \(\mathcal{X}\subset\mathbb{R}^{d}\) is a low-dimensional submanifold, see, e.g., [47, 48, 49]. The more general notion adopted in our paper is based on the box-counting dimension, which considerably generalizes the manifold assumption. To the best of our knowledge, our paper is the first to consider the learning behavior of regularized Gaussian ranking estimators under the assumption that the gap between the intrinsic dimension of the data and the dimension of its ambient space is large.
To sum up, in this paper we consider a regularized pairwise ranking problem with a general convex loss under substantially general assumptions. The oracle inequality established in our work leads to an elegant framework of convergence analysis that significantly improves the learning rates of existing work. It also enables us to derive fast learning rates for Gaussian ranking estimators, which can avoid the curse of dimensionality by exploiting a low intrinsic dimension assumption on the data.
## 4 Proofs of the General Oracle Inequalities
In this section, we provide detailed proofs of the general oracle inequality established in Theorem 1 and its variant in Theorem 2. Following a standard error decomposition, we use Hoeffding's decomposition together with concentration estimates for \(U\)-processes and local Rademacher analysis to bound the stochastic parts of the error terms.
### Error Decomposition and Hoeffding's Decomposition
Given an \(f_{0}\in\mathcal{H}_{K}\), by the definition of \(f_{\mathbf{z}}\) we have \(\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\mathbf{z}})+\lambda\|f_{\mathbf{z}}\|_{K }^{2}\leq\mathcal{R}_{\mathbf{z}}^{\phi}(f_{0})+\lambda\|f_{0}\|_{K}^{2}\). Then
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{\mathbf{ z}})-\mathcal{R}^{\phi}(f_{\phi}^{*})\] \[\leq\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})- \mathcal{R}^{\phi}(f_{\phi}^{*})\right)+\left(\mathcal{R}_{\mathbf{z}}^{\phi }(f_{0})-\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\phi}^{*})-\mathcal{R}^{\phi}(f_{0 })+\mathcal{R}^{\phi}(f_{\phi}^{*})\right)\] \[\quad+\left(\mathcal{R}^{\phi}(f_{\mathbf{z}})-\mathcal{R}^{\phi }(f_{\phi}^{*})-\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\mathbf{z}})+\mathcal{R}_{ \mathbf{z}}^{\phi}(f_{\phi}^{*})\right)\]
For a loss function \(\phi\) that can be truncated at some \(M>0\), since \(\phi(y,y^{\prime},\pi(t))\leq\phi(y,y^{\prime},t)\), we have \(\mathcal{R}_{\mathbf{z}}^{\phi}(\pi(f_{\mathbf{z}}))\leq\mathcal{R}_{\mathbf{ z}}^{\phi}(f_{\mathbf{z}})\) which implies \(\mathcal{R}_{\mathbf{z}}^{\phi}(\pi(f_{\mathbf{z}}))+\lambda\|f_{\mathbf{z}}\| _{K}^{2}\leq\mathcal{R}_{\mathbf{z}}^{\phi}(f_{0})+\lambda\|f_{0}\|_{K}^{2}\) for
any \(f_{0}\in\mathcal{H}_{K}\). Hence we can bound \(\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))- \mathcal{R}^{\phi}(f_{\phi}^{*})\) by the following error decomposition
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))- \mathcal{R}^{\phi}(f_{\phi}^{*})\leq\mathcal{A}(\lambda,f_{0})+\mathcal{S}_{ \mathbf{z}}(f_{0})+\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) \tag{4.1}\]
where
\[\mathcal{A}(\lambda,f_{0}) :=\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^ {\phi}(f_{\phi}^{*}),\] \[\mathcal{S}_{\mathbf{z}}(f_{0}) :=\mathcal{R}_{\mathbf{z}}^{\phi}(f_{0})-\mathcal{R}_{\mathbf{z} }^{\phi}(f_{\phi}^{*})-\mathcal{R}^{\phi}(f_{0})+\mathcal{R}^{\phi}(f_{\phi}^ {*}),\] \[\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) :=\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{ \phi}^{*})-\mathcal{R}_{\mathbf{z}}^{\phi}(\pi(f_{\mathbf{z}}))+\mathcal{R}_{ \mathbf{z}}^{\phi}(f_{\phi}^{*}).\]
The first term \(\mathcal{A}(\lambda,f_{0})\) reflects the approximation error of using \(f_{0}\) to approximate \(f_{\phi}^{*}\). The other two terms \(\mathcal{S}_{\mathbf{z}}(f_{0})\) and \(\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}}))\) constitute the stochastic parts of the error decomposition.
Under Assumption 1, we can bound \(\mathcal{S}_{\mathbf{z}}(f_{0})\) by Bernstein's inequality for \(U\)-statistics, cf. [2], stated below.
**Lemma 1**.: _Let \(\mathbf{z}=\{Z_{i}\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on a measurable space \(\mathcal{Z}\), and let \(h:\mathcal{Z}\times\mathcal{Z}\rightarrow\mathbb{R}\) be a symmetric measurable function with \(\mathbb{E}[h(Z,Z^{\prime})]=0\) and \(\|h\|_{\infty}=b\). The \(U\)-statistic with kernel \(h\) is defined as_
\[U_{\mathbf{z}}(h)=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}h(Z_{i},Z_{j}).\]
_Let \(f:\mathcal{Z}\rightarrow\mathbb{R}\) be defined as \(f(z):=\mathbb{E}[h(z,Z^{\prime})]\) and \(\zeta^{2}:=\mathbb{E}[f(Z)^{2}]\). Then for all \(t>0\) we have_
\[P\big{(}\sqrt{n}|U_{\mathbf{z}}(h)|\geq t\big{)}\leq 4\exp\left(-\frac{t^{2}}{8 \zeta^{2}+\left(\frac{64}{\sqrt{n-1}}+\frac{1}{3\sqrt{n}}\right)bt}\right),\]
_which implies that_
\[P\left(U_{\mathbf{z}}(h)\geq\sqrt{\frac{8\zeta^{2}t}{n}}+\frac{150bt}{n} \right)\leq 2\exp(-t). \tag{4.2}\]
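For intuition, the sketch below (ours) computes \(U_{\mathbf{z}}(h)\) for the toy kernel \(h(a,b)=ab\) with \(Z\) uniform on \([-1,1]\) (so \(\mathbb{E}h(Z,Z^{\prime})=0\) and \(\|h\|_{\infty}\leq 1\)) and compares the empirical tail of \(\sqrt{n}|U_{\mathbf{z}}(h)|\) with the Bernstein bound of Lemma 1; here \(\zeta^{2}=0\), and the bound, while valid, is quite conservative (even vacuous) at this sample size:

```python
import numpy as np

def u_stat(z, h):
    """U_z(h) = (1 / (n (n-1))) * sum_{i != j} h(z_i, z_j)."""
    n = len(z)
    s = sum(h(z[i], z[j]) for i in range(n) for j in range(n) if i != j)
    return s / (n * (n - 1))

rng = np.random.default_rng(6)
h = lambda a, b: a * b                 # symmetric, E h(Z, Z') = 0 for centered Z
n, reps, t, b = 50, 500, 2.0, 1.0      # |h| <= b = 1 for Z in [-1, 1]
vals = np.array([u_stat(rng.uniform(-1, 1, n), h) for _ in range(reps)])

emp_tail = np.mean(np.sqrt(n) * np.abs(vals) >= t)
# Bernstein bound from the first display, with zeta^2 = 0 (degenerate kernel):
bound = 4 * np.exp(-t**2 / ((64 / np.sqrt(n - 1) + 1 / (3 * np.sqrt(n))) * b * t))
print(emp_tail, bound)                 # empirical tail is far below the bound
```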
Recall that \(\mathcal{S}_{\mathbf{z}}(f_{0})=\mathcal{R}_{\mathbf{z}}^{\phi}(f_{0})- \mathcal{R}_{\mathbf{z}}^{\phi}(f_{\phi}^{*})-\mathcal{R}^{\phi}(f_{0})+ \mathcal{R}^{\phi}(f_{\phi}^{*})\).
**Proposition 1**.: _Given \(f_{0}\in\mathcal{H}_{K}\) satisfying \(\|\phi_{f_{0}}\|_{\infty}\leq B_{0}\) for some constant \(B_{0}\geq 0\). If Assumption 1 holds, then with probability at least \(1-4\exp(-t)\),_
\[\mathcal{S}_{\mathbf{z}}(f_{0})\leq\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})+\left(\frac{8Vt}{n}\right)^{\frac{1}{2-\tau}}+\frac{300Bt}{n}+\frac{152B_{0}t}{n}. \tag{4.3}\]
Proof.: Define \(v_{f}:\mathcal{Z}\times\mathcal{Z}\rightarrow\mathbb{R}\) as \(v_{f}:=\phi_{f}-\phi_{f_{\phi}^{*}}\), then
\[\mathcal{S}_{\mathbf{z}}(f_{0})=\mathcal{R}_{\mathbf{z}}^{\phi}(f_{0})- \mathcal{R}_{\mathbf{z}}^{\phi}(f_{\phi}^{*})-\mathcal{R}^{\phi}(f_{0})+ \mathcal{R}^{\phi}(f_{\phi}^{*})=\mathbb{E}_{\mathbf{z}}v_{f_{0}}-\mathbb{E}v_ {f_{0}}.\]
We further decompose \(\mathbb{E}_{\mathbf{z}}v_{f_{0}}-\mathbb{E}v_{f_{0}}\) into two terms by introducing the truncation \(\pi(f_{0})\), which is given by
\[\mathbb{E}_{\mathbf{z}}v_{f_{0}}-\mathbb{E}v_{f_{0}}=[\mathbb{E}_{\mathbf{z}}(v _{f_{0}}-v_{\pi(f_{0})})-\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})]+(\mathbb{E}_{ \mathbf{z}}v_{\pi(f_{0})}-\mathbb{E}v_{\pi(f_{0})}).\]
Each term above is the difference between a \(U\)-statistic and its expectation; hence we can apply Bernstein's inequality from Lemma 1 to bound them. To this end, we need to establish the sup-norm bound and variance bound for the related \(U\)-statistics.
We first bound \(\mathbb{E}_{\mathbf{z}}(v_{f_{0}}-v_{\pi(f_{0})})-\mathbb{E}(v_{f_{0}}-v_{\pi(f_ {0})})\). Since
\[\phi_{f_{0}}(z,z^{\prime})-\phi_{\pi(f_{0})}(z,z^{\prime})=\phi(y,y^{\prime},f_{ 0}(x,x^{\prime}))-\phi(y,y^{\prime},\pi(f_{0})(x,x^{\prime}))\geq 0\]
for all \((z,z^{\prime})\in\mathcal{Z}\times\mathcal{Z}\), we have
\[v_{f_{0}}-v_{\pi(f_{0})}=\phi_{f_{0}}-\phi_{\pi(f_{0})}:\mathcal{Z}\times \mathcal{Z}\to[0,B_{0}]\]
and
\[\|v_{f_{0}}-v_{\pi(f_{0})}-\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})\|_{\infty}\leq B _{0}.\]
Define \(g:\mathcal{Z}\to\mathbb{R}\) as
\[g(z):=\mathbb{E}[v_{f_{0}}(z,Z^{\prime})-v_{\pi(f_{0})}(z,Z^{\prime})-\mathbb{ E}(v_{f_{0}}-v_{\pi(f_{0})})].\]
Then
\[\mathbb{E}[g(Z)^{2}]\leq\mathbb{E}\bigg{[}\big{(}\mathbb{E}[v_{f_{0}}(Z,Z^{ \prime})-v_{\pi(f_{0})}(Z,Z^{\prime})]\big{)}^{2}\bigg{]}\leq B_{0}\mathbb{E}( v_{f_{0}}-v_{\pi(f_{0})}).\]
Now we apply Bernstein's inequality (4.2) to the zero-mean symmetric function \(v_{f_{0}}-v_{\pi(f_{0})}-\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})\), showing that
\[P\left(\mathbb{E}_{\mathbf{z}}(v_{f_{0}}-v_{\pi(f_{0})})-\mathbb{E}(v_{f_{0}}- v_{\pi(f_{0})})\geq\sqrt{\frac{8B_{0}\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})t}{n}}+ \frac{150B_{0}t}{n}\right)\leq 2\exp(-t).\]
By the basic inequality \(\sqrt{ab}\leq(a+b)/2\), we have
\[\sqrt{\frac{8B_{0}\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})t}{n}}\leq\mathbb{E}(v_ {f_{0}}-v_{\pi(f_{0})})+\frac{2B_{0}t}{n}.\]
Hence with probability at least \(1-2\exp(-t)\), there holds
\[\mathbb{E}_{\mathbf{z}}(v_{f_{0}}-v_{\pi(f_{0})})-\mathbb{E}(v_{f_{0}}-v_{\pi (f_{0})})\leq\mathbb{E}(v_{f_{0}}-v_{\pi(f_{0})})+\frac{152B_{0}t}{n}. \tag{4.4}\]
To bound the second term \(\mathbb{E}_{\mathbf{z}}v_{\pi(f_{0})}-\mathbb{E}v_{\pi(f_{0})}\), define \(u:\mathcal{Z}\to\mathbb{R}\) as
\[u(z):=\mathbb{E}[v_{\pi(f_{0})}(z,Z^{\prime})-\mathbb{E}v_{\pi(f_{0})}].\]
By (2.5) and (2.6) of Assumption 1, we have
\[v_{\pi(f_{0})}=\phi_{\pi(f_{0})}-\phi_{f_{\phi}^{*}}\in[-B,B],\]
\[\|v_{\pi(f_{0})}-\mathbb{E}v_{\pi(f_{0})}\|_{\infty}\leq 2B,\]
and
\[\mathbb{E}[u(Z)^{2}]\leq\mathbb{E}\bigg{[}\big{(}\mathbb{E}[v_{\pi(f_{0})}(Z,Z^{\prime})]\big{)}^{2}\bigg{]}=\mathbb{E}\big{(}Q\phi_{\pi(f_{0})}(Z)-Q\phi_{f_{\phi}^{*}}(Z)\big{)}^{2}\leq V(\mathbb{E}v_{\pi(f_{0})})^{\tau}.\]
We apply Bernstein's inequality to the zero-mean symmetric function \(v_{\pi(f_{0})}-\mathbb{E}v_{\pi(f_{0})}\), showing that
\[P\left(\mathbb{E}_{\mathbf{z}}(v_{\pi(f_{0})})-\mathbb{E}v_{\pi(f_{0})}\geq \sqrt{\frac{8V(\mathbb{E}v_{\pi(f_{0})})^{\tau}t}{n}}+\frac{300Bt}{n}\right) \leq 2\exp(-t).\]
If \(\tau>0\), we use Young's inequality \(ab\leq a^{q}/q+b^{p}/p\) by setting
\[a=\sqrt{\frac{2^{3-\tau}\tau^{\tau}Vt}{n}},\quad b=\left(\frac{2\mathbb{E}v_{ \pi(f_{0})}}{\tau}\right)^{\tau/2},\quad q=\frac{2}{2-\tau},\quad p=\frac{2}{ \tau},\]
to yield
\[\sqrt{\frac{8V(\mathbb{E}v_{\pi(f_{0})})^{\tau}t}{n}}=ab\leq\frac{2-\tau}{2} \cdot\left(\frac{2^{3-\tau}\tau^{\tau}Vt}{n}\right)^{\frac{1}{2-\tau}}+\mathbb{ E}v_{\pi(f_{0})}\leq\left(\frac{8Vt}{n}\right)^{\frac{1}{2-\tau}}+\mathbb{E}v_{ \pi(f_{0})}.\]
Since \(\mathbb{E}v_{\pi(f_{0})}\geq 0\), this inequality also holds for \(\tau=0\). Hence with probability at least \(1-2\exp(-t)\), there holds
\[\mathbb{E}_{\mathbf{z}}(v_{\pi(f_{0})})-\mathbb{E}v_{\pi(f_{0})}\leq\mathbb{E} v_{\pi(f_{0})}+\left(\frac{8Vt}{n}\right)^{\frac{1}{2-\tau}}+\frac{300Bt}{n}. \tag{4.5}\]
Consequently, combining (4.4) and (4.5), we derive the desired bound (4.3). This completes the proof.
It is more involved to bound \(\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}}))\). Recall that
\[\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}}))=\mathcal{R}^{\phi}(\pi(f_{ \mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})-\mathcal{R}_{\mathbf{z}}^{\phi }(\pi(f_{\mathbf{z}}))+\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\phi}^{*}).\]
We apply Hoeffding's decomposition of \(U\)-statistics (cf. [24]) to \(\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}}))\), which is given by
\[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{ \phi}^{*})-\mathcal{R}_{\mathbf{z}}^{\phi}(\pi(f_{\mathbf{z}}))+\mathcal{R}_{ \mathbf{z}}^{\phi}(f_{\phi}^{*}) \tag{4.6}\] \[=2Q_{\mathbf{z}}[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))- \mathcal{R}^{\phi}(f_{\phi}^{*})-Q\phi_{\pi(f_{\mathbf{z}})}+Q\phi_{f_{\phi}^ {*}}]-U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{f_{\phi}^{*}})\] \[=:2\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))-\mathcal{S}_{ \mathbf{z},2}(\pi(f_{\mathbf{z}})).\]
Here the functionals \(Q_{\mathbf{z}},U_{\mathbf{z}}\) and function \(h_{f}:\mathcal{Z}\times\mathcal{Z}\to\mathbb{R}\) are defined by
\[Q_{\mathbf{z}}(g) :=\frac{1}{n}\sum_{i=1}^{n}g(Z_{i}),\forall g:\mathcal{Z}\to \mathbb{R},\] \[U_{\mathbf{z}}(g^{\prime}) :=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}g^{\prime}(Z_{i},Z_{j }),\forall g^{\prime}:\mathcal{Z}\times\mathcal{Z}\to\mathbb{R},\] \[h_{f}(z,z^{\prime}) :=\phi_{f}(z,z^{\prime})-Q\phi_{f}(z)-Q\phi_{f}(z^{\prime})+ \mathcal{R}^{\phi}(f).\]
Hoeffding's decomposition breaks \(\mathcal{S}_{\mathbf{z}}(\pi(f_{\mathbf{z}}))\) into a sum of i.i.d. random variables \(\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))\), called the _empirical term_, and a degenerate \(U\)-statistic \(\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))\), called the _degenerate term_, which has zero conditional expectation
\[\mathbb{E}[h_{\pi(f_{\mathbf{z}})}(Z,Z^{\prime})-h_{f_{\phi}^{*}}(Z,Z^{\prime} )|Z=z]=0.\]
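The decomposition (4.6) is an exact algebraic identity. As a sanity check, the sketch below (ours) verifies it on a distribution with finite support, where \(Q\phi\) and \(\mathcal{R}^{\phi}\) can be computed in closed form; the two "losses" \(\phi_{f}\) and \(\phi_{f_{\phi}^{*}}\) are arbitrary symmetric toy functions:

```python
import numpy as np

support = np.array([0.0, 1.0, 2.0])
p = np.array([0.2, 0.5, 0.3])                  # toy distribution of Z
m = len(support)

phi_f  = lambda a, b: (a - b) ** 2             # stands in for phi_f(z, z')
phi_fs = lambda a, b: abs(a - b)               # stands in for phi_{f_phi^*}(z, z')

def R(phi):                                    # population phi-risk
    return sum(p[i] * p[j] * phi(support[i], support[j])
               for i in range(m) for j in range(m))

def Q(phi, z):                                 # Q phi(z) = E phi(z, Z')
    return sum(p[j] * phi(z, support[j]) for j in range(m))

def h(phi, z, zp):                             # h_f(z, z') from the display above
    return phi(z, zp) - Q(phi, z) - Q(phi, zp) + R(phi)

rng = np.random.default_rng(7)
n = 20
z = rng.choice(support, size=n, p=p)

def U(g):                                      # U_z(g) over ordered pairs i != j
    return sum(g(z[i], z[j]) for i in range(n)
               for j in range(n) if i != j) / (n * (n - 1))

lhs = R(phi_f) - R(phi_fs) - U(phi_f) + U(phi_fs)
emp = np.mean([R(phi_f) - R(phi_fs) - Q(phi_f, zi) + Q(phi_fs, zi) for zi in z])
deg = U(lambda a, b: h(phi_f, a, b) - h(phi_fs, a, b))
print(lhs, 2 * emp - deg)                      # identical up to rounding
```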
We derive bounds for the empirical term and degenerate term in the following two subsections respectively.
### Bounding the Empirical Term
In this subsection, we estimate the empirical term
\[\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))=Q_{\mathbf{z}}[\mathcal{R}^{\phi}( \pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})-Q\phi_{\pi(f_{\mathbf{z}}) }+Q\phi_{f_{\phi}^{*}}]\]
which can be further bounded by the supremum norm of an empirical process indexed by \(\mathcal{H}_{K}\), i.e.,
\[\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))\leq\sup_{f\in\mathcal{H}_{K}} Q_{\mathbf{z}}[\mathcal{R}^{\phi}(\pi(f))-\mathcal{R}^{\phi}(f_{\phi}^{*})-Q \phi_{\pi(f)}+Q\phi_{f_{\phi}^{*}}].\]
In the remaining part of this subsection, we will use Talagrand's concentration inequality together with local Rademacher averages to derive a tight bound for the supremum norm of the empirical process. The idea of our estimates can be found in Chapter 7 of [36], in which the authors established refined oracle inequalities for classification with support vector machines.
Define \(s_{f},g_{f,r}:\mathcal{Z}\to\mathbb{R}\) as \(s_{f}:=Q\phi_{f}-Q\phi_{f_{\phi}^{*}}\) with \(Q\phi_{f}\) given by (2.4) and
\[g_{f,r}:=\frac{\mathbb{E}s_{\pi(f)}-s_{\pi(f)}}{\lambda\|f\|_{K}^{2}+\mathbb{ E}s_{\pi(f)}+r}\]
for all \(f\in\mathcal{H}_{K}\) and \(r>0.\) Under Assumption 1, we apply Talagrand's inequality to obtain the following lemma.
**Lemma 2**.: _If Assumption 1 holds, then for all \(t>0\), with probability at least \(1-\exp(-t)\),_
\[\sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|\leq\frac{5}{4}\mathbb{E} \sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|+\sqrt{\frac{2Vt}{nr^{2- \tau}}}+\frac{28Bt}{3nr}.\]
Proof.: Analogous to the proof of Proposition 1, by (2.5) of Assumption 1, obviously \(\|g_{f,r}\|_{\infty}\leq 2Br^{-1}\). For the variance bound, if \(\tau>0\), using Young's inequality, we can prove that \((qa)^{2/q}(pb)^{2/p}\leq(a+b)^{2}\), then setting \(a=r,b=\mathbb{E}s_{\pi(f)},q=2/(2-\tau),p=2/\tau\) yields
\[\mathbb{E}g_{f,r}^{2}\leq\frac{\mathbb{E}s_{\pi(f)}^{2}}{(\mathbb{E}s_{\pi(f) }+r)^{2}}\leq\frac{(2-\tau)^{2-\tau}\tau^{\tau}V}{4r^{2-\tau}}\leq Vr^{\tau-2}\]
which also holds for \(\tau=0\). Hence we can apply Talagrand's inequality, cf. Theorem 7.5 of [36], to \((g_{f,r})_{f\in\mathcal{H}_{K}}\), to derive the desired bound. This completes the proof.
In order to derive an upper bound of \(\sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|\), we leverage standard tools including peeling, symmetrization and Dudley's chaining to estimate \(\mathbb{E}\sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|\). Define
\[r^{*} :=\inf\{\lambda\|f\|_{K}^{2}+\mathbb{E}s_{\pi(f)}:f\in\mathcal{H }_{K}\},\] \[\mathcal{F}_{r} :=\{f\in\mathcal{H}_{K}:\lambda\|f\|_{K}^{2}+\mathbb{E}s_{\pi(f) }\leq r\},\ \forall r>r^{*},\] \[\mathcal{S}_{r} :=\{s_{\pi(f)}:f\in\mathcal{F}_{r}\},\ \forall r>r^{*},\]
and the _empirical Rademacher complexity_ of \(\mathcal{S}_{r}\)
\[\operatorname{Rad}(\mathcal{S}_{r},\mathbf{z}):=\mathbb{E}_{\varepsilon}\sup_ {f\in\mathcal{F}_{r}}\bigg{|}\frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}s_{\pi(f) }(Z_{i})\bigg{|},\]
where \(\{\varepsilon_{i}\}_{i=1}^{n}\) is the Rademacher sequence, i.e., \(\{\varepsilon_{i}\}_{i=1}^{n}\) are i.i.d. random variables uniformly chosen from \(\{-1,1\}\). The following two lemmas present peeling and symmetrization arguments to handle \(\mathbb{E}\sup_{f\in\mathcal{F}_{r}}|Q_{\mathbf{z}}(\mathbb{E}s_{\pi(f)}-s_{ \pi(f)})|\), which can be proved according to Theorem 7.7 and Proposition 7.10 in [36].
**Lemma 3**.: _Let \(\varphi:(r^{*},\infty)\to[0,\infty)\) such that \(\varphi(4r)\leq 2\varphi(r)\) and_
\[\mathbb{E}\sup_{f\in\mathcal{F}_{r}}|Q_{\mathbf{z}}(\mathbb{E}s_{\pi(f)}-s_{ \pi(f)})|\leq\varphi(r)\]
_for all \(r>r^{*}\). Then for all \(r>r^{*}\), we have_
\[\mathbb{E}\sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|\leq\frac{4 \varphi(r)}{r}.\]
**Lemma 4**.: \[\mathbb{E}\sup_{f\in\mathcal{F}_{r}}|Q_{\mathbf{z}}(\mathbb{E}s_{\pi(f)}-s_{ \pi(f)})|\leq 2\mathbb{E}\operatorname{Rad}(\mathcal{S}_{r},\mathbf{z}).\]
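Before continuing, we note that the empirical Rademacher complexity \(\operatorname{Rad}(\mathcal{S}_{r},\mathbf{z})\) is straightforward to approximate by sampling the signs \(\varepsilon_{i}\); the sketch below (ours) does this for a generic finite class of function values standing in for \(\{s_{\pi(f)}(Z_{i})\}\):

```python
import numpy as np

def rad_complexity(values, n_draws=2000, seed=8):
    """Monte Carlo estimate of E_eps sup_f |(1/n) sum_i eps_i f(Z_i)|.

    `values` is an (m, n) array: m functions evaluated at n sample points."""
    rng = np.random.default_rng(seed)
    m, n = values.shape
    sups = []
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        sups.append(np.max(np.abs(values @ eps) / n))
    return float(np.mean(sups))

rng = np.random.default_rng(9)
vals = rng.uniform(-1, 1, size=(30, 200))       # a stand-in for s_{pi(f)}(Z_i)
print(rad_complexity(vals))                     # decays roughly like n^{-1/2}
```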
We then apply Theorem 7.16 of [36] to derive the following lemma.
**Lemma 5**.: _If Assumption 1 and Assumption 2 hold, then there exist constants \(k_{1}>0\) and \(k_{2}>0\) such that_
\[\mathbb{E}\operatorname{Rad}(\mathcal{S}_{r},\mathbf{z})\leq \max\bigg{\{}2^{p_{1}}k_{1}a_{1}^{p_{1}}\left(\frac{r}{\lambda} \right)^{\frac{p_{1}}{2}}(Vr^{\tau})^{\frac{1-p_{1}}{2}}L^{p_{1}}n^{-\frac{1}{ 2}},\] \[4^{\frac{p_{1}}{1+p_{1}}}k_{2}a_{1}^{\frac{2p_{1}}{1+p_{1}}} \left(\frac{r}{\lambda}\right)^{\frac{p_{1}}{1+p_{1}}}B^{\frac{1-p_{1}}{1+p_{1 }}}L^{\frac{2p_{1}}{1+p_{1}}}n^{-\frac{1}{1+p_{1}}}\bigg{\}}.\]
Proof.: We first establish the sup-norm bound and variance bound for \(s_{\pi(f)}\in\mathcal{S}_{r}.\) By Assumption 1 we have \(\|s_{\pi(f)}\|_{\infty}\leq B\) and
\[\mathbb{E}(s_{\pi(f)}^{2})=\mathbb{E}(Q\phi_{\pi(f)}-Q\phi_{f_{\phi}^{*}})^{2} \leq V\big{(}\mathbb{E}(Q\phi_{\pi(f)}-Q\phi_{f_{\phi}^{*}})\big{)}^{\tau}=V( \mathbb{E}s_{\pi(f)})^{\tau}\leq Vr^{\tau}.\]
For \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\), the empirical norm \(\|\cdot\|_{\mathcal{L}_{2}(P_{\mathbf{z}}^{n})}\) on \(\mathcal{S}_{r}\) is defined as
\[\|s_{\pi(f)}\|_{\mathcal{L}_{2}(P_{\mathbf{z}}^{n})}:=Q_{\mathbf{z}}^{\frac{1} {2}}(s_{\pi(f)}^{2})=\bigg{(}\frac{1}{n}\sum_{i=1}^{n}s_{\pi(f)}(Z_{i})^{2} \bigg{)}^{\frac{1}{2}}.\]
According to Theorem 7.16 of [36], it remains to bound \(\mathbb{E}e_{i}(\operatorname{id}:\mathcal{S}_{r}\to\mathcal{L}_{2}(P_{ \mathbf{z}}^{n})).\) For all \((y,y^{\prime})\in\mathcal{Y}\times\mathcal{Y}\), function \(\phi(y,y^{\prime},\cdot)\) is locally \(L\)-Lipschitz over \([-M,M]\). Hence for all \(s_{\pi(f)},s_{\pi(f^{\prime})}\in\mathcal{S}_{r}\) and \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) we have
\[\|s_{\pi(f)}-s_{\pi(f^{\prime})}\|_{\mathcal{L}_{2}(P_{\mathbf{z}} ^{n})}^{2} =\frac{1}{n}\sum_{i=1}^{n}\big{(}s_{\pi(f)}(Z_{i})-s_{\pi(f^{ \prime})}(Z_{i})\big{)}^{2}\] \[=\frac{1}{n}\sum_{i=1}^{n}\bigg{(}\mathbb{E}\bigg{[}\phi(Y_{i},Y^{ \prime},\pi(f)(X_{i},X^{\prime}))-\phi(Y_{i},Y^{\prime},\pi(f^{\prime})(X_{i}, X^{\prime}))\bigg{]}\bigg{)}^{2}\] \[\leq\frac{L^{2}}{n}\sum_{i=1}^{n}\bigg{(}\mathbb{E}\bigg{|}\pi(f) (X_{i},X^{\prime})-\pi(f^{\prime})(X_{i},X^{\prime})\bigg{|}\bigg{)}^{2}\]
\[\leq L^{2}\|f-f^{\prime}\|_{\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P _{\mathcal{X}})}^{2}.\]
Therefore, an \(\varepsilon\)-net of \((\mathcal{F}_{r},\|\cdot\|_{\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}})})\) induces an \(L\varepsilon\)-net of \((\mathcal{S}_{r},\|\cdot\|_{\mathcal{L}_{2}(P_{\mathbf{z}}^{n})})\). Besides, notice that \(\mathcal{F}_{r}\subset(r/\lambda)^{1/2}\mathcal{B}_{\mathcal{H}_{K}}\). Combining this with Assumption 2, we have
\[\mathbb{E}e_{i}(\mathrm{id}:\mathcal{S}_{r}\to\mathcal{L}_{2}(P_{ \mathbf{z}}^{n})) \leq L\mathbb{E}e_{i}(\mathrm{id}:\mathcal{F}_{r}\to\mathcal{L}_{ 2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}}))\] \[\leq 2L(r/\lambda)^{1/2}\mathbb{E}e_{i}(\mathrm{id}:\mathcal{H} _{K}\to\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{X}}))\] \[\leq 2L(r/\lambda)^{1/2}a_{1}i^{-\frac{1}{2p_{1}}}.\]
Now we apply Theorem 7.16 of [36] and derive the desired bound. The proof is then finished.
With the help of the preceding lemmas, we can now establish an upper bound for the empirical term \(\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))\).
**Proposition 2**.: _If Assumption 1 and Assumption 2 hold, there exist constants_
\[c_{1}:=\frac{\big{(}2^{p_{1}}\cdot 270k_{1}\big{)}^{\frac{2}{2-\tau-p_{1}+p_{1 }\tau}}}{3},\quad c_{2}:=\frac{2^{1+3p_{1}}\big{(}135k_{2}\big{)}^{1+p_{1}}}{3}\]
_such that for all \(t>0\), \(n\geq 72t\) and \(f_{0}\in\mathcal{H}_{K}\), with probability at least \(1-\exp(-t)\),_
\[\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}})) \leq\frac{1}{3}\left(\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{ R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})\right)+ \frac{1}{3}\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{ R}^{\phi}(f_{\phi}^{*})\right)\] \[+c_{1}\left(\frac{a_{1}^{2p_{1}}V^{1-p_{1}}L^{2p_{1}}}{\lambda^{ p_{1}}n}\right)^{\frac{1}{2-\tau-p_{1}+p_{1}\tau}}+\frac{c_{2}a_{1}^{2p_{1}}B^{1-p_{1 }}L^{2p_{1}}}{\lambda^{p_{1}}n}+\left(\frac{24Vt}{n}\right)^{\frac{1}{2-\tau}}.\]
Proof.: By Lemma 5 we have
\[\mathbb{E}\operatorname{Rad}(\mathcal{S}_{r},\mathbf{z}) \leq\max\bigg{\{}2^{p_{1}}k_{1}a_{1}^{p_{1}}\left(\frac{r}{ \lambda}\right)^{\frac{p_{1}}{2}}(Vr^{\tau})^{\frac{1-p_{1}}{2}}L^{p_{1}}n^{- \frac{1}{2}},\] \[\qquad\qquad 4^{\frac{p_{1}}{1+p_{1}}}k_{2}a_{1}^{\frac{2p_{1}}{ 1+p_{1}}}\left(\frac{r}{\lambda}\right)^{\frac{p_{1}}{1+p_{1}}}B^{\frac{1-p_{1 }}{1+p_{1}}}L^{\frac{2p_{1}}{1+p_{1}}}n^{-\frac{1}{1+p_{1}}}\bigg{\}}.\]
Define
\[\varphi(r):=\max\bigg{\{}2^{p_{1}+1}k_{1}a_{1}^{p_{1}}\left(\frac {r}{\lambda}\right)^{\frac{p_{1}}{2}}(Vr^{\tau})^{\frac{1-p_{1}}{2}}L^{p_{1}}n ^{-\frac{1}{2}},\] \[\qquad\qquad 2^{\frac{1+3p_{1}}{1+p_{1}}}k_{2}a_{1}^{\frac{2p_{1}}{ 1+p_{1}}}\left(\frac{r}{\lambda}\right)^{\frac{p_{1}}{1+p_{1}}}B^{\frac{1-p_{1 }}{1+p_{1}}}L^{\frac{2p_{1}}{1+p_{1}}}n^{-\frac{1}{1+p_{1}}}\bigg{\}}.\]
One can verify that \(\varphi(4r)\leq 2\varphi(r)\). By Lemma 2, Lemma 3 and Lemma 4, for all \(t>0\) we have
\[P\left(\sup_{f\in\mathcal{H}_{K}}|Q_{\mathbf{z}}(g_{f,r})|\leq\frac{5\varphi(r )}{r}+\sqrt{\frac{2Vt}{r^{2-\tau}n}}+\frac{28Bt}{3nr}\right)\geq 1-\exp(-t).\]
For \(f_{\mathbf{z}}\in\mathcal{H}_{K}\), by the definition of \(g_{f_{\mathbf{z}},r}\), we have, with probability at least \(1-\exp(-t)\),
\[Q_{\mathbf{z}}[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi} (f_{\phi}^{*})-Q\phi_{\pi(f_{\mathbf{z}})}+Q\phi_{f_{\phi}^{*}}]\] \[\leq(\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{ z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})+r)\left(\frac{5\varphi(r)}{r}+\sqrt{\frac{2Vt}{r ^{2-\tau}n}}+\frac{28Bt}{3nr}\right).\]
If there exists an
\[r\geq\max\bigg{\{}135\varphi(r),\left(\frac{72Vt}{n}\right)^{\frac{1}{2-\tau}},r^{*}\bigg{\}},\]
then using simple algebra and the assumption \(n\geq 72t\) we have
\[\frac{5\varphi(r)}{r}\leq\frac{1}{27},\quad\sqrt{\frac{2Vt}{r^{2-\tau}n}}\leq \frac{1}{6},\quad\frac{28Bt}{3nr}\leq\frac{28V^{\frac{1}{2-\tau}}t}{3nr}\leq \frac{7}{54}.\]
Hence with probability at least \(1-\exp(-t)\),
\[Q_{\mathbf{z}}[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi }(f_{\phi}^{*})-Q\phi_{\pi(f_{\mathbf{z}})}+Q\phi_{f_{\phi}^{*}}] \tag{4.7}\] \[\leq\frac{1}{3}(\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}( \pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})+r).\]
It remains to find such \(r\). By definition of \(\varphi(r)\), if
\[r \geq\left(\frac{4^{p_{1}}\big{(}270k_{1}\big{)}^{2}a_{1}^{2p_{1}} V^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}\right)^{\frac{1}{2-\tau-p_{1}+p_{1} \tau}}\] \[\quad+\frac{2^{1+3p_{1}}\big{(}135k_{2}\big{)}^{1+p_{1}}a_{1}^{2p _{1}}B^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n},\]
we have \(r\geq 135\varphi(r)\). Besides, noticing that \(r^{*}\leq\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\), it is sufficient to choose
\[r =\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{ \phi}(f_{\phi}^{*})+\left(\frac{4^{p_{1}}\big{(}270k_{1}\big{)}^{2}a_{1}^{2p_{ 1}}V^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}\right)^{\frac{1}{2-\tau-p_{1}+p_{1} \tau}}\] \[\quad+\frac{2^{1+3p_{1}}\big{(}135k_{2}\big{)}^{1+p_{1}}a_{1}^{2p _{1}}B^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}+\left(\frac{72Vt}{n}\right)^{ \frac{1}{2-\tau}}.\]
Substituting this \(r\) into (4.7), we choose
\[c_{1}=\frac{\big{(}2^{p_{1}}\cdot 270k_{1}\big{)}^{\frac{2}{2-\tau-p_{1}+p_{1} \tau}}}{3},\quad c_{2}=\frac{2^{1+3p_{1}}\big{(}135k_{2}\big{)}^{1+p_{1}}}{3}.\]
The proof of Proposition 2 is then finished.
Here we remark that the condition \(n\geq 72t\) in Proposition 2 is not essential and we will leave the case \(n<72t\) to the final proof of the general oracle inequality in the next subsection.
### Bounding the Degenerate Term
In this subsection, we use arguments of symmetrization and Dudley's chaining to bound the degenerate term
\[\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))=U_{\mathbf{z}}(h_{\pi(f_{ \mathbf{z}})}-h_{f_{\phi}^{*}}).\]
If Assumption 1 holds, since \(\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\mathbf{z}})+\lambda\|f_{\mathbf{z}}\|_{K} ^{2}\leq\mathcal{R}_{\mathbf{z}}^{\phi}(0)\leq B\), we have \(\|f_{\mathbf{z}}\|_{K}\leq\sqrt{B/\lambda}\). Recalling the Rademacher sequence \(\{\varepsilon_{i}\}_{i=1}^{n}\), define
\[\mathcal{F} :=\{f\in\mathcal{H}_{K}:\|f\|_{K}\leq\sqrt{B/\lambda}\},\] \[S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}}) :=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\varepsilon_{i} \varepsilon_{j}\big{(}h_{\pi(f)}(Z_{i},Z_{j})-h_{f_{\phi}^{*}}(Z_{i},Z_{j}) \big{)}.\]
Then \(f_{\mathbf{z}}\in\mathcal{F}\).
**Lemma 6**.: _There exist constants \(C_{1}>0,C_{2}>0\) such that, for all \(\xi>0\)_
\[\mathbb{E}\exp\left(\xi\sqrt{\sup_{f\in\mathcal{F}}|(n-1)U_{ \mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})|}\right)\] \[\leq C_{1}\mathbb{E}\mathbb{E}_{\varepsilon}\exp\left(C_{2}\xi \sqrt{\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})| }\right)\]
Proof.: Define \(\varphi_{1}(t):=\exp(\xi\sqrt{t})\) and
\[\varphi_{2}(t):=\begin{cases}e&0\leq t\leq\xi^{-2}\\ \exp(\xi\sqrt{t})&t\geq\xi^{-2}.\end{cases}\]
Then \(\varphi_{2}\) is convex, non-decreasing, and the following inequality holds:
\[\varphi_{2}(t)e^{-1}\leq\varphi_{1}(t)\leq\varphi_{2}(t). \tag{4.8}\]
We can apply the symmetrization argument in the theory of \(U\)-processes with \(\varphi_{2}\), cf. Remarks 3.5.4 of [24], to show that there exist constants \(C_{1}^{\prime}>0\) and \(C_{2}^{\prime}>0\) such that
\[\mathbb{E}\varphi_{2}(\sup_{f\in\mathcal{F}}|(n-1)U_{\mathbf{z}}(h_{\pi(f)}-h _{f_{\phi}^{*}})|)\leq C_{1}^{\prime}\mathbb{E}\mathbb{E}_{\varepsilon}\varphi _{2}\big{(}C_{2}^{\prime}\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)} -h_{f_{\phi}^{*}})|\big{)}.\]
Combining this with the inequality (4.8) finishes the proof.
**Lemma 7**.: _There exist constants \(C_{3}>0,C_{4}>0\) such that, for all \(\xi>0\),_
\[C_{1}\mathbb{E}\mathbb{E}_{\varepsilon}\exp\left(C_{2}\xi\sqrt{ \sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})|}\right)\] \[\leq C_{3}\mathbb{E}\exp\left(C_{4}\xi^{2}\mathbb{E}_{\varepsilon }\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})| \right).\]
Proof.: Define a Rademacher chaos of order two \(R_{1}:=C_{2}^{2}\xi^{2}(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})\), and
\[\|R_{1}\|:=\sup_{f\in\mathcal{F}}C_{2}^{2}\xi^{2}(n-1)|S_{\mathbf{z}}(h_{\pi(f)}- h_{f_{\phi}^{*}})|.\]
By formula (3.5) of [3], there exist constants \(C_{3}^{\prime}>0,C_{4}^{\prime}>0\) such that
\[\mathbb{E}_{\varepsilon}\exp(\|R_{1}\|^{1/2})\leq C_{3}^{\prime}\exp(C_{4}^{ \prime}(\mathbb{E}_{\varepsilon}\|R_{1}\|^{2})^{1/2}). \tag{4.9}\]
Besides, by Hölder's inequality and formula (3.4) of [3] we have
\[(\mathbb{E}_{\varepsilon}\|R_{1}\|^{2})^{1/2} =(\mathbb{E}_{\varepsilon}\|R_{1}\|^{1/2}\|R_{1}\|^{3/2})^{1/2}\] \[\leq(\mathbb{E}_{\varepsilon}\|R_{1}\|)^{1/4}(\mathbb{E}_{ \varepsilon}\|R_{1}\|^{3})^{1/4}\] \[\leq 2^{3/4}(\mathbb{E}_{\varepsilon}\|R_{1}\|)^{1/4}(\mathbb{E}_ {\varepsilon}\|R_{1}\|^{2})^{3/8},\]
which is equivalent to
\[(\mathbb{E}_{\varepsilon}\|R_{1}\|^{2})^{1/2}\leq 8\mathbb{E}_{\varepsilon}\|R _{1}\|.\]
Combining this with (4.9) we complete the proof.
Next we bound \(\mathbb{E}_{\varepsilon}\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}- h_{f_{\phi}^{*}})|\) by the chaining argument.
**Lemma 8**.: _If Assumption 1 and Assumption 2 hold, then there exists a constant \(C_{5}>0\) such that_
\[\mathbb{E}_{\varepsilon}\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z }}(h_{\pi(f)}-h_{f_{\phi}^{*}})|\] \[\leq(\log 4)C_{5}B\bigg{[}\bigg{(}\frac{3a_{1}^{2}L^{2}}{2\lambda B }\bigg{)}^{p_{1}}\frac{4^{1-2p_{1}}}{1-2p_{1}}+\bigg{(}\frac{3a_{2}^{2}L^{2}}{4 \lambda B}\bigg{)}^{p_{2}}\frac{4^{1-2p_{2}}}{1-2p_{2}}\bigg{]}+\frac{(1+\log 4 \sqrt{3})C_{5}B}{4}.\]
Proof.: Define
\[\mathcal{G}:=\{h_{\pi(f)}-h_{f_{\phi}^{*}}:f\in\mathcal{F}\}\cup\{0:\mathcal{ Z}\times\mathcal{Z}\to\mathbb{R}\}.\]
By (2.5) of Assumption 1, \(\|h\|_{\infty}\leq 4B\) for all \(h\in\mathcal{G}\). Now given \(\mathbf{z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\), define a stochastic process \(J_{\mathbf{z}}\) and a pseudometric \(\rho\) on \(\mathcal{G}\) as
\[J_{\mathbf{z}}(h) :=\frac{1}{4Bn}\sum_{i=1}^{n}\sum_{j\neq i}\varepsilon_{i} \varepsilon_{j}h(Z_{i},Z_{j}),\quad h\in\mathcal{G},\] \[\rho(h,h^{\prime}) :=\frac{1}{4B}\sqrt{\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i} \big{(}h(Z_{i},Z_{j})-h^{\prime}(Z_{i},Z_{j})\big{)}^{2}},\quad h,h^{\prime} \in\mathcal{G}.\]
We will apply the chaining lemma for \(U\)-processes to \(\{J_{\mathbf{z}}(h):h\in\mathcal{G}\}\), cf. Lemma 5 of [28]. To this end, we need to find a convex, strictly increasing function \(\varphi:[0,\infty)\to[0,\infty)\) such that \(0\leq\varphi(0)\leq 1\) and, for all \(\rho(h,h^{\prime})>0\),
\[\mathbb{E}_{\varepsilon}\varphi\left(\frac{|J_{\mathbf{z}}(h)-J_{\mathbf{z}}( h^{\prime})|}{\rho(h,h^{\prime})}\right)\leq 1.\]
Define a Rademacher chaos of order two \(R_{2}:=J_{\mathbf{z}}(h)-J_{\mathbf{z}}(h^{\prime}).\) By Corollary 3.2.6 of [24], there exists a constant \(\kappa>0\) which is independent of \(h,h^{\prime},n\) such that
\[\mathbb{E}_{\varepsilon}\exp\left(\frac{|R_{2}|}{\kappa(\mathbb{E}_{ \varepsilon}|R_{2}|^{2})^{1/2}}\right)\leq e. \tag{4.10}\]
Notice that
\[\mathbb{E}_{\varepsilon}|R_{2}|^{2} =\frac{1}{16B^{2}n^{2}}\mathbb{E}_{\varepsilon}\bigg{(}\sum_{i \neq j}\varepsilon_{i}\varepsilon_{j}\big{(}h(Z_{i},Z_{j})-h^{\prime}(Z_{i},Z_{ j})\big{)}\bigg{)}^{2}\] \[=\frac{1}{16B^{2}n^{2}}\bigg{(}\sum_{i=k\neq j=l}\big{(}h(Z_{i},Z_ {j})-h^{\prime}(Z_{i},Z_{j})\big{)}\big{(}h(Z_{k},Z_{l})-h^{\prime}(Z_{k},Z_{l })\big{)}\] \[\quad+\sum_{i=l\neq j=k}\big{(}h(Z_{i},Z_{j})-h^{\prime}(Z_{i},Z_ {j})\big{)}\big{(}h(Z_{k},Z_{l})-h^{\prime}(Z_{k},Z_{l})\big{)}\bigg{)}\] \[=\frac{1}{8B^{2}n^{2}}\sum_{i\neq j}\big{(}h(Z_{i},Z_{j})-h^{ \prime}(Z_{i},Z_{j})\big{)}^{2}\] \[\leq 2\rho(h,h^{\prime})^{2}. \tag{4.11}\]
Hence if we choose \(\varphi(t)=\exp(\frac{t}{\sqrt{2}\kappa}-1)\), by (4.10) and (4.11), we have
\[\mathbb{E}_{\varepsilon}\varphi\left(\frac{|J_{\mathbf{z}}(h)-J_ {\mathbf{z}}(h^{\prime})|}{\rho(h,h^{\prime})}\right) \leq\mathbb{E}_{\varepsilon}\varphi\left(\frac{\sqrt{2}|R_{2}|}{ (\mathbb{E}_{\varepsilon}|R_{2}|^{2})^{1/2}}\right)\] \[=\mathbb{E}_{\varepsilon}\exp\left(\frac{|R_{2}|}{\kappa( \mathbb{E}_{\varepsilon}|R_{2}|^{2})^{1/2}}-1\right)\leq 1.\]
Fix \(h^{\prime}=0\in\mathcal{G}\); obviously \(\sup_{h\in\mathcal{G}}\rho(h,0)\leq 1\), and
\[4B\mathbb{E}_{\varepsilon}\sup_{h\in\mathcal{G}}|J_{\mathbf{z}}(h)-J_{ \mathbf{z}}(0)|=\mathbb{E}_{\varepsilon}\sup_{f\in\mathcal{F}}|(n-1)S_{ \mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})|.\]
Therefore, by Lemma 5 of [28], we obtain
\[\mathbb{E}_{\varepsilon}\sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z} }(h_{\pi(f)}-h_{f_{\phi}^{*}})| \leq 32B\int_{0}^{\frac{1}{4}}\varphi^{-1}(\mathcal{N}(\mathcal{G}, \rho,\delta))\,d\delta \tag{4.12}\] \[\leq C_{5}B\int_{0}^{\frac{1}{4}}\log\mathcal{N}(\mathcal{G},\rho,\delta)\,d\delta,\]
where \(C_{5}=64\sqrt{2}\kappa\). It remains to bound the metric entropy \(\log\mathcal{N}(\mathcal{G},\rho,\delta)\). We write
\[16B^{2}\rho^{2}(h,h^{\prime})\] \[=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\big{(}h(Z_{i},Z_{j})-h^{\prime}(Z_{i},Z_{j})\big{)}^{2}\] \[\leq\frac{4}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\bigg{[}\big{(}\phi_{\pi(f)}(Z_{i},Z_{j})-\phi_{\pi(f^{\prime})}(Z_{i},Z_{j})\big{)}^{2}\] \[\quad+\big{(}Q\phi_{\pi(f)}(Z_{i})-Q\phi_{\pi(f^{\prime})}(Z_{i})\big{)}^{2}+\big{(}Q\phi_{\pi(f)}(Z_{j})-Q\phi_{\pi(f^{\prime})}(Z_{j})\big{)}^{2}\] \[\quad+\big{(}\mathcal{R}^{\phi}(\pi(f))-\mathcal{R}^{\phi}(\pi(f^{\prime}))\big{)}^{2}\bigg{]}\]
\[\leq\frac{4}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}\left[L^{2}\big{(}\pi(f)(X_{i},X_ {j})-\pi(f^{\prime})(X_{i},X_{j})\big{)}^{2}\right.\] \[\quad+L^{2}\mathbb{E}\big{(}\pi(f)(X_{i},X^{\prime})-\pi(f^{\prime })(X_{i},X^{\prime})\big{)}^{2}+L^{2}\mathbb{E}\big{(}\pi(f)(X_{j},X^{\prime}) -\pi(f^{\prime})(X_{j},X^{\prime})\big{)}^{2}\] \[\quad+\big{(}\mathcal{R}^{\phi}(\pi(f))-\mathcal{R}^{\phi}(\pi(f ^{\prime}))\big{)}^{2}\bigg{]}\] \[\leq 4L^{2}\|f-f^{\prime}\|_{\mathcal{L}_{2}(P_{\mathbf{x}^{2}}^{n })}^{2}+8L^{2}\|f-f^{\prime}\|_{\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_ {\mathcal{X}})}^{2}+4\big{(}\mathcal{R}^{\phi}(\pi(f))-\mathcal{R}^{\phi}(\pi (f^{\prime}))\big{)}^{2},\]
where the second inequality follows from the assumption that \(\phi\) is \(L\)-Lipschitz with respect to the third argument. This implies that
\[\mathcal{N}(\mathcal{G},\rho,\delta)\leq \,\mathcal{N}\bigg{(}\mathcal{F},\|\cdot\|_{\mathcal{L}_{2}(P_{ \mathbf{x}^{2}}^{n})},\frac{2B\delta}{\sqrt{3}L}\bigg{)}\cdot\mathcal{N}\bigg{(} \mathcal{F},\|\cdot\|_{\mathcal{L}_{2}(P_{\mathbf{x}}^{n}\otimes P_{\mathcal{ X}})},\frac{\sqrt{2}B\delta}{\sqrt{3}L}\bigg{)}\cdot\mathcal{N}\bigg{(}[0,B],| \cdot|,\frac{2B\delta}{\sqrt{3}}\bigg{)}\] \[= \,\mathcal{N}\bigg{(}\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2 }(P_{\mathbf{x}^{2}}^{n}),2\sqrt{\frac{\lambda B}{3}}\frac{\delta}{L}\bigg{)} \cdot\mathcal{N}\bigg{(}\mathrm{id}:\mathcal{H}_{K}\to\mathcal{L}_{2}(P_{ \mathbf{x}}^{n}\otimes P_{\mathcal{X}}),\sqrt{\frac{2\lambda B}{3}}\frac{ \delta}{L}\bigg{)}\] \[\quad\cdot\mathcal{N}\bigg{(}[0,B],|\cdot|,\frac{2B\delta}{\sqrt{ 3}}\bigg{)}.\]
By Assumption 2 we have
\[\log\mathcal{N}(\mathcal{G},\rho,\delta)\leq\log 4\cdot\bigg{(}\frac{3a_{1}^{2}L^{2} }{2\lambda B\delta^{2}}\bigg{)}^{p_{1}}+\log 4\cdot\bigg{(}\frac{3a_{2}^{2}L^{2} }{4\lambda B\delta^{2}}\bigg{)}^{p_{2}}+\log\frac{\sqrt{3}}{\delta}.\]
Since \(p_{1},p_{2}\in(0,1/2)\), integrating the right-hand side over \(\delta\in(0,\frac{1}{4})\) yields the desired bound by (4.12). The proof is then finished.
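As an aside, the elementary identity used in (4.11), namely \(\mathbb{E}_{\varepsilon}(\sum_{i\neq j}\varepsilon_{i}\varepsilon_{j}c_{ij})^{2}=2\sum_{i\neq j}c_{ij}^{2}\) for symmetric coefficients \(c_{ij}\) with zero diagonal, can be checked by exhaustive enumeration. The following sketch (ours, purely illustrative; NumPy assumed) does so for a small random matrix; the symmetrization mirrors \(h(Z_{i},Z_{j})=h(Z_{j},Z_{i})\).

```python
# Enumeration check (illustrative) of the identity behind (4.11):
# E_eps ( sum_{i != j} eps_i eps_j c_ij )^2 = 2 * sum_{i != j} c_ij^2
# for symmetric c_ij with zero diagonal.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
C = rng.standard_normal((n, n))
C = (C + C.T) / 2           # symmetrize, as h(Z_i, Z_j) = h(Z_j, Z_i)
np.fill_diagonal(C, 0.0)    # only i != j terms enter the chaos

second_moment = 0.0
for eps in itertools.product([-1.0, 1.0], repeat=n):
    e = np.array(eps)
    S = e @ C @ e           # equals sum_{i != j} eps_i eps_j c_ij
    second_moment += S ** 2
second_moment /= 2 ** n     # exact expectation over all 2^n sign patterns

print(second_moment, 2 * np.sum(C ** 2))  # the two values coincide
```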
With the help of the preceding lemmas, we can now establish an upper bound for the degenerate term \(\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))\).
**Proposition 3**.: _If Assumption 1 and Assumption 2 hold, there exist constants_
\[c_{0}:=C_{3},\quad c_{3}:=\frac{(64\log 2)C_{4}C_{5}3^{p_{1}}}{32^{p_{1}}(1-2 p_{1})},\quad c_{4}:=\frac{(64\log 2)C_{4}C_{5}3^{p_{2}}}{64^{p_{2}}(1-2p_{2})},\quad c _{5}:=(2+\log 48)C_{4}C_{5},\]
_such that for all \(t>0\) and \(n\geq 2\), with probability at least \(1-c_{0}\exp(-t)\) we have_
\[|\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))|\leq\frac{c_{3}a_{1}^{2p_{1} }L^{2p_{1}}B^{1-p_{1}}t}{\lambda^{p_{1}}n}+\frac{c_{4}a_{2}^{2p_{2}}L^{2p_{2} }B^{1-p_{2}}t}{\lambda^{p_{2}}n}+\frac{c_{5}t}{n}.\]
Proof.: Recall that \(\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))=U_{\mathbf{z}}(h_{\pi(f_{ \mathbf{z}})}-h_{f_{\phi}^{*}})\). Denote
\[\omega:=(\log 4)C_{5}B\bigg{[}\bigg{(}\frac{3a_{1}^{2}L^{2}}{2 \lambda B}\bigg{)}^{p_{1}}\frac{4^{1-2p_{1}}}{1-2p_{1}}+\bigg{(}\frac{3a_{2}^{2 }L^{2}}{4\lambda B}\bigg{)}^{p_{2}}\frac{4^{1-2p_{2}}}{1-2p_{2}}\bigg{]}+\frac {(1+\log 4\sqrt{3})C_{5}B}{4}.\]
Combining Lemma 6, Lemma 7 and Lemma 8, we have
\[\mathbb{E}\exp\Big{(}\xi\sqrt{|(n-1)U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{ f_{\phi}^{*}})|}\Big{)}\]
\[\leq C_{1}\mathbb{E}\mathbb{E}_{\varepsilon}\exp\left(C_{2}\xi\sqrt{ \sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})|}\right)\] \[\leq C_{3}\mathbb{E}\exp\left(C_{4}\xi^{2}\mathbb{E}_{\varepsilon} \sup_{f\in\mathcal{F}}|(n-1)S_{\mathbf{z}}(h_{\pi(f)}-h_{f_{\phi}^{*}})|\right) \leq C_{3}\exp\left(C_{4}\xi^{2}\omega\right).\]
By Markov's inequality, for all \(t^{\prime}>0\) and \(\xi>0\) we have
\[P\left(|(n-1)U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{f_{\phi}^{*}})|\geq t^{ \prime}\right)\leq C_{3}\exp(C_{4}\xi^{2}\omega-\xi\sqrt{t^{\prime}}).\]
Taking \(t^{\prime}=4C_{4}\omega t\) and \(\xi=\frac{\sqrt{t^{\prime}}}{2C_{4}\omega}\), we have
\[P\left(|U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{f_{\phi}^{*}})| \geq\frac{8C_{4}\omega t}{n}\right)\] \[\leq P\left(|(n-1)U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{f_{ \phi}^{*}})|\geq 4C_{4}\omega t\right)\] \[\leq C_{3}\exp(-t).\]
This completes the proof of Proposition 3.
Now we are in a position to prove Theorem 1, which presents the general oracle inequality for the excess \(\phi\)-ranking risk, and its variant in Theorem 2 for margin-based losses.
Proof of Theorem 1.: By the error decomposition (4.1) and Hoeffding's decomposition (4.6), we have
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{ \mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})\] \[\leq\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})- \mathcal{R}^{\phi}(f_{\phi}^{*})\right)+\left(\mathcal{R}_{\mathbf{z}}^{\phi} (f_{0})-\mathcal{R}_{\mathbf{z}}^{\phi}(f_{\phi}^{*})-\mathcal{R}^{\phi}(f_{0 })+\mathcal{R}^{\phi}(f_{\phi}^{*})\right)\] \[\quad+2Q_{\mathbf{z}}[\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))- \mathcal{R}^{\phi}(f_{\phi}^{*})-Q\phi_{\pi(f_{\mathbf{z}})}+Q\phi_{f_{\phi}^{ *}}]-U_{\mathbf{z}}(h_{\pi(f_{\mathbf{z}})}-h_{f_{\phi}^{*}})\] \[=\mathcal{A}(\lambda,f_{0})+\mathcal{S}_{\mathbf{z}}(f_{0})+2 \mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))-\mathcal{S}_{\mathbf{z},2}( \pi(f_{\mathbf{z}})).\]
If \(n<72t\), by (2.5) of Assumption 1 and \(\|\phi_{f_{0}}\|_{\infty}\leq B_{0}\), we have
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})\] \[\leq\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\right)+(B_{0}+B)+(2B+2B)+4B\] \[=\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\right)+9B+B_{0}\] \[\leq 8\big{(}\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\big{)}+\frac{900Bt}{n}+\frac{456B_{0}t}{n}.\]
The conclusion then follows immediately.
If \(n\geq 72t\), by Proposition 1, with probability at most \(4\exp(-t)\), there holds
\[\mathcal{S}_{\mathbf{z}}(f_{0})\geq\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})+\left(\frac{8Vt}{n}\right)^{\frac{1}{2-\tau}}+\frac{300Bt}{n}+\frac{152B_{0}t}{n}.\]
By Proposition 2, with probability at most \(\exp(-t)\), there holds
\[\mathcal{S}_{\mathbf{z},1}(\pi(f_{\mathbf{z}}))\geq\frac{1}{3}\left(\lambda\| f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{ \phi}(f_{\phi}^{*})\right)+\frac{1}{3}\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R }^{\phi}(f_{0})-\mathcal{R}^{\phi}(f_{\phi}^{*})\right)\]
\[+c_{1}\left(\frac{a_{1}^{2p_{1}}V^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n }\right)^{\frac{1}{2-\tau-p_{1}+p_{1}\tau}}+\frac{c_{2}a_{1}^{2p_{1}}B^{1-p_{1}} L^{2p_{1}}}{\lambda^{p_{1}}n}+\left(\frac{24Vt}{n}\right)^{\frac{1}{2-\tau}}.\]
By Proposition 3, with probability at most \(c_{0}\exp(-t)\), there holds
\[|\mathcal{S}_{\mathbf{z},2}(\pi(f_{\mathbf{z}}))|\geq\frac{c_{3}a_{1}^{2p_{1}}L^{2p_{1}}B^{1-p_{1}}t}{\lambda^{p_{1}}n}+\frac{c_{4}a_{2}^{2p_{2}}L^{2p_{2}}B^{1-p_{2}}t}{\lambda^{p_{2}}n}+\frac{c_{5}t}{n}.\]
Finally, combining all the estimates above, with probability at least \(1-(c_{0}+5)\exp(-t)\), we have
\[\lambda\|f_{\mathbf{z}}\|_{K}^{2}+\mathcal{R}^{\phi}(\pi(f_{ \mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*})\] \[\leq\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})- \mathcal{R}^{\phi}(f_{\phi}^{*})\right)+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^ {\phi}(f_{\phi}^{*})+\left(\frac{8Vt}{n}\right)^{\frac{1}{2-\tau}}+\frac{300Bt} {n}+\frac{152B_{0}t}{n}\] \[\quad+\frac{2}{3}\left(\lambda\|f_{\mathbf{z}}\|_{K}^{2}+ \mathcal{R}^{\phi}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi}(f_{\phi}^{*}) \right)+\frac{2}{3}\left(\lambda\|f_{0}\|_{K}^{2}+\mathcal{R}^{\phi}(f_{0})- \mathcal{R}^{\phi}(f_{\phi}^{*})\right)\] \[\quad+2c_{1}\left(\frac{a_{1}^{2p_{1}}V^{1-p_{1}}L^{2p_{1}}}{ \lambda^{p_{1}}n}\right)^{\frac{1}{2-\tau-p_{1}+p_{1}\tau}}+\frac{2c_{2}a_{1}^ {2p_{1}}B^{1-p_{1}}L^{2p_{1}}}{\lambda^{p_{1}}n}+\left(\frac{96Vt}{n}\right)^ {\frac{1}{2-\tau}}\] \[\quad+\frac{c_{3}a_{1}^{2p_{1}}L^{2p_{1}}B^{1-p_{1}}t}{\lambda^{p _{1}}n}+\frac{c_{4}a_{2}^{2p_{2}}L^{2p_{2}}B^{1-p_{2}}t}{\lambda^{p_{2}}n}+ \frac{c_{5}t}{n}.\]
The proof is then finished.
Proof of Theorem 2.: Using the conditions on \(\psi\) and the analysis conducted in subsection 5.3, we see that Assumption 1 is satisfied and hence we apply Theorem 1 to yield the conclusion.
## 5 Deriving the Learning Rates for Gaussian Ranking Estimators
In this section, we apply the oracle inequality to derive fast learning rates for the Gaussian ranking estimator (1.4) with the hinge loss and the square loss. We first estimate the capacity of the pairwise Gaussian kernel space \(\mathcal{H}_{K^{\sigma}}\) under the assumption that the marginal of the data-generating distribution on the input space \(\mathcal{X}\subset\mathbb{R}^{d}\) is supported on a set of upper box-counting dimension \(\varrho\in(0,d]\). Then, we derive approximation error bounds for the hinge loss under noise conditions and for the square loss under Besov smoothness. Finally, we combine the oracle inequality established above with calibration inequalities to derive fast learning rates for the excess ranking risk. We also make some comparisons of different noise conditions at the end of this section.
### Entropy Number Estimate of Pairwise Gaussian Kernel Spaces
In this subsection, we write \(\mathcal{H}_{K}(\mathcal{X}^{2})\) to emphasize that the RKHS induced by the pairwise reproducing kernel \(K\) consists of functions defined on \(\mathcal{X}^{2}=\mathcal{X}\times\mathcal{X}\), or equivalently, \(K\) is
defined or restricted on \(\mathcal{X}^{2}\times\mathcal{X}^{2}\), where \(\mathcal{X}\) will be specified in the context. The traditional Gaussian kernel on \(\mathbb{R}^{2d}\times\mathbb{R}^{2d}\) with variance \(\sigma>0\) is defined as
\[\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime})):=\exp\left(-\frac{\|(x,x ^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right) \tag{5.1}\]
and we denote by \(\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\) the induced RKHS restricted to \(\mathcal{X}^{2}\), cf. [4]. Since we require the ranking estimators to be skew-symmetric, it is not suitable to choose \(\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\) as the hypothesis space directly. However, we can decompose \(\widetilde{f}\in\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\) into a symmetric part \(\widetilde{f}-f\) and a skew-symmetric part \(f\):
\[\widetilde{f}(x,x^{\prime})=\frac{\widetilde{f}(x,x^{\prime})+\widetilde{f}(x ^{\prime},x)}{2}+\frac{\widetilde{f}(x,x^{\prime})-\widetilde{f}(x^{\prime},x )}{2}=:(\widetilde{f}-f)+f \tag{5.2}\]
which leads to an orthogonal decomposition of \(\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\) in \(\mathcal{L}_{2}(P_{\mathcal{X}}^{2})\) and also a decomposition of the traditional Gaussian kernel given by
\[\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime}))=\frac{\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime}))+\widetilde{K}^{\sigma}((x^{\prime},x),(u,u^{\prime}))}{2}\] \[\qquad\qquad\qquad\qquad+\frac{\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime}))-\widetilde{K}^{\sigma}((x^{\prime},x),(u,u^{\prime}))}{2}.\]
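Before proceeding, here is a minimal numerical sketch (ours, illustrative; NumPy assumed) of this decomposition: the skew-symmetric part below is exactly the pairwise kernel used in this paper, and the final assertion checks that it changes sign when the first pair of arguments is swapped.

```python
# Illustrative construction (not the paper's code) of the pairwise Gaussian
# kernel as the skew-symmetric part of the traditional Gaussian kernel (5.1).
import numpy as np

def gaussian_kernel(xp, up, sigma):
    # Traditional Gaussian kernel (5.1); xp, up are concatenated pairs in R^{2d}.
    return np.exp(-np.sum((xp - up) ** 2) / sigma ** 2)

def pairwise_kernel(x, xq, u, uq, sigma):
    # Skew-symmetric part of (5.1), playing the role of K^sigma.
    k = gaussian_kernel(np.concatenate([x, xq]), np.concatenate([u, uq]), sigma)
    k_swap = gaussian_kernel(np.concatenate([xq, x]), np.concatenate([u, uq]), sigma)
    return 0.5 * (k - k_swap)

rng = np.random.default_rng(0)
d, sigma = 3, 1.5
x, xq, u, uq = (rng.standard_normal(d) for _ in range(4))

# Sections of K^sigma are skew-symmetric: K((x,x'),(u,u')) = -K((x',x),(u,u')).
assert np.isclose(pairwise_kernel(x, xq, u, uq, sigma),
                  -pairwise_kernel(xq, x, u, uq, sigma))
```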
The skew-symmetric part of \(\widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime}))\) yields the pairwise Gaussian kernel \(K^{\sigma}((x,x^{\prime}),(u,u^{\prime}))\) of the form (1.3) considered in this paper. According to the properties of reproducing kernels, cf. [4], \(\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\) is a subspace of \(\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\) with \(\|f\|_{K^{\sigma}}=\|f\|_{\widetilde{K}^{\sigma}}\) for all \(f\in\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\). Therefore, we can carry out the entropy number estimate of \(\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\) by using the existing estimates developed for \(\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})\). Concretely, [23] showed that
\[\log\mathcal{N}(\mathrm{id}:\mathcal{H}_{\widetilde{K}^{\sigma}}([0,1]^{2d}) \to C([0,1]^{2d}),\varepsilon)\asymp\frac{\left(\log\frac{1}{\varepsilon} \right)^{2d+1}}{\left(\log\log\frac{1}{\varepsilon}\right)^{2d}},\quad \varepsilon\to 0.\]
A refined analysis in [37] clarified the crucial dependency of constants on \(\sigma\) and the underlying space \(\mathcal{X}\subset\mathbb{R}^{d}\):
\[\begin{split}&\log\mathcal{N}(\mathrm{id}:\mathcal{H}_{ \widetilde{K}^{\sigma}}(\mathcal{X}^{2})\to C(\mathcal{X}^{2}),\varepsilon)\\ &\leq\mathcal{N}(\mathcal{X}^{2},\|\cdot\|_{\infty},\sigma)\cdot \binom{4e+2d}{2d}e^{-2d}\frac{\left(\log\frac{1}{\varepsilon}\right)^{2d+1}} {\left(\log\log\frac{1}{\varepsilon}\right)^{2d}}.\end{split} \tag{5.3}\]
Based on the results above, we obtain the following proposition under the intrinsic dimension assumption on \(\mathcal{X}\).
**Proposition 4**.: _If Assumption 6 holds, then \(\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\) satisfies Assumption 2 for all \(p_{1}=p_{2}=p\in(0,1/2)\) and \(a_{1}=a_{2}=a:=(C_{\mathcal{X}}^{*}p^{-2d-1}\sigma^{-2\varrho})^{\frac{1}{2p}}\)._
Proof.: First we show that
\[\mathcal{N}(\mathrm{id}:\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\to C( \mathcal{X}^{2}),\varepsilon)\leq\mathcal{N}(\mathrm{id}:\mathcal{H}_{ \widetilde{K}^{\sigma}}(\mathcal{X}^{2})\to C(\mathcal{X}^{2}),\varepsilon).\]
Recall that \(\mathcal{B}_{\mathcal{F}}\) denotes the unit ball of a normed space \((\mathcal{F},\|\cdot\|_{\mathcal{F}})\). Given an \(\varepsilon\)-net \(\{\widetilde{f}_{1},\ldots,\widetilde{f}_{m}\}\) of \((\mathcal{B}_{\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})},\|\cdot\|_{\infty})\), the decomposition (5.2) induces a set of skew-symmetric functions
\(\{f_{1},\ldots,f_{m}\}\subset\mathcal{B}_{\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})}\). Let \(f\in\mathcal{B}_{\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})}\subset\mathcal{B}_{\mathcal{H}_{\widetilde{K}^{\sigma}}(\mathcal{X}^{2})}\). There exists an \(\widetilde{f}_{i}\in\{\widetilde{f}_{1},\ldots,\widetilde{f}_{m}\}\) such that \(\|\widetilde{f}_{i}-f\|_{\infty}\leq\varepsilon\). Notice that \(f\) is skew-symmetric. Then, due to the decomposition (5.2), we have \(\|f_{i}-f\|_{\infty}\leq\|\widetilde{f}_{i}-f\|_{\infty}\leq\varepsilon\). Hence \(\{f_{1},\ldots,f_{m}\}\) is an \(\varepsilon\)-net of \((\mathcal{B}_{\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})},\|\cdot\|_{\infty})\). Then we have the following estimate for all \(p\in(0,1/2)\) by (2.14) and (5.3).
\[\log\mathcal{N}(\mathrm{id}:\mathcal{H}_{K^{\sigma}}(\mathcal{X}^ {2})\to C(\mathcal{X}^{2}),\varepsilon)\] \[\leq \log\mathcal{N}(\mathrm{id}:\mathcal{H}_{\widetilde{K}^{\sigma}} (\mathcal{X}^{2})\to C(\mathcal{X}^{2}),\varepsilon)\] \[\leq \mathcal{N}(\mathcal{X}^{2},\|\cdot\|_{\infty},\sigma)\cdot{4e+2d \choose 2d}e^{-2d}\frac{\left(\log\frac{4}{\varepsilon}\right)^{2d+1}}{\left( \log\log\frac{4}{\varepsilon}\right)^{2d}}\] \[\leq C_{\mathcal{X}}^{2}\sigma^{-2\varrho}{4e+2d\choose 2d}e^{-2d} \left(\frac{2d+1}{2p}\right)^{2d+1}4^{2p}e^{-2d-1}\varepsilon^{-2p}\] \[\leq 4C_{\mathcal{X}}^{2}{4e+2d\choose 2d}\frac{(2d+1)^{2d+1}}{2^ {2d+1}e^{4d+1}}p^{-2d-1}\sigma^{-2\varrho}\varepsilon^{-2p}.\]
We then use the above estimate to establish a bound for the entropy number, cf. Exercise 6.8 of [36], which is given by
\[e_{i}(\mathrm{id}:\mathcal{H}_{K^{\sigma}}(\mathcal{X}^{2})\to C(\mathcal{X}^ {2}))\leq(C_{\mathcal{X}}^{*}p^{-2d-1}\sigma^{-2\varrho})^{\frac{1}{2p}}i^{- \frac{1}{2p}}\]
where
\[C_{\mathcal{X}}^{*}:=12C_{\mathcal{X}}^{2}{4e+2d\choose 2d}\frac{(2d+1)^{2d+1}}{ 2^{2d+1}e^{4d+1}}.\]
Notice that the \(\mathcal{L}_{2}\)-seminorms \(\|\cdot\|_{\mathcal{L}_{2}(P_{\mathcal{X}}^{n}\otimes P_{\mathcal{X}})}\) and \(\|\cdot\|_{\mathcal{L}_{2}(P_{\mathcal{X}}^{n})}\) are both dominated by the sup-norm \(\|\cdot\|_{\infty}\). Therefore, Assumption 2 holds for all \(p_{1}=p_{2}=p\in(0,1/2)\) and \(a_{1}=a_{2}=a:=(C_{\mathcal{X}}^{*}p^{-2d-1}\sigma^{-2\varrho})^{\frac{1}{2p}}.\) The proof is then finished.
### Bounding the Approximation Error
In this subsection, we aim to construct a suitable \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) and bound the approximation error \(\lambda\|f_{0}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi}(f_{0})-\mathcal{R}^{\phi }(f_{\phi}^{*})\) for \(\phi=\phi_{\mathrm{hinge}}\) and \(\phi=\phi_{\mathrm{square}}\). Inspired by the work of [36, 46], define
\[\widetilde{\mathcal{K}}^{\sigma}(x,x^{\prime}):=\left(\frac{2}{\sigma\sqrt{ \pi}}\right)^{d}\exp\left(-\frac{2\|(x,x^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)\]
and an operator \(\widetilde{\mathcal{K}}^{\sigma}*\cdot\) on \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) as
\[\widetilde{\mathcal{K}}^{\sigma}*f(x,x^{\prime}):=\int_{\mathbb{R}^{2d}} \widetilde{\mathcal{K}}^{\sigma}(u,u^{\prime})f((x,x^{\prime})+(u,u^{\prime}) )d(u,u^{\prime}),\forall f\in\mathcal{L}_{2}(\mathbb{R}^{2d}).\]
One can verify that
\[\widetilde{\mathcal{K}}^{\sigma}*f(x,x^{\prime})\]
\[=\bigg{(}\frac{1}{\sigma\sqrt{\pi}}\bigg{)}^{d}\int_{\mathbb{R}^{2d}} \widetilde{K}^{\sigma}((x,x^{\prime}),(u,u^{\prime}))f\bigg{(}\bigg{(}1-\frac{1} {\sqrt{2}}\bigg{)}(x,x^{\prime})+\frac{1}{\sqrt{2}}(u,u^{\prime})\bigg{)}d(u,u^ {\prime})\]
where \(\widetilde{K}^{\sigma}\) is the traditional Gaussian kernel given by (5.1). Therefore, \(\widetilde{\mathcal{K}}^{\sigma}*\cdot\) is essentially a convolution operator which is a metric surjection from \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) to \(\mathcal{H}_{\widetilde{K}^{\sigma}}\), i.e., \(\widetilde{\mathcal{K}}^{\sigma}*f\in\mathcal{H}_{\widetilde{K}^{\sigma}}\) and \(\|\widetilde{\mathcal{K}}^{\sigma}*f\|_{\widetilde{K}^{\sigma}}\leq\|f\|_{ \mathcal{L}_{2}(\mathbb{R}^{2d})}\). This motivates us to consider the following operator on \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) defined by
\[\mathcal{K}^{\sigma}*f(x,x^{\prime}):=\frac{1}{2}\int_{\mathbb{R}^{2d}} \widetilde{\mathcal{K}}^{\sigma}(u,u^{\prime})\bigg{(}f((x,x^{\prime})+(u,u^ {\prime}))-f((x^{\prime},x)+(u,u^{\prime}))\bigg{)}d(u,u^{\prime}).\]
In fact, for all \(f\in\mathcal{L}_{2}(\mathbb{R}^{2d})\), \(\mathcal{K}^{\sigma}*f(x,x^{\prime})\) is skew-symmetric, i.e., \(\mathcal{K}^{\sigma}*f(x,x^{\prime})=-\mathcal{K}^{\sigma}*f(x^{\prime},x)\), and if \(f\) is skew-symmetric, \(\mathcal{K}^{\sigma}*f(x,x^{\prime})=\widetilde{\mathcal{K}}^{\sigma}*f(x,x^ {\prime}).\) Moreover, if \(f\) is symmetric, we have \(\mathcal{K}^{\sigma}*f(x,x^{\prime})=0\). For all \(\widetilde{f}\in\mathcal{L}_{2}(\mathbb{R}^{2d})\), by decomposition (5.2), we write \(\widetilde{f}=(\widetilde{f}-f)+f\) where \(f\) is skew-symmetric while \(\widetilde{f}-f\) is symmetric. Then we have
\[\|\mathcal{K}^{\sigma}*\widetilde{f}\|_{K^{\sigma}}=\|\mathcal{K}^{\sigma}*f \|_{K^{\sigma}}=\|\widetilde{\mathcal{K}}^{\sigma}*f\|_{\widetilde{K}^{\sigma }}\leq\|f\|_{\mathcal{L}_{2}(\mathbb{R}^{2d})}\leq\|\widetilde{f}\|_{\mathcal{ L}_{2}(\mathbb{R}^{2d})}.\]
Hence \(\mathcal{K}^{\sigma}*\cdot\) is a metric surjection from \(\mathcal{L}_{2}(\mathbb{R}^{2d})\) to \(\mathcal{H}_{K^{\sigma}}\). Note that the Bayes ranking rules \(f_{\phi}^{*}\) for \(\phi=\phi_{\rm hinge}\) and \(\phi=\phi_{\rm square}\), which we need to approximate, are skew-symmetric. In what follows, we apply the convolution operator \(\widetilde{\mathcal{K}}^{\sigma}*\cdot\) to skew-symmetric functions to construct efficient approximations of \(f_{\phi}^{*}\) in \(\mathcal{H}_{K^{\sigma}}\) and then carry out the approximation analysis.
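As an illustrative aside, both properties are easy to observe numerically. The hypothetical Monte Carlo sketch below (ours, with \(d=1\); NumPy assumed) estimates \(\mathcal{K}^{\sigma}*f\) up to the positive factor \((\sigma\sqrt{\pi})^{-d}\), which affects neither the skew-symmetry of the output nor the annihilation of symmetric inputs.

```python
# Monte Carlo sketch (illustrative): K^sigma * f is skew-symmetric and
# vanishes on symmetric f. Up to a positive constant, the Gaussian weight
# exp(-2(u^2 + u'^2)/sigma^2) is the density of two independent N(0, sigma^2/4).
import numpy as np

rng = np.random.default_rng(1)
sigma, n_mc = 1.0, 200_000
U = rng.normal(scale=sigma / 2, size=(n_mc, 2))

def conv(f, x, xq):
    # Estimate proportional to (K^sigma * f)(x, xq).
    return 0.5 * np.mean(f(x + U[:, 0], xq + U[:, 1])
                         - f(xq + U[:, 0], x + U[:, 1]))

skew = lambda s, t: np.sign(s - t)          # skew-symmetric input
symm = lambda s, t: np.cos(s) * np.cos(t)   # symmetric input

print(conv(skew, 0.7, -0.3), -conv(skew, -0.3, 0.7))  # identical values
print(conv(symm, 0.7, -0.3))                          # approximately zero
```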
We first deal with the hinge loss. Under the margin-noise condition in Assumption 5, we establish the following proposition bounding the approximation error.
**Proposition 5**.: _Denote \(\mathcal{B}_{\mathbb{R}^{2d}}\) the unit closed ball in \(\mathbb{R}^{2d}\). Suppose that \(\mathcal{X}^{2}\subset r\mathcal{B}_{\mathbb{R}^{2d}}\) for \(r>0\) and Assumption 5 holds with \(C_{**}>0\) and \(\beta>0\). Then for all \(\lambda>0,\sigma>0\), there exists an \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) such that \(\|f_{0}\|_{\infty}\leq 1\) and_
\[\lambda\|f_{0}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi_{\rm hinge}}(f_{0})- \mathcal{R}^{\phi_{\rm hinge}}(f_{\rm hinge}^{*})\leq\frac{2^{2d+1}r^{2d}}{ \Gamma(d)}\lambda\sigma^{-2d}+\frac{2^{\beta/2+1}C_{**}\Gamma(d+\frac{\beta}{ 2})}{\Gamma(d)}\sigma^{\beta}.\]
Proof.: Recall that the Bayes ranking rule for hinge loss \(f_{\rm hinge}^{*}:\mathcal{X}^{2}\to\mathbb{R}\) takes the form
\[f_{\rm hinge}^{*}(x,x^{\prime})={\rm sgn}(\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^ {\prime})).\]
Denote by \({\rm vol}(\cdot)\) the Lebesgue measure on Euclidean space. Note that if \({\rm vol}(\mathcal{X})=0\), for example, when \(\mathcal{X}\) is a submanifold with upper box-counting dimension \(\varrho<d\), the trivial zero extension of \(f_{\phi}^{*}\) only yields the zero function in \(\mathcal{L}_{2}(\mathbb{R}^{2d})\). Hence, before we construct the approximator using \(\widetilde{\mathcal{K}}^{\sigma}*f_{\rm hinge}^{*}\), we need to extend \(f_{\rm hinge}^{*}\) to a domain with positive measure in a nontrivial way, while still maintaining the margin-noise condition.
To this end, we extend the posterior probabilities \(\eta_{+},\eta_{-},\eta_{=}:\mathcal{X}^{2}\to[0,1]\) to \(2r\mathcal{B}_{\mathbb{R}^{2d}}\). For all \((x,x^{\prime})\in\mathcal{X}^{2}\), with a slight abuse of notation, denote by \(\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\Delta(x,x^{\prime})/2)\) the open ball centered at \((x,x^{\prime})\) with radius \(\Delta(x,x^{\prime})/2\), where \(\Delta(x,x^{\prime})\) is defined as (2.12). Since \(\mathcal{X}^{2}\subset r\mathcal{B}_{\mathbb{R}^{2d}}\), we have \(\Delta(x,x^{\prime})\leq 2r\) and \(\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\Delta(x,x^{\prime})/2)\subset 2r\mathcal{B}_{\mathbb{R}^{2d}}\). For all \((u,u^{\prime})\in 2r\mathcal{B}_{\mathbb{R}^{2d}}\backslash\mathcal{X}^{2}\), define the extension as
\[\eta_{+}(u,u^{\prime})=1,\eta_{-}(u,u^{\prime})=\eta_{=}(u,u^{\prime})=0,\]
\[\forall(u,u^{\prime})\in\bigcup_{(x,x^{\prime})\in\mathcal{X}^{2}_{+}} \mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\Delta(x,x^{\prime})/2),\] \[\eta_{-}(u,u^{\prime})=1,\eta_{+}(u,u^{\prime})=\eta_{=}(u,u^{ \prime})=0,\] \[\forall(u,u^{\prime})\in\bigcup_{(x,x^{\prime})\in\mathcal{X}^{2} _{-}}\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\Delta(x,x^{\prime})/2),\] \[\eta_{=}(u,u^{\prime})=1,\eta_{+}(u,u^{\prime})=\eta_{-}(u,u^{ \prime})=0,\] \[\forall(u,u^{\prime})\in 2r\mathcal{B}_{\mathbb{R}^{2d}}\backslash \bigcup_{(x,x^{\prime})\in\mathcal{X}^{2}_{+}\cup\mathcal{X}^{2}_{-}} \mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\Delta(x,x^{\prime})/2).\]
Since \(\bigcup_{(x,x^{\prime})\in\mathcal{X}^{2}_{+}}\mathcal{B}_{\mathbb{R}^{2d}}(( x,x^{\prime}),\Delta(x,x^{\prime})/2)\) and \(\bigcup_{(x,x^{\prime})\in\mathcal{X}^{2}_{-}}\mathcal{B}_{\mathbb{R}^{2d}}(( x,x^{\prime}),\Delta(x,x^{\prime})/2)\) have an empty intersection by definition of \(\Delta(x,x^{\prime})\) and triangle inequality, the extension is well-defined. Using extended \(\eta_{+},\eta_{-},\eta_{=}:2r\mathcal{B}_{\mathbb{R}^{2d}}\to[0,1]\), define
\[(2r\mathcal{B}_{\mathbb{R}^{2d}})_{+}:=\{(x,x^{\prime})\in 2r \mathcal{B}_{\mathbb{R}^{2d}}:\eta_{+}(x,x^{\prime})>\eta_{-}(x,x^{\prime})\},\] \[(2r\mathcal{B}_{\mathbb{R}^{2d}})_{-}:=\{(x,x^{\prime})\in 2r \mathcal{B}_{\mathbb{R}^{2d}}:\eta_{+}(x,x^{\prime})<\eta_{-}(x,x^{\prime})\},\] \[(2r\mathcal{B}_{\mathbb{R}^{2d}})_{=}:=\{(x,x^{\prime})\in 2r \mathcal{B}_{\mathbb{R}^{2d}}:\eta_{+}(x,x^{\prime})=\eta_{-}(x,x^{\prime})\},\]
and a new distance function \(\widetilde{\Delta}:\mathcal{X}^{2}\to\mathbb{R}\) as
\[\widetilde{\Delta}(x,x^{\prime}):=\begin{cases}\operatorname{dist}((x,x^{ \prime}),(2r\mathcal{B}_{\mathbb{R}^{2d}})_{-}\cup(2r\mathcal{B}_{\mathbb{R}^ {2d}})_{=})&(x,x^{\prime})\in\mathcal{X}^{2}_{+}\\ \operatorname{dist}((x,x^{\prime}),(2r\mathcal{B}_{\mathbb{R}^{2d}})_{+}\cup( 2r\mathcal{B}_{\mathbb{R}^{2d}})_{=})&(x,x^{\prime})\in\mathcal{X}^{2}_{-}\\ 0&(x,x^{\prime})\in\mathcal{X}^{2}_{=}.\end{cases}\]
Here \(\mathcal{X}^{2}_{+},\mathcal{X}^{2}_{-}\) and \(\mathcal{X}^{2}_{=}\) are defined in (2.8) with the original \(\eta_{+},\eta_{-}\) and \(\eta_{=}\).
We claim that with the redefined \(\widetilde{\Delta}\), the margin-noise condition holds with the power index \(\beta\) and constant \(2^{\beta}C_{**}\). To see this, one may notice that for all \((x,x^{\prime})\in\mathcal{X}^{2}_{+}\), if \(\widetilde{\Delta}(x,x^{\prime})<\Delta(x,x^{\prime})/2\), then there exists a \((u,u^{\prime})\in(2r\mathcal{B}_{\mathbb{R}^{2d}})_{-}\) or \((u,u^{\prime})\in(2r\mathcal{B}_{\mathbb{R}^{2d}})_{=}\) such that \(\operatorname{dist}((x,x^{\prime}),(u,u^{\prime}))<\Delta(x,x^{\prime})/2\). If \((u,u^{\prime})\in(2r\mathcal{B}_{\mathbb{R}^{2d}})_{-}\), then it must be the case that \((u,u^{\prime})\in\mathcal{B}_{\mathbb{R}^{2d}}((v,v^{\prime}),\Delta(v,v^{ \prime})/2)\) for some \((v,v^{\prime})\in\mathcal{X}^{2}_{-}\). By triangle inequality we have
\[\operatorname{dist}((x,x^{\prime}),(v,v^{\prime})) \leq\operatorname{dist}((x,x^{\prime}),(u,u^{\prime}))+ \operatorname{dist}((u,u^{\prime}),(v,v^{\prime}))\] \[<\Delta(x,x^{\prime})/2+\Delta(v,v^{\prime})/2,\]
which is a contradiction to the definition of \(\Delta\). If \((u,u^{\prime})\in(2r\mathcal{B}_{\mathbb{R}^{2d}})_{=}\), then
\[(u,u^{\prime})\in 2r\mathcal{B}_{\mathbb{R}^{2d}}\backslash\bigcup_{(v,v^{ \prime})\in\mathcal{X}^{2}_{+}\cup\mathcal{X}^{2}_{-}}\mathcal{B}_{\mathbb{R}^ {2d}}((v,v^{\prime}),\Delta(v,v^{\prime})/2),\]
which also contradicts the fact that \(\operatorname{dist}((x,x^{\prime}),(u,u^{\prime}))<\Delta(x,x^{\prime})/2\). Following the same argument, one can discuss the case when \((x,x^{\prime})\in\mathcal{X}^{2}_{-}\). Hence we conclude that \(\widetilde{\Delta}(x,x^{\prime})\geq\Delta(x,x^{\prime})/2\) for all \((x,x^{\prime})\in\mathcal{X}^{2}\). Then the margin-noise condition holds for \(\widetilde{\Delta}\), given by
\[\int_{\{(x,x^{\prime})\in\mathcal{X}^{2}:\widetilde{\Delta}(x,x^{ \prime})<t\}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|dP^{2}_{\mathcal{X} }(x,x^{\prime})\] \[\leq \int_{\{(x,x^{\prime})\in\mathcal{X}^{2}:\Delta(x,x^{\prime})<2t\} }|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|dP^{2}_{\mathcal{X}}(x,x^{ \prime}) \tag{5.4}\] \[\leq 2^{\beta}C_{**}t^{\beta}.\]
Now we extend the original \(f_{\rm hinge}^{*}\) as
\[f_{\rm hinge}^{*}(x,x^{\prime})=\mathbb{I}_{2r\mathcal{B}_{\mathbb{R}^{2d}}}(x,x^{\prime})\,{\rm sgn}(\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})).\]
One can verify that \(f_{\rm hinge}^{*}\) remains skew-symmetric and \(f_{\rm hinge}^{*}\in\mathcal{L}_{2}(\mathbb{R}^{2d})\) with \(\|f_{\rm hinge}^{*}\|_{\mathcal{L}_{2}(\mathbb{R}^{2d})}\leq\sqrt{{\rm vol}(2r\mathcal{B}_{\mathbb{R}^{2d}})}=(2r)^{d}\sqrt{{\rm vol}(\mathcal{B}_{\mathbb{R}^{2d}})}.\) We choose \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) as
\[f_{0}(x,x^{\prime})=(\pi\sigma^{2})^{-d/2}\widetilde{\mathcal{K}}^{\sigma}*f_ {\rm hinge}^{*}(x,x^{\prime}).\]
Since \(\widetilde{\mathcal{K}}^{\sigma}*\cdot\) is a metric surjection,
\[\|f_{0}\|_{K^{\sigma}}\leq(\pi\sigma^{2})^{-d/2}\|f_{\rm hinge}^{*}\|_{ \mathcal{L}_{2}(\mathbb{R}^{2d})}\leq(\pi\sigma^{2})^{-d/2}(2r)^{d}\sqrt{{ \rm vol}(\mathcal{B}_{\mathbb{R}^{2d}})}. \tag{5.5}\]
Besides, by Young's convolution inequality, there holds
\[\|f_{0}\|_{\infty}\leq\frac{2^{d}}{\sigma^{2d}\pi^{d}}\|f_{\rm hinge}^{*}\|_{ \infty}\int_{\mathbb{R}^{2d}}\exp\left(-\frac{2\|(x,x^{\prime})\|_{2}^{2}}{ \sigma^{2}}\right)d(x,x^{\prime})=1. \tag{5.6}\]
It remains to bound the excess risk \(\mathcal{R}^{\phi_{\rm hinge}}(f_{0})-\mathcal{R}^{\phi_{\rm hinge}}(f_{\rm hinge}^{*})\). Fix an \((x,x^{\prime})\in\mathcal{X}_{+}^{2}\subset(2r\mathcal{B}_{\mathbb{R}^{2d}})_{+}\). For all \((u,u^{\prime})\in\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}))\), we have \((u,u^{\prime})\in(2r\mathcal{B}_{\mathbb{R}^{2d}})_{+}\). Hence
\[\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}) )\subset(2r\mathcal{B}_{\mathbb{R}^{2d}})_{+},\quad(2r\mathcal{B}_{\mathbb{ R}^{2d}})_{-}\subset\mathbb{R}^{2d}\backslash\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{ \prime}),\widetilde{\Delta}(x,x^{\prime}))\]
and
\[f_{0}(x,x^{\prime})\] \[= \left(\frac{2}{\pi\sigma^{2}}\right)^{d}\int_{\mathbb{R}^{2d}} \exp\left(-\frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}} \right)f_{\rm hinge}^{*}(u,u^{\prime})d(u,u^{\prime})\] \[= \left(\frac{2}{\pi\sigma^{2}}\right)^{d}\left(\int_{(2r\mathcal{B }_{\mathbb{R}^{2d}})_{+}}\exp\left(-\frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_ {2}^{2}}{\sigma^{2}}\right)d(u,u^{\prime})\right.\] \[\left.-\int_{(2r\mathcal{B}_{\mathbb{R}^{2d}})_{-}}\exp\left(- \frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)d(u,u^{ \prime})\right)\] \[\geq \left(\frac{2}{\pi\sigma^{2}}\right)^{d}\left(\int_{\mathcal{B}_{ \mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}))}\exp\left( -\frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)d(u,u^{ \prime})\right.\] \[\left.-\int_{\mathbb{R}^{2d}\backslash\mathcal{B}_{\mathbb{R}^{2d} }((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}))}\exp\left(-\frac{2\|(x,x^{ \prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)d(u,u^{\prime})\right)\] \[= 2\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\int_{\mathcal{B}_{ \mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}))}\exp\left( -\frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)d(u,u^{ \prime})-1.\]
By (5.6) and \(f_{\rm hinge}^{*}(x,x^{\prime})=1\), we derive that
\[|f_{0}(x,x^{\prime})-f_{\rm hinge}^{*}(x,x^{\prime})|\] \[= 1-f_{0}(x,x^{\prime})\] \[\leq 2-2\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\int_{\mathcal{B}_ {\mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{\prime}))}\exp\left(- \frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)d(u,u^{ \prime})\]
\[=2\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\int_{\mathbb{R}^{2d} \setminus\mathcal{B}_{\mathbb{R}^{2d}}((x,x^{\prime}),\widetilde{\Delta}(x,x^{ \prime}))}\exp\left(-\frac{2\|(x,x^{\prime})-(u,u^{\prime})\|_{2}^{2}}{\sigma^ {2}}\right)d(u,u^{\prime})\] \[=\frac{2^{d+2}}{\sigma^{2d}\Gamma(d)}\int_{\widetilde{\Delta}(x, x^{\prime})}^{\infty}\exp(-2\sigma^{-2}\rho^{2})\rho^{2d-1}d\rho\] \[=\frac{2}{\Gamma(d)}\int_{2\widetilde{\Delta}(x,x^{\prime})^{2} \sigma^{-2}}^{\infty}e^{-\rho}\rho^{d-1}d\rho.\]
This inequality also holds for all \((x,x^{\prime})\in\mathcal{X}_{-}^{2}\) by the same argument. Note that
\[\mathcal{R}^{\phi_{\mathrm{hinge}}}(f_{0})-\mathcal{R}^{\phi_{ \mathrm{hinge}}}(f_{\mathrm{hinge}}^{*}) =\mathbb{E}[\mathrm{sgn}(Y-Y^{\prime})\cdot(f_{\mathrm{hinge}}^{ *}(X,X^{\prime})-f_{0}(X,X^{\prime}))]\] \[=\mathbb{E}_{P_{\mathcal{X}}^{2}}[(\eta_{+}(X,X^{\prime})-\eta_{ -}(X,X^{\prime}))\cdot(f_{\mathrm{hinge}}^{*}(X,X^{\prime})-f_{0}(X,X^{\prime }))],\]
so applying our claim (5.4) yields
\[\mathcal{R}^{\phi_{\mathrm{hinge}}}(f_{0})-\mathcal{R}^{\phi_{ \mathrm{hinge}}}(f_{\mathrm{hinge}}^{*})\] \[=\int_{\mathcal{X}_{-}^{2}\cup\mathcal{X}_{+}^{2}}|f_{0}(x,x^{ \prime})-f_{\mathrm{hinge}}^{*}(x,x^{\prime})|\cdot|\eta_{+}(x,x^{\prime})- \eta_{-}(x,x^{\prime})|dP_{\mathcal{X}}^{2}(x,x^{\prime})\] \[\leq\frac{2}{\Gamma(d)}\int_{\mathcal{X}_{-}^{2}\cup\mathcal{X}_ {+}^{2}}\int_{2\widetilde{\Delta}(x,x^{\prime})^{2}\sigma^{-2}}^{\infty}e^{- \rho}\rho^{d-1}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|d\rho dP_{ \mathcal{X}}^{2}(x,x^{\prime})\] \[=\frac{2}{\Gamma(d)}\int_{0}^{\infty}e^{-\rho}\rho^{d-1}\int_{ \mathcal{X}_{+}^{2}\cup\mathcal{X}_{-}^{2}}\mathbb{I}_{[0,\sigma(\rho/2)^{1/2} )}(\widetilde{\Delta}(x,x^{\prime}))|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{ \prime})|dP_{\mathcal{X}}^{2}(x,x^{\prime})d\rho\] \[\leq\frac{2^{1+\beta/2}C_{**}\sigma^{\beta}}{\Gamma(d)}\int_{0}^{ \infty}e^{-\rho}\rho^{d-1+\beta/2}d\rho\] \[=\frac{2^{1+\beta/2}C_{**}\Gamma(d+\frac{\beta}{2})}{\Gamma(d)} \sigma^{\beta}.\]
Combining this with (5.5) and (5.6), we conclude that \(f_{0}\in\mathcal{H}_{K^{\sigma}}\), \(\|f_{0}\|_{\infty}\leq 1\) and
\[\lambda\|f_{0}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi_{\mathrm{hinge}}}(f_{0})- \mathcal{R}^{\phi_{\mathrm{hinge}}}(f_{\mathrm{hinge}}^{*})\leq\frac{2^{2d+1}r ^{2d}}{\Gamma(d)}\lambda\sigma^{-2d}+\frac{2^{1+\beta/2}C_{**}\Gamma(d+\frac{ \beta}{2})}{\Gamma(d)}\sigma^{\beta}.\]
The proof is then finished.
Next, we establish the approximation error estimates for the square loss under the standard Besov smoothness condition on the Bayes ranking rule
\[f_{\mathrm{square}}^{*}(x,x^{\prime})=\frac{\eta_{+}(x,x^{\prime})-\eta_{-}(x, x^{\prime})}{\eta_{+}(x,x^{\prime})+\eta_{-}(x,x^{\prime})}.\]
Due to the discussion in Remark 2, we can regard \(f_{\mathrm{square}}^{*}\) as a function in \(\mathcal{L}_{2}(\mathbb{R}^{2d})\); equivalently, \(f_{\mathrm{square}}^{*}\) is the restriction to \(\mathcal{X}^{2}\) of a function in \(\mathcal{L}_{2}(\mathbb{R}^{2d})\).
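As a quick numerical sanity check (ours, with hypothetical posterior values; SciPy assumed), \(f_{\mathrm{square}}^{*}\) indeed minimizes the conditional square ranking risk \(t\mapsto\eta_{+}(1-t)^{2}+\eta_{-}(1+t)^{2}\) pointwise:

```python
# Sanity check (illustrative): the pointwise minimizer of the conditional
# square ranking risk coincides with (eta_+ - eta_-)/(eta_+ + eta_-).
from scipy.optimize import minimize_scalar

eta_p, eta_m = 0.5, 0.2  # hypothetical posteriors; eta_= only adds a constant
risk = lambda t: eta_p * (1 - t) ** 2 + eta_m * (1 + t) ** 2

t_opt = minimize_scalar(risk).x
print(t_opt, (eta_p - eta_m) / (eta_p + eta_m))  # both are about 0.4286
```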
**Proposition 6**.: _Suppose that \(f_{\mathrm{square}}^{*}\in\mathcal{L}_{2}(\mathbb{R}^{2d})\) and \(|f_{\mathrm{square}}^{*}|_{\mathcal{B}_{2,\infty}^{\alpha}(P_{\mathcal{X}}^{2} )}<\infty\) for some \(\alpha>0\). Then for all \(\lambda>0,\sigma>0\), there exists an \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) such that \(\|f_{0}\|_{\infty}\leq 2^{s}\) and_
\[\lambda\|f_{0}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi_{\mathrm{ square}}}(f_{0})-\mathcal{R}^{\phi_{\mathrm{square}}}(f_{\mathrm{square}}^{*})\] \[\leq\frac{2^{2s}\|f_{\mathrm{square}}^{*}\|_{\mathcal{L}_{2}( \mathbb{R}^{2d})}^{2}}{\pi^{d}}\lambda\sigma^{-2d}+\frac{|f_{\mathrm{square}}^{*} |_{\mathcal{B}_{2,\infty}^{\alpha}(P_{\mathcal{X}}^{2})}^{2}}{2^{\alpha}} \left(\frac{\Gamma\left(d+\frac{\alpha}{2}\right)}{\Gamma(d)}\right)^{2}\sigma ^{2\alpha}.\]
Proof.: Define the \(s\)-fold application of the convolution operator \(\widetilde{\mathcal{K}}^{\sigma}\) as
\[\widetilde{\mathcal{K}}^{\sigma}_{s}*.:=\sum_{j=1}^{s}(-1)^{1-j}\binom{s}{j}(j \sigma\sqrt{\pi})^{-d}\widetilde{\mathcal{K}}^{j\sigma}*.\]
and we define the approximator \(f_{0}\in\mathcal{H}_{K^{\sigma}}\) as
\[\begin{split}& f_{0}(x,x^{\prime}):=\widetilde{\mathcal{K}}^{\sigma}_{s}*f^{*}_{\text{square}}(x,x^{\prime})\\ =&\ \int_{\mathbb{R}^{2d}}\sum_{j=1}^{s}(-1)^{1-j}\binom{s}{j}\left(\frac{2}{\pi j^{2}\sigma^{2}}\right)^{d}\exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{(j\sigma)^{2}}\right)f^{*}_{\text{square}}((x,x^{\prime})+(u,u^{\prime}))d(u,u^{\prime})\\ =&\ \int_{\mathbb{R}^{2d}}\sum_{j=1}^{s}(-1)^{1-j}\binom{s}{j}\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)f^{*}_{\text{square}}((x,x^{\prime})+j(u,u^{\prime}))d(u,u^{\prime}).\end{split}\]
Then we have
\[\begin{split}& f_{0}(x,x^{\prime})-f^{*}_{\text{square}}(x,x^{\prime})\\ =&\ \widetilde{\mathcal{K}}^{\sigma}_{s}*f^{*}_{\text{square}}(x,x^{\prime})-\int_{\mathbb{R}^{2d}}(\sigma\sqrt{\pi})^{-d}\widetilde{\mathcal{K}}^{\sigma}(u,u^{\prime})f^{*}_{\text{square}}(x,x^{\prime})d(u,u^{\prime})\\ =&\ \int_{\mathbb{R}^{2d}}\sum_{j=0}^{s}(-1)^{1-j}\binom{s}{j}\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)f^{*}_{\text{square}}((x,x^{\prime})+j(u,u^{\prime}))d(u,u^{\prime})\\ =&\ \int_{\mathbb{R}^{2d}}(-1)^{1-s}\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)\Delta^{s}_{(u,u^{\prime})}f^{*}_{\text{square}}(x,x^{\prime})d(u,u^{\prime}).\end{split}\]
Notice that by definition of \(f^{*}_{\text{square}}\),
\[\begin{split}&\ \ \ \ \ \ \mathcal{R}^{\phi_{\text{square}}}(f_{0})- \mathcal{R}^{\phi_{\text{square}}}(f^{*}_{\text{square}})\\ =&\ \mathbb{E}[(1-\text{sgn}(Y-Y^{\prime})f_{0}(X,X^{ \prime}))^{2}-(1-\text{sgn}(Y-Y^{\prime})f^{*}_{\text{square}}(X,X^{\prime}))^{ 2}]\\ =&\ \mathbb{E}_{P^{2}_{\mathcal{X}}}\bigg{[}\big{(} \eta_{+}(2-f_{0}-f^{*}_{\text{square}})-\eta_{-}(2+f_{0}+f^{*}_{\text{square}}) \big{)}\cdot\big{(}f^{*}_{\text{square}}-f_{0}\big{)}\bigg{]}\\ =&\ \mathbb{E}_{P^{2}_{\mathcal{X}}}\bigg{[}(\eta_{+}+ \eta_{-})\cdot(f^{*}_{\text{square}}-f_{0})^{2}\bigg{]}\\ \leq&\ \|f^{*}_{\text{square}}-f_{0}\|_{\mathcal{L}_{2 }(P^{2}_{\mathcal{X}})}^{2}.\end{split} \tag{5.7}\]
By Minkowski's integral inequality, we have
\[\begin{split}&\ \ \ \ \mathcal{R}^{\phi_{\text{square}}}(f_{0})- \mathcal{R}^{\phi_{\text{square}}}(f^{*}_{\text{square}})\\ \leq&\ \|f^{*}_{\text{square}}-f_{0}\|_{\mathcal{L}_{2 }(P^{2}_{\mathcal{X}})}^{2}\\ =&\ \int_{\mathcal{X}^{2}}\bigg{|}\int_{\mathbb{R}^{2d}}(-1 )^{1-s}\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\exp\left(-\frac{2\|(u,u^{ \prime})\|_{2}^{2}}{\sigma^{2}}\right)\Delta^{s}_{(u,u^{\prime})}f^{*}_{\text{ square}}(x,x^{\prime})d(u,u^{\prime})\bigg{|}^{2}dP^{2}_{\mathcal{X}}(x,x^{ \prime})\\ \leq&\ \left(\int_{\mathbb{R}^{2d}}\left(\int_{ \mathcal{X}^{2}}\left(\left(\frac{2}{\pi\sigma^{2}}\right)^{d}\exp\left(-\frac{2 \|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)\Delta^{s}_{(u,u^{\prime})}f^{* }_{\text{square}}(x,x^{\prime})\right)^{2}dP^{2}_{\mathcal{X}}(x,x^{\prime}) \right)^{\frac{1}{2}}d(u,u^{\prime})\right)^{2}\end{split}\]
\[= \left(\int_{\mathbb{R}^{2d}}\left(\frac{2}{\pi\sigma^{2}}\right)^{d} \exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}}\right)\|\Delta_{(u,u^ {\prime})}^{s}f_{\mathrm{square}}^{*}\|_{\mathcal{L}_{2}(P_{\mathcal{X}}^{2})}d( u,u^{\prime})\right)^{2}\] \[\leq |f_{\mathrm{square}}^{*}|_{\mathcal{B}_{2,\infty}^{\alpha}(P_{ \mathcal{X}}^{2})}^{2}\left(\frac{2}{\pi\sigma^{2}}\right)^{2d}\left(\int_{ \mathbb{R}^{2d}}\exp\left(-\frac{2\|(u,u^{\prime})\|_{2}^{2}}{\sigma^{2}} \right)\|(u,u^{\prime})\|_{2}^{\alpha}d(u,u^{\prime})\right)^{2}\] \[= |f_{\mathrm{square}}^{*}|_{\mathcal{B}_{2,\infty}^{\alpha}(P_{ \mathcal{X}}^{2})}^{2}2^{-\alpha}\left(\frac{\Gamma(d+\frac{\alpha}{2})}{ \Gamma(d)}\right)^{2}\sigma^{2\alpha}.\]
By Proposition 4.46 of [36], for \(0<\sigma_{1}<\sigma_{2}<\infty\), the space \(\mathcal{H}_{\widetilde{K}^{\sigma_{2}}}\) is continuously embedded into \(\mathcal{H}_{\widetilde{K}^{\sigma_{1}}}\), with operator norm satisfying
\[\left\|\mathrm{id}:\mathcal{H}_{\widetilde{K}^{\sigma_{2}}}\to\mathcal{H}_{ \widetilde{K}^{\sigma_{1}}}\right\|\leq\left(\frac{\sigma_{2}}{\sigma_{1}} \right)^{d}.\]
Then we can bound \(\|f_{0}\|_{K^{\sigma}}\) as
\[\|f_{0}\|_{K^{\sigma}} \leq\sum_{j=1}^{s}\binom{s}{j}(j\sigma\sqrt{\pi})^{-d}\|\widetilde{\mathcal{K}}^{j\sigma}*f_{\mathrm{square}}^{*}\|_{K^{\sigma}}\] \[=\sum_{j=1}^{s}\binom{s}{j}(j\sigma\sqrt{\pi})^{-d}\|\widetilde{\mathcal{K}}^{j\sigma}*f_{\mathrm{square}}^{*}\|_{\widetilde{K}^{\sigma}}\] \[\leq\sum_{j=1}^{s}\binom{s}{j}(\sigma\sqrt{\pi})^{-d}\|\widetilde{\mathcal{K}}^{j\sigma}*f_{\mathrm{square}}^{*}\|_{\widetilde{K}^{j\sigma}}\] \[\leq 2^{s}(\sigma\sqrt{\pi})^{-d}\|f_{\mathrm{square}}^{*}\|_{\mathcal{L}_{2}(\mathbb{R}^{2d})}.\]
Finally, by Young's convolution inequality, we have
\[\|f_{0}\|_{\infty} \leq\|f_{\mathrm{square}}^{*}\|_{\infty}\left\|\sum_{j=1}^{s}(-1)^{1-j}\binom{s}{j}(j\sigma\sqrt{\pi})^{-d}\widetilde{\mathcal{K}}^{j\sigma}\right\|_{L_{1}(\mathbb{R}^{2d})}\] \[\leq\sum_{j=1}^{s}\binom{s}{j}(j\sigma\sqrt{\pi})^{-d}\|\widetilde{\mathcal{K}}^{j\sigma}\|_{L_{1}(\mathbb{R}^{2d})}\] \[=\sum_{j=1}^{s}\binom{s}{j}\] \[\leq 2^{s}.\]
Combining all these estimates, we complete the proof.
### Calibration Inequality for Pairwise Ranking
In this subsection, we establish calibration inequalities for pairwise ranking with the margin-based loss \(\phi(y,y^{\prime},t)=\psi(\mathrm{sgn}(y-y^{\prime})t)\). Our discussion is not limited to \(\phi=\phi_{\mathrm{hinge}}\) and \(\phi=\phi_{\mathrm{square}}\), but can also be generalized to more general margin-based losses. So far,
our estimates suffice to derive the learning rates for the excess \(\phi\)-ranking risk. To bound the excess ranking risk \(\mathcal{E}(f)\) by the excess \(\phi\)-ranking risk, it remains to establish the so-called calibration inequality or comparison theorem. The calibration inequality in binary classification has been extensively studied in a vast literature, see, e.g., [51, 5, 16, 36] and references therein. Motivated by these studies, we in this subsection aim to build similar calibration inequalities in the pairwise ranking setting. Concretely, for \(\phi=\phi_{\mathrm{hinge}}\), an analogous version of Zhang's inequality for pairwise ranking is established in Proposition 7. For those \(\psi\) satisfying Assumption 3 and \(\psi^{\prime\prime}(0)>0\), a class that includes \(\phi=\phi_{\mathrm{square}}\) as a special case, we prove that the excess ranking risk can be bounded by the square root of the excess \(\phi\)-ranking risk (multiplied by a constant), and the noise condition in Assumption 4 can further refine this upper bound.
**Proposition 7**.: _For all measurable \(f:\mathcal{X}^{2}\to\mathbb{R}\) we have_
\[\mathcal{R}(f)-\mathcal{R}(f^{*}_{\mathrm{rank}})\leq\mathcal{R}^{\phi_{ \mathrm{hinge}}}(f)-\mathcal{R}^{\phi_{\mathrm{hinge}}}(f^{*}_{\mathrm{hinge}}).\]
Proof.: Consider the truncated function \(\pi(f):\mathcal{X}^{2}\to[-1,1]\). Since the hinge loss can be truncated at \(t=1\) and truncation does not change the ranking risk, it suffices to prove the proposition for all measurable \(f:\mathcal{X}^{2}\to[-1,1]\). Note that for all measurable \(f:\mathcal{X}^{2}\to[-1,1]\), there holds
\[\mathcal{R}^{\phi_{\mathrm{hinge}}}(f)-\mathcal{R}^{\phi_{\mathrm{hinge}}}(f^{ *}_{\mathrm{hinge}})=\mathbb{E}_{P^{2}_{\mathcal{X}}}[(\eta_{+}(X,X^{\prime})- \eta_{-}(X,X^{\prime}))\cdot(f^{*}_{\mathrm{hinge}}(X,X^{\prime})-f(X,X^{\prime }))].\]
Then
\[\mathcal{R}(f)-\mathcal{R}(f^{*}_{\mathrm{rank}})\] \[=\mathbb{E}_{P^{2}_{\mathcal{X}}}[\eta_{+}\mathbb{I}_{(-\infty,0) }(f)+\eta_{-}\mathbb{I}_{[0,\infty)}(f)-\min\{\eta_{+},\eta_{-}\}]\] \[\leq\mathbb{E}_{P^{2}_{\mathcal{X}}}\big{[}(\eta_{+}-\eta_{-}) \cdot(f^{*}_{\mathrm{hinge}}-f)\big{]}\] \[=\mathcal{R}^{\phi_{\mathrm{hinge}}}(f)-\mathcal{R}^{\phi_{ \mathrm{hinge}}}(f^{*}_{\mathrm{hinge}}),\]
where the inequality can be derived by considering the cases \(\eta_{+}>\eta_{-}\), \(\eta_{+}=\eta_{-}\) and \(\eta_{+}<\eta_{-}\) separately; for instance, when \(\eta_{+}>\eta_{-}\) the integrand on the left equals \((\eta_{+}-\eta_{-})\mathbb{I}_{(-\infty,0)}(f)\), which is at most \((\eta_{+}-\eta_{-})(1-f)\) since \(f\) takes values in \([-1,1]\) and \(f^{*}_{\mathrm{hinge}}=1\) there. The proof is then finished.
To continue our analysis of general margin-based losses, we further introduce some definitions and notations. Given a \(\psi\) satisfying Assumption 3, fix an \((x,x^{\prime})\in\mathcal{X}^{2}\), we define function \(\Psi=\Psi_{(x,x^{\prime})}:\mathbb{R}\to[0,\infty)\) as
\[\Psi_{(x,x^{\prime})}(t):=\eta_{+}(x,x^{\prime})\psi(t)+\eta_{-}(x,x^{\prime}) \psi(-t)+\eta_{=}(x,x^{\prime})\psi(0).\]
Since \(\psi\) is convex, the left derivative \(\psi^{\prime}_{-}(t)\geq 0\) for \(t\in(1,\infty)\) and the right derivative \(\psi^{\prime}_{+}(t)<0\) for \(t\in(-\infty,1)\). Hence we can define
\[f^{*}_{-}(x,x^{\prime}):=\sup\big{\{}t\in\mathbb{R}\mid\Psi^{ \prime}_{-}(t)<0\big{\}},\] \[f^{*}_{+}(x,x^{\prime}):=\inf\big{\{}t\in\mathbb{R}\mid\Psi^{ \prime}_{+}(t)>0\big{\}}.\]
Since \(\Psi\) is also a convex function, we have \(f^{*}_{-}(x,x^{\prime})\leq f^{*}_{+}(x,x^{\prime})\). The following lemmas can be proved by the same arguments as in Theorem 10.8 and Lemma 10.10 of [16]. Recall that \(\mathcal{X}^{2}_{+},\mathcal{X}^{2}_{-}\) and \(\mathcal{X}^{2}_{=}\) are defined in (2.8).
**Lemma 9**.: _Let \(\phi(y,y^{\prime},t)=\psi(\operatorname{sgn}(y-y^{\prime})t)\) and fix \((x,x^{\prime})\in\mathcal{X}^{2}\). Let \(f_{\phi}^{*}\) denote the Bayes \(\phi\)-ranking rule. If \(\psi\) satisfies Assumption 3, then the following statements hold._
1. \(\Psi_{(x,x^{\prime})}\) _is strictly decreasing on_ \((-\infty,f_{-}^{*}(x,x^{\prime})]\)_, strictly increasing on_ \([f_{+}^{*}(x,x^{\prime}),\infty)\) _and constant on_ \([f_{-}^{*}(x,x^{\prime}),f_{+}^{*}(x,x^{\prime})]\)_._
2. \(f_{\phi}^{*}(x,x^{\prime})\) _is a minimizer of_ \(\Psi_{(x,x^{\prime})}\)_, i.e.,_ \(f_{\phi}^{*}(x,x^{\prime})=\operatorname*{arg\,min}_{t\in\mathbb{R}}\Psi_{(x, x^{\prime})}(t)\)_, and can take any value in_ \([f_{-}^{*}(x,x^{\prime}),f_{+}^{*}(x,x^{\prime})]\)_._
3. _There holds_ \(\begin{cases}0\leq f_{-}^{*}(x,x^{\prime})\leq f_{\phi}^{*}(x,x^{\prime}),&(x,x^{\prime})\in\mathcal{X}_{+}^{2},\\ f_{\phi}^{*}(x,x^{\prime})\leq f_{+}^{*}(x,x^{\prime})\leq 0,&(x,x^{\prime}) \in\mathcal{X}_{-}^{2},\\ f_{-}^{*}(x,x^{\prime})\leq 0\leq f_{+}^{*}(x,x^{\prime}),&(x,x^{\prime}) \in\mathcal{X}_{=}^{2}.\end{cases}\)__
4. \(f_{-}^{*}(x,x^{\prime})\leq 1\) _and_ \(f_{+}^{*}(x,x^{\prime})\geq-1\)_._
**Lemma 10**.: _Suppose that \(\psi\) satisfies Assumption 3 and \(\psi^{\prime\prime}(0)>0\). Then there exists a constant \(C_{\psi}>0\) depending only on \(\psi\) such that for all \((x,x^{\prime})\in\mathcal{X}^{2}\)_
\[\left(\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})\right)^{2}\leq C_{\psi} \big{(}\Psi_{(x,x^{\prime})}(0)-\Psi_{(x,x^{\prime})}(f_{\phi}^{*}(x,x^{ \prime}))\big{)}.\]
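For concreteness, a direct computation (ours, for illustration) makes Lemma 10 explicit for the square loss \(\psi(t)=(1-t)^{2}\): there \(f_{\phi}^{*}=(\eta_{+}-\eta_{-})/(\eta_{+}+\eta_{-})\) pointwise, and

\[\Psi_{(x,x^{\prime})}(0)-\Psi_{(x,x^{\prime})}\big{(}f_{\phi}^{*}(x,x^{\prime})\big{)}=(\eta_{+}+\eta_{-})-\frac{4\eta_{+}\eta_{-}}{\eta_{+}+\eta_{-}}=\frac{(\eta_{+}-\eta_{-})^{2}}{\eta_{+}+\eta_{-}}\geq(\eta_{+}-\eta_{-})^{2},\]

using \(\eta_{+}+\eta_{-}\leq 1\), so in this case the lemma holds with \(C_{\psi}=1\).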
By Lemma 9 we also see that \(\phi\) can be truncated at \(t=1\) and that the Bayes \(\phi\)-ranking rule can be defined pointwise and taken to be skew-symmetric. Then we can establish a general calibration inequality for pairwise ranking.
**Proposition 8**.: _Suppose that \(\psi\) satisfies Assumption 3 and \(\psi^{\prime\prime}(0)>0\), and let \(f_{\phi}^{*}\) be the Bayes \(\phi\)-ranking rule. Then there exists a constant \(c_{\psi}>0\) depending only on \(\psi\) such that for all measurable \(f:\mathcal{X}^{2}\to\mathbb{R}\) we have_
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*})\leq c_{\psi}\sqrt{\mathcal{R }^{\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*})}.\]
Proof.: Given \(f:\mathcal{X}^{2}\to\mathbb{R}\), define
\[\mathcal{X}_{f}^{2}:= \big{\{}(x,x^{\prime})\in\mathcal{X}^{2}\mid(x,x^{\prime})\in \mathcal{X}_{+}^{2}\text{ and }f(x,x^{\prime})<0\big{\}}\] \[\quad\bigcup\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}\mid(x,x^{ \prime})\in\mathcal{X}_{-}^{2}\text{ and }f(x,x^{\prime})\geq 0\big{\}}.\]
Then by Cauchy-Schwarz inequality and Lemma 10 we have
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*}) =\int_{\mathcal{X}_{f}^{2}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{ \prime})|dP_{\mathcal{X}}^{2}(x,x^{\prime})\] \[\leq\left(\int_{\mathcal{X}_{f}^{2}}|\eta_{+}(x,x^{\prime})-\eta_ {-}(x,x^{\prime})|^{2}dP_{\mathcal{X}}^{2}(x,x^{\prime})\right)^{\frac{1}{2}}\] \[\leq\left(C_{\psi}\int_{\mathcal{X}_{f}^{2}}\Psi_{(x,x^{\prime})} (0)-\Psi_{(x,x^{\prime})}(f_{\phi}^{*}(x,x^{\prime}))dP_{\mathcal{X}}^{2}(x,x ^{\prime})\right)^{\frac{1}{2}}.\]
For \((x,x^{\prime})\in\mathcal{X}_{f}^{2}\), if \((x,x^{\prime})\in\mathcal{X}_{+}^{2}\), then \(f(x,x^{\prime})<0\) and \(f_{\mathrm{rank}}^{*}(x,x^{\prime})=1\). By Lemma 9, \(\Psi_{(x,x^{\prime})}\) is strictly decreasing on \((-\infty,0]\) and hence \(\Psi_{(x,x^{\prime})}(0)\leq\Psi_{(x,x^{\prime})}(f(x,x^{\prime}))\). Following the same argument, the inequality also holds for \((x,x^{\prime})\in\mathcal{X}_{-}^{2}\). Thus, letting \(c_{\psi}:=C_{\psi}^{\frac{1}{2}}\), we have
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*})\leq\left(C_{\psi}\int_{ \mathcal{X}_{f}^{2}}\Psi_{(x,x^{\prime})}(0)-\Psi_{(x,x^{\prime})}(f_{\phi}^{ *}(x,x^{\prime}))dP_{\mathcal{X}}^{2}(x,x^{\prime})\right)^{\frac{1}{2}}\]
\[\leq c_{\psi}\sqrt{\mathcal{R}^{\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^ {*})}.\]
The proof is then finished.
Furthermore, under the noise condition in Assumption 4, a refined calibration inequality can be established.
**Proposition 9**.: _Assume that \(\psi\) satisfies Assumption 3, \(\psi^{\prime\prime}(0)>0\) and Assumption 4 holds with \(C_{*}>0\) and \(q\in[0,\infty]\). Let \(f_{\phi}^{*}\) be the Bayes \(\phi\)-ranking rule. Then there exists a constant \(c_{\psi,q}>0\) depending only on \(\psi,q\) and \(C_{*}\) such that for all measurable \(f:\mathcal{X}^{2}\to\mathbb{R}\),_
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*})\leq c_{\psi,q}(\mathcal{R}^ {\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*}))^{\frac{q+1}{q+2}}.\]
Proof.: For \(t>0\) we denote
\[U_{t} :=\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}:\big{|}\eta_{+}(x,x^{ \prime})-\eta_{-}(x,x^{\prime})\big{|}\leq t\big{\}},\] \[V_{t} :=\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}:\big{|}\eta_{+}(x,x^{ \prime})-\eta_{-}(x,x^{\prime})\big{|}>t\big{\}}.\]
Recall \(\mathcal{X}_{f}^{2}\) defined in the proof of Proposition 8. By Assumption 4 and the analysis in the preceding proof, we have
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*})\] \[= \int_{\mathcal{X}_{f}^{2}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{ \prime})|dP_{\mathcal{X}}^{2}(x,x^{\prime})\] \[= \int_{\mathcal{X}_{f}^{2}\cap U_{t}}|\eta_{+}(x,x^{\prime})-\eta_ {-}(x,x^{\prime})|dP_{\mathcal{X}}^{2}(x,x^{\prime})+\int_{\mathcal{X}_{f}^{2 }\cap V_{t}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|dP_{\mathcal{X}}^{2 }(x,x^{\prime})\] \[\leq tP_{\mathcal{X}}^{2}(U_{t})+\frac{1}{t}\int_{\mathcal{X}_{f }^{2}\cap V_{t}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|^{2}dP_{\mathcal{ X}}^{2}(x,x^{\prime})\] \[\leq C_{*}t^{q+1}+\frac{C_{\psi}(\mathcal{R}^{\phi}(f)-\mathcal{ R}^{\phi}(f_{\phi}^{*}))}{t}.\]
Then the optimal choice
\[t:=\left(\frac{C_{\psi}(\mathcal{R}^{\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*}))}{(q+1)C_{*}}\right)^{\frac{1}{q+2}}\]
yields that
\[\mathcal{R}(f)-\mathcal{R}(f_{\mathrm{rank}}^{*}) \leq C_{*}\left(\frac{C_{\psi}(\mathcal{R}^{\phi}(f)-\mathcal{R}^ {\phi}(f_{\phi}^{*}))}{(q+1)C_{*}}\right)^{\frac{q+1}{q+2}}\] \[\quad+(q+1)^{\frac{1}{q+2}}C_{*}^{\frac{1}{q+2}}C_{\psi}^{\frac{q +1}{q+2}}(\mathcal{R}^{\phi}(f)-\mathcal{R}^{\phi}(f_{\phi}^{*}))^{\frac{q+1} {q+2}}.\]
By taking \(c_{\psi,q}:=(q+1)^{-\frac{q+1}{q+2}}C_{*}^{\frac{1}{q+2}}C_{\psi}^{\frac{q+1} {q+2}}+(q+1)^{\frac{1}{q+2}}C_{*}^{\frac{1}{q+2}}C_{\psi}^{\frac{q+1}{q+2}}\), we complete the proof.
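As a numerical sanity check (ours, with hypothetical constants; SciPy assumed), the choice of \(t\) above indeed minimizes the bound \(C_{*}t^{q+1}+C_{\psi}E/t\) over \(t>0\), where \(E\) abbreviates the excess \(\phi\)-ranking risk:

```python
# Verify (illustratively) that t = (C_psi * E / ((q+1) * C_star))^(1/(q+2))
# minimizes C_star * t^(q+1) + C_psi * E / t.
from scipy.optimize import minimize_scalar

C_star, C_psi, q, E = 2.0, 3.0, 1.5, 0.01  # hypothetical values
bound = lambda t: C_star * t ** (q + 1) + C_psi * E / t

t_star = (C_psi * E / ((q + 1) * C_star)) ** (1.0 / (q + 2))
res = minimize_scalar(bound, bounds=(1e-6, 10.0), method="bounded")
print(t_star, res.x)  # agree up to solver tolerance (about 0.232)
```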
We see that in the case \(q=0\), in which Assumption 4 holds trivially, Proposition 9 reduces to Proposition 8. When \(q>0\), the calibration inequality is refined compared to the one given in Proposition 8. In particular, when \(q=\infty\), we obtain an inequality similar to that established for \(\phi_{\text{hinge}}\) in Proposition 7.
### Learning Rates of Gaussian Ranking Estimators with Hinge Loss and Square Loss
We are now in a position to prove the oracle inequalities for Gaussian ranking estimators with \(\phi=\phi_{\text{hinge}}\) and \(\phi=\phi_{\text{square}}\), and then derive the learning rates. To this end, it only remains to verify Assumption 1, particularly the variance bound (2.6). For the hinge loss, the noise condition in Assumption 4 can be used to establish the variance bound.
**Lemma 11**.: _If Assumption 4 holds with \(C_{*}>0\) and \(q\in[0,\infty]\), then for all measurable \(f:\mathcal{X}^{2}\to[-1,1]\) we have_
\[\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})^{2}\leq V\big{(}\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})\big{)}^{\tau},\]
_where \(V=2^{\frac{q+2}{q+1}}C_{*}^{\frac{1}{q+1}}q^{\frac{1}{q+1}}(1+q^{-1})\) and \(\tau=\frac{q}{q+1}\)._
Proof.: Recall \(U_{t}\) and \(V_{t}\) defined in the proof of Proposition 9, which are given by
\[U_{t} =\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}:\big{|}\eta_{+}(x,x^{ \prime})-\eta_{-}(x,x^{\prime})\big{|}\leq t\big{\}},\] \[V_{t} =\big{\{}(x,x^{\prime})\in\mathcal{X}^{2}:\big{|}\eta_{+}(x,x^{ \prime})-\eta_{-}(x,x^{\prime})\big{|}>t\big{\}}.\]
Then we have
\[\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})^{2}\] \[=\int_{\mathcal{X}\times\mathcal{Y}}\bigg{(}\int_{\mathcal{X}\times\mathcal{Y}}\phi_{\text{hinge}}(y,y^{\prime},f(x,x^{\prime}))-\phi_{\text{hinge}}(y,y^{\prime},f^{*}_{\text{hinge}}(x,x^{\prime}))dP(x^{\prime},y^{\prime})\bigg{)}^{2}dP(x,y)\] \[\leq\int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times\mathcal{Y}}\bigg{(}\phi_{\text{hinge}}(y,y^{\prime},f(x,x^{\prime}))-\phi_{\text{hinge}}(y,y^{\prime},f^{*}_{\text{hinge}}(x,x^{\prime}))\bigg{)}^{2}dP(x^{\prime},y^{\prime})dP(x,y)\] \[\leq\int_{\mathcal{X}^{2}}|f(x,x^{\prime})-f^{*}_{\text{hinge}}(x,x^{\prime})|^{2}dP^{2}_{\mathcal{X}}(x,x^{\prime})\] \[\leq 2\int_{\mathcal{X}^{2}}|f(x,x^{\prime})-f^{*}_{\text{hinge}}(x,x^{\prime})|dP^{2}_{\mathcal{X}}(x,x^{\prime})\] \[=2\int_{U_{t}\cup V_{t}}|f(x,x^{\prime})-f^{*}_{\text{hinge}}(x,x^{\prime})|dP^{2}_{\mathcal{X}}(x,x^{\prime})\] \[\leq 4P^{2}_{\mathcal{X}}(U_{t})+\frac{2}{t}\int_{V_{t}}|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|\cdot|f(x,x^{\prime})-f^{*}_{\text{hinge}}(x,x^{\prime})|dP^{2}_{\mathcal{X}}(x,x^{\prime})\] \[\leq 4C_{*}t^{q}+2t^{-1}\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})\] \[\leq 2^{\frac{q+2}{q+1}}C_{*}^{\frac{1}{q+1}}q^{\frac{1}{q+1}}(1+q^{-1})\big{(}\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})\big{)}^{\frac{q}{q+1}}.\]
In the last inequality, we choose the optimal
\[t:=\left(\frac{\mathbb{E}(Q\phi_{\text{hinge},f}-Q\phi_{\text{hinge},f^{*}_{\text{hinge}}})}{2qC_{*}}\right)^{\frac{1}{q+1}}\]
to minimize the sum. The proof is then finished.
Now we can prove Theorem 3 which presents the oracle inequality and learning rates for \(\phi=\phi_{\mathrm{hinge}}\).
Proof of Theorem 3.: Combining Proposition 4, Proposition 5 and Lemma 11, we directly apply Theorem 2 with \(L=1,B=2,B_{0}=2,\tau=\frac{q}{q+1},V=2^{\frac{q+2}{q+1}}C_{*}^{\frac{1}{q+1}}q^{\frac{1}{q+1}}(1+q^{-1})\leq 6C_{*}^{\frac{1}{q+1}},p_{1}=p_{2}=p,a_{1}=a_{2}=(C_{\mathcal{X}}^{*}p^{-2d-1}\sigma^{-2\varrho})^{\frac{1}{2p}}\), which yields
\[\lambda\|f_{\mathbf{z}}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi_{ \mathrm{hinge}}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{\mathrm{hinge}}}(f_{ \mathrm{hinge}}^{*})\] \[\leq\frac{2^{3d+4}r^{2d}}{\Gamma(d)}\lambda\sigma^{-2d}+\frac{2^{ \beta/2+4}C_{**}\Gamma(d+\frac{\beta}{2})}{\Gamma(d)}\sigma^{\beta}+36c_{1} \left(\frac{C_{\mathcal{X}}^{*}C_{*}^{\frac{1-p}{q+1}}}{\lambda^{p}p^{2d+1} \sigma^{2\varrho}n}\right)^{\frac{q+1}{q-p+2}}\] \[\quad+\frac{6C_{\mathcal{X}}^{*}(2c_{2}+c_{3}t+c_{4}t)}{\lambda^ {p}p^{2d+1}\sigma^{2\varrho}n}+\left(\frac{11232C_{*}^{\frac{1}{q+1}}t}{n} \right)^{\frac{q+1}{q+2}}+\frac{(2712+3c_{5})t}{n}.\]
Recall the proof of Proposition 3 and notice that the constants \(c_{3}\) and \(c_{4}\) depend on \(p\). When \(p\in(0,1/4]\), one can verify that \(c_{3}\) and \(c_{4}\) are uniformly upper bounded. Taking \(c_{6}=\sup\{c_{i}:1\leq i\leq 5,0<p\leq 1/4\}\), we complete the proof of the oracle inequality.
To derive the learning rate, for all \(t\geq 1\), write
\[\mathcal{R}^{\phi_{\mathrm{hinge}}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{ \mathrm{hinge}}}(f_{\mathrm{hinge}}^{*})\lesssim\lambda\sigma^{-2d}+\sigma^{ \beta}+\left(\frac{t}{\lambda^{p}p^{2d+1}\sigma^{2\varrho}n}\right)^{\frac{q+1 }{q-p+2}}+\left(\frac{t}{n}\right)^{\frac{q+1}{q+2}}.\]
With the choices we specified in the statement of the theorem, we have
\[\lambda<1,\quad\sigma^{2\varrho}n\geq 1,\quad\frac{q+1}{q+2}\leq\frac{q+1}{q-p+2 }\leq 1,\]
which leads to
\[\left(\frac{t}{p^{2d+1}\sigma^{2\varrho}\lambda^{p}n}\right)^{\frac{q+1}{q-p+2}} \leq tp^{-2d-1}\lambda^{-p}(\sigma^{2\varrho}n)^{-\frac{q+1}{q+2}}\] \[\lesssim t\lambda^{-1/\log n}(\sigma^{2\varrho}n)^{-\frac{q+1}{q+2}}\log^{2d+1}n\] \[=te^{b}n^{-\frac{\beta(q+1)}{\beta(q+2)+2\varrho(q+1)}}\log^{2d+1}n,\]
while other terms have convergence rates faster than this. Then for all \(n\geq 2,t\geq 1\), by Proposition 7, with probability at least \(1-(c_{0}+5)\exp(-t)\), we have
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\leq\mathcal{E}^{\phi_{\mathrm{hinge}}}(\pi(f_{ \mathbf{z}}))\lesssim tn^{-\frac{\beta(q+1)}{\beta(q+2)+2\varrho(q+1)}}\log^{2 d+1}n.\]
Thus we complete the proof.
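As an illustrative aside (hypothetical parameter values, ours), the rate exponent of Theorem 3 is easy to tabulate; it increases in \(q\) and tends to \(\beta/(\beta+2\varrho)\) as \(q\to\infty\):

```python
# Evaluate the hinge-loss rate exponent beta(q+1) / (beta(q+2) + 2 rho (q+1))
# from Theorem 3 for a few hypothetical values of q.
def hinge_rate_exponent(beta, q, rho):
    return beta * (q + 1) / (beta * (q + 2) + 2 * rho * (q + 1))

for q in (0.0, 1.0, 10.0, 1e6):  # large q approaches beta / (beta + 2 * rho)
    print(q, hinge_rate_exponent(beta=2.0, q=q, rho=1.0))
```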
For \(\phi=\phi_{\mathrm{square}}\), the variance bound can be proved directly.
**Lemma 12**.: _For all measurable \(f:\mathcal{X}^{2}\to[-1,1]\) we have_
\[\mathbb{E}(Q\phi_{\mathrm{square},f}-Q\phi_{\mathrm{square},f^{*}_{\mathrm{square}}})^{2}\leq 16\mathbb{E}(Q\phi_{\mathrm{square},f}-Q\phi_{\mathrm{square},f^{*}_{\mathrm{square}}}).\]
Proof.: Recall the calculation we did in (5.7), we have
\[\mathbb{E}(Q\phi_{\mathrm{square}f}-Q\phi_{\mathrm{square}f^{*}_{ \mathrm{square}}})^{2}\] \[= \int_{\mathcal{X}\times\mathcal{Y}}\bigg{(}\int_{\mathcal{X} \times\mathcal{Y}}\phi_{\mathrm{square}}(y,y^{\prime},f(x,x^{\prime}))-\phi_{ \mathrm{square}}(y,y^{\prime},f^{*}_{\mathrm{square}}(x,x^{\prime}))dP(x^{ \prime},y^{\prime})\bigg{)}^{2}dP(x,y)\] \[\leq \int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times \mathcal{Y}}\bigg{(}\phi_{\mathrm{square}}(y,y^{\prime},f(x,x^{\prime}))-\phi_ {\mathrm{square}}(y,y^{\prime},f^{*}_{\mathrm{square}}(x,x^{\prime}))\bigg{)}^ {2}dP(x^{\prime},y^{\prime})dP(x,y)\] \[= \mathbb{E}_{P^{2}_{\mathcal{X}}}\big{[}\eta_{+}(2-f-f^{*}_{ \mathrm{square}})^{2}(f^{*}_{\mathrm{square}}-f)^{2}+\eta_{-}(2+f+f^{*}_{ \mathrm{square}})^{2}(f^{*}_{\mathrm{square}}-f)^{2}\big{]}\] \[\leq 16\mathbb{E}_{P^{2}_{\mathcal{X}}}\big{[}(\eta_{+}+\eta_{-})( f^{*}_{\mathrm{square}}-f)^{2}\big{]}\] \[= 16(\mathcal{R}^{\phi_{\mathrm{square}}}(f)-\mathcal{R}^{\phi_{ \mathrm{square}}}(f^{*}_{\mathrm{square}}))\] \[= 16\mathbb{E}(Q\phi_{\mathrm{square}f}-Q\phi_{\mathrm{square}f^{*} _{\mathrm{square}}}).\]
The proof is then finished.
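The elementary algebra behind the passage to \(\eta_{\pm}\) in the display above can be checked symbolically. The following sketch (illustrative only) verifies that, with \(z=\operatorname{sgn}(y-y^{\prime})=\pm 1\), the squared loss difference equals \((2-z(f+f^{*}_{\mathrm{square}}))^{2}(f^{*}_{\mathrm{square}}-f)^{2}\), and that the prefactor is at most \(16\) whenever \(f,f^{*}_{\mathrm{square}}\) take values in \([-1,1]\):

```python
# Symbolic/numeric sanity check of the algebra used above (illustrative only).
import numpy as np
import sympy as sp

f, fs = sp.symbols('f fs', real=True)                 # fs plays the role of f*_square
for z in (1, -1):                                     # z = sgn(y - y')
    diff = (1 - z * f) ** 2 - (1 - z * fs) ** 2       # phi_square loss difference
    assert sp.simplify(diff**2 - (2 - z * (f + fs)) ** 2 * (fs - f) ** 2) == 0

g = np.linspace(-1, 1, 201)
F, FS = np.meshgrid(g, g)
assert ((2 - F - FS) ** 2).max() <= 16 and ((2 + F + FS) ** 2).max() <= 16
```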
At the end of this subsection, we prove Theorem 4 which presents the oracle inequality and learning rates for \(\phi=\phi_{\mathrm{square}}\).
Proof of Theorem 4.: Combining Proposition 4, Proposition 6 and Lemma 12, we can apply Theorem 2 with \(L=4,B=4,B_{0}=2^{2s+2},\tau=1,V=16,p_{1}=p_{2}=p,a_{1}=a_{2}=(C^{*}_{\mathcal{ X}}p^{-2d-1}\sigma^{-2\varrho})^{\frac{1}{2p}}\), and the \(c_{6}\) we introduced in the proof of Theorem 3, which yields
\[\lambda\|f_{\mathbf{z}}\|_{K^{\sigma}}^{2}+\mathcal{R}^{\phi_{ \mathrm{square}}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{\mathrm{square}}}(f^{ *}_{\mathrm{square}})\] \[\leq \frac{2^{2s+3}\|f^{*}_{\mathrm{square}}\|_{\mathcal{L}_{2}(\mathbb{ R}^{2d})}^{2}}{\pi^{d}}\lambda\sigma^{-2d}+2^{3-\alpha}\left(\frac{\Gamma \left(d+\frac{\alpha}{2}\right)}{\Gamma(d)}\right)^{2}|f^{*}_{\mathrm{square} }|_{\mathcal{B}^{\alpha}_{2,\infty}(P^{2}_{\mathcal{X}})}^{2}\sigma^{2\alpha}\] \[+\frac{C^{*}_{\mathcal{X}}(96c_{6}+72c_{6}t)}{\lambda^{p}p^{2d+ 1}\sigma^{2\varrho}n}+\frac{(33552+1824\cdot 2^{2s}+3c_{6})t}{n}.\]
To derive the learning rate, for all \(t\geq 1\), write
\[\mathcal{R}^{\phi_{\mathrm{square}}}(\pi(f_{\mathbf{z}}))-\mathcal{R}^{\phi_{ \mathrm{square}}}(f^{*}_{\mathrm{square}})\lesssim\lambda\sigma^{-2d}+\sigma^{2 \alpha}+\frac{t}{\lambda^{p}p^{2d+1}\sigma^{2\varrho}n}+\frac{t}{n}.\]
With the choices we specified in the statement of the theorem, we have
\[\lambda<1,\quad\sigma^{2\varrho}n\geq 1,\]
which leads to
\[\frac{t}{p^{2d+1}\sigma^{2\varrho}\lambda^{p}n}\lesssim t\lambda^{-1/\log n}( \sigma^{2\varrho}n)^{-1}\log^{2d+1}n=te^{b}n^{-\frac{\alpha}{\alpha+\varrho}} \log^{2d+1}n,\]
while other terms have convergence rates faster than this. Then for all \(n\geq 2,t\geq 1\), by Proposition 8, with probability at least \(1-(c_{0}+5)\exp(-t)\), we have
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\lesssim\sqrt{\mathcal{E}^{\phi_{\mathrm{ square}}}(\pi(f_{\mathbf{z}}))}\lesssim\sqrt{t}n^{-\frac{\alpha}{2(\alpha+ \varrho)}}\log^{d+\frac{1}{2}}n.\]
Furthermore, if \(P\) additionally satisfies Assumption 4, by Proposition 9, with probability at least \(1-(c_{0}+5)\exp(-t)\) we have
\[\mathcal{E}(\pi(f_{\mathbf{z}}))\lesssim(\mathcal{E}^{\phi_{\text{square}}}(\pi(f _{\mathbf{z}})))^{\frac{q+1}{q+2}}\lesssim t^{\frac{q+1}{q+2}}n^{-\frac{(q+1) \alpha}{(q+2)(\alpha+\varrho)}}\log^{\frac{(q+1)(2d+1)}{q+2}}n.\]
The proof is then finished.
### Comparisons of Noise Conditions
In this subsection, we make some comparisons of different noise conditions. Let \(\mathbf{TN(q)}\) denote the noise condition in Assumption 4. [12] and [13] propose another two noise conditions in the context of bipartite ranking, namely the global low noise conditions \(\mathbf{LN(q)}\) and \(\mathbf{NA(q)}\), which can be reformulated as the following conditions in the pairwise ranking setting.
**Assumption 7** (**Global low noise condition \(\mathbf{LN(q)}\)**).: _There exist constants \(C_{\mathrm{LN}}>0\) and \(q\in[0,\infty]\) such that for all \(x\in\mathcal{X}\),_
\[\mathbb{E}_{X^{\prime}}[|\eta_{+}(x,X^{\prime})-\eta_{-}(x,X^{\prime})|^{-q}] \leq C_{\mathrm{LN}}.\]
**Assumption 8** (**Global low noise condition \(\mathbf{NA(q)}\)**).: _There exist constants \(C_{\mathrm{NA}}>0\) and \(q\in[0,\infty]\) such that for all \(x\in\mathcal{X}\) and \(t>0\),_
\[P_{\mathcal{X}}(\{x^{\prime}\in\mathcal{X}:|\eta_{+}(x,x^{\prime})-\eta_{-}(x, x^{\prime})|\leq t\})\leq C_{\mathrm{NA}}t^{q}.\]
Similarly to the proof of Proposition 3 of [13], which describes the connection between \(\mathbf{LN(q)}\) and \(\mathbf{NA(q)}\), we can show that \(\mathbf{LN(q)}\) implies \(\mathbf{NA(q)}\) and that \(\mathbf{NA(q)}\) implies \(\mathbf{LN(q^{\prime})}\) for all \(q^{\prime}<q\), so \(\mathbf{NA(q)}\) can be considered a slightly weaker condition.
Obviously, \(\mathbf{NA(q)}\) implies \(\mathbf{TN(q)}\), and here is an example showing that this implication is sharp. Let \(P_{\mathcal{X}}\) be the uniform distribution on \(\mathcal{X}=[0,1]\) and let the label be \(Y=X+\epsilon\), where \(\epsilon\) is random noise uniformly distributed on \([0,1]\). The conditional distribution \(P_{Y|X}\) together with \(P_{\mathcal{X}}\) generate the distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}=[0,1]\times[0,2]\). A simple calculation shows that \(|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{\prime})|=2|x-x^{\prime}|-|x-x^{\prime }|^{2}\) and hence the set \(\{(x,x^{\prime})\in\mathcal{X}^{2}:|\eta_{+}(x,x^{\prime})-\eta_{-}(x,x^{ \prime})|\leq t\}\) is a belt domain along the diagonal with width and probability measure of \(O(t)\) asymptotically when \(t\to 0\). Thus, \(P\) satisfies \(\mathbf{NA(1)}\) and \(\mathbf{TN(1)}\) but does not satisfy \(\mathbf{TN(q)}\) for any \(q>1\). This is also an example in which \(P\) does not satisfy \(\mathbf{LN(1)}\) while it satisfies \(\mathbf{LN(q)}\) for all \(q<1\).
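The claimed formula for \(|\eta_{+}-\eta_{-}|\) in this example is easy to confirm by simulation. The following sketch (illustrative only) estimates \(\eta_{\pm}(x,x^{\prime})\) by Monte Carlo for a few pairs \((x,x^{\prime})\):

```python
# Monte Carlo check (illustrative) of |eta_+ - eta_-| = 2|x-x'| - |x-x'|^2
# for Y = X + eps with eps uniform on [0,1].
import numpy as np

rng = np.random.default_rng(0)
for x, xp in [(0.1, 0.7), (0.25, 0.3), (0.0, 1.0)]:
    eps, epsp = rng.random(10**6), rng.random(10**6)
    eta_plus = np.mean(x + eps > xp + epsp)    # P(Y > Y' | X=x, X'=xp)
    eta_minus = np.mean(x + eps < xp + epsp)   # P(Y < Y' | X=x, X'=xp)
    d = abs(x - xp)
    assert abs(abs(eta_plus - eta_minus) - (2 * d - d**2)) < 5e-3
```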
On the other hand, \(\mathbf{TN(q)}\) generally cannot imply \(\mathbf{NA(q^{\prime})}\) for any \(q^{\prime}\in(0,q]\), because the global low noise condition is stated at every \(x\in\mathcal{X}\), while \(\mathbf{TN(q)}\) only requires a fast decay of the product probability measure. One can easily construct a decreasing sequence of sets on \(\mathcal{X}^{2}\) whose measures decay fast while the measure of their cross-sections at some point \(x\in\mathcal{X}\) remains bounded below by a positive constant.
Having noticed that \(\mathbf{LN(q)}\) and \(\mathbf{NA(q)}\) are strictly stronger than \(\mathbf{TN(q)}\), one may expect that the global low noise conditions yield a larger variance bound exponent \(\tau\) than the \(\tau=\frac{q}{q+1}\) stated in Lemma 11 under \(\mathbf{TN(q)}\). In fact, we have the following result.
**Proposition 10**.: _The following assertions hold._
1. _If_ \(P\) _satisfies_ _LN(q)_ _with_ \(q\in[0,1]\)_, then for all measurable_ \(f:\mathcal{X}^{2}\to[-1,1]\) _we have_ \[\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi_{\mathrm{hingef}_{\mathrm{hinge}}^{*}}) ^{2}\leq V\big{(}\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi_{\mathrm{hingef}_{ \mathrm{hinge}}^{*}})\big{)}^{\tau},\] _where_ \(V=4C_{\mathrm{LN}}\) _and_ \(\tau=q\)_._
2. _If_ \(P\) _satisfies_ _NA(q)_ _with_ \(q\in[0,\infty]\)_, then for all measurable_ \(f:\mathcal{X}^{2}\to[-1,1]\) _we have_ \[\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi_{\mathrm{hinge}f_{\mathrm{hinge}}^{*} })^{2}\leq V\big{(}\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi_{\mathrm{hinge}f_{ \mathrm{hinge}}^{*}})\big{)}^{\tau},\] _where_ \(V=2^{\frac{2q+3}{2q+1}}C_{\mathrm{NA}}^{\frac{2}{2q+1}}q^{\frac{1}{2q+1}}(2 +q^{-1})\) _and_ \(\tau=\frac{2q}{2q+1}\)_._
3. _The best variance exponent_ \(\tau\) _we can derive from the global low noise conditions can be listed below:_
\begin{tabular}{|c|c|c|c|} \hline \(\tau\) & \(q\in[0,1/2]\) & \(q\in(1/2,1]\) & \(q\in(1,\infty]\) \\ \hline **LN(q)** & \(2q/(2q+1)\) & \(q\) & \(1\) \\ \hline **NA(q)** & \(2q/(2q+1)\) & \((1/2,q)\) & \(1\) \\ \hline \end{tabular}
Proof.: If **LN(q)** holds with \(q\in[0,1]\), by Cauchy-Schwarz inequality and Jensen's inequality we have
\[\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi_{\mathrm{hinge}f_{\mathrm{ hinge}}^{*}})^{2}\] \[\leq\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X^{\prime})-f_{ \mathrm{hinge}}^{*}(X,X^{\prime})|)^{2}\] \[\leq C_{\mathrm{LN}}\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X ^{\prime})-f_{\mathrm{hinge}}^{*}(X,X^{\prime})|^{2}\cdot|\eta_{+}(X,X^{ \prime})-\eta_{-}(X,X^{\prime})|^{q})\] \[\leq 4C_{\mathrm{LN}}\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X ^{\prime})-f_{\mathrm{hinge}}^{*}(X,X^{\prime})|^{q}\cdot|\eta_{+}(X,X^{ \prime})-\eta_{-}(X,X^{\prime})|^{q})\] \[\leq 4C_{\mathrm{LN}}(\mathbb{E}_{X}\mathbb{E}_{X^{\prime}}|f(X,X ^{\prime})-f_{\mathrm{hinge}}^{*}(X,X^{\prime})|\cdot|\eta_{+}(X,X^{\prime})- \eta_{-}(X,X^{\prime})|)^{q}\] \[=4C_{\mathrm{LN}}\big{(}\mathbb{E}(Q\phi_{\mathrm{hingef}}-Q\phi _{\mathrm{hinge}f_{\mathrm{hinge}}^{*}})\big{)}^{q}.\]
If **NA(q)** holds with \(q\in[0,\infty]\), we have
\[\mathbb{E}(Q\phi_{\mathrm{hinge}f}-Q\phi_{\mathrm{hinge}f_{ \mathrm{hinge}}^{*}})^{2}\] \[\leq\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X^{\prime})-f_{ \mathrm{hinge}}^{*}(X,X^{\prime})|)^{2}\] \[\leq 2\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X^{\prime})-f_{ \mathrm{hinge}}^{*}(X,X^{\prime})|\cdot\mathbb{I}_{\{|\eta_{+}(X,X^{\prime})- \eta_{-}(X,X^{\prime})|\leq t\}})^{2}\] \[\quad+2\mathbb{E}_{X}(\mathbb{E}_{X^{\prime}}|f(X,X^{\prime})-f_{ \mathrm{hinge}}^{*}(X,X^{\prime})|\cdot\mathbb{I}_{\{|\eta_{+}(X,X^{\prime})- \eta_{-}(X,X^{\prime})|>t\}})^{2}\] \[\leq 8C_{\mathrm{NA}}^{2}t^{2q}+2\mathbb{E}_{X}\mathbb{E}_{X^{ \prime}}|f(X,X^{\prime})-f_{\mathrm{hinge}}^{*}(X,X^{\prime})|^{2}\cdot \mathbb{I}_{\{|\eta_{+}(X,X^{\prime})-\eta_{-}(X,X^{\prime})|>t\}}\] \[\leq 8C_{\mathrm{NA}}^{2}t^{2q}+\frac{4}{t}\mathbb{E}_{X}\mathbb{E }_{X^{\prime}}|f(X,X^{\prime})-f_{\mathrm{hinge}}^{*}(X,X^{\prime})|\cdot|\eta_ {+}(X,X^{\prime})-\eta_{-}(X,X^{\prime})|\] \[\leq 8C_{\mathrm{NA}}^{2}t^{2q}+\frac{4}{t}\mathbb{E}(Q\phi_{ \mathrm{hinge}f}-Q\phi_{\mathrm{hinge}f_{\mathrm{hinge}}^{*}})\] \[\leq 2^{\frac{2q+3}{2q+1}}C_{\mathrm{NA}}^{\frac{2}{2q+1}}q^{\frac{1}{2q+1}}(2 +q^{-1})\big{(}\mathbb{E}(Q\phi_{\mathrm{hinge}f}-Q\phi_{\mathrm{hinge}f_{ \mathrm{hinge}}^{*}})\big{)}^{\frac{2q}{2q+1}}.\]
In the last inequality, we choose the optimal
\[t:=\Bigg{(}\frac{\mathbb{E}(Q\phi_{\mathrm{hinge}f}-Q\phi_{ \mathrm{hinge}f_{\mathrm{hinge}}^{*}})}{4qC_{\mathrm{NA}}^{2}}\Bigg{)}^{\frac{ 1}{2q+1}}\]
to minimize the sum.
For the best variance exponent, we use the connection between **LN(\(q\))** and **NA(\(q\))**. If \(q\in[0,1/2]\), **LN(\(q\))** implies **NA(\(q\))** and \(2q/(2q+1)\geq q\), hence \(\tau=2q/(2q+1)\). If \(q\in(1/2,1]\), **NA(\(q\))** implies **LN(\(q^{\prime}\))** for all \(q^{\prime}<q\), hence \(\tau\) can be chosen arbitrarily close to \(q\) with the constant \(V\) depending on \(\tau\). If \(q\in(1,\infty]\), each of **LN(\(q\))** and **NA(\(q\))** implies **LN(\(1\))**, hence \(\tau=1\). Thus we complete the proof.
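As with the hinge case earlier, the constant produced by the optimization in part 2 can be verified numerically. A minimal sketch (with arbitrary test values for \(C_{\mathrm{NA}}\), \(q\) and the excess \(E\)):

```python
# Numerical check (illustrative) that t_opt minimizes 8*C^2*t^(2q) + 4*E/t and
# yields V = 2^((2q+3)/(2q+1)) * C^(2/(2q+1)) * q^(1/(2q+1)) * (2 + 1/q).
import numpy as np

C, q, E = 0.9, 1.3, 0.02
t_opt = (E / (4 * q * C**2)) ** (1 / (2 * q + 1))
F = lambda t: 8 * C**2 * t ** (2 * q) + 4 * E / t
V = 2 ** ((2 * q + 3) / (2 * q + 1)) * C ** (2 / (2 * q + 1)) \
    * q ** (1 / (2 * q + 1)) * (2 + 1 / q)

assert np.isclose(F(t_opt), V * E ** (2 * q / (2 * q + 1)))
```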
From the discussion above, we see that the global low noise conditions always yield a sharper exponent \(\tau\) under the same noise exponent \(q\) than the noise condition in Assumption 4, and hence faster learning rates. One can examine these noise conditions carefully in a concrete instance.
|
2305.05805 | Quantum Fokker-Planck structure of the Lindblad equation | We show that the quantum Fokker-Planck equation, obtained by a canonical
quantization of its classical version, can be transformed into an equation of
the Lindblad form. This result allows us to conclude that the quantum
Fokker-Planck equation preserves the trace and positivity of the density
operator. The Fokker-Planck structure gives an explicit expression for the quantum
equivalent of the probability current as well as the quantum equivalent of
detailed balance. We also propose an expression for the rate of entropy production
and show that it does not vanish for a closed system except in equilibrium. | Mário J. de Oliveira | 2023-05-09T23:37:20Z | http://arxiv.org/abs/2305.05805v1 | # Quantum Fokker-Planck structure of the Lindblad equation
###### Abstract
We show that the quantum Fokker-Planck equation, obtained by a canonical quantization of its classical version, can be transformed into an equation of the Lindblad form. This result allows us to conclude that the quantum Fokker-Planck equation preserves the trace and positivity of the density operator. The Fokker-Planck structure gives explicit expression for the quantum equivalence of probability current as well as the quantum equivalence of detailed balance. We also propose expression for the rate of entropy production and show that it does not vanish for a closed system except in equilibrium.
## I Introduction
The dynamics of quantum open systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] is usually formulated by considering a system and its environment. The equations of motion of the system are then derived by summing out the degrees of freedom of the environment. The derivation cannot be accomplished without assuming an approximation concerning the interactions of the system with its environment. Usually, the environment is considered a thermal system, which means that the interaction with the system is regarded as being of a stochastic nature. Starting from the quantum Liouville equation for the total system, which is the system proper and its environment, the resulting evolution equation is a quantum Liouville equation supplemented by a dissipation term \(D\)[14; 15; 16],
\[\frac{d\rho}{dt}=\frac{1}{i\hbar}[H,\rho]+D, \tag{1}\]
where \(H\) is the Hamiltonian of the system.
Taking into account that the variables of the environment act as stochastic variables, the reduced equation (1) describes a quantum Markov process, which means that the dynamics is described by a quantum dynamical semi-group. The most general form of the generator \(D\) that has the semi-group property and that preserves the trace and positivity of the density operator is of the Lindblad form [14; 15; 16],
\[D=\sum_{jk}a_{jk}(2A_{j}\rho A_{k}^{\dagger}-A_{k}^{\dagger}A_{j}\rho-\rho A_{ k}^{\dagger}A_{j}), \tag{2}\]
where \(a_{jk}\) are the entries of a Hermitian and positive matrix. The equation (1) with \(D\) in the form (2) is called the Lindblad equation.
An alternative approach to reach an equation for quantum open system is to consider the classical Fokker-Planck-Kramers equation [17; 18; 19; 20], which is known to describe open classical systems, and carry out its canonical quantization [21; 22; 23]. The resulting quantum Fokker-Planck (FP) equation that gives the time evolution of the density operator \(\rho\) is given by
\[i\hbar\frac{d\rho}{dt}=[H,\rho]-\frac{1}{2}\sum_{j}[x_{j},J_{j}+J_{j}^{ \dagger}], \tag{3}\]
where \(J_{j}\) is the quantum version of the probability current, given by
\[J_{j}=-\gamma_{j}(\rho g_{j}+\frac{m}{i\hbar\beta_{j}}[x_{j},\rho]), \tag{4}\]
and
\[g_{j}=-\frac{m}{i\hbar\beta_{j}}(e^{\beta_{j}H}x_{j}e^{-\beta_{j}H}-x_{j}). \tag{5}\]
As happens with its classical version, this equation describes the contact of a system of interacting particles of mass \(m\) with thermal reservoirs at temperatures inversely proportional to \(\beta_{j}\), and \(\gamma_{j}\) measures the strength of the interaction with the reservoirs. The positions and momenta of the particles are denoted by \(x_{j}\) and \(p_{j}\). The first and second terms of the current \(J_{j}\) correspond to the dissipation and the fluctuation, respectively. We remark that \(g_{j}\), which is related to dissipation, is not in general proportional to the momentum \(p_{j}\) as in the classical dissipation but becomes proportional to \(p_{j}\) in the classical limit.
Equations similar to (3) were considered by Dekker [5] and by Caldeira and Leggett [8; 15]. However, there is a difference in that the dissipation term of their equations is proportional to the momentum, whereas in equation (3) the dissipation term is proportional to a general term \(g_{j}\), which depends on the Hamiltonian of the system, as can be seen in equation (5). This form of \(g_{j}\) is crucial if we wish to describe thermodynamic equilibrium, or, in other words, if we wish the system to thermalize in the long run.
The quantum FP equation (3) can be written in a more symmetric form in terms of annihilation and creation operator \(a_{j}\) and \(a_{j}^{\dagger}\) in which case it reads
\[i\hbar\frac{d\rho}{dt}=[H,\rho]-\sum_{j}([a_{j},J_{j}^{\dagger}]+[a_{j}^{ \dagger},J_{j}]), \tag{6}\]
where
\[J_{j}=i\gamma_{j}(g_{j}\rho+\frac{1}{\beta_{j}}[a_{j},\rho]), \tag{7}\]
and
\[g_{j}=\frac{1}{\beta_{j}}(e^{-\beta_{j}H}a_{j}e^{\beta_{j}H}-a_{j}). \tag{8}\]
It is worth writing \(J_{j}\) in the form
\[J_{j}=\frac{i\gamma_{j}}{\beta_{j}}(e^{-\beta_{j}H}a_{j}e^{\beta_{j}H}\rho-\rho a_{j }). \tag{9}\]
The quantum FP equation in either form (3) or (6) is understood as describing a quantum Markov process, and in this sense it should be of the type given by equations (1) and (2). The first term of the quantum FP equation, which corresponds to a unitary transformation, is indeed the same. As to the second, non-unitary term, it is not written in the Lindblad form given by (2). The main purpose of the present paper is to show that the second term of the quantum FP equation can be transformed into the Lindblad form, thus showing that the quantum FP equation preserves the trace and positivity of the density operator \(\rho\).
## II Quantum FP structure
We consider a Hilbert vector space and choose a basis consisting of the eigenvectors of some Hermitian operator \(L\). The eigenvectors associated with the eigenvalue \(\lambda_{i}\) of \(L\) are denoted by \(\phi_{i}\), that is, \(L\phi_{i}=\lambda_{i}\phi_{i}\). Taking into account that \(L\) is Hermitian, its left eigenvectors are the adjoint vectors \(\phi_{i}^{\dagger}\). The operators acting on the vectors of the Hilbert space, such as \(L\) itself, can also be understood as belonging to another vector space, the Liouville space, whose complete basis consists of the operators \(A_{ij}=\phi_{j}\phi_{i}^{\dagger}\).
The general expression of a non-unitary generator \(D\) which preserves the trace of \(\rho\) and its complete positivity for any initial condition is of the Lindblad form [14],
\[D=\sum_{ij,k\ell}a_{ij,k\ell}(2A_{ij}\rho A_{k\ell}^{\dagger}-A_{k\ell}^{ \dagger}A_{ij}\rho-\rho A_{k\ell}^{\dagger}A_{ij}), \tag{10}\]
where \(a_{ij,k\ell}\) are the entries of a Hermitian and positive matrix. Our point of departure is the following expression of the Lindblad type
\[D=\sum_{ij,k\ell}b_{ij,k\ell}(2A_{ij}\rho A_{k\ell}^{\dagger}-A_{k\ell}^{ \dagger}A_{ij}\rho-\rho A_{k\ell}^{\dagger}A_{ij})\]
\[+\sum_{ij,k\ell}c_{ij,k\ell}(2A_{ij}^{\dagger}\rho A_{k\ell}-A_{k\ell}A_{ij}^ {\dagger}\rho-\rho A_{k\ell}A_{ij}^{\dagger}), \tag{11}\]
where \(b_{ij,k\ell}\) and \(c_{ij,k\ell}\) are the entries of Hermitian and positive matrices, and are nonzero only when \(i\leq j\) and \(k\leq\ell\).
Defining the operators \(B_{ij}\) by
\[\alpha_{ij}B_{ij}=\sum_{k\ell}b_{ij,k\ell}^{*}A_{k\ell}, \tag{12}\]
where \(\alpha_{ij}\geq 0\), the first summation can be written in the form
\[\sum_{ij}\alpha_{ij}(A_{ij}\rho B_{ij}^{\dagger}+B_{ij}\rho A_{ij}^{\dagger}- A_{ij}^{\dagger}B_{ij}\rho-\rho B_{ij}^{\dagger}A_{ij}). \tag{13}\]
In an analogous manner we defined the operators \(C_{ij}\) by
\[\alpha_{ij}C_{ij}=\sum_{k\ell}c_{ij,k\ell}A_{k\ell}, \tag{14}\]
and the second summation becomes
\[\sum_{ij}\alpha_{ij}(A_{ij}^{\dagger}\rho C_{ij}+C_{ij}^{\dagger}\rho A_{ij}- A_{ij}C_{ij}^{\dagger}\rho-\rho C_{ij}A_{ij}^{\dagger}). \tag{15}\]
Summing up these two terms, we reach the expression
\[D=\sum_{ij}\alpha_{ij}\{[A_{ij},\rho B_{ij}^{\dagger}-C_{ij}^{\dagger}\rho]-[A_ {ij}^{\dagger},B_{ij}\rho-\rho C_{ij}]\}, \tag{16}\]
and the Lindblad equation acquires the FP structure
\[i\hbar\frac{d\rho}{dt}=[H,\rho]-\sum_{ij}\{[A_{ij},J_{ij}^{\dagger}]+[A_{ij}^ {\dagger},J_{ij}]\}, \tag{17}\]
where \(J_{ij}\) is given by
\[J_{ij}=i\hbar\alpha_{ij}(B_{ij}\rho-\rho C_{ij}). \tag{18}\]
The expression (17) for the Lindblad equation is particularly meaningful because \(J_{ij}\) represents the quantum version of the probability current. Suppose that the right-hand side of (17) vanishes for a density operator \(\rho_{0}\), in which case the system is said to be in a stationary state. If, in addition, the currents \(J_{ij}(\rho_{0})\) vanish, in which case \([H,\rho_{0}]\) also vanishes, then the system will be in thermodynamic equilibrium. The vanishing of the currents corresponds to the condition of detailed balance since each term of the summation in (17) vanishes. The condition of detailed balance is represented by
\[B_{ij}\rho_{0}=\rho_{0}C_{ij}, \tag{19}\]
for some \(\rho_{0}\).
## III Contact with thermal reservoirs
There are various possibilities of choosing the operators \(B_{ij}\) and \(C_{ij}\). The only restriction is that the coefficients of the expansions (12) and (14) define Hermitian and positive matrices. The choices will depend on the type of physical conditions one wants to describe. Here we choose these operators with the purpose of describing a system in contact with several heat reservoir at distinct temperatures. When the temperatures are all the same, then in the stationary state the system will be in thermodynamic equilibrium in which case the density operator is of the Gibbs form
\[\rho_{0}=\frac{1}{Z}e^{-\beta H}, \tag{20}\]
where \(\beta\) is inversely proportional to the temperature of the reservoirs.
We choose \(C_{ij}\) and \(B_{ij}\) so that
\[B_{ij}e^{-\beta_{ij}H}=e^{-\beta_{ij}H}C_{ij}, \tag{21}\]
where \(\beta_{ij}\) are constants and \(H\) is the Hamiltonian. When all \(\beta_{ij}\) have the same value, independent of \(i\) and \(j\), condition (21) guarantees that the detailed balance condition (19) is fulfilled and the system will be found in thermodynamic equilibrium. In other words, the system will thermalize in the long run.
A simplification arises by choosing \(C_{ij}=A_{ij}\) so that \(B_{ij}\) is given by
\[B_{ij}=e^{-\beta_{ij}H}A_{ij}e^{\beta_{ij}H}, \tag{22}\]
and the current (18) becomes
\[J_{ij}=i\hbar\alpha_{ij}(e^{-\beta_{ij}H}A_{ij}e^{\beta_{ij}H}\rho-\rho A_{ij}), \tag{23}\]
which has the form (9) as desired.
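Before proceeding, it is instructive to see this construction at work in the smallest non-trivial example. The following sketch (a two-level illustration with assumed numerical values and the assumed choice \(A=|0\rangle\langle 1|\); it is not taken from the text) verifies that, with \(C=A\) and \(B=e^{-\beta H}Ae^{\beta H}\), the current (23) vanishes at the Gibbs state and the generator (16) is traceless:

```python
# Illustrative qubit check (assumed example) of detailed balance and trace
# preservation for the FP-structured generator.
import numpy as np
from scipy.linalg import expm

beta, alpha = 0.7, 0.3
H = np.diag([0.0, 1.5])                      # an arbitrary qubit Hamiltonian
A = np.array([[0.0, 1.0], [0.0, 0.0]])       # lowering operator |0><1|
B = expm(-beta * H) @ A @ expm(beta * H)     # the choice (22)
rho0 = expm(-beta * H) / np.trace(expm(-beta * H))   # Gibbs state (20)

J = lambda rho: 1j * alpha * (B @ rho - rho @ A)     # current (18)/(23), hbar = 1
comm = lambda X, Y: X @ Y - Y @ X
D = lambda rho: alpha * (comm(A, rho @ B.conj().T - A.conj().T @ rho)
                         - comm(A.conj().T, B @ rho - rho @ A))  # generator (16)

assert np.allclose(J(rho0), 0)               # detailed balance (19) at rho_0
rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])           # a test state
assert abs(np.trace(D(rho))) < 1e-12         # the generator is traceless
```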
It remains to show that the coefficients of the expansion of \(B_{ij}\) in terms of \(A_{ij}\) are entries of a Hermitian and positive matrix. We recall that \(A_{ij}=\phi_{j}\phi_{i}^{\dagger}\) where \(\phi_{i}\) are the eigenvectors of \(L\).
Let us denote by \(\chi_{j}\) and \(E_{j}\) the eigenvectors and eigenvalues of the Hamiltonian, that is, \(H\chi_{i}=E_{i}\chi_{i}\). The operators \(X_{ij}=\chi_{j}\chi_{i}^{\dagger}\) can be considered a complete basis of the Liouville space. The change from this basis to the basis used above is given by the unitary transformation
\[A_{ij}=\sum_{k\ell}U_{ij,k\ell}X_{k\ell}. \tag{24}\]
Replacing this expression in (22), we find
\[B_{ij}=\sum_{k\ell}U_{ij,k\ell}e^{-\beta_{ij}(E_{k}-E_{\ell})}X_{k\ell}. \tag{25}\]
Using
\[X_{ij}=\sum_{k\ell}U_{ij,k\ell}^{\dagger}A_{k\ell}, \tag{26}\]
it can be written as
\[B_{ij}=\sum_{k\ell}G_{ij,k\ell}A_{k\ell}, \tag{27}\]
where
\[G_{ij,k\ell}=\sum_{mn}U_{ij,mn}e^{-\beta_{ij}(E_{m}-E_{n})}U_{mn,k\ell}^{ \dagger}. \tag{28}\]
From this expression it follows that \(G^{*}_{k\ell,ij}=G_{ij,k\ell}\) and that the matrix with elements \(G_{ij,k\ell}\) is positive, finishing our demonstration.
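This last claim is easy to illustrate numerically. A minimal sketch (the random unitary and the vector of "energy differences" are assumptions of the illustration, not data from the text):

```python
# Numerical illustration: G = U diag(exp(-beta * E)) U^dagger is Hermitian
# with strictly positive eigenvalues for any unitary U and real E.
import numpy as np

rng = np.random.default_rng(1)
n = 4                                            # size of the flattened index set
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)                           # a random unitary matrix
E = rng.normal(size=n)                           # placeholder energy differences
beta = 0.5
G = U @ np.diag(np.exp(-beta * E)) @ U.conj().T

assert np.allclose(G, G.conj().T)                # Hermitian
assert np.linalg.eigvalsh(G).min() > 0           # positive definite
```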
## IV Entropy production
The Lindblad equation, or its quantum FP version (17), is supposed to describe the thermodynamics of quantum systems in equilibrium or out of equilibrium. In this sense, it constitutes the basic equation of stochastic quantum thermodynamics [21; 22; 23]. A fundamental concept in the description of systems out of thermodynamic equilibrium is the entropy production, which we discuss below.
The average energy \(U\) is defined by
\[U=\mathrm{Tr}(H\rho), \tag{29}\]
and its time evolution is obtained from the quantum FP equation (17), and is given by
\[\frac{dU}{dt}=\sum_{ij}\Phi^{u}_{ij}, \tag{30}\]
where
\[\Phi^{u}_{ij}=-\frac{1}{i\hbar}\{\mathrm{Tr}[H,A_{ij}]J^{\dagger}_{ij}+ \mathrm{Tr}[H,A^{\dagger}_{ij}]J_{ij}\}, \tag{31}\]
and \(\Phi^{u}\) is understood as the flux of energy to the system. The entropy of the system is defined by
\[S=-k\mathrm{Tr}(\rho\ln\rho), \tag{32}\]
and its time variation is obtained from the FP equation (17), and is given by
\[\frac{dS}{dt}=\Pi+\Phi, \tag{33}\]
where
\[\Pi=\frac{k}{i\hbar}\sum_{ij}\mathrm{Tr}([\ln\rho-\ln\rho_{ij},A_{ij}]J^{ \dagger}_{ij}+[\ln\rho-\ln\rho_{ij},A^{\dagger}_{ij}]J_{ij}), \tag{34}\]
is understood as the rate of entropy production, and
\[\Phi=\frac{k}{i\hbar}\sum_{ij}\mathrm{Tr}([\ln\rho_{ij},A_{ij}]J^{\dagger}_{ ij}+[\ln\rho_{ij},A^{\dagger}_{ij}]J_{ij}) \tag{35}\]
is understood as the flux of entropy to the system, where \(\rho_{ij}\) is given by
\[B_{ij}\rho_{ij}=\rho_{ij}C_{ij}. \tag{36}\]
For the case of a system in contact with several heat reservoirs, \(\ln\rho_{ij}\) is proportional to \(-\beta_{ij}H\) and the flux of entropy can be written in the form
\[\Phi=\sum_{ij}\frac{1}{T_{ij}}\Phi^{u}_{ij}, \tag{37}\]
where \(T_{ij}=1/k\beta_{ij}\) can be understood as the temperatures of the heat reservoirs. We recall that \(\Phi^{u}_{ij}\) is the energy flux, or the heat flux in the present case, from each heat reservoir to the system.
In the stationary state, the total flux of energy
\[\Phi^{u}=\sum_{ij}\Phi^{u}_{ij} \tag{38}\]
vanishes, but this does not mean that each flux \(\Phi^{u}_{ij}\) vanishes, because the temperatures are not all the same. In this case, the flux of entropy \(\Phi\) does not vanish, and, by (33), \(\Pi=-\Phi\). As a consequence, in the nonequilibrium stationary state the production of entropy \(\Pi\) is nonzero. If, however, the temperatures of the reservoirs are all the same, \(T_{ij}=T\), then
\[\Phi=\frac{1}{T}\Phi^{u}, \tag{39}\]
and in this case \(\Phi\) vanishes and so does \(\Pi\), which describes a system in thermodynamic equilibrium.
## V Isolated system
Usually one describes an isolated system by the Liouville equation. This is in fact the point of departure for deriving the Lindblad equations for a given system. The given system plus the environment are assumed to be described by the Liouville equation because, as a whole, they are isolated, and the total energy is a conserved quantity. If the system of interest is itself isolated, then we could describe it by the Liouville equation, which means regarding the dissipative term \(D\) of equation (1) as nonexistent.
The main feature of an isolated system that allows us to use the Liouville equation is that the Hamiltonian is strictly conserved along a trajectory. However, it is possible to impose conservation of the Hamiltonian along a stochastic trajectory in such a way that the dissipation term need not be absent. Indeed, if we choose the operators \(C_{ij}=B_{ij}=A_{ij}\), then
\[J_{ij}(\rho)=i\hbar\alpha_{ij}[A_{ij},\rho], \tag{40}\]
and if \(A_{ij}\) commutes with the Hamiltonian, then \(J_{ij}(H)=0\) and the Hamiltonian is strictly invariant.
In this case, \(\Phi^{u}_{ij}\) vanishes identically and there will be no flux of energy, as expected. The fluxes of entropy \(\Phi_{ij}\) will also vanish identically and there is no flux of entropy to or from the system. The rate of entropy production \(\Pi\) equals \(dS/dt\) and is given by
\[\Pi=\frac{k}{i\hbar}\sum_{ij}\mathrm{Tr}([\ln\rho,A_{ij}]J^{\dagger}_{ij}+[ \ln\rho,A^{\dagger}_{ij}]J_{ij}), \tag{41}\]
It is nonzero in general but vanishes in the stationary state, which in this case is also the equilibrium state because \(J_{ij}\) vanishes.
## VI Conclusion
We have shown that the quantum FP equation can be transformed into an equation of the Lindblad form. As the Lindblad equation preserves the trace and positivity of the density operator, so does the quantum FP equation. The advantage of the FP form is that one easily recognizes the quantum equivalents of the probability current and of the detailed balance condition. When the detailed balance condition is not satisfied, the quantum system in the long run will be found in a nonequilibrium stationary state. In this case, the production of entropy is nonzero and can be obtained from the expression provided for the rate of entropy production. The Fokker-Planck form allows one to determine a dissipation term for the case in which the Hamiltonian of the system is strictly constant, which can be understood as a closed system.
|
2301.06623 | Universal minima of potentials of certain spherical designs contained in
the fewest parallel hyperplanes | We find the set of all universal minimum points of the potential of the
$16$-point sharp code on $S^4$ and (more generally) of the demihypercube on
$S^d$, $d\geq 5$, as well as of the $2_{41}$ polytope on $S^7$. We also extend
known results on universal minima of three sharp configurations on $S^{20}$ and
$S^{21}$ containing no antipodal pair to their symmetrizations about the
origin. Finally, we prove certain general properties of spherical
$(2m-1)$-designs contained in as few as $m$ parallel hyperplanes (all but one
configuration considered here possess this property). | Sergiy Borodachov | 2023-01-16T22:17:26Z | http://arxiv.org/abs/2301.06623v1 | Universal minima of potentials of certain spherical designs contained in the fewest parallel hyperplanes
###### Abstract
We find the set of all universal minimum points of the potential of the 16-point sharp code on \(S^{4}\) and (more generally) of the demihypercube on \(S^{d}\), \(d\geq 5\), as well as of the \(2_{41}\) polytope on \(S^{7}\). We also extend known results on universal minima of three sharp configurations on \(S^{20}\) and \(S^{21}\) containing no antipodal pair to their symmetrizations about the origin. Finally, we prove certain general properties of spherical \((2m-1)\)-designs contained in as few as \(m\) parallel hyperplanes (all but one configuration considered here possess this property).
Sergiy Borodachov
_Department of Mathematics, Towson University, Towson, MD, 21252_
_Keywords_: spherical design, Fazekas-Levenshtein bound, extreme values of potentials, demihypercube, regular eight-dimensional polytope, Gegenbauer polynomials, non-trivial index.
_MSC 2020_: 52B11, 52C17, 33C45, 41A05, 31B99.
## 1 Statement of the problem and review of known results
Let \(S^{d}:=\{(x_{1},\ldots,x_{d+1})\in\mathbb{R}^{d+1}:x_{1}^{2}+\ldots+x_{d+1}^{2} =1\}\) be the unit sphere in \(\mathbb{R}^{d+1}\). Throughout the text, \(\omega_{N}\) will denote a configuration of \(N\) pairwise distinct points on \(S^{d}\) and \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) will denote the points in \(\omega_{N}\). We will also call \(\omega_{N}\) a spherical code. We call a function \(g:[-1,1]\to(-\infty,\infty]\) an _admissible potential function_ if \(g\) is continuous on \([-1,1)\) with \(g(1)=\lim\limits_{t\to 1^{-}}g(t)\) and differentiable in \((-1,1)\). Additional assumption(s) on the derivative(s) of
\(g\) will be further specified in each case. Define the \(g\)-potential of \(\omega_{N}\) by
\[p^{g}(\mathbf{x},\omega_{N}):=\sum_{i=1}^{N}g(\mathbf{x}\cdot\mathbf{x}_{i}), \quad\mathbf{x}\in S^{d}.\]
We consider the following extremal problem over the sphere.
**Problem 1.1**.: Find the quantity
\[P^{g}(\omega_{N},S^{d}):=\min_{\mathbf{x}\in S^{d}}p^{g}(\mathbf{x},\omega_{N}) \tag{1}\]
and points \(\mathbf{x}^{*}\in S^{d}\) attaining the minimum in (1).
We have an important special case when the potential function \(g\) is absolutely monotone on \([-1,1)\). Then \(g(t)=f(2-2t)\) for some completely monotone \(f\) on \((0,4]\). Recall that a function \(g\) is called _absolutely monotone_ or _completely monotone_ on an interval \(I\) if \(g^{(k)}\geq 0\) or \((-1)^{k}g^{(k)}\geq 0\) on \(I\), respectively, for every \(k\geq 0\). The function \(g\) is _strictly absolutely_ or _strictly completely monotone_ on \(I\) if the corresponding inequality is strict in the interior of \(I\) for all \(k\geq 0\). The kernel
\[g(\mathbf{x}\cdot\mathbf{y})=f\left(\left|\mathbf{x}-\mathbf{y}\right|^{2}\right) \tag{2}\]
with a strictly absolutely monotone \(g\) on \([-1,1)\) includes the Riesz \(s\)-kernel for \(s>0\) and the Gaussian kernel. After adding an appropriate positive constant, it also includes the logarithmic kernel and the Riesz \(s\)-kernel for \(-2<s<0\).
The problem about absolute extrema on the sphere of potentials of spherical codes was earlier solved by Stolarsky [22, 23] and Nikolov and Rafailov [20, 21] for Riesz \(s\)-kernels, \(s\neq 0\), and sets of vertices of a regular \(N\)-gon on \(S^{1}\) and of a regular simplex, regular cross-polytope, and cube inscribed in \(S^{d}\). Hardin, Kendall, and Saff [16] proved that absolute minima of the potential of a regular \(N\)-gon on \(S^{1}\) with respect to a decreasing and convex function of the geodesic distance are attained at points of the dual regular \(N\)-gon.
Recently, results from [22, 23, 20, 21] for \(s>-2\), \(s\neq 0\) (except for absolute maxima for the cube), were extended to kernels (2) with an absolutely monotone function \(g\) and spherical designs of the highest (in a certain sense) strength. In particular, for a regular simplex on \(S^{d}\), absolute maxima are at its vertices and absolute minima are at the antipods of its vertices, see [7]. Absolute maxima with respect to kernel (2) were found in [5] for sharp spherical codes that are antipodal or are designs of an even strength (called by some authors "strongly
sharp"). Absolute maxima appear to be independent of the potential function \(g\) when \(g\) is strictly absolutely monotone on \([-1,1)\) (they are at points of the code itself). Such absolute maxima are called _universal maxima_.
The set of universal minima (defined in a similar way) of any spherical code that, for some \(m\in\mathbb{N}\), is a \((2m-1)\)-design forming \(m\) distinct dot products with some point \(\mathbf{z}\in S^{d}\), is also known (we will call such codes \(m\)-stiff). It is exactly the set of all such points \(\mathbf{z}\), see talk [6]1 by the author for the proof or Lemma 3.5 in [5]2, from which this result follows. We will call the set of such points \(\mathbf{z}\) the dual configuration. We restate this result here as Theorem 2.5. Immediate consequences of this result are universal minima of a regular \(2m\)-gon on \(S^{1}\), of a regular cross-polytope and cube on \(S^{d}\), and of the 24-cell on \(S^{3}\), since finding the dual configuration is elementary for these codes. Stiff spherical codes attain the Fazekas-Levenshtein bound for covering [13, Theorem 2].
Footnote 1: Talk [6] was given in January, 2022 at ESI and can be found in the ESI’s YouTube account.
Footnote 2: Paper [5] is on ArXiv since March 2022.
Universal minima of any strongly sharp code are at the antipods of points of the code. This also follows from [5, Lemma 3.5]. Immediate consequences of this fact (other than the regular simplex) are universal minima of a regular \((2m+1)\)-gon on \(S^{1}\), the Schläfli configuration on \(S^{5}\), and the McLaughlin configuration on \(S^{21}\). Strongly sharp spherical codes also attain the Fazekas-Levenshtein bound for covering [13, Theorem 2].
Boyvalenkov, Dragnev, Hardin, Saff, and Stoyanova [9, Theorems 3.4 and 3.7] (see also [10, Theorem 1.4])3 proved universal upper and lower bounds for the potential of a general spherical design. These bounds become sharp in the cases mentioned above: in the case of a minimum for stiff and strongly sharp configurations and in the case of a maximum for sharp antipodal and strongly sharp ones. The lower bound is an analogue of the Fazekas-Levenshtein bound for covering [13, Theorem 2].
Footnote 3: Papers [9] and [10] are on ArXiv since July and October 2022, respectively.
Paper [9] also showed that the universal maxima of the 600-cell on \(S^{3}\) are vertices of the 600-cell itself. The work [10] proved that a number of known sharp codes are also stiff. Then the result from [6] implies that their universal minima are at points of the dual configuration. Paper [10] further studies the dual configuration for each code and antipods of the two strongly sharp codes on \(S^{5}\) and \(S^{21}\) mentioned above. The author in paper [2]4 found explicitly the sets of all universal minima for five more stiff configurations (which are
not sharp) on spheres of different dimensions as well as for the 56-point kissing configuration on \(S^{6}\), which is a known sharp code (paper [10] gives one universal minimum of this code).
Certain remarkable spherical configurations are not stiff or strongly sharp. Universal minima of the regular icosahedron and regular dodecahedron on \(S^{2}\) were characterized in [3]5 as well as universal minima of the \(E_{8}\) lattice on \(S^{7}\). Furthermore, one universal minimum and the corresponding absolute minimum value of the potential were found in [10]6 for the Leech lattice on \(S^{23}\). Papers [3]7 and [10] establish general theorems (different to a certain extent) for the so-called "skip one add two" case and use them to establish the above mentioned results.
Footnote 5: On ArXiv since October 9, 2022. The proof for icosahedron was briefly discussed in talk [6] in January 2022.
Footnote 6: On Arxiv since October 31, 2022.
Critical points of the total potential of finite configurations of charges were also analysed (see [1, 14] and references therein). This work is related to the known Maxwell's conjecture. A more detailed review of known results on extrema of potentials of spherical codes can be found, for example, in [2].
In this paper, we prove certain properties of general stiff configurations and characterize universal minima of the 16-point sharp code8 on \(S^{4}\), of a demihypercube on \(S^{d}\), \(d\geq 5\), and of the \(2_{41}\) polytope on \(S^{7}\) (it is dual to the \(E_{8}\) lattice). For some sharp spherical codes that have no antipodal pairs, we show that universal minima found in [10] are also universal minima for the symmetrization of each of them about the origin.
Footnote 7: The general “skip one add two” theorem from [3] was stated in talk [4] in August 2022, see Theorem 2.6.
Footnote 8: In [10], its universal minima were presented without a characterization.
One of important applications of Problem 1.1 is the polarization problem on the sphere. Papers [22, 20, 16, 7, 5, 9] that we mentioned when reviewing results on extrema of potentials solve certain its cases. A more comprehensive review of known work on polarization can be found, for example, in book [8, Chapter 14] with most recent results reviewed in, e.g., [5].
The paper is structured as follows. Section 2 contains the necessary preliminaries. In Section 3, we characterize universal minima of the \(d\)-demicube for \(d\geq 5\) (including the 16-point sharp code on \(S^{4}\)). In Section 4, we find all the universal minima of the \(2_{41}\) polytope on \(S^{7}\). Section 5 extends known results on universal minima for certain three non-antipodal sharp configurations to their symmetrizations. In Section 6, we establish certain properties of general stiff
configurations and of their duals.
## 2 Preliminaries
In this section, we state definitions and known facts used further in the paper. Define
\[w_{d}(t):=\gamma_{d}(1-t^{2})^{d/2-1},\]
where the constant \(\gamma_{d}\) is such that \(w_{d}\) is a probability density on \([-1,1]\). The _Gegenbauer orthogonal polynomials_ corresponding to the sphere \(S^{d}\) in \(\mathbb{R}^{d+1}\) are terms of the sequence \(\{P_{n}^{(d)}\}_{n=0}^{\infty}\) of univariate polynomials such that \(\deg P_{n}^{(d)}=n\), \(n\geq 0\), and
\[\int_{-1}^{1}P_{i}^{(d)}(t)P_{j}^{(d)}(t)w_{d}(t)\,dt=0,\quad i\neq j,\]
normalized so that \(P_{n}^{(d)}(1)=1\), \(n\geq 0\) (see [25, Chapter 4] or, e.g., [8, Chapter 5]).
For a configuration \(\omega_{N}=\{{\bf x}_{1},\ldots,{\bf x}_{N}\}\subset S^{d}\), let \({\mathcal{I}}(\omega_{N})\) be the set of all \(n\in\mathbb{N}\) such that
\[\sum_{i=1}^{N}\sum_{j=1}^{N}P_{n}^{(d)}({\bf x}_{i}\cdot{\bf x}_{j})=0. \tag{3}\]
We call \({\mathcal{I}}(\omega_{N})\)_the index set_ of \(\omega_{N}\). Let \(\sigma_{d}\) be the \(d\)-dimensional area measure on the sphere \(S^{d}\) normalized to be a probability measure. A configuration \(\omega_{N}\) is called a _spherical \(n\)-design_ if, for every polynomial \(p\) on \(\mathbb{R}^{d+1}\) of degree at most \(n\),
\[\frac{1}{N}\sum_{i=1}^{N}p({\bf x}_{i})=\int_{S^{d}}p({\bf x})\,d\sigma_{d}({ \bf x}), \tag{4}\]
see the paper by Delsarte, Goethals, and Seidel [12]. The maximal number \(n\) in this definition is called the _strength_ of the spherical design \(\omega_{N}\).
We recall the following equivalent definitions of a spherical design. Let \(\mathbb{P}_{n}\) denote the space of all univariate polynomials of degree at most \(n\).
**Theorem 2.1**.: _(see [12, 13] or, e.g., [8, Lemma 5.2.2 and Theorem 5.4.2]) Let \(d,n\geq 1\) and \(\omega_{N}=\{{\bf x}_{1},\ldots,{\bf x}_{N}\}\) be a point configuration on \(S^{d}\). The following are equivalent:_
_(i)_ \(\omega_{N}\) _is a spherical_ \(n\)_-design;_
_(ii)_ \(\{1,\ldots,n\}\subset\mathcal{I}(\omega_{N})\);

_(iii) for every polynomial \(q\in\mathbb{P}_{n}\), we have \(p^{q}(\mathbf{y},\omega_{N})=\sum_{i=1}^{N}q(\mathbf{y}\cdot\mathbf{x}_{i})=C\), \(\mathbf{y}\in S^{d}\), where \(C\) is a constant._
If item (iii) holds in the above theorem, then \(C=a_{0}(q)N\), where
\[a_{0}(q):=\int_{-1}^{1}q(t)w_{d}(t)\,dt \tag{5}\]
is the \(0\)-th Gegenbauer coefficient of polynomial \(q\). For a given \(m\in\mathbb{N}\) and a given configuration \(\omega_{N}\subset S^{d}\), denote by \(\mathcal{D}_{m}(\omega_{N})\) the set of all points \(\mathbf{z}\in S^{d}\) for which the set of dot products
\[D(\mathbf{z},\omega_{N}):=\{\mathbf{z}\cdot\mathbf{x}_{i}:i=1,\ldots,N\}\]
has at most \(m\) distinct elements.
**Definition 2.2**.: We call a point configuration \(\omega_{N}\subset S^{d}\)_\(m\)-stiff_, \(d,m\geq 1\), if \(\omega_{N}\) is a spherical \((2m-1)\)-design and the set \(\mathcal{D}_{m}(\omega_{N})\) is non-empty. The set \(\mathcal{D}_{m}(\omega_{N})\) of a given \(m\)-stiff configuration \(\omega_{N}\) is called _the dual configuration_ for \(\omega_{N}\).
Following [11], we call a configuration \(\omega_{N}\subset S^{d}\)_sharp_ if, for some \(m\in\mathbb{N}\), it is a \((2m-1)\)-design and there are exactly \(m\) distinct values of the dot product between distinct points in \(\omega_{N}\). If, in addition, \(\omega_{N}\) is a \(2m\)-design, we call it _strongly sharp_.
The next statement is a part of the classification of quadratures in [12] corresponding to spherical designs of the highest strength; i.e., stiff, strongly sharp, or sharp antipodal codes. For the reader's convenience, we mention that its proof can also be found, for example, in [2, Proposition 7.2]. Let \(\{\varphi_{1},\ldots,\varphi_{m}\}\) be the fundamental polynomials for the set \(-1<\kappa_{1}^{m}<\ldots<\kappa_{m}^{m}<1\) of zeros of the Gegenbauer polynomial \(P_{m}^{(d)}\); that is, \(\varphi_{i}\in\mathbb{P}_{m-1}\), \(\varphi_{i}(\kappa_{i}^{m})=1\), and \(\varphi_{i}(\kappa_{j}^{m})=0\), \(j\neq i\), \(i=1,\ldots,m\).
**Proposition 2.3**.: _If \(\omega_{N}\) is an \(m\)-stiff configuration on \(S^{d}\), then for every \(\mathbf{z}\in\mathcal{D}_{m}(\omega_{N})\), the set \(D(\mathbf{z},\omega_{N})\) contains exactly \(m\) distinct elements located in \((-1,1)\), which are \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\). Furthermore, the number of indices \(i\) such that \(\mathbf{z}\cdot\mathbf{x}_{i}=\kappa_{j}^{m}\) does not depend on \(\mathbf{z}\) and equals \(a_{0}(\varphi_{j})N\), \(j=1,\ldots,m\)._
_In particular, if \(m=2\), then for every \(\mathbf{z}\in\mathcal{D}_{2}(\omega_{N})\), we have \(D(\mathbf{z},\omega_{N})=\left\{-\frac{1}{\sqrt{d+1}},\frac{1}{\sqrt{d+1}}\right\}\)._
**Remark 2.4**.: In view of Proposition 2.3, an \(m\)-stiff configuration may exist on \(S^{d}\) for given \(m,d\geq 1\), only if all the numbers \(a_{0}(\varphi_{i})\), \(i=1,\ldots,m\), are positive rationals. These numbers are the weights of the Gauss-Gegenbauer quadrature for integral (5) (the nodes are \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\)).
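For \(m=2\) these weights are easy to compute: both equal \(1/2\) for every \(d\), in agreement with Proposition 2.3, so a \(2\)-stiff code has half of its points on each of the two hyperplanes. The following sketch (illustrative code, not part of the text) verifies this numerically:

```python
# Numerical check (illustrative) of the m = 2 Gauss-Gegenbauer weights:
# nodes are the zeros ±1/sqrt(d+1) of P_2^{(d)} and both weights equal 1/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as Beta

for d in range(2, 9):
    w = lambda t: (1 - t**2) ** (d / 2 - 1) / Beta(0.5, d / 2)  # density w_d
    k = 1 / np.sqrt(d + 1)                                      # zeros are ±k
    phi1 = lambda t: (k - t) / (2 * k)                          # phi_1(-k)=1, phi_1(k)=0
    a0, _ = quad(lambda t: phi1(t) * w(t), -1, 1)
    assert abs(a0 - 0.5) < 1e-8
```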
We next restate the result proved in talk [6]. It follows from [5, Lemma 3.5], see the proof of [2, Theorem 4.3] for details.
**Theorem 2.5**.: _Let \(m\geq 1\), \(d\geq 1\), and \(g\) be an admissible potential function with a convex derivative \(g^{(2m-2)}\) on \((-1,1)\). If \(\omega_{N}=\{{\bf x}_{1},\ldots,{\bf x}_{N}\}\) is an \(m\)-stiff configuration on the sphere \(S^{d}\), then the potential_
\[p^{g}({\bf x},\omega_{N})=\sum_{i=1}^{N}g({\bf x}\cdot{\bf x}_{i}),\quad{\bf x }\in S^{d},\]
_attains its absolute minimum over \(S^{d}\) at every point of the set \({\mathcal{D}}_{m}(\omega_{N})\)._
_If, in addition, \(g^{(2m-2)}\) is strictly convex on \((-1,1)\), then \({\mathcal{D}}_{m}(\omega_{N})\) contains all points of absolute minimum of the potential \(p^{g}(\cdot,\omega_{N})\) on \(S^{d}\)._
We will also need the "skip one add two" result from [3, Theorem 3.1].
**Theorem 2.6**.: _Let \(d,m\in\mathbb{N}\), \(m\geq 2\), and \(\omega_{N}=\{{\bf x}_{1},\ldots,{\bf x}_{N}\}\) be a point configuration on \(S^{d}\) whose index set \({\mathcal{I}}(\omega_{N})\) contains numbers \(1,2,\ldots,2m-3,2m-1,2m\). Assume that numbers \(-1<t_{1}<t_{2}<\ldots<t_{m}<1\) are such that_
\[\sum_{i=1}^{m}t_{i}<t_{m}/2\quad\text{and}\quad\sum_{i=1}^{m}t_{i}^{2}-2\left( \sum_{i=1}^{m}t_{i}\right)^{2}<\frac{m(2m-1)}{4m+d-3}, \tag{6}\]
_and that the set \({\mathcal{D}}\) of points \({\bf x}^{*}\in S^{d}\) with \(D({\bf x}^{*},\omega_{N})\subset\{t_{1},\ldots,t_{m}\}\) is non-empty. Let \(g\) be an admissible potential function with non-negative derivatives \(g^{(2m-2)}\), \(g^{(2m-1)}\), and \(g^{(2m)}\) on \((-1,1)\). Then, for every point \({\bf x}^{*}\in{\mathcal{D}}\),_
\[\min_{{\bf x}\in S^{d}}\sum_{i=1}^{N}g({\bf x}\cdot{\bf x}_{i})=\sum_{i=1}^{N} g({\bf x}^{*}\cdot{\bf x}_{i}). \tag{7}\]
_If, in addition, \(g^{(2m)}>0\) on \((-1,1)\), then the absolute minimum in (7) is achieved only at points of the set \({\mathcal{D}}\)._
We remark that proofs of Theorems 2.5 and 2.6 utilize the Delsarte-Yudin method (also known as the Delsarte or linear programming or polynomial method), see the work by Delsarte, Goethals, and Seidel [12] or by Yudin [24]. A detailed description of this approach and references to works using it can also be found, in particular, in [17, 18, 19, 11, 9, 10] and in [8, Chapter 5].
## 3 The \(16\)-point sharp code on \(S^{4}\) and the demihypercube
Denote by \(\omega_{2d}^{*}:=\{\pm\mathbf{e}_{1},\ldots,\pm\mathbf{e}_{d}\}\), \(d\geq 2\), where \(\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\) are vectors of the standard basis in \(\mathbb{R}^{d}\), the set of vertices of the regular cross-polytope inscribed in \(S^{d-1}\) and let \(U_{d}\) be the set \(\left\{\left(\pm\frac{1}{\sqrt{d}},\ldots,\pm\frac{1}{\sqrt{d}}\right)\right\} \subset\mathbb{R}^{d}\) of vertices of the cube inscribed in \(S^{d-1}\). It is not difficult to see that \(\mathcal{D}_{2}(\omega_{2d}^{*})=U_{d}\) and \(\mathcal{D}_{2}(U_{d})=\omega_{2d}^{*}\).
Let \(\overline{\omega}^{d}\), \(d\geq 2\), be the set of \(N=2^{d-1}\) points \(\left(\pm\frac{1}{\sqrt{d}},\ldots,\pm\frac{1}{\sqrt{d}}\right)\in U_{d}\) with an even number of minus signs. This configuration forms the set of vertices of a \(d\)_-demicube_ (also called the _demihypercube_). The set \(\widetilde{\omega}^{d}\) of vectors from \(U_{d}\) with an odd number of minus signs is a reflection of \(\overline{\omega}^{d}\) with respect to any of the coordinate hyperplanes; i.e., \(\widetilde{\omega}^{d}\) is an isometric copy of \(\overline{\omega}^{d}\). Therefore, it is sufficient to consider just \(\overline{\omega}^{d}\). We have \(\overline{\omega}^{d}\cup\widetilde{\omega}^{d}=U_{d}\) and the two sets are disjoint. For \(d\) odd, we have \(\widetilde{\omega}^{d}=-\overline{\omega}^{d}\) with both sets not containing antipodal pairs. For \(d\) even, each of the sets \(\overline{\omega}^{d}\) and \(\widetilde{\omega}^{d}\) is itself antipodal.
Observe that for \(d=2\), both configurations consist of one antipodal pair; i.e., they are \(1\)-stiff. For \(d=3\), each one is a regular simplex inscribed in \(S^{2}\) (strongly sharp and, hence, not stiff). For \(d=4\), the set \(\overline{\omega}^{d}\) consists of eight points \(\left(\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2}\right)\) with an even number of minus signs. Each set \(\overline{\omega}^{4}\) and \(\widetilde{\omega}^{4}\) is an isometric copy of a regular cross-polytope in \(\mathbb{R}^{4}\); i.e., it is \(2\)-stiff. For \(d=5\), configuration \(\overline{\omega}^{d}\) consists of \(16\) points on \(S^{4}\) of the form \(\left(\pm\frac{1}{\sqrt{5}},\pm\frac{1}{\sqrt{5}},\pm\frac{1}{\sqrt{5}},\pm \frac{1}{\sqrt{5}},\pm\frac{1}{\sqrt{5}}\right)\) with an even number of minus signs. This is the well-known sharp \((5,16,1/5)\)-code. It was described by Gossett [15]. The set \(\widetilde{\omega}^{5}\) is the antipode of this code. The \(2\)-stiffness property of \(\overline{\omega}^{5}\) was observed in [10]. We show that the \(d\)-demicube is \(2\)-stiff for any \(d\geq 6\). We start with the following auxiliary statement.
**Lemma 3.1**.: _Let \(\omega_{N}\) be a non-empty subset of \(U_{d}=\left\{-\frac{1}{\sqrt{d}},\frac{1}{\sqrt{d}}\right\}^{d}\), \(d\geq 3\). Then \(\omega_{N}\) is a \(3\)-design if, and only if, \(N\) is even and for every set \(I\) of one, two, or three pairwise distinct indices, exactly half of vectors in \(\omega_{N}\) have an even number of negative coordinates with indices in \(I\) and exactly half have an odd number of negative coordinates with indices in \(I\)._
Proof.: Let \(\omega_{N}\subset U_{d}\) be arbitrary. Using the notation \(\mathbf{y}=(y_{1},\ldots,y_{d})\) for a point \(\mathbf{y}\in\omega_{N}\), define
\[S_{i}:=\sum_{\mathbf{y}\in\omega_{N}}y_{i},\quad\ S_{i,j}:=\sum_{\mathbf{y} \in\omega_{N}}y_{i}y_{j},\quad\text{and}\quad S_{i,j,k}:=\sum_{\mathbf{y}\in \omega_{N}}y_{i}y_{j}y_{k}.\]
Observe that \(S_{i,j}\) and \(S_{i,j,k}\) do not depend on permutations of indices and that \(S_{i,i}=\frac{N}{d}\). When some two indices coincide, say \(i=j\), we have
\[S_{i,j,k}=\sum_{\mathbf{y}\in\omega_{N}}\frac{y_{k}}{d}=\frac{1}{d}S_{k}. \tag{8}\]
Formula (8) holds even if \(i=j=k\). Let \(\mathbf{x}=(x_{1},\ldots,x_{d})\in S^{d-1}\) be any vector. Then
\[\sum_{\mathbf{y}\in\omega_{N}}\mathbf{x}\cdot\mathbf{y}=\sum_{ \mathbf{y}\in\omega_{N}}\sum_{i=1}^{d}x_{i}y_{i}=\sum_{i=1}^{d}\sum_{\mathbf{y }\in\omega_{N}}x_{i}y_{i}=\sum_{i=1}^{d}x_{i}\sum_{\mathbf{y}\in\omega_{N}}y_{i }=\sum_{i=1}^{d}S_{i}x_{i},\] \[\sum_{\mathbf{y}\in\omega_{N}}\left(\mathbf{x}\cdot\mathbf{y} \right)^{2}=\sum_{\mathbf{y}\in\omega_{N}}\left(\sum_{j=1}^{d}x_{j}y_{j} \right)^{2}=\sum_{\mathbf{y}\in\omega_{N}}\sum_{i=1}^{d}\sum_{j=1}^{d}x_{i}x_{ j}y_{i}y_{j}=\sum_{i=1}^{d}\sum_{j=1}^{d}\sum_{\mathbf{y}\in\omega_{N}}x_{i}x_{ j}y_{i}y_{j}\] \[=\sum_{i=1}^{d}x_{i}^{2}\sum_{\mathbf{y}\in\omega_{N}}y_{i}^{2}+ \sum_{i,j=1\atop i\neq j}^{d}x_{i}x_{j}\sum_{\mathbf{y}\in\omega_{N}}y_{i}y_{j }=\frac{N}{d}+\sum_{i,j=1\atop i\neq j}^{d}S_{i,j}x_{i}x_{j},\]
and
\[\sum_{\mathbf{y}\in\omega_{N}}\left(\mathbf{x}\cdot\mathbf{y} \right)^{3} =\sum_{\mathbf{y}\in\omega_{N}}\left(\sum_{j=1}^{d}x_{j}y_{j} \right)^{3}=\sum_{\mathbf{y}\in\omega_{N}}\sum_{i,j,k=1}^{d}x_{i}x_{j}x_{k}y_{ i}y_{j}y_{k}\] \[=\sum_{i,j,k=1}^{d}x_{i}x_{j}x_{k}\sum_{\mathbf{y}\in\omega_{N}}y _{i}y_{j}y_{k}=\sum_{i,j,k=1}^{d}S_{i,j,k}x_{i}x_{j}x_{k}.\]
The configuration \(\omega_{N}\) will be a 3-design if, and only if, the three sums above are constant, see Theorem 2.1. If
\[S_{i}=0\text{ for all }i,\ \ S_{i,j}=0,\text{ for }i\neq j,\ \text{ and }\ S_{i,j,k}=0,\text{ for }i,j,k\text{ distinct} \tag{9}\]
then the three sums above will be constant (one should also use (8)). Conversely, if all three sums above are constant, then the first one has the same value for every vector \(\pm\mathbf{e}_{i}\), which is \(\pm S_{i}\), \(i=1,\ldots,d\). Then \(S_{i}=0\) for all \(i\). For every vector \(\mathbf{x}\in S^{d-1}\) whose \(\ell\)-th coordinate equals \(1/\sqrt{2}\), whose \(n\)-th coordinate equals \(\pm\frac{1}{\sqrt{2}}\), and whose remaining coordinates are zero, \(\ell\neq n\), the value of the second sum is \(N/d\pm S_{\ell,n}=\text{const}\). This forces \(S_{\ell,n}=0\), \(\ell\neq n\). For the vector \(\pm\mathbf{x}\), where the \(\ell\)-th, \(n\)-th, and \(m\)-th coordinates of \(\mathbf{x}\) equal \(1/\sqrt{3}\), \(\ell,n,m\) being pairwise distinct,
and the remaining coordinates are zero, the third sum equals (use (8) and the fact that \(S_{i}=0\) for all \(i\))
\[\pm\frac{1}{3\sqrt{3}}\sum_{i,j,k\in\{\ell,n,m\}}S_{i,j,k}=\pm\frac{6}{3\sqrt{3} }S_{\ell,n,m}=\text{const}.\]
Then \(S_{\ell,n,m}=0\). Thus, \(\omega_{N}\) is a \(3\)-design if, and only if, relations (9) hold.
In each sum \(S_{i}\), \(S_{i,j}\), and \(S_{i,j,k}\) in (9), all terms have the same absolute values. Then the value of each sum in (9) equals that common absolute value times the difference between the number of positive and negative terms. Therefore, relations (9) hold if, and only if, each sum in (9) has equal number of positive and negative terms. This, in turn, will hold if, and only if, for any set of indices \(I=\{i\}\) or \(\{i,j\}\), where \(i\neq j\), or \(\{i,j,k\}\), where \(i,j,k\) are pairwise distinct, the number of vectors in \(\omega_{N}\) with an even number of negative components with indices in \(I\) equals the number of vectors in \(\omega_{N}\) with an odd number of negative components with indices in \(I\). This also forces \(N\) to be even.
**Lemma 3.2**.: _The \(d\)-demicube \(\overline{\omega}^{d}\), \(d\geq 4\), is \(2\)-stiff._
Proof.: Since \(\overline{\omega}^{d}\) is a subset of the set of vertices of a cube, it is contained in two parallel hyperplanes. Thus, it remains to show that \(\overline{\omega}^{d}\) is a \(3\)-design. Let \(I\) be any set of \(k\) pairwise distinct indices, where \(k=1,2,3\). A combination of signs of coordinates corresponding to \(I\) with an even number of negative ones can be chosen in \(2^{k-1}\) different ways. For each of these combinations, the remaining \(d-k\) positions can have \(2^{d-k}\) different combinations of signs with \(2^{d-k-1}\) of them having an even number of minus signs. Then the total number of vectors in \(\overline{\omega}^{d}\) with an even number of negative coordinates corresponding to \(I\) will be \(2^{k-1}\cdot 2^{d-k-1}=2^{d-2}\). By a similar argument, the number of vectors in \(\overline{\omega}^{d}\) with an odd number of negative coordinates corresponding to \(I\) will also be \(2^{d-2}\). Lemma 3.1 now implies that \(\overline{\omega}^{d}\) is a \(3\)-design and, hence, is \(2\)-stiff.
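Lemma 3.1 also yields a straightforward computational test. The following sketch (illustrative code, not part of the text) confirms conditions (9) and the two-hyperplane property for \(d=5,6\):

```python
# Numerical verification (illustrative) that the d-demicube is a 3-design
# contained in two parallel hyperplanes, for d = 5 and d = 6.
import itertools
import numpy as np

for d in (5, 6):
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=d))) / np.sqrt(d)
    demi = cube[(cube < 0).sum(axis=1) % 2 == 0]       # even number of minus signs
    assert len(demi) == 2 ** (d - 1)
    assert np.allclose(demi.sum(axis=0), 0)            # all S_i vanish
    S2 = demi.T @ demi                                 # S_{i,j} sits off the diagonal
    assert np.allclose(S2 - np.diag(np.diag(S2)), 0)   # mixed second moments vanish
    S3 = np.einsum('ni,nj,nk->ijk', demi, demi, demi)
    i, j, k = np.indices((d, d, d))
    assert np.allclose(S3[(i != j) & (j != k) & (i != k)], 0)
    dots = np.unique(np.round(demi @ np.eye(d)[0], 9)) # dot products with e_1
    assert np.allclose(dots, [-1 / np.sqrt(d), 1 / np.sqrt(d)])
```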
We next find the dual configuration of the \(d\)-demicube. For \(d=2\), the \(d\)-demicube is a pair of antipodal vectors, which is \(1\)-stiff. Its dual is the perpendicular pair of antipodal vectors. For \(d=3\), the \(d\)-demicube is a regular simplex, which is not stiff, since it is strongly sharp. For \(d=4\), the configuration \(\overline{\omega}^{d}\) is a regular cross-polytope, and its dual is the corresponding cube inscribed in \(S^{3}\). For \(d\geq 5\), we have the following result.
**Lemma 3.3**.: _For every \(d\geq 5\), we have \(\mathcal{D}_{2}(\overline{\omega}^{d})=\omega_{2d}^{*}\)._
Since \(\mathcal{D}_{2}(\omega_{2d}^{*})=U_{d}\), \(d\geq 2\), Lemma 3.3 shows that the inclusion \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) can be strict for an \(m\)-stiff configuration \(\omega_{N}\) with \(m\geq 2\) even if \(\omega_{N}\) is antipodal. Furthermore, for \(d\geq 5\) odd, it provides another example of a non-antipodal \(m\)-stiff configuration with \(m\geq 2\) (non-antipodal \(1\)-stiff configurations are easy to construct).
Proof of Lemma 3.3.: Every vector \(\pm\mathbf{e}_{i}\in\omega_{2d}^{*}\) forms only dot products \(\frac{1}{\sqrt{d}}\) and \(-\frac{1}{\sqrt{d}}\) with points from \(\overline{\omega}^{d}\); i.e., it belongs to \(\mathcal{D}_{2}(\overline{\omega}^{d})\). Choose any \(\mathbf{x}=(x_{1},\ldots,x_{d})\in\mathcal{D}_{2}(\overline{\omega}^{d})\). Assume to the contrary that \(\mathbf{x}\) has at least three non-zero coordinates. Let \(k\) be the number of strictly negative components in \(\mathbf{x}\). If \(k\) is even, we choose a vector \(\mathbf{z}=(z_{1},\ldots,z_{d})\in\overline{\omega}^{d}\) with \(-\frac{1}{\sqrt{d}}\) on all positions corresponding to strictly negative components in \(\mathbf{x}\) and \(\frac{1}{\sqrt{d}}\) on all other positions. If \(k\) is odd, we choose \(\mathbf{z}\in\overline{\omega}^{d}\) with \(-\frac{1}{\sqrt{d}}\) on all but one of the positions corresponding to strictly negative components of \(\mathbf{x}\) and \(\frac{1}{\sqrt{d}}\) on all other positions. Then the dot product \(\mathbf{x}\cdot\mathbf{z}=x_{1}z_{1}+\ldots+x_{d}z_{d}\) has at most one strictly negative term and at least two other strictly positive terms, which we denote by \(x_{i}z_{i}\) and \(x_{j}z_{j}\). Since \(d\geq 5\), we can choose two disjoint pairs of positions in \(\mathbf{z}\), one containing \(z_{i}\) and the other containing \(z_{j}\), with both pairs avoiding the position corresponding to the possible negative term in \(\mathbf{x}\cdot\mathbf{z}\). Changing the sign of the coordinates of \(\mathbf{z}\) in the first pair of positions, we keep \(\mathbf{z}\) in \(\overline{\omega}^{d}\) and strictly decrease the dot product \(\mathbf{x}\cdot\mathbf{z}\). Changing the sign of the coordinates of the new vector \(\mathbf{z}\) in the second pair of positions, we keep the resulting vector in \(\overline{\omega}^{d}\) and further decrease the dot product \(\mathbf{x}\cdot\mathbf{z}\). This shows that \(\mathbf{x}\) forms at least three distinct dot products with points of \(\overline{\omega}^{d}\), contradicting its choice.
Therefore, \(\mathbf{x}\) has at most two non-zero components. Assume to the contrary that \(\mathbf{x}\) has exactly two non-zero components, say \(x_{\ell}\) and \(x_{n}\). Then \(\mathbf{x}\) forms dot products \(\frac{\pm x_{\ell}\pm x_{n}}{\sqrt{d}}\) with vectors from \(\overline{\omega}^{d}\), and at least three of these values are distinct, a contradiction. Thus, \(\mathbf{x}\) has exactly one non-zero component. Since \(\mathbf{x}\) is on \(S^{d-1}\), this component must be \(\pm 1\); that is, \(\mathbf{x}\in\omega_{2d}^{*}\). Thus, \(\mathcal{D}_{2}(\overline{\omega}^{d})=\omega_{2d}^{*}\).
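Both the easy inclusion \(\omega_{2d}^{*}\subset\mathcal{D}_{2}(\overline{\omega}^{d})\) and the exclusion of points with several non-zero coordinates admit a quick numerical spot-check; a minimal sketch for \(d=5\) (the randomized part only samples the continuum, so it is an illustration rather than a proof):

```python
import itertools
import numpy as np

d = 5
demi = np.array([s for s in itertools.product((1.0, -1.0), repeat=d)
                 if list(s).count(-1.0) % 2 == 0]) / np.sqrt(d)

def num_dots(x):
    """Number of distinct dot products x forms with the demicube."""
    return len(np.unique(np.round(demi @ x, 9)))

for i in range(d):                     # +-e_i form only the products +-1/sqrt(d)
    e = np.zeros(d); e[i] = 1.0
    assert num_dots(e) == 2 and num_dots(-e) == 2

rng = np.random.default_rng(0)
for _ in range(1000):                  # generic unit vectors form at least
    x = rng.normal(size=d)             # three products, as Lemma 3.3 predicts
    x /= np.linalg.norm(x)
    assert num_dots(x) >= 3
print("D_2 check for the 5-demicube passed")
```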
We are ready to characterize universal minima of the \(d\)-demicube for \(d\geq 5\).
**Theorem 3.4**.: _Let \(d\geq 5\) and \(g\) be an admissible potential function with a convex derivative \(g^{\prime\prime}\) on \((-1,1)\). Then the potential \(p^{g}(\cdot,\overline{\omega}^{d})\) of \(d\)-demicube \(\overline{\omega}^{d}\) attains its absolute minimum over \(S^{d-1}\) at every point of cross-polytope \(\omega_{2d}^{*}\)._
_If, in addition, \(g^{\prime\prime}\) is strictly convex on \((-1,1)\), then \(\omega_{2d}^{*}\) contains all points of absolute minimum of the potential \(p^{g}(\cdot,\overline{\omega}^{d})\) on \(S^{d-1}\)._
In the case \(d=5\), the first paragraph of Theorem 3.4 follows from the results of [10].
Proof.: Since \(\overline{\omega}^{d}\) is \(2\)-stiff, by Theorem 2.5, the potential \(p^{g}(\cdot,\overline{\omega}^{d})\) attains its absolute minimum over \(S^{d-1}\), \(d\geq 5\), at points of the set \(\mathcal{D}_{2}(\overline{\omega}^{d})\), which, by Lemma 3.3, equals \(\omega_{2d}^{*}\). If \(g^{\prime\prime}\) is strictly convex on \((-1,1)\), then, by Theorem 2.5, the set \(\mathcal{D}_{2}(\overline{\omega}^{d})=\omega_{2d}^{*}\) contains all absolute minima of \(p^{g}(\cdot,\overline{\omega}^{d})\) over \(S^{d-1}\).
## 4 The \(2_{41}\) polytope on \(S^{7}\)
Recall that _the \(E_{8}\) lattice_ is the set (lattice in \(\mathbb{R}^{8}\)) of vectors in \(\mathbb{Z}^{8}\cup(\mathbb{Z}+1/2)^{8}\) whose coordinates sum to an even integer. Let \(\overline{\omega}_{240}\) be the set of minimal length non-zero vectors of the \(E_{8}\) lattice normalized to lie on \(S^{7}\). The configuration \(\overline{\omega}_{240}\) consists of \(4\left(\begin{smallmatrix}8\\ 2\end{smallmatrix}\right)=112\) vectors with \(6\) zero coordinates and two coordinates with \(\pm 1/\sqrt{2}\), and \(2^{7}=128\) vectors with all eight coordinates \(\pm\frac{1}{2\sqrt{2}}\) and an even number of "\(-\)" signs (this part is the \(8\)-demicube). For brevity, we will also call \(\overline{\omega}_{240}\) the \(E_{8}\) lattice.
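The counts and the inner-product structure of \(\overline{\omega}_{240}\) are easy to reproduce; a short enumeration sketch:

```python
import itertools
import numpy as np

vecs = []
for i, j in itertools.combinations(range(8), 2):   # 112 two-support vectors
    for si, sj in itertools.product((1.0, -1.0), repeat=2):
        v = np.zeros(8); v[i], v[j] = si / np.sqrt(2), sj / np.sqrt(2)
        vecs.append(v)
for s in itertools.product((1, -1), repeat=8):     # 128 demicube vectors
    if s.count(-1) % 2 == 0:
        vecs.append(np.array(s, dtype=float) / (2 * np.sqrt(2)))
E8 = np.array(vecs)

assert E8.shape == (240, 8)                        # 112 + 128 = 240
assert np.allclose((E8 ** 2).sum(axis=1), 1.0)     # all points lie on S^7
print(np.unique(np.round(E8 @ E8.T, 9)))           # only 0, +-1/2, +-1
```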
The \(2_{41}\) polytope on \(S^{7}\) (the name is due to Coxeter), denoted here by \(\overline{\omega}_{2160}\), is the set of \(N=2160\) vectors on \(S^{7}\) consisting of \(16\left(\begin{smallmatrix}8\\ 4\end{smallmatrix}\right)=1120\) vectors with \(4\) zero coordinates and \(4\) coordinates with \(\pm 1/2\) (let us call them type I vectors), \(16\) vectors with \(7\) zero coordinates and one coordinate with \(\pm 1\) (let us call them type II vectors), and \(8\left(\left(\begin{smallmatrix}8\\ 1\end{smallmatrix}\right)+\left(\begin{smallmatrix}8\\ 3\end{smallmatrix}\right)+\left(\begin{smallmatrix}8\\ 5\end{smallmatrix}\right)+\left(\begin{smallmatrix}8\\ 7\end{smallmatrix}\right)\right)=1024\) vectors with \(7\) coordinates with \(\pm 1/4\), one coordinate with \(\pm 3/4\), and an odd number of negative coordinates (call them type III vectors). One can verify directly that equality (3) holds for \(d=7\) and \(n\in\{1,\ldots,7,9,10\}\). Indeed, since \(\overline{\omega}_{2160}\) is antipodal, (3) holds trivially for every \(n\) odd. For \(n=2,4,6,10\), we have
\[\begin{split}\sum_{\mathbf{x}\in\overline{\omega}_{2160}}\sum_{ \mathbf{y}\in\overline{\omega}_{2160}}P_{n}^{(7)}(\mathbf{x}\cdot\mathbf{y}) =4320(P_{n}^{(7)}(1)+64P_{n}^{(7)}\left(3/4\right)\\ +280P_{n}^{(7)}\left(1/2\right)+448P_{n}^{(7)}(1/4)+287P_{n}^{(7)} \left(0\right))=0.\end{split} \tag{10}\]
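This identity is straightforward to confirm numerically. The sketch below enumerates \(\overline{\omega}_{2160}\), tallies all inner products, and evaluates the sums for \(n=1,\ldots,10\), taking \(P_{n}^{(7)}\) as the Gegenbauer polynomial \(C_{n}^{(3)}\) normalized to equal \(1\) at \(t=1\):

```python
import itertools
import numpy as np
from scipy.special import gegenbauer

vecs = []
for S in itertools.combinations(range(8), 4):           # type I: 1120 vectors
    for sg in itertools.product((0.5, -0.5), repeat=4):
        v = np.zeros(8); v[list(S)] = sg; vecs.append(v)
for i in range(8):                                      # type II: 16 vectors
    for s in (1.0, -1.0):
        v = np.zeros(8); v[i] = s; vecs.append(v)
for i in range(8):                                      # type III: 1024 vectors
    for sg in itertools.product((1, -1), repeat=8):
        if sg.count(-1) % 2 == 1:
            v = np.array(sg, dtype=float) / 4.0; v[i] *= 3.0
            vecs.append(v)
W = np.array(vecs)
assert W.shape == (2160, 8) and np.allclose((W ** 2).sum(axis=1), 1.0)

G = np.round(W @ W.T * 16) / 16     # every inner product is a multiple of 1/16
t, cnt = np.unique(G, return_counts=True)
for n in range(1, 11):
    C = gegenbauer(n, 3)            # P_n^{(7)}(t) = C(t) / C(1)
    total = np.sum(cnt * C(t)) / (C(1.0) * 2160 ** 2)
    print(n, round(total, 10))
# prints 0 for n = 1,...,7, 9, 10 and a nonzero value for n = 8, in agreement
# with (10) and with the design order discussed below
```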
The code \(\overline{\omega}_{2160}\) is a \(7\)-design (and not an \(8\)-design). However, it is not stiff, since \(\mathcal{D}_{4}(\overline{\omega}_{2160})=\emptyset\), as the following lemma shows.
**Lemma 4.1**.: _For every vector \(\mathbf{x}\in S^{7}\), the set \(D(\mathbf{x},\overline{\omega}_{2160})\) has at least five distinct elements. The only vectors \(\mathbf{x}\in S^{7}\) such that \(D(\mathbf{x},\overline{\omega}_{2160})\) has exactly five distinct elements are those in \(\overline{\omega}_{240}\). For each \(\mathbf{x}\in\overline{\omega}_{240}\), we have \(D(\mathbf{x},\overline{\omega}_{2160})=\left\{0,\pm\frac{1}{2\sqrt{2}},\pm \frac{1}{\sqrt{2}}\right\}\)._
Proof.: Let \(\mathbf{x}=(x_{1},\ldots,x_{8})\in\mathcal{D}_{5}(\overline{\omega}_{2160})\) be arbitrary. If non-zero coordinates of \(\mathbf{x}\) had at least three distinct absolute values, then \(\mathbf{x}\) would form at least 6 distinct dot products with vectors of type II. Therefore, non-zero coordinates of \(\mathbf{x}\) have at most two distinct absolute values.
Assume to the contrary that non-zero coordinates of \(\mathbf{x}\) have exactly two distinct absolute values. Denote them by \(0<b<c\). Then \(\mathbf{x}\) forms each of the dot products \(\pm b,\pm c\) with vectors of type II. If \(\mathbf{x}\) formed with some vector \(\mathbf{z}\in\overline{\omega}_{2160}\) a positive dot product \(u\) distinct from \(b\) and \(c\), since \(\overline{\omega}_{2160}\) is antipodal, there would be a sixth dot product \(-u\), contradicting the assumption that \(\mathbf{x}\in\mathcal{D}_{5}(\overline{\omega}_{2160})\). Thus, \(D(\mathbf{x},\overline{\omega}_{2160})\) contains only two positive dot products: \(b\) and \(c\). Let \(k\) coordinates of \(\mathbf{x}\) have absolute value \(b\) and \(\ell\) coordinates have absolute value \(c\). If it were that \(\ell\geq 2\), then \(\mathbf{x}\) would form positive dot products \(\frac{2c+b+v}{2}\) and \(\frac{b+v}{2}\) with two appropriately chosen vectors of type I, where \(v\geq 0\) is the absolute value of one of the coordinates of \(\mathbf{x}\). Then \(\frac{2c+b+v}{2}=c\), forcing \(b\leq 0\), which is impossible since \(b>0\). Thus, \(\ell=1\). If it were that \(k\geq 3\), then \(\mathbf{x}\) would form positive dot products \(\frac{c+3b}{2}\) and \(\frac{c+b}{2}\) with two suitable vectors of type I. This would force \(\frac{c+b}{2}=b\); that is, \(c=b\), a contradiction. Thus, \(k\leq 2\). We can now take the vector \(\mathbf{z}\) to be of type III with the coordinate with \(\pm 3/4\) corresponding to a zero coordinate of \(\mathbf{x}\) such that \(\mathbf{x}\) forms positive dot products \(\mathbf{x}\cdot\mathbf{z}=\frac{c+kb}{4}\) and \(\frac{c+(k-2)b}{4}\). Then \(c=\frac{c+kb}{4}\leq\frac{c+2b}{4}<\frac{3c}{4}\), which is a contradiction.
Thus, all non-zero coordinates of \(\mathbf{x}\) have the same absolute value, which we denote by \(a\). Let \(n\) be the number of non-zero coordinates of \(\mathbf{x}\). If \(n=1\), then \(a=1\) and \(\mathbf{x}\) forms nine dot products, \(0,\pm 1/4,\pm 1/2,\pm 3/4,\pm 1\), with points of \(\overline{\omega}_{2160}\). If \(n=3\), then \(\mathbf{x}\) forms seven dot products, \(0,\pm a/2,\pm a,\pm 3a/2\), with type I vectors. If now \(4\leq n\leq 7\), then \(\mathbf{x}\) forms nine dot products \(0,\pm a/2,\pm a,\pm 3a/2,\pm 2a\) with type I vectors. Therefore, \(n=2\) or \(8\).
If \(n=2\), then \(a=1/\sqrt{2}\) and \(\mathbf{x}\in\overline{\omega}_{240}\). Finally, if \(n=8\), then every coordinate of \(\mathbf{x}\) is \(\pm\frac{1}{2\sqrt{2}}\). Assume to the contrary that \(\mathbf{x}\) has an odd number of negative coordinates. Then for every vector \(\mathbf{z}\) of type III, \(\mathbf{x}\cdot\mathbf{z}\) is a sum of seven signed terms \(\frac{1}{8\sqrt{2}}\) and one signed term \(\frac{3}{8\sqrt{2}}\) with an even total number of minus signs. Then \(\mathbf{x}\cdot\mathbf{z}\), in particular, has six values \(\pm 2w,\pm 6w,\pm 10w\), where \(w=\frac{1}{8\sqrt{2}}\). Therefore, coordinates of \(\mathbf{x}\) have an even number of minus signs and \(\mathbf{x}\in\overline{\omega}_{240}\).
Thus, if \(\mathbf{x}\notin\overline{\omega}_{240}\) then \(\mathbf{x}\) forms more than five distinct dot products with points of \(\overline{\omega}_{2160}\). One can also verify directly that every \(\mathbf{x}\in\overline{\omega}_{240}\) forms exactly five distinct dot products with points of \(\overline{\omega}_{2160}\), which are \(0,\pm\frac{1}{2\sqrt{2}},\pm\frac{1}{\sqrt{2}}\).
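The statement of Lemma 4.1 for points of \(\overline{\omega}_{240}\) can likewise be confirmed by direct enumeration; the sketch below rebuilds both configurations so that it stays self-contained:

```python
import itertools
import numpy as np

def polytope_2160():
    vs = []
    for S in itertools.combinations(range(8), 4):             # type I
        for sg in itertools.product((0.5, -0.5), repeat=4):
            v = np.zeros(8); v[list(S)] = sg; vs.append(v)
    for i, s in itertools.product(range(8), (1.0, -1.0)):     # type II
        v = np.zeros(8); v[i] = s; vs.append(v)
    for i in range(8):                                        # type III
        for sg in itertools.product((1, -1), repeat=8):
            if sg.count(-1) % 2 == 1:
                v = np.array(sg, dtype=float) / 4.0; v[i] *= 3.0
                vs.append(v)
    return np.array(vs)

def e8_240():
    vs = []
    for i, j in itertools.combinations(range(8), 2):
        for si, sj in itertools.product((1.0, -1.0), repeat=2):
            v = np.zeros(8); v[i], v[j] = si / 2 ** 0.5, sj / 2 ** 0.5
            vs.append(v)
    for sg in itertools.product((1, -1), repeat=8):
        if sg.count(-1) % 2 == 0:
            vs.append(np.array(sg, dtype=float) / (2 * 2 ** 0.5))
    return np.array(vs)

W, E8 = polytope_2160(), e8_240()
expected = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) / 2 ** 0.5
for x in E8:                                   # each point of the E8 code
    d = np.unique(np.round(W @ x, 9))          # sees exactly the five dot
    assert d.size == 5 and np.allclose(d, expected)   # products of the lemma
print("Lemma 4.1 verified for all 240 points")
```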
We are now ready to state the main result of this section.
**Theorem 4.2**.: _Let \(\overline{\omega}_{2160}=\{{\bf x}_{1},\ldots,{\bf x}_{2160}\}\) be the \(2_{41}\) polytope on \(S^{7}\) and \(g\) be an admissible potential function with non-negative derivatives \(g^{(8)}\), \(g^{(9)}\), and \(g^{(10)}\) on \((-1,1)\). Then, for every point \({\bf x}^{*}\in\overline{\omega}_{240}\),_
\[\min_{{\bf x}\in S^{7}}\sum_{i=1}^{2160}g({\bf x}\cdot{\bf x}_{i})=\sum_{i=1}^{ 2160}g({\bf x}^{*}\cdot{\bf x}_{i}). \tag{11}\]
_If, in addition, \(g^{(10)}>0\) on \((-1,1)\), then the absolute minimum in (11) is achieved only at points of the set \(\overline{\omega}_{240}\)._
Proof.: We have \(\{1,2,3,4,5,6,7,9,10\}\subset\mathcal{I}(\overline{\omega}_{2160})\) in view of (10). Applying Theorem 2.6 with \(d=7\), \(m=5\), \(\{t_{1},\ldots,t_{5}\}=\left\{0,\pm\frac{1}{2\sqrt{2}},\pm\frac{1}{\sqrt{2}}\right\}\) we have \(\mathcal{D}=\overline{\omega}_{240}\) and equality (11) holds for every \({\bf x}^{*}\in\overline{\omega}_{240}\). If \(g^{(10)}>0\) on \((-1,1)\), then, by Theorem 2.6, (11) holds only for \({\bf x}^{*}\in\overline{\omega}_{240}\).
## 5 Three symmetrized sharp configurations on \(S^{20}\) and \(S^{21}\)
In this section, we discuss a simple method that allows us to construct some new stiff configurations and obtain their universal minima.
**Lemma 5.1**.: _Let \(\omega_{N}\subset S^{d}\) be an \(m\)-stiff configuration, \(m,d\geq 1\), which does not contain an antipodal pair. Let \(\omega_{N}^{\prime}:=\omega_{N}\cup(-\omega_{N})\) be its symmetrization. Then \(\omega_{N}^{\prime}\) is also \(m\)-stiff with \(\mathcal{D}_{m}(\omega_{N}^{\prime})=\mathcal{D}_{m}(\omega_{N})\)._
Proof.: The configurations \(\omega_{N}\) and \(-\omega_{N}\) are disjoint and both are \((2m-1)\)-designs. Then their union \(\omega_{N}^{\prime}\) has \(2N\) points and is also a \((2m-1)\)-design. We immediately have \(\mathcal{D}_{m}(\omega_{N}^{\prime})\subset\mathcal{D}_{m}(\omega_{N})\). If \({\bf x}\in\mathcal{D}_{m}(\omega_{N})\) then, by Proposition 2.3, \({\bf x}\) forms one of the dot products \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\) with any point from \(\omega_{N}\). Since \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\) are zeros of \(P_{m}^{(d)}\), they are symmetric about the origin. Hence, for every \({\bf y}\in-\omega_{N}\), we have \({\bf x}\cdot{\bf y}=-({\bf x}\cdot(-{\bf y}))\in\{\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\}\) because \(-{\bf y}\in\omega_{N}\). Then \({\bf x}\in\mathcal{D}_{m}(\omega_{N}^{\prime})\); that is, \(\mathcal{D}_{m}(\omega_{N})=\mathcal{D}_{m}(\omega_{N}^{\prime})\). Since \(\mathcal{D}_{m}(\omega_{N})\neq\emptyset\), we have \(\mathcal{D}_{m}(\omega_{N}^{\prime})\neq\emptyset\); that is, \(\omega_{N}^{\prime}\) is \(m\)-stiff.
We remark that Lemmas 5.1 and 3.2 immediately imply Lemma 3.3 for \(d\geq 5\) odd, since, in this case, \(\overline{\omega}^{d}\) has no antipodal pair, \(U_{d}=\overline{\omega}^{d}\cup(-\overline{\omega}^{d})\), and \(\mathcal{D}_{2}(U_{d})=\omega_{2d}^{*}\). Lemma 5.1 also applies to the following two cases.
_Case I._ The Higman-Sims configuration, denoted by \(\overline{\omega}_{100}\), is a 100-point 3-design on \(S^{21}\), where distinct points form only dot products \(-4/11\) and \(1/11\) with each other (see, e.g., [11, Table 1]). For \(n=1,2,3\), we have
\[\sum_{\mathbf{x}\in\overline{\omega}_{100}}\sum_{\mathbf{y}\in\overline{ \omega}_{100}}P_{n}^{(21)}(\mathbf{x}\cdot\mathbf{y})=100\left(P_{n}^{(21)}(1 )+77P_{n}^{(21)}\left(\frac{1}{11}\right)+22P_{n}^{(21)}\left(-\frac{4}{11} \right)\right)=0.\]
According to [10], this configuration is 2-stiff. Paper [10] finds 176 pairs of antipodal vectors on \(S^{21}\), where each vector forms only dot products \(\frac{1}{\sqrt{22}}\) and \(-\frac{1}{\sqrt{22}}\) with vectors from \(\overline{\omega}_{100}\). Denote this set of \(2\cdot 176=352\) vectors by \(\overline{\omega}_{352}\). We have \(\overline{\omega}_{352}\subset\mathcal{D}_{2}(\overline{\omega}_{100})\). Each of these vectors is a universal minimum of \(\overline{\omega}_{100}\) (see Theorem 2.5). Since no dot product in \(\overline{\omega}_{100}\) is \(-1\), the set \(\overline{\omega}_{100}\) does not contain an antipodal pair.
By Lemma 5.1, the symmetrized Higman-Sims configuration \(\overline{\omega}_{200}:=\overline{\omega}_{100}\cup(-\overline{\omega}_{100})\) is 2-stiff with \(\mathcal{D}_{2}(\overline{\omega}_{200})=\mathcal{D}_{2}(\overline{\omega}_{1 00})\supset\overline{\omega}_{352}\).
_Case II._ Two sharp codes on \(S^{20}\) can be derived from the McLaughlin configuration \(\overline{\omega}_{275}\subset S^{21}\), which is strongly sharp (see, e.g., [11, Table 1]). Fix a point \(\mathbf{x}\in\overline{\omega}_{275}\). It forms dot product \(1/6\) with 162 points from \(\overline{\omega}_{275}\). Let \(\omega_{162}^{\mathbf{x}}\) denote the set of these 162 points. They lie in the intersection of \(S^{21}\) with the hyperplane \(\mathbf{x}\cdot\mathbf{t}=1/6\). Point \(\mathbf{x}\) forms dot product \(-1/4\) with the remaining set of 112 points from \(\overline{\omega}_{275}\), which we denote by \(\omega_{112}^{\mathbf{x}}\).
We apply homotheties to \(\omega_{162}^{\mathbf{x}}\) and \(\omega_{112}^{\mathbf{x}}\) to scale them to \(\widetilde{S}^{20}:=S^{21}\cap H\), where \(H\) is the 21-dimensional linear subspace of \(\mathbb{R}^{22}\) orthogonal to \(\mathbf{x}\). Denote the resulting configurations by \(\overline{\omega}_{162}\) and \(\overline{\omega}_{112}\), respectively. Both \(\overline{\omega}_{162}\) and \(\overline{\omega}_{112}\) are 3-designs. They are known sharp configurations, see [11, Table 1] (\(\overline{\omega}_{112}\) is the isotropic subspaces configuration with \(q=3\)). Since any vector from \(\omega_{162}^{\mathbf{x}}\) and any vector from \(\omega_{112}^{\mathbf{x}}\) form only dot products \(1/6\) or \(-1/4\) with each other, any vector from \(\overline{\omega}_{162}\) and any vector from \(\overline{\omega}_{112}\) form only dot products \(\frac{1}{\sqrt{21}}\) or \(-\frac{1}{\sqrt{21}}\). Then both configurations are 2-stiff with \(\overline{\omega}_{112}\subset\mathcal{D}_{2}(\overline{\omega}_{162})\) and \(\overline{\omega}_{162}\subset\mathcal{D}_{2}(\overline{\omega}_{112})\) (this was observed in [10]).
Since any two distinct points in \(\omega_{162}^{\mathbf{x}}\) form dot products \(1/6\) or \(-1/4\) with each other, any two distinct points in \(\overline{\omega}_{162}\) form dot products \(1/7\) or \(-2/7\). Since any two distinct points in \(\omega_{112}^{\mathbf{x}}\) form dot products \(1/6\) or \(-1/4\) with each other, any two distinct points in \(\overline{\omega}_{112}\) form dot products \(1/9\) or \(-1/3\). In particular, \(\overline{\omega}_{162}\) and \(\overline{\omega}_{112}\) do not contain an antipodal pair. Let \(\overline{\omega}_{324}:=\overline{\omega}_{162}\cup(-\overline{\omega}_{162})\) and \(\overline{\omega}_{224}:=\overline{\omega}_{112}\cup(-\overline{\omega}_{112})\) be their symmetrizations about the origin. It is not difficult to see that \(\overline{\omega}_{224}\subset\mathcal{D}_{2}(\overline{\omega}_{162})\) and \(\overline{\omega}_{324}\subset\mathcal{D}_{2}(\overline{\omega}_{112})\). Each point of \(\overline{\omega}_{224}\) is
a universal minimum of \(\overline{\omega}_{162}\) and each point of \(\overline{\omega}_{324}\) is a universal minimum of \(\overline{\omega}_{112}\) (see Theorem 2.5).
By Lemma 5.1, both \(\overline{\omega}_{324}\) and \(\overline{\omega}_{224}\) are \(2\)-stiff with \(\mathcal{D}_{2}(\overline{\omega}_{324})=\mathcal{D}_{2}(\overline{\omega}_{16 2})\supset\overline{\omega}_{224}\) and \(\mathcal{D}_{2}(\overline{\omega}_{224})=\mathcal{D}_{2}(\overline{\omega}_{1 12})\supset\overline{\omega}_{324}\).
Concluding paragraphs in Cases I and II and Theorem 2.5 imply the following.
**Proposition 5.2**.: _Let \(g\) be an admissible potential function with a convex derivative \(g^{\prime\prime}\) on \((-1,1)\). Then_
1. _every point of the configuration_ \(\overline{\omega}_{352}\) _is a point of absolute minimum over_ \(S^{21}\) _of the potential_ \(p^{g}(\cdot,\overline{\omega}_{200})\) _of the symmetrized Higman-Sims configuration_ \(\overline{\omega}_{200}\)_;_
2. _every point of the configuration_ \(\overline{\omega}_{224}\) _is a point of absolute minimum over_ \(\widetilde{S}^{20}\) _of the potential_ \(p^{g}(\cdot,\overline{\omega}_{324})\)_;_
3. _every point of the configuration_ \(\overline{\omega}_{324}\) _is a point of absolute minimum over_ \(\widetilde{S}^{20}\) _of the potential_ \(p^{g}(\cdot,\overline{\omega}_{224})\)_._
## 6 Certain properties of general stiff configurations
Every time we have a stiff configuration, in view of Theorem 2.5, we automatically have its universal minima (the dual configuration). Moreover, every stiff configuration attains the Fazekas-Levenshtein bound for covering [13, Theorem 2]. Therefore, it is important to study stiff codes and their duals in general. In this section, we characterize \(1\)-stiff configurations on \(S^{d}\), \(m\)-stiff configurations on \(S^{1}\), and their duals and also prove some basic properties of stiff configurations and of their duals. We call the point \(\mathbf{c}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_{i}\)_the center of mass_ of a configuration \(\omega_{N}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\}\).
**Proposition 6.1**.: _Let \(d\geq 1\). A configuration \(\omega_{N}\subset S^{d}\), \(N\geq 1\), is \(1\)-stiff if and only if its center of mass is at the origin and \(\omega_{N}\) is contained in a \(d\)-dimensional linear subspace of \(\mathbb{R}^{d+1}\)._
Proof.: The proposition follows from the fact that a point configuration is a spherical \(1\)-design if and only if its center of mass is located at the origin and the fact that a hyperplane containing \(\omega_{N}\) also contains its center of mass.
We next describe the dual of a \(1\)-stiff configuration.
**Proposition 6.2**.: _Let \(\omega_{N}\subset S^{d}\), \(d\geq 1\), be a \(1\)-stiff configuration. Then \(\mathcal{D}_{1}(\omega_{N})=L^{\perp}\cap S^{d}\), where \(L\) is the linear subspace of \(\mathbb{R}^{d+1}\) spanned by \(\omega_{N}\). If \(k:=\dim L\leq d-1\), then \(\mathcal{D}_{1}(\omega_{N})\) is a sphere in a \((d+1-k)\)-dimensional subspace of \(\mathbb{R}^{d+1}\). If \(k=d\), then \(\mathcal{D}_{1}(\omega_{N})=\{\mathbf{a},-\mathbf{a}\}\) for some \(\mathbf{a}\in S^{d}\), which is a \(1\)-stiff configuration._
Proof.: Let \(\mathbf{z}\) be any vector in \(\mathcal{D}_{1}(\omega_{N})\). By Proposition 2.3, we have \(\mathbf{z}\cdot\mathbf{y}=0\) for every \(\mathbf{y}\in\omega_{N}\), since \(0\) is the only root of \(P_{1}^{(d)}\). Then \(\mathbf{z}\bot L\); i.e., \(\mathbf{z}\in L^{\perp}\cap S^{d}\). If \(\mathbf{z}\) is any vector in \(L^{\perp}\cap S^{d}\), then it forms only one dot product (which is \(0\)) with any point from \(\omega_{N}\); that is, \(\mathbf{z}\in\mathcal{D}_{1}(\omega_{N})\). The rest of Proposition 6.2 follows immediately.
We now characterize stiff configurations on \(S^{1}\).
**Proposition 6.3**.: _For every \(m\geq 1\), a configuration on \(S^{1}\) is \(m\)-stiff if and only if it is a regular \(2m\)-gon._
We remark that the regular \(2m\)-gon \(\widetilde{\omega}_{2m}\) on \(S^{1}\) is antipodal and its dual \(\mathcal{D}_{m}(\widetilde{\omega}_{2m})\) is another regular \(2m\)-gon with \(\mathcal{D}_{m}(\mathcal{D}_{m}(\widetilde{\omega}_{2m}))=\widetilde{\omega}_ {2m}\).
Proof of Proposition 6.3.: If \(\omega_{N}=\widetilde{\omega}_{2m}\) then it is a \((2m-1)\)-design and the midpoint \(\mathbf{y}\) of the arc joining any two neighboring vertices forms \(m\) distinct values of dot product with points from \(\omega_{N}\); i.e., \(\omega_{N}\) is \(m\)-stiff.
By Proposition 2.3, the point \(\mathbf{y}\) forms dot products \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\) (zeros of \(P_{m}^{(1)}\)) with points of \(\widetilde{\omega}_{2m}\), each with frequency \(2=2ma_{0}(\varphi_{i})\), \(i=1,\ldots,m\), where \(\{\varphi_{1},\ldots,\varphi_{m}\}\) is the fundamental system of polynomials for the nodes \(\kappa_{i}^{m}\). We have
\[a_{0}(\varphi_{i})=\frac{1}{m}\ \ \text{and}\ \ \kappa_{i}^{m}=\cos\frac{(2i-1) \pi}{2m},\ \ \ \ i=1,\ldots,m. \tag{12}\]
Assume that \(\omega_{N}\) is \(m\)-stiff. Let \(\mathbf{z}\) be any point in \(\mathcal{D}_{m}(\omega_{N})\). By Proposition 2.3, point \(\mathbf{z}\) forms dot products \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\) with points of \(\omega_{N}\), where \(\kappa_{i}^{m}\), \(i=1,\ldots,m\), are the zeros of \(P_{m}^{(1)}\). Then \(\omega_{N}\) is contained in the set of points of intersection of \(S^{1}\) with \(m\) parallel lines; that is, \(\#\omega_{N}\leq 2m\). By Proposition 2.3 and (12), the frequency \(M_{i}\) of the dot product \(\kappa_{i}^{m}\) is \(M_{i}=Na_{0}(\varphi_{i})=N/m\leq 2\), \(i=1,\ldots,m\). Hence, frequencies \(M_{i}\) are equal and each of the \(m\) parallel lines contains the same number of points from \(\omega_{N}\) (one or two). Assume to the contrary that each \(M_{i}\) equals \(1\). Then \(\omega_{N}\) has only \(m\) points, which means that any point \(\mathbf{y}\in\omega_{N}\) forms at most \(m\) distinct values of the dot product with points of
\(\omega_{N}\); i.e., \(\mathbf{y}\in\mathcal{D}_{m}(\omega_{N})\). One of these dot products is \(1\), while by Proposition 2.3, these dot products must be zeros \(\kappa_{i}^{m}\) of \(P_{m}^{(1)}\), none of which is \(1\). This contradiction shows that \(M_{i}=2\), \(i=1,\ldots,m\), and, hence, \(\#\omega_{N}=2m\). Vector \(\mathbf{z}\) forms each of the angles \(\frac{(2i-1)\pi}{2m}=\arccos\kappa_{i}^{m}\) with exactly two points from \(\omega_{N}\), \(i=1,\ldots,m\). Then \(\omega_{N}\) is a regular \(2m\)-gon.
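Both directions of Proposition 6.3 are easy to illustrate numerically; a minimal sketch for the sample value \(m=4\):

```python
import numpy as np

m = 4
theta = np.pi * np.arange(2 * m) / m            # vertices of the regular 2m-gon
P = np.c_[np.cos(theta), np.sin(theta)]

for n in range(1, 2 * m):                       # (2m-1)-design: the n-th moment
    assert abs(np.exp(1j * n * theta).sum()) < 1e-9   # sum_j e^{in theta_j} = 0

y = np.array([np.cos(np.pi / (2 * m)),          # midpoint of the arc between
              np.sin(np.pi / (2 * m))])         # two neighboring vertices
dots = np.unique(np.round(P @ y, 9))
kappa = np.sort(np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m)))
assert dots.size == m and np.allclose(dots, kappa)   # exactly the nodes in (12)
print("the regular 2m-gon is m-stiff for m =", m)
```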
We next prove the following basic statement.
**Proposition 6.4**.: _For any configuration \(\omega_{N}\subset S^{d}\), \(d\geq 1\), with \(\mathcal{D}_{m}(\omega_{N})\neq\emptyset\), \(m\geq 1\), the set \(\mathcal{D}_{m}(\omega_{N})\) is antipodal. If \(\omega_{N}\) is \(m\)-stiff, then \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\). If, in addition, \(\mathcal{D}_{m}(\omega_{N})\) is finite and \(m\)-stiff, then \(\mathcal{D}_{m}(\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N})))=\mathcal{D}_{m} (\omega_{N})\)._
Though the dual \(\mathcal{D}_{m}(\omega_{N})\) is antipodal, this is not always true for the \(m\)-stiff configuration \(\omega_{N}\) itself. For example, the \(d\)-demicube \(\overline{\omega}^{d}\) on \(S^{d-1}\) is not antipodal for any \(d\geq 3\) odd.
In view of Proposition 6.4, equality \(\omega_{N}=\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) with an \(m\)-stiff \(\omega_{N}\) implies that \(\omega_{N}\) is antipodal. However, the inclusion \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) can be sometimes strict (even when both \(\omega_{N}\) and \(\mathcal{D}_{m}(\omega_{N})\) are antipodal and \(m\)-stiff). This is the case, for example, for \(\omega_{N}=\overline{\omega}^{d}\) and any \(d\geq 5\) in view of Lemma 3.3 and the fact that the dual of the cross-polytope \(\omega_{2d}^{*}\) is the whole cube \(U_{d}\) (for \(d\geq 5\) even, both \(\overline{\omega}^{d}\) and \(\mathcal{D}_{2}(\overline{\omega}^{d})=\omega_{2d}^{*}\) are antipodal and \(2\)-stiff). At the same time, every stiff configuration from [2, Table 3] coincides with the dual of its dual.
Some other examples of non-antipodal stiff configurations \(\omega_{N}\) are given in [10]. Since, by Proposition 6.4, the duals of their duals are antipodal, the inclusion \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) is strict as well.
Proof of Proposition 6.4.: Let \(\omega_{N}\subset S^{d}\) be arbitrary with \(\mathcal{D}_{m}(\omega_{N})\neq\emptyset\). For any point \(\mathbf{z}\in\mathcal{D}_{m}(\omega_{N})\), the point \(-\mathbf{z}\) also forms at most \(m\) distinct dot products with points of \(\omega_{N}\); that is \(-\mathbf{z}\in\mathcal{D}_{m}(\omega_{N})\) and the set \(\mathcal{D}_{m}(\omega_{N})\) is antipodal. Choose any point \(\mathbf{z}\) in an \(m\)-stiff \(\omega_{N}\). For any point \(\mathbf{y}\in\mathcal{D}_{m}(\omega_{N})\), by Proposition 2.3, we have \(\mathbf{y}\cdot\mathbf{z}\in\{\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\}\); that is, \(\mathbf{z}\) forms at most \(m\) distinct dot products with points of \(\mathcal{D}_{m}(\omega_{N})\). Then \(\mathbf{z}\in\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) and \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\). Assume additionally that \(X:=\mathcal{D}_{m}(\omega_{N})\) is finite and \(m\)-stiff. The inclusion \(\omega_{N}\subset\mathcal{D}_{m}(\mathcal{D}_{m}(\omega_{N}))\) implies that \(X=\mathcal{D}_{m}(\omega_{N})\supset\mathcal{D}_{m}(\mathcal{D}_{m}(X))\). Since \(X\) is \(m\)-stiff, we have the opposite inclusion.
We say that a point set in \(\mathbb{R}^{d+1}\) is _in general position_ if it is not contained in any hyperplane. One can construct plenty of examples of \(m\)-stiff configurations, \(m\geq 2\), whose dual is not in general position. For instance, start with the cube \(U_{3}\) inscribed in \(S^{2}\) and let \(\alpha_{1}\) and \(\alpha_{2}\) be the parallel planes containing two parallel facets of \(U_{3}\). For a given \(n\geq 2\), we rotate the cube \(U_{3}\) about the axis \(\ell\) perpendicular to the planes \(\alpha_{1}\) and \(\alpha_{2}\) and passing through the origin by angles \(\frac{\pi k}{2n}\), \(k=0,1,\ldots,n-1\), and let \(\omega_{N}\) be the union of the resulting \(n\) cubes. Then \(\omega_{N}\) is a \(3\)-design as a disjoint union of finitely many \(3\)-designs. Since \(\omega_{N}\) is still contained in the planes \(\alpha_{1}\) and \(\alpha_{2}\), it is \(2\)-stiff. However, its dual is \(\mathcal{D}_{2}(\omega_{N})=\{\mathbf{a},-\mathbf{a}\}\), where \(\mathbf{a}\) is a unit vector parallel to the axis \(\ell\). The dual is not in general position. It is also only \(1\)-stiff. This example can, of course, be extended to other dimensions and other initial configurations.
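This construction is straightforward to verify by computation; the following sketch builds the union of \(n=3\) rotated cubes and checks the claims above (on \(S^{2}\), the polynomials \(P_{n}^{(2)}\) are the Legendre polynomials):

```python
import numpy as np
from scipy.special import eval_legendre

n = 3
cube = np.array([[sx, sy, sz] for sx in (1, -1)
                 for sy in (1, -1) for sz in (1, -1)]) / np.sqrt(3)

def rot_z(t):                                   # rotation about the axis ell
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

W = np.vstack([cube @ rot_z(np.pi * k / (2 * n)).T for k in range(n)])
assert len(np.unique(np.round(W, 9), axis=0)) == 8 * n   # disjoint cubes

G = W @ W.T
for deg in (1, 2, 3):                           # the union is a 3-design
    assert abs(eval_legendre(deg, G).sum()) < 1e-8

a = np.array([0.0, 0.0, 1.0])                   # unit vector along ell
print(np.unique(np.round(W @ a, 9)))            # only +-1/sqrt(3): a in D_2
```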
**Proposition 6.5**.: _Let \(\omega_{N}\subset S^{d}\), \(d\geq 1\), be an \(m\)-stiff configuration, \(m\geq 2\). Then \(\mathcal{D}_{m}(\omega_{N})\) contains at most \(m^{d+1}\) points. If \(\mathcal{D}_{m}(\omega_{N})\) is not in general position, then \(\mathcal{D}_{m}(\omega_{N})\) is \(1\)-stiff and, hence, not \(m\)-stiff._
Proposition 6.5 implies that an \(m\)-stiff configuration on \(S^{d}\), \(m\geq 2\), cannot have more than \(m^{d+1}\) universal minima. This cardinality bound can be achieved: take \(\omega_{N}\) to be the set of vertices of a regular cross-polytope inscribed in \(S^{d}\). It is \(2\)-stiff and its dual is a cube inscribed in \(S^{d}\), which has exactly \(2^{d+1}\) vertices. At the same time, no upper bound depending only on \(m\) and \(d\) can be written for the cardinality of an \(m\)-stiff configuration \(\omega_{N}\) itself, \(d\geq 2\) (when \(\omega_{N}\) exists for those \(m\) and \(d\)), see Proposition 6.6 below.
Another interesting question related to Proposition 6.5 is about the general assumptions under which the dual of a given \(m\)-stiff configuration, \(m\geq 2\), is in general position and whether this is sufficient for the dual to be also \(m\)-stiff. The dual is \(m\)-stiff with the same \(m\), for example, for every stiff configuration mentioned in [2, Table 3] and for the \(d\)-demicube, \(d\geq 4\).
Proof of Proposition 6.5.: Since \(\omega_{N}\) is at least a \(3\)-design, it is in general position. If it were not, then \(\omega_{N}\) would not be a \(2\)-design: for the polynomial \(p(\mathbf{x})=(\mathbf{x}\cdot\mathbf{v}-\alpha)^{2}\), where \(\mathbf{x}\cdot\mathbf{v}=\alpha\) is an equation of the hyperplane containing \(\omega_{N}\), the average of \(p\) over \(\omega_{N}\) is zero while the average of \(p\) over \(S^{d}\) is positive. Since \(\omega_{N}\) is in general position, it contains a linearly independent subset \(\{\mathbf{y}_{1},\ldots,\mathbf{y}_{d+1}\}\).
Let \(\mathbf{z}\) be any point in \(\mathcal{D}_{m}(\omega_{N})\). Then, by Proposition 2.3, we have \(\mathbf{z}\cdot\mathbf{y}_{i}\in K_{m}:=\{\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\}\), \(i=1,\ldots,d+1\), where \(\kappa_{j}^{m}\)'s are zeros of \(P_{m}^{(d)}\). Consequently, \(\mathbf{z}\) is a solution to a linear system \(\mathbf{y}_{j}\cdot\mathbf{z}=\alpha_{j}\), \(j=1,\ldots,d+1\), where
\(\alpha_{1},\ldots,\alpha_{d+1}\in K_{m}\). Since this system has a non-singular coefficient matrix, it has a unique solution, which may or may not be on \(S^{d}\). Since there are \(m^{d+1}\) possible vectors of right-hand sides for this system, there are at most \(m^{d+1}\) points in \(\mathcal{D}_{m}(\omega_{N})\).
Assume that \(\mathcal{D}_{m}(\omega_{N})\) is contained in some hyperplane \(H\). By Proposition 6.4, it is antipodal. Then its center of mass is at the origin; that is \(H\) is a \(d\)-dimensional linear subspace of \(\mathbb{R}^{d+1}\). By Proposition 6.1, \(\mathcal{D}_{m}(\omega_{N})\) is \(1\)-stiff. Since \(\mathcal{D}_{m}(\omega_{N})\) is contained in one hyperplane, by the above argument, it cannot be a \(2\)-design. Then \(\mathcal{D}_{m}(\omega_{N})\) cannot be \(m\)-stiff.
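The linear-system argument above is, in effect, an algorithm for computing the dual. A minimal sketch for the cross-polytope on \(S^{2}\) (so \(m=2\), and \(K_{2}=\{\pm 1/\sqrt{3}\}\), the zeros of \(P_{2}^{(2)}\)) recovers the eight vertices of the inscribed cube:

```python
import itertools
import numpy as np

omega = np.vstack([np.eye(3), -np.eye(3)])       # the 2-stiff cross-polytope
Y = omega[:3]                                    # a linearly independent subset
kappa = (1 / np.sqrt(3), -1 / np.sqrt(3))        # zeros of P_2^{(2)}

dual = []
for alpha in itertools.product(kappa, repeat=3): # m^{d+1} = 8 linear systems
    z = np.linalg.solve(Y, np.array(alpha))
    if not np.isclose(z @ z, 1.0):               # keep only solutions on S^2
        continue
    if len(np.unique(np.round(omega @ z, 9))) <= 2:
        dual.append(z)                           # z forms at most m products
print(len(dual), "dual points found")            # 8: the inscribed cube
```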
The statement below implies that the set of possible cardinalities of \(m\)-stiff configurations on \(S^{d}\), \(d\geq 2\) (provided that an \(m\)-stiff configuration exists on \(S^{d}\)), forms an additive semigroup (in particular, it is not bounded above). This is not the case for \(d=1\) in view of Proposition 6.3. In the case \(m=1\) and \(d\geq 2\), this semigroup is the set of all integers \(N\geq 2\), see Proposition 6.1.
**Proposition 6.6**.: _Suppose that for given \(m\geq 1\) and \(d\geq 2\), there exist \(m\)-stiff configurations on \(S^{d}\) of cardinalities \(N_{1}\) and \(N_{2}\). Then there is an \(m\)-stiff configuration on \(S^{d}\) of cardinality \(N_{1}+N_{2}\)._
Proposition 6.6 and Bezout's identity imply that if, for a given pair \((m,d)\), \(d\geq 2\), cardinalities of some two \(m\)-stiff configurations on \(S^{d}\) have greatest common divisor \(\delta\), then for any sufficiently large multiple \(N\) of \(\delta\), there exists an \(m\)-stiff configuration on \(S^{d}\) of cardinality \(N\).
Proof.: Let \(\omega_{N_{i}}\subset S^{d}\) be an \(m\)-stiff configuration of cardinality \(N_{i}\), \(i=1,2\), and let \(\mathbf{z}_{i}\in\mathcal{D}_{m}(\omega_{N_{i}})\), \(i=1,2\), be chosen so that \(\mathbf{z}_{1}\neq\mathbf{z}_{2}\) (if it happens that \(\mathbf{z}_{1}=\mathbf{z}_{2}\), we choose \(-\mathbf{z}_{2}\) instead of \(\mathbf{z}_{2}\)). We will construct an \(m\)-stiff configuration of cardinality \(N_{1}+N_{2}\). By Proposition 2.3, both vectors \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) form only dot products \(\kappa_{1}^{m},\ldots,\kappa_{m}^{m}\) with points from the corresponding configuration \(\omega_{N_{i}}\). Let \(H\) be the \(d\)-dimensional subspace of \(\mathbb{R}^{d+1}\), which is the perpendicular bisector for the line segment \([\mathbf{z}_{1},\mathbf{z}_{2}]\). Let \(r_{H}:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}\) be the reflection transformation about the subspace \(H\) and let \(U\) be its matrix (\(U\) is orthogonal). Then for every \(\mathbf{x}\in\omega_{N_{1}}^{\prime}:=r_{H}(\omega_{N_{1}})\), there is \(\mathbf{y}\in\omega_{N_{1}}\) such that \(\mathbf{x}=U\mathbf{y}\) and
\[\mathbf{x}\cdot\mathbf{z}_{2}=U\mathbf{y}\cdot\mathbf{z}_{2}=\mathbf{y}\cdot U ^{T}\mathbf{z}_{2}=\mathbf{y}\cdot\mathbf{z}_{1}=\kappa_{i}^{m}\quad\text{for some}\quad i=1,\ldots,m. \tag{13}\]
The set \(\omega_{N_{1}}^{\prime}\) is \(m\)-stiff. Denote by \(\mathbf{a}\in S^{d}\) a vector perpendicular to \(\mathbf{z}_{2}\), not contained in any subspace \(\text{span}\{\mathbf{w}-\mathbf{v}\}\), where \(\mathbf{w}\in\omega_{N_{2}}\) and \(\mathbf{v}\in\omega_{N_{1}}^{\prime}\), and \(\mathbf{a}\) is
not perpendicular to any vector from \(\omega_{N_{2}}\). Such a vector \(\mathbf{a}\) exists, since all these conditions delete a set of \((d-1)\)-dimensional measure zero from \(S^{d}\cap\{\mathbf{z}_{2}\}^{\perp}\).
Then \(L=\{\mathbf{a}\}^{\perp}\) is disjoint from \(\omega_{N_{2}}\). Furthermore, \(\mathbf{z}_{2}\in L\) and the configuration \(\omega_{N_{1}}^{\prime\prime}:=r_{L}(\omega_{N_{1}}^{\prime})\) is disjoint from \(\omega_{N_{2}}\). If it were not, then \(L\) would be the perpendicular bisector for a line segment whose one endpoint \(\mathbf{w}_{1}\) is in \(\omega_{N_{2}}\) and the other endpoint \(\mathbf{v}_{1}\) is in \(\omega_{N_{1}}^{\prime}\). We have \(\mathbf{w}_{1}\neq\mathbf{v}_{1}\), since, otherwise, \(\mathbf{w}_{1}\in L\) and, hence, \(\mathbf{a}\bot\mathbf{w}_{1}\). Then \(\mathbf{w}_{1}-\mathbf{v}_{1}\) is a non-zero vector perpendicular to \(L\) and \(\mathbf{a}\in\mathrm{span}\{\mathbf{w}_{1}-\mathbf{v}_{1}\}\), contradicting the choice of \(\mathbf{a}\).
Let \(V\) be the matrix of the reflection transformation \(r_{L}\). Then for every \(\mathbf{z}\in\omega_{N_{1}}^{\prime\prime}\), there is \(\mathbf{x}\in\omega_{N_{1}}^{\prime}\) such that \(\mathbf{z}=V\mathbf{x}\) and using (13) we have
\[\mathbf{z}\cdot\mathbf{z}_{2}=V\mathbf{x}\cdot\mathbf{z}_{2}=\mathbf{x}\cdot V ^{T}\mathbf{z}_{2}=\mathbf{x}\cdot\mathbf{z}_{2}=\kappa_{i}^{m}\quad\text{for some}\quad i=1,\dots,m.\]
Thus, \(\mathbf{z}_{2}\in\mathcal{D}_{m}(\omega_{N_{1}}^{\prime\prime}\cup\omega_{N_{2}})\) and \(\omega_{N_{1}}^{\prime\prime}\cup\omega_{N_{2}}\) is a disjoint union of two \((2m-1)\)-designs. Then it is also a \((2m-1)\)-design and, hence, is an \(m\)-stiff configuration of cardinality \(N_{1}+N_{2}\).
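The proof above is constructive, and its steps can be traced numerically; a sketch gluing two copies of the \(2\)-stiff cross-polytope on \(S^{2}\) (the choice of configurations and of \(\mathbf{z}_{1},\mathbf{z}_{2}\) is an arbitrary sample):

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(0)
cross = np.vstack([np.eye(3), -np.eye(3)])      # 2-stiff cross-polytope
w1, w2 = cross.copy(), cross.copy()
z1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)     # z1 in D_2(w1)
z2 = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)    # z2 in D_2(w2), z1 != z2

def reflect(W, u):
    """Reflection about the hyperplane through the origin with normal u."""
    u = u / np.linalg.norm(u)
    return W - 2.0 * np.outer(W @ u, u)

w1p = reflect(w1, z1 - z2)                      # r_H maps z1 to z2
a = np.cross(z2, rng.normal(size=3))            # generic vector perpendicular
a /= np.linalg.norm(a)                          # to z2
w1pp = reflect(w1p, a)                          # r_L fixes z2

union = np.vstack([w1pp, w2])
print(len(np.unique(np.round(union, 9), axis=0)))          # 12: disjoint union
G = union @ union.T                                        # 3-design check:
print([round(abs(eval_legendre(k, G).sum()), 9) for k in (1, 2, 3)])  # ~0
print(np.unique(np.round(union @ z2, 9)))                  # only +-1/sqrt(3)
```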
|
2303.17157 | Convergence of the CEM-GMsFEM for compressible flow in highly heterogeneous media | This paper presents and analyses a Constraint Energy Minimization Generalized Multiscale Finite Element Method (CEM-GMsFEM) for solving single-phase non-linear compressible flows in highly heterogeneous media. The construction of CEM-GMsFEM hinges on two crucial steps: First, the auxiliary space is constructed by solving local spectral problems, where the basis functions corresponding to small eigenvalues are captured. Then the basis functions are obtained by solving local energy minimization problems over the oversampling domains using the auxiliary space. The basis functions have exponential decay outside the corresponding local oversampling regions. The convergence of the proposed method is provided, and we show that this convergence only depends on the coarse grid size and is independent of the heterogeneities. An online enrichment guided by an _a posteriori_ error estimator is developed to enhance computational efficiency. Several numerical experiments on a three-dimensional case are presented to confirm the theoretical findings, illustrating the performance of the method and giving efficient and accurate numerical results. | Leonardo A. Poveda, Shubin Fu, Eric T. Chung, Lina Zhao | 2023-03-30T05:30:17Z | http://arxiv.org/abs/2303.17157v1 |
# Convergence of the CEM-GMsFEM for compressible flow in highly heterogeneous media
###### Abstract
This paper presents and analyses a Constraint Energy Minimization Generalized Multiscale Finite Element Method (CEM-GMsFEM) for solving single-phase nonlinear compressible flows in highly heterogeneous media. The construction of CEM-GMsFEM hinges on two crucial steps: First, the auxiliary space is constructed by solving local spectral problems, where the basis functions corresponding to small eigenvalues are captured. Then the basis functions are obtained by solving local energy minimization problems over the oversampling domains using the auxiliary space. The basis functions have exponential decay outside the corresponding local oversampling regions. The convergence of the proposed method is provided, and we show that this convergence only depends on the coarse grid size and is independent of the heterogeneities. An online enrichment guided by an _a posteriori_ error estimator is developed to enhance computational efficiency. Several numerical experiments on a three-dimensional case are presented to confirm the theoretical findings, illustrating the performance of the method and giving efficient and accurate numerical results.
**Keywords:** Constraint energy minimization, multiscale finite element methods, compressible flow, highly heterogeneous, local spectral problems
## 1 Introduction
The numerical solution of non-linear partial differential equations defined on domains with multiscale and heterogeneous properties is an active research subject in the scientific community. The subject is related to several engineering applications, such as composite materials, porous media flow, and fluid mechanics. A common feature for all these applications is that they are very computationally challenging and often impossible to solve within an acceptable tolerance using standard fine-scale approximations due to the disparity between scales that need to be represented and the inherent nonlinearities. For this reason, coarse-grid computational models are often used. These approaches are usually referred to as multiscale methods in the literature, among which we may mention: Multiscale Finite Element Method [17], the Variational Multiscale Method [18],
Mixed Multiscale Finite Element Method [6], Mixed Mortar Multiscale Finite Element Method [2], the two-scale Finite Element Method [24], and the Multiscale Finite Volume method [19, 20]. The aforementioned methods share model reduction techniques, using different structures to find multiscale solutions, especially in many practical applications such as fluid flow simulations, for instance, [1, 5, 25, 21, 16, 27, 15, 26]. In particular, we consider an extended version of the MsFEM, known as the Generalized Multiscale Finite Element Method (GMsFEM), first introduced in [12, 13, 8, 11]. The main idea of GMsFEM is to construct localized basis functions by solving local spectral problems that are used to approximate the solution on a coarse grid incorporating fine-scale features. Following this, we construct an auxiliary space associated with local spectral problems in the coarse grid. The first few eigenfunctions corresponding to small eigenvalues (the convergence depends on the eigenvalue decay of the local spectral problems [12]) are considered as the multiscale basis functions. In this paper, we extend the Constraint Energy Minimization Generalized Multiscale Finite Element Method (CEM-GMsFEM) developed in [9, 22] to solve single-phase nonlinear compressible flow. The key ideas of the method can be summarized as follows: First, we construct the auxiliary basis functions by solving local spectral problems. Then, by using oversampling techniques and localization (cf. [23]), we minimize an appropriate energy functional subject to constraints over the oversampling regions to find the required basis functions. Finally, the resulting basis functions are shown to have exponential decay away from the target coarse element, and therefore, they are localizable. We then rigorously analyze the convergence error estimates for the proposed scheme. Our theory indicates that the convergence rate behaves as \(H/\Lambda\), where \(H\) denotes the coarse-grid size and \(\Lambda\) is proportional to the minimum (taken over all coarse regions) of the eigenvalues whose corresponding eigenvectors are not included in the coarse space. Since the problems under consideration are nonlinear, some novel methodologies shall be incorporated to overcome the difficulties present in the analysis. Several numerical experiments are carried out to demonstrate the capabilities and efficiency of the proposed method.
The outline of the paper is as follows. In Section 2, we briefly derive a mathematical model for compressible fluid flows in porous media. Section 3 is devoted to constructing the offline multiscale space and the framework of CEM-GMsFEM. Section 4 presents our convergence analysis for the proposed method. Numerical experiments are presented in Section 5. Finally, concluding remarks and future perspectives are given in Section 6.
## 2 Mathematical model for compressible fluid flow
In this section, we consider the governing equations of the single-phase, non-linear compressible fluid flow processes in a porous medium that are defined by
\[\left\{\begin{aligned} \partial_{t}(\phi\rho)-\nabla\cdot \left(\frac{\kappa}{\mu}\rho\nabla p\right)&=q,\quad\text{in }\Omega_{T}:=\Omega\times(0,T], \quad T>0,\\ \frac{\kappa}{\mu}\rho\nabla p\cdot n&=0,\quad \text{on }\Gamma_{N}\times(0,T],\\ p&=p^{D},\quad\text{on }\Gamma_{D}\times(0,T],\\ p&=p_{0},\quad\text{on }\Omega\times\{t=0\}.\end{aligned}\right. \tag{2.1}\]
For simplicity of presentation, let \(\Omega\subset\mathbb{R}^{d}\) be the computational domain with a boundary defined by \(\partial\Omega=\Gamma_{D}\cup\Gamma_{N}\). We will henceforth neglect gravity effects and capillary forces and assume that \(\phi\), the porosity of the medium, is constant. We aim to seek the fluid pressure \(p\); here \(\kappa\) denotes the permeability field, which may be highly heterogeneous and satisfies \(\kappa_{0}\leq\kappa\leq\kappa_{1}\), where \(0<\kappa_{0}<\kappa_{1}<\infty\), and \(\mu\) is the constant fluid viscosity. The fluid density \(\rho\) is a function
of the fluid pressure \(p\) defined as
\[\rho(p)=\rho_{\rm ref}e^{c(p-p_{\rm ref})}, \tag{2.2}\]
where \(\rho_{\rm ref}\) is the given reference density and \(p_{\rm ref}\) is the reference pressure. Finally, \(n\) denotes the outward unit-normal vector on \(\partial\Omega\).
For a sub-domain \(D\subset\Omega\), let \({\rm V}:={\rm H}^{1}(D)\) be the standard Sobolev space endowed with the norm \(\|\cdot\|_{1,D}\). We further denote by \((\cdot,\cdot)\) and \(\|\cdot\|_{0,D}\) the inner product and norm, respectively, in \({\rm L}^{2}(D)\). The subscript \(D\) will be omitted whenever \(D=\Omega\). In addition, we use the space \({\rm V}_{0}:={\rm H}^{1}_{0}(D)\), which is a subspace of \({\rm V}\) made of functions that vanish on \(\Gamma_{D}\). Finally, let \({\rm L}^{2}(0,T;{\rm L}^{2}(D))\) denote the set of functions with norm
\[\|v\|_{{\rm L}^{2}(0,T;{\rm L}^{2}(D))}=\left(\int_{0}^{T}\|v(\cdot,t)\|_{0,D }^{2}dt\right)^{1/2}.\]
Throughout the paper, \(a\preceq b\) means there exists a positive constant \(C\) independent of the mesh size such that \(a\leq Cb\).
### A finite element approximation
In this subsection, we introduce the notions of fine and coarse grids to discretize problem (2.1). Let \(\mathcal{T}^{H}\) be a usual conforming partition of the computational domain \(\Omega\) into coarse blocks \(K\in\mathcal{T}^{H}\) with diameter \(H\). Then, we denote this partition as the coarse grid and assume that each coarse element is partitioned into a connected union of fine-grid blocks. In this case, the fine-grid partition will be denoted by \(\mathcal{T}^{h}\) and is, by definition, a refinement of the coarse grid \(\mathcal{T}^{H}\), such that \(h\ll H\). We shall denote \(\{x_{i}\}_{i=1}^{N_{\rm coarse}}\) as the vertices of the coarse grid \(\mathcal{T}^{H}\), where \(N_{\rm coarse}\) denotes the number of coarse nodes. We define the neighborhood of the node \(x_{i}\) by
\[\omega_{i}=\bigcup\{K_{j}\in\mathcal{T}^{H}:x_{i}\in\overline{K}_{j}\}.\]
In addition, for the CEM-GMsFEM considered in this paper, given a coarse block \(K_{i}\), we denote by \(K_{i,m}\subset\Omega\) the oversampling region obtained by enlarging \(K_{i}\) with \(m\geq 1\) coarse grid layers; see Fig. 1.
Figure 1: Illustration of the 2D multiscale grid with a typical coarse element \(K_{i}\) and oversampling domain \(K_{i,2}\), the fine grid element and neighborhood \(\omega_{i}\) of the node \(x_{i}\).

We consider the linear finite element space \(\mathrm{V}^{h}\) associated with the grid \(\mathcal{T}^{h}\), where the basis functions in this space are the standard Lagrange basis functions defined as \(\{\eta^{i}\}_{i=1}^{N_{\text{fine}}}\). Then, the semi-discrete finite element approximation to (2.1) on the fine grid is to find \(p^{h}\in\mathrm{V}^{h}\) such that
\[(\phi\partial_{t}\rho(p^{h}),v)+\left(\tfrac{\kappa}{\mu}\rho(p^{h})\nabla p^{h},\nabla v\right)=(q,v),\quad\text{for each }v\in\mathrm{V}^{h}. \tag{2.3}\]
We can now define the fully-discrete scheme for the discrete formulation (2.3). Let \(0=t_{0}<t_{1}<\cdots<t_{N_{\text{time}}-1}<t_{N_{\text{time}}}=T\) be a partition of the interval \([0,T]\), with time-step size given by \(\Delta_{n}=t_{n}-t_{n-1}\), for \(n=1,\ldots,N_{\text{time}}\), where \(N_{\text{time}}\) is an integer. The backward Euler time integration leads to finding \(p_{n}^{h}\) such that
\[(\phi\rho(p_{n}^{h}),v)-(\phi\rho(p_{n-1}^{h}),v)+\Delta_{n}\left(\tfrac{ \kappa}{\mu}\rho(p_{n}^{h})\nabla p_{n}^{h},\nabla v\right)=\Delta_{n}(q,v), \quad\text{for each }v\in\mathrm{V}^{h}. \tag{2.4}\]
Linearization of (2.4) via Newton-Raphson iteration yields an iterative linear matrix problem,
\[\mathbf{J}^{n,k}\boldsymbol{\delta}_{p^{n,k}}=-\mathbf{F}^{n,k},\]
where \(\mathbf{J}^{n,k}:=[J_{ji}^{n,k}]_{i,j=1}^{N_{\text{fine}}}\) denotes the Jacobi matrix, with entries
\[J_{ji}^{n,k}:=(\phi\rho(p_{n}^{h,k})\eta_{i},\eta_{j})+\Delta_{n}\left(\frac{ \kappa}{\mu}\rho(p_{n}^{h,k})\nabla\eta_{i},\nabla\eta_{j}\right)+\Delta_{n} \left(c\frac{\kappa}{\mu}\eta_{i}\rho(p_{n}^{h,k})\sum_{i}p_{n}^{i,k}\nabla \eta_{i},\nabla\eta_{j}\right),\]
\(\mathbf{F}^{n,k}:=[F_{j}^{n,k}]_{j=1}^{N_{\text{fine}}}\) is the residual with entries
\[F_{j}^{n,k} =\left(\phi\rho\left(\sum_{i=1}^{N_{\text{fine}}}p_{n}^{i,k}\eta _{i}\right),\eta_{j}\right)-\left(\phi\rho\left(\sum_{i=1}^{N_{\text{fine}}}p _{n-1}^{i}\eta_{i}\right),\eta_{j}\right)+\Delta_{n}\left(\frac{\kappa}{\mu} \rho\left(\sum_{i=1}^{N_{\text{fine}}}p_{n}^{i,k}\eta_{i}\right)\sum_{i=1}^{N _{\text{fine}}}p_{n}^{i,k}\nabla\eta_{i},\nabla\eta_{j}\right)\] \[\quad-\Delta_{n}(q,\eta_{j}),\]
and \(p_{n}^{k+1}=p_{n}^{k}+\boldsymbol{\delta}_{p^{n,k}}\), where \(p_{n}^{h,k}=\sum_{i}p_{n}^{i,k}\eta_{i}\) and \(p_{n-1}^{h}=\sum_{i}p_{n-1}^{i}\eta_{i}\), with \(k+1\) and \(k\) denoting the new and old Newton iterations. Here \(\{\eta^{i}\}_{i=1}^{N_{\text{fine}}}\) represents the finite element basis functions for \(\mathrm{V}^{h}\).
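The Newton loop above can be made concrete on a drastically simplified 1D analogue of (2.4). The sketch below is a minimal, self-contained Python illustration: it uses lumped masses, toy values for \(\phi\), \(\kappa/\mu\), \(c\), \(q\) and the boundary data (all hypothetical), and a finite-difference Jacobian as a stand-in for the analytic \(\mathbf{J}^{n,k}\):

```python
import numpy as np

rho_ref, p_ref, c = 1.0, 1.0, 0.1
rho = lambda p: rho_ref * np.exp(c * (p - p_ref))   # density law (2.2)

n, dt = 20, 0.05                 # fine cells and time-step size (toy values)
h = 1.0 / n
phi, kappa_mu, q, pD = 0.2, 1.0, 1.0, 1.0
w = np.full(n + 1, h); w[0] = w[-1] = h / 2         # lumped mass weights

def residual(p, p_prev):
    """Residual of the backward Euler step (2.4) with lumped masses."""
    F = phi * w * (rho(p) - rho(p_prev)) - dt * q * w
    for e in range(n):                              # nonlinear flux terms
        re = kappa_mu * rho(0.5 * (p[e] + p[e + 1]))  # midpoint density
        flux = dt * re * (p[e + 1] - p[e]) / h
        F[e] -= flux
        F[e + 1] += flux
    F[0] = p[0] - pD             # Dirichlet at x = 0; zero flux at x = 1
    return F                     # is the natural boundary condition

def newton_step(p_prev, tol=1e-9, maxit=25):
    p = p_prev.copy()
    for _ in range(maxit):
        F = residual(p, p_prev)
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((n + 1, n + 1))                # finite-difference
        for j in range(n + 1):                      # Jacobian, standing in
            dp = np.zeros(n + 1); dp[j] = 1e-7      # for the analytic J^{n,k}
            J[:, j] = (residual(p + dp, p_prev) - F) / 1e-7
        p = p + np.linalg.solve(J, -F)              # p^{k+1} = p^k + delta
    return p

p = np.full(n + 1, pD)           # initial pressure
for step in range(10):
    p = newton_step(p)
print("pressure range after 10 steps:", p.min(), p.max())
```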
## 3 Construction of CEM-GMsFEM basis functions
This section will describe the construction of CEM-GMsFEM basis functions using the framework of [9] and [22]. This procedure can be divided into two stages. The first stage involves constructing the auxiliary spaces by solving a local spectral problem in each coarse element \(K\), see [12]. The second stage is to provide the multiscale basis functions by solving some local constraint energy minimization problems in oversampling regions.
### Auxiliary basis function
In this subsection, we present the construction of the auxiliary multiscale basis functions by solving the local eigenvalue problem for each coarse element \(K_{i}\). We consider \(\mathrm{V}(K_{i}):=\mathrm{V}\big{|}_{K_{i}}\) the restriction of the space \(\mathrm{V}\) to the coarse element \(K_{i}\). We solve the following local eigenvalue problem: find \(\{\lambda_{j}^{(i)},\varphi_{j}^{(i)}\}\) such that
\[a_{i}(\varphi_{j}^{(i)},w)=\lambda_{j}^{(i)}s_{i}(\varphi_{j}^{(i)},w),\quad \text{for each }w\in\mathrm{V}(K_{i}), \tag{3.1}\]
where
\[a_{i}(v,w):=\int_{K_{i}}\kappa\rho(p_{0})\nabla v\cdot\nabla wd\mathrm{x},\quad s_ {i}(v,w):=\int_{K_{i}}\widetilde{\kappa}vwd\mathrm{x}.\]
Here \(\widetilde{\kappa}=\rho(p_{0})\kappa\sum_{i=1}^{N_{\mathrm{coarse}}}|\nabla \chi_{i}|^{2}\), where \(N_{\mathrm{coarse}}\) is the total number of neighborhoods, \(p_{0}\) is the pressure \(p\) at the initial time and \(\{\chi_{i}\}\) is a set of partition of unity functions for the coarse grid \(\mathcal{T}^{H}\), see [3]. The problem defined above is solved on the fine grid in the actual computation. We assume that the eigenfunctions satisfy the normalized condition \(s_{i}(\varphi_{j}^{(i)},\varphi_{j}^{(i)})=1\). We let \(\lambda_{j}^{(i)}\) be the eigenvalues of (3.1) arranged in ascending order. We shall use the first \(L_{i}\) eigenfunctions to construct the local auxiliary multiscale space \(\mathrm{V}_{\mathrm{aux}}^{(i)}:=\{\varphi_{j}^{(i)}:1\leq j\leq L_{i}\}\). We can define the global auxiliary multiscale space as \(\mathrm{V}_{\mathrm{aux}}:=\bigoplus_{i=1}^{N_{\mathrm{coarse}}}\mathrm{V}_ {\mathrm{aux}}^{(i)}\).
For the local auxiliary space \(\mathrm{V}_{\mathrm{aux}}^{(i)}\), the bilinear form \(s_{i}\) given above defines an inner product with norm \(\|v\|_{s(K_{i})}=s_{i}(v,v)^{1/2}\). Then, we can define the inner product and norm for the global auxiliary multiscale space \(\mathrm{V}_{\mathrm{aux}}\), which are defined by
\[s(v,w)=\sum_{i=1}^{N_{\mathrm{coarse}}}s_{i}(v,w),\quad\|v\|_{s}:=s(v,v)^{1/2 },\quad\text{for each }v,w\in\mathrm{V}_{\mathrm{aux}}.\]
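To make the construction concrete, the following minimal Python sketch assembles a 1D analogue of the local spectral problem (3.1) on one coarse element and extracts the first eigenpairs with a dense generalized eigensolver; the permeability field and all sizes are hypothetical toy values:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
H, nf = 0.1, 50                             # coarse element size, fine cells
h = H / nf
kappa = np.exp(2.0 * rng.normal(size=nf))   # toy high-contrast permeability
rho0 = 1.0                                  # rho(p_0), frozen at initial time

nd = nf + 1
A = np.zeros((nd, nd))                      # a_i(v, w): weighted stiffness
S = np.zeros((nd, nd))                      # s_i(v, w): lumped weighted mass
for e in range(nf):
    ke = kappa[e] * rho0 / h
    A[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])
    kt = rho0 * kappa[e] / H ** 2           # kappa_tilde with |grad chi| ~ 1/H
    S[e, e] += kt * h / 2
    S[e + 1, e + 1] += kt * h / 2

Li = 3                                      # number of retained eigenpairs
lam, vecs = eigh(A, S)                      # ascending eigenvalues of (3.1);
aux = vecs[:, :Li]                          # eigh returns S-orthonormal
print("first eigenvalues:", lam[:Li + 1])   # columns, matching s_i(., .) = 1
```

In this sketch, the last printed eigenvalue plays the role of \(\lambda_{L_{i}+1}^{(i)}\), the quantity entering \(\Lambda=\min_{i}\lambda_{L_{i}+1}^{(i)}\) in the convergence analysis of Section 4.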
To construct the CEM-GMsFEM basis functions, we use the following definition.
**Definition 3.1** ([9]).: Given a function \(\varphi_{j}^{(i)}\in\mathrm{V}_{\mathrm{aux}}\), if a function \(\psi\in\mathrm{V}\) satisfies
\[s(\psi,\varphi_{j}^{(i)})=1,\quad s(\psi,\varphi_{j^{\prime}}^{(i^{\prime})}) =0,\quad\text{if }j^{\prime}\neq j\text{ or }i^{\prime}\neq i,\]
then we say that \(\psi\) is \(\varphi_{j}^{(i)}\)-orthogonal, where \(s(v,w)=\sum_{i=1}^{N_{\mathrm{coarse}}}s_{i}(v,w)\).
Now, we define \(\pi:\mathrm{V}\to\mathrm{V}_{\mathrm{aux}}\) to be the projection with respect to the inner product \(s(v,w)\). So, \(\pi\) is defined by
\[\pi(v):=\sum_{i=1}^{N_{\mathrm{coarse}}}\pi_{i}(v)=\sum_{i=1}^{N_{\mathrm{ coarse}}}\sum_{j=1}^{L_{i}}s_{i}(v,\varphi_{j}^{(i)})\varphi_{j}^{(i)},\quad\text{ for each }v\in\mathrm{V},\]
where \(\pi_{i}:L^{2}(K_{i})\to\mathrm{V}_{\mathrm{aux}}^{(i)}\) denotes the projection with respect to inner product \(s_{i}(\cdot,\cdot)\). The null space of the operator \(\pi\) is defined by \(\widetilde{\mathrm{V}}=\{v\in\mathrm{V}:\pi(v)=0\}\). Now, we will construct the multiscale basis functions. Given a coarse block \(K_{i}\), we denote the oversampling region \(K_{i,m}\subset\Omega\) obtained by enlarging \(K_{i}\) with an arbitrary number of coarse grid layers \(m\geq 1\), see Figure 1. Let \(\mathrm{V}_{0}(K_{i,m}):=\mathrm{H}_{0}^{1}(K_{i,m})\). Then, we define the multiscale basis function
\[\psi_{j,\mathrm{ms}}^{(i)}=\mathrm{argmin}\{a(\psi,\psi):\psi\in\mathrm{V}_{0 }(K_{i,m}),\,\psi\text{ is }\varphi_{j}^{(i)}\text{-orthogonal}\}, \tag{3.2}\]
where \(\mathrm{V}(K_{i,m})\) is the restriction of \(\mathrm{V}\) to \(K_{i,m}\) and \(\mathrm{V}_{0}(K_{i,m})\) is the subspace of \(\mathrm{V}(K_{i,m})\) with zero trace on \(\partial K_{i,m}\). The multiscale finite element space \(\mathrm{V}^{\mathrm{ms}}\) is defined by
\[\mathrm{V}^{\mathrm{ms}}=\mathrm{span}\{\psi_{j,\mathrm{ms}}^{(i)}:1\leq j\leq L _{i},1\leq i\leq N_{\mathrm{coarse}}\}.\]
By introducing the Lagrange multiplier, the problem (3.2) is equivalent to the explicit form: find \(\psi_{j,\mathrm{ms}}^{(i)}\in\mathrm{V}_{0}(K_{i,m})\), \(\lambda\in\mathrm{V}_{\mathrm{aux}}^{(i)}(K_{i})\) such that
\[\begin{cases}a(\psi_{j,\mathrm{ms}}^{(i)},\eta)+s(\eta,\lambda)&=0,\quad\text{ for all }\eta\in\mathrm{V}(K_{i,m}),\\ s(\psi_{j,\mathrm{ms}}^{(i)}-\varphi_{j}^{(i)},\nu)&=0,\quad\text{for all }\nu\in \mathrm{V}_{\mathrm{aux}}^{(i)}(K_{i,m}),\end{cases}\]
where \(\mathrm{V}^{(i)}_{\mathrm{aux}}(K_{i,m})\) is the union of all local auxiliary spaces for \(K_{i}\subset K_{i,m}\).
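Schematically, each multiscale basis function comes from one saddle-point solve. The following sketch solves the block system with random stand-in matrices of the correct structure (an SPD \(A\) for \(a(\cdot,\cdot)\) and a constraint matrix \(C\) for the \(s\)-pairings); it illustrates the linear algebra only, not an actual discretization:

```python
import numpy as np

rng = np.random.default_rng(2)
nf, naux, j = 40, 5, 2                    # fine dofs, aux dofs, target index j
B = rng.normal(size=(nf, nf))
A = B @ B.T + nf * np.eye(nf)             # SPD stand-in for a(., .)
C = rng.normal(size=(naux, nf))           # (C v)_k stands in for s(v, phi_k)

K = np.block([[A, C.T], [C, np.zeros((naux, naux))]])
rhs = np.zeros(nf + naux)
rhs[nf + j] = 1.0                         # s(psi, phi_j) = 1, zero otherwise
sol = np.linalg.solve(K, rhs)
psi, lam = sol[:nf], sol[nf:]             # basis function and multiplier
print("constraint residual:", np.abs(C @ psi - rhs[nf:]).max())   # ~ 0
```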
Thus, the semi-discrete multiscale approximation reads as follows: find \(p^{\mathrm{ms}}\in\mathrm{V}^{\mathrm{ms}}\) such that
\[(\phi\partial_{t}\rho(p^{\mathrm{ms}}),v)+\left(\tfrac{\kappa}{\mu}\rho(p^{ \mathrm{ms}})\nabla p^{\mathrm{ms}},\nabla v\right)=(q,v),\quad\text{for each }v\in\mathrm{V}^{\mathrm{ms}}. \tag{3.3}\]
Using the backward Euler time-stepping scheme, we have a full-discrete formulation: find \(p_{n}^{\mathrm{ms}}\in\mathrm{V}^{\mathrm{ms}}\) such that
\[(\phi\rho(p_{n}^{\mathrm{ms}}),v)-(\phi\rho(p_{n-1}^{\mathrm{ms}}),v)+\Delta_{ n}\left(\tfrac{\kappa}{\mu}\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}}, \nabla v\right)=\Delta_{n}(q,v),\quad\text{for each }v\in\mathrm{V}^{ \mathrm{ms}}. \tag{3.4}\]
## 4 Convergence analysis
In this section, we establish the estimates of the convergence order of the proposed method.
### Error estimates
In this subsection, we will present the convergence error estimates for the semi-discrete scheme (3.3). The analysis consists of two main steps. First, we derive the error estimate for the difference between the exact solution and its corresponding elliptic projection. Second, we estimate the difference between the solution of (2.1) and the solution of (3.3) in terms of the difference between the exact solution and the elliptic projection of the solution of problem (2.1).
To begin, we let \(\hat{p}\in\mathrm{V}^{\mathrm{ms}}\) be the elliptic projection of the function \(p\in\mathrm{V}\) that is defined by
\[(\tfrac{\kappa}{\mu}\rho(p_{0})\nabla(p-\hat{p}),\nabla w)=0,\quad\text{for each }w\in\mathrm{V}^{\mathrm{ms}}. \tag{4.1}\]
The following lemma gives us the error bound of \(\hat{p}\) for the nonlinear parabolic problem.
**Lemma 4.1**.: _Let \(p\) be the solution of (2.3). For each \(t>0\), we define the elliptic projection \(\hat{p}\in\mathrm{V}^{\mathrm{ms}}\) that satisfies (4.1). Then,_
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})(t)\|_{0}\preceq\Lambda^{-1/2}H \|\kappa^{-1/2}(q-\phi\partial_{t}\rho(p))(t)\|_{0},\]
_where \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\)._
Proof.: Let \(\hat{p}\in\mathrm{V}^{\mathrm{ms}}\) be the projection of \(p\). By the boundedness of \(\rho\) and the orthogonality property (4.1), we can write
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^{2}\preceq \int_{\Omega}(\tfrac{\kappa}{\mu})\rho(p_{0})|\nabla(p-\hat{p})|^{2}d\mathrm{x} =(\tfrac{\kappa}{\mu}\rho(p_{0})\nabla(p-\hat{p}),\nabla(p-\hat{p}))\] \[=(\tfrac{\kappa}{\mu}\rho(p_{0})\nabla p,\nabla(p-\hat{p})).\]
Invoking again the boundedness of \(\rho\), we have
\[|(\tfrac{\kappa}{\mu}\rho(p_{0})\nabla p,\nabla(p-\hat{p}))|\preceq|(\tfrac{ \kappa}{\mu}\rho(p)\nabla p,\nabla(p-\hat{p}))|.\]
Now, from problem (2.3), we get that
\[(\tfrac{\kappa}{\mu}\rho(p)\nabla p,\nabla(p-\hat{p}))=(q-\phi\partial_{t} \rho(p),p-\hat{p}),\quad\text{for all }t>0.\]
Therefore, we arrive at
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^{2}\preceq(q-\phi\partial _{t}\rho(p),p-\hat{p})\leq\|\widetilde{\kappa}^{-1/2}(q-\phi\partial_{t}\rho(p ))\|_{0}\|p-\hat{p}\|_{s},\quad\text{for all }t>0. \tag{4.2}\]
Since \(p-\hat{p}\in\mathrm{V}\), it follows that \(\pi(p-\hat{p})=0\). According to [9], the coarse blocks \(K_{i}\) with \(i=1,\ldots,N_{\mathrm{coarse}}\) are disjoint, so we obtain that \(\pi_{i}(p-\hat{p})=0\), for all \(i=1,2,\ldots,N_{\mathrm{coarse}}\). Thus, the local spectral problem (3.1) yields that
\[\|p-\hat{p}\|_{s}^{2}=\sum_{i=1}^{N_{\mathrm{coarse}}}\|p-\hat{p}\|_{s_{i}}^{2} =\sum_{i=1}^{N_{\mathrm{coarse}}}\|(I-\pi_{i})(p-\hat{p})\|_{s_{i}}^{2}\preceq \frac{1}{\Lambda}\sum_{i=1}^{N_{\mathrm{coarse}}}\|(\tfrac{\kappa}{\mu})^{1/ 2}\nabla(p-\hat{p})\|_{0,K_{i}}^{2}, \tag{4.3}\]
where \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\). Therefore, by combining (4.2) and (4.3), and using the fact \(|\nabla\chi_{i}|=\mathcal{O}(H^{-1})\), we obtain
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}\preceq\Lambda^{-1/2}H\| \kappa^{-1/2}(q-\phi\partial_{t}\rho(p))\|_{0}.\]
This completes the proof.
The above estimate is the essence of the following result.
**Lemma 4.2**.: _Under the assumptions of Lemma 4.1, the following estimates hold_
\[\|p-\hat{p}\|_{0} \preceq\Lambda^{-1}H^{2}\|\kappa^{-1/2}(q-\phi\partial_{t}\rho( p))\|_{0},\] \[\|\partial_{t}(p-\hat{p})\|_{0} \preceq\Lambda^{-1}H^{2}\|\kappa^{-1/2}\partial_{t}(q-\phi \partial_{t}\rho(p))\|_{0}.\]
Proof.: First, we will invoke the duality argument. For each \(t>0\), we define \(w\in\mathrm{V}_{0}\) by
\[\int_{\Omega}\kappa\rho(p_{0})\nabla w\cdot\nabla vd\mathrm{x}=\int_{\Omega}( p-\hat{p})vd\mathrm{x},\quad\text{for each $v\in\mathrm{V}_{0}$},\]
and consider \(\hat{w}\) as the elliptic projection of \(w\) in \(\mathrm{V}^{\mathrm{ms}}\). By Lemma 4.1, for \(v=p-\hat{p}\), we have
\[\|p-\hat{p}\|_{0}^{2}=\int_{\Omega}\kappa\rho(p_{0})\nabla w\cdot \nabla(p-\hat{p})d\mathrm{x} =\int_{\Omega}\kappa\rho(p_{0})\nabla(w-\hat{w})\cdot\nabla(p- \hat{p})d\mathrm{x}\] \[\preceq\int_{\Omega}\tfrac{\kappa}{\mu}\rho(p_{0})\nabla(w-\hat{w })\cdot\nabla(p-\hat{p})d\mathrm{x}\] \[\leq\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(w-\hat{w})\|_{0}\|( \tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}\] \[\preceq\left(H\Lambda^{-1/2}\max\{\kappa^{-1/2}\}\|p-\hat{p}\|_{0 }\right)\] \[\quad\times\left(H\Lambda^{-1/2}\|\kappa^{-1/2}(q-\phi\partial_{t }\rho(p))\|_{0}\right).\]
Hence, we have
\[\|p-\hat{p}\|_{0}\preceq\Lambda^{-1}H^{2}\|\kappa^{-1/2}(q-\phi\partial_{t} \rho(p))\|_{0}.\]
By a similar computation, we can obtain the second estimate. This completes the proof.
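Both lemmas hinge on the spectral constant \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\). As a rough illustration (not the paper's Matlab implementation), the following Python sketch assembles \(\Lambda\) from local matrix pairs \((A_{i},S_{i})\) of the spectral problem (3.1); the matrix names and the synthetic data are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def spectral_constant(local_pairs, num_kept):
    """Lambda = min_i lambda_{L_i+1}^{(i)} over the local spectral problems.

    local_pairs: list of (A_i, S_i) matrix pairs for the generalized
                 eigenproblems A_i v = lambda S_i v on each neighborhood.
    num_kept:    list of L_i, the number of eigenvectors kept per problem.
    """
    lambdas = []
    for (A, S), L in zip(local_pairs, num_kept):
        # smallest L_i + 1 eigenvalues of the generalized eigenproblem
        vals = eigsh(A, k=L + 1, M=S, which="SM", return_eigenvectors=False)
        lambdas.append(np.sort(vals)[L])  # the first discarded eigenvalue
    return min(lambdas)

# Synthetic demonstration with two neighborhoods (illustrative only).
rng = np.random.default_rng(0)
def spd(n):  # random symmetric positive-definite matrix
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

pairs = [(spd(40), spd(40)), (spd(40), spd(40))]
print(spectral_constant(pairs, num_kept=[3, 3]))
```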
We will derive an error estimate for the difference between the solution of (2.1) and the CEM-GMsFEM solution of (3.3) using the framework of [15].
**Theorem 4.3**.: _Let \(p\) be the solution obtained from (2.3), \(p^{\mathrm{ms}}\in\mathrm{V}^{\mathrm{ms}}\) be the multiscale solution of (3.3) using CEM-GMsFEM and \(\hat{p}\) be an elliptic projection of \(p\) in \(\mathrm{V}^{\mathrm{ms}}\). Then, the following error estimate holds_
\[\|(p-p^{\mathrm{ms}})(t)\|_{0}^{2}+\int_{0}^{T}\|(\tfrac{\kappa}{ \mu})^{1/2}\nabla(p-p^{\mathrm{ms}})\|_{0}^{2}dt \preceq\Lambda^{-1}H^{2}\left(\|\kappa^{-1/2}\partial_{t}(q-\phi \partial_{t}\rho(p))(t)\|_{0}^{2}\right.\] \[+\left.\int_{0}^{T}\|\kappa^{-1/2}\partial_{t}(q-\phi\partial_{t} \rho(p))\|_{0}^{2}dt\right)\] \[+\|(\hat{p}-p^{\mathrm{ms}})(0)\|_{0}^{2}.\]
Proof.: Subtracting (3.3) from (2.3), and using (4.7), we have that
\[(\phi\partial_{t}\rho(p),v)+\left(\tfrac{\kappa}{\mu}\rho(p)\nabla p,\nabla v \right)-(\phi\partial_{t}\rho(p^{\text{ms}}),v)-\left(\tfrac{\kappa}{\mu}\rho(p ^{\text{ms}})\nabla p^{\text{ms}},\nabla v\right)=0,\quad\text{for each }v\in\text{V}^{\text{ms}}.\]
Since \(\hat{p}\in\text{V}^{\text{ms}}\), we may put \(v=\hat{p}-p^{\text{ms}}\), and it follows that
\[\underbrace{(\phi\partial_{t}(\rho(p)-\rho(p^{\text{ms}})),\hat{p}-p^{\text{ ms}})}_{I_{1}}+\underbrace{\left(\tfrac{\kappa}{\mu}(\rho(p)\nabla p-\rho(p^{ \text{ms}})\nabla p^{\text{ms}}),\nabla(\hat{p}-p^{\text{ms}})\right)}_{I_{2}}=0. \tag{4.4}\]
For \(I_{1}\), we rewrite
\[I_{1}=\underbrace{(\phi\partial_{t}(\rho(\hat{p})-\rho(p^{\text{ms}})),\hat{ p}-p^{\text{ms}})}_{I_{3}}+\underbrace{(\phi\partial_{t}(\rho(p)-\rho(\hat{p})), \hat{p}-p^{\text{ms}})}_{I_{4}}. \tag{4.5}\]
For \(I_{3}\), we obtain
\[I_{3} =\frac{d}{dt}\int_{\Omega}\phi\int_{0}^{\hat{p}-p^{\text{ms}}}\rho^{\prime}(\hat{p}+\xi)\xi d\xi dx-\underbrace{\int_{\Omega}\phi\int_{0}^{\hat{p}-p^{\text{ms}}}\rho^{\prime\prime}(\hat{p}+\xi)\partial_{t}\hat{p}\xi d\xi dx}_{I_{5}}\] \[\quad+\underbrace{\int_{\Omega}\phi(\rho^{\prime}(p^{\text{ms}})-\rho^{\prime}(\hat{p}))\partial_{t}\hat{p}(\hat{p}-p^{\text{ms}})dx}_{I_{6}}.\]
Following [25, 21], we have that the terms \(I_{5}\) and \(I_{6}\) are bounded by \(\|\hat{p}-p^{\text{ms}}\|_{0}^{2}\). We deduce that
\[I_{3}\geq\frac{d}{dt}\int_{\Omega}\phi\int_{0}^{\hat{p}-p^{\text{ms}}}\rho^{ \prime}(\hat{p}+\xi)\xi d\xi dx-C_{1}\|\hat{p}-p^{\text{ms}}\|_{0}^{2},\]
where \(C_{1}\) is a positive constant independent of the mesh size. Since \(\rho^{\prime}\) is bounded below by a positive constant, we have
\[\int_{\Omega}\phi\int_{0}^{\hat{p}-p^{\text{ms}}}\rho^{\prime}(\hat{p}+\xi) \xi d\xi dx\geq C_{2}\|\hat{p}-p^{\text{ms}}\|_{0}^{2}.\]
Then, for \(I_{3}\), we obtain
\[I_{3}=(\phi\partial_{t}(\rho(\hat{p})-\rho(p^{\text{ms}})),\hat{p}-p^{\text{ ms}})\geq\frac{d}{dt}\|\hat{p}-p^{\text{ms}}\|_{0}^{2}-C_{3}\|\hat{p}-p^{\text{ms}}\|_ {0}^{2}. \tag{4.6}\]
For \(I_{4}\), by using the chain rule and Young's inequality, one can get
\[I_{4} =(\phi(\rho^{\prime}(p)-\rho^{\prime}(\hat{p}))\partial_{t}\hat{p },\hat{p}-p^{\text{ms}})+(\phi\rho^{\prime}(p)(\partial_{t}p-\partial_{t}\hat {p}),\hat{p}-p^{\text{ms}})\] \[\preceq\|p-\hat{p}\|_{0}^{2}+\|\partial_{t}(p-\hat{p})\|_{0}^{2}+ \|\hat{p}-p^{\text{ms}}\|_{0}^{2}.\]
Now, for \(I_{2}\), we get
\[I_{2}=\underbrace{\left(\tfrac{\kappa}{\mu}\rho(p^{\text{ms}})\nabla(\hat{p}-p^{\text{ms}}),\nabla(\hat{p}-p^{\text{ms}})\right)}_{I_{7}}+\underbrace{\left(\tfrac{\kappa}{\mu}\left(\rho(p)\nabla p-\rho(p^{\text{ms}})\nabla\hat{p}\right),\nabla(\hat{p}-p^{\text{ms}})\right)}_{I_{8}}.\]
Then, for \(I_{7}\) we have
\[I_{7}\geq C\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(\hat{p}-p^{\text{ms}})\|_{0}^{2},\]
and for \(I_{8}\) by invoking Young's inequality, one obtains
\[I_{8} =\left(\tfrac{\kappa}{\mu}(\rho(p)-\rho(p^{\mathrm{ms}}))\nabla p, \nabla(\hat{p}-p^{\mathrm{ms}})\right)+\left(\tfrac{\kappa}{\mu}(\rho(p^{ \mathrm{ms}})\nabla p-\rho(p^{\mathrm{ms}})\nabla\hat{p}),\nabla(\hat{p}-p^{ \mathrm{ms}})\right)\] \[\leq C\left(\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^ {2}+\|p-p^{\mathrm{ms}}\|_{0}^{2}\right)+\epsilon\|(\tfrac{\kappa}{\mu})^{1/2} \nabla(\hat{p}-p^{\mathrm{ms}})\|_{0}^{2},\]
where in the last inequality we use the boundedness of \(\rho\) and \(\rho^{\prime}\), and \(p\in W^{1,\infty}(\Omega)\). Combining the above estimates and taking \(\epsilon\) small enough, we can obtain
\[\frac{d}{dt}\|\hat{p}-p^{\mathrm{ms}}\|_{0}^{2}+\|(\tfrac{\kappa} {\mu})^{1/2}\nabla(p^{\mathrm{ms}}-\hat{p})\|_{0}^{2} \preceq\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^{2}+ \|p-\hat{p}\|_{0}^{2}+\|\partial_{t}(p-\hat{p})\|_{0}^{2}\] \[\quad+\|\hat{p}-p^{\mathrm{ms}}\|_{0}^{2}.\]
Integrating with respect to time \(t\) and invoking the continuous Gronwall's inequality [4], we can infer that
\[\|(\hat{p}-p^{\mathrm{ms}})(t)\|_{0}^{2}+\int_{0}^{T}\|(\tfrac{ \kappa}{\mu})^{1/2}\nabla(p^{\mathrm{ms}}-\hat{p})\|_{0}^{2}dt \preceq\|(\hat{p}-p^{\mathrm{ms}})(0)\|_{0}^{2}+\int_{0}^{T}\|( \tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^{2}dt\] \[\quad+\int_{0}^{T}\left(\|p-\hat{p}\|_{0}^{2}+\|\partial_{t}(p- \hat{p})\|_{0}^{2}\right)dt.\]
Thus, we use the triangle inequality to get
\[\|(p-p^{\mathrm{ms}})(t)\|_{0}^{2}+\int_{0}^{T}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-p^{\mathrm{ms}})\|_{0}^{2}dt \preceq\|(\hat{p}-p^{\mathrm{ms}})(0)\|_{0}^{2}+\int_{0}^{T}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p-\hat{p})\|_{0}^{2}dt\] \[\quad+\int_{0}^{T}\left(\|p-\hat{p}\|_{0}^{2}+\|\partial_{t}(p-\hat{p})\|_{0}^{2}\right)dt+\|(p-\hat{p})(t)\|_{0}^{2}.\]
Finally, the proof is completed by using Lemmas 4.1 and 4.2.
### A posteriori error estimate
We shall give an _a posteriori_ error estimate, which provides a computable error bound to assess the quality of the numerical solution. To begin, notice that since \(\mathrm{V}^{\mathrm{ms}}\subset\mathrm{V}^{h}\), we can derive from the fully-discrete approximation a residual expression defined by
\[r_{n}^{\mathrm{ms}}(v):=(\phi\rho(p_{n}^{\mathrm{ms}}),v)-(\phi\rho(p_{n-1}^{ \mathrm{ms}}),v)+\Delta_{n}\left(\tfrac{\kappa}{\mu}\rho(p_{n}^{\mathrm{ms}}) \nabla p_{n}^{\mathrm{ms}},\nabla v\right)-\Delta_{n}(q,v),\quad\text{for each }v\in \mathrm{V}^{\mathrm{ms}}. \tag{4.7}\]
We also consider local residuals. For each coarse node \(x_{i}\), we define \(\omega_{i}\) to be the union of the coarse blocks having the vertex \(x_{i}\). For each coarse neighborhood \(\omega_{i}\), we define the local residual functional \(r_{n}^{(i)}:\mathrm{V}\to\mathbb{R}\) by
\[r_{n}^{(i)}(v)=r_{n}^{\mathrm{ms}}(\chi_{i}v)=(\phi\rho(p_{n}^{\mathrm{ms}}),\chi_{i}v)-(\phi\rho(p_{n-1}^{\mathrm{ms}}),\chi_{i}v)+\Delta_{n}\left(\tfrac{\kappa}{\mu}\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}},\nabla\chi_{i}v\right)-\Delta_{n}(q,\chi_{i}v),\]
for all \(v\in\mathrm{V}\). The local residual \(r_{n}^{(i)}\) gives a measure of the error \(p-p_{n}^{\mathrm{ms}}\) in the coarse neighborhood \(\omega_{i}\).
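The dual norm \(\|\widetilde{r}_{n}^{(i)}\|_{\mathrm{V}_{i}^{*}}\) defined in Theorem 4.4 below is a supremum and is not directly computable; in practice one may approximate it through a discrete Riesz representative with respect to the \(\mathrm{V}_{i}\)-inner product. The sketch below assumes local mass and weighted stiffness matrices have been assembled elsewhere; all names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def local_residual_norm(r_vec, M_loc, K_loc, dt):
    """Approximate ||r_n^{(i)}||_{V_i^*} via a discrete Riesz representative.

    r_vec : residual vector acting on the fine dofs of neighborhood omega_i
    M_loc : local mass matrix (L2 inner product on omega_i), sparse
    K_loc : local stiffness matrix weighted by kappa/mu, sparse
    dt    : time step Delta_n
    With B = M_loc + dt * K_loc the V_i-inner-product matrix,
    ||r||_{V_i^*}^2 = r^T B^{-1} r.
    """
    B = (M_loc + dt * K_loc).tocsc()
    z = spsolve(B, r_vec)  # Riesz representative of the residual
    return float(np.sqrt(r_vec @ z))
```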
**Theorem 4.4**.: _Let \(p_{n}\) be the solution obtained from (2.1) at \(t_{n}\) and \(p_{n}^{\mathrm{ms}}\in\mathrm{V}^{\mathrm{ms}}\) denote the CEM-GMsFEM solution of the fully discrete scheme of (3.4) at \(t_{n}\). Then, there exists a positive constant \(C\) independent of the mesh size such that_
\[\|p_{N_{\mathrm{time}}}-p_{N_{\mathrm{time}}}^{\mathrm{ms}}\|_{0}^{2}+\sum_{n=1}^{N_{\mathrm{time}}}\Delta_{n}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p_{n}-p_{n}^{\mathrm{ms}})\|_{0}^{2}\preceq(1+\Lambda^{-1})\sum_{n=1}^{N_{\mathrm{time}}}\sum_{i=1}^{N_{\mathrm{coarse}}}\|\widetilde{r}_{n}^{(i)}\|_{\mathrm{V}_{i}^{*}}^{2}+\|p_{0}-p_{0}^{\mathrm{ms}}\|_{0}^{2},\]
_where_
\[\widetilde{r}_{n}^{(i)}(v)=\Delta_{n}\int_{\omega_{i}}q_{n}v d\mathbf{x}-\int_{\omega_{i}}\phi(\rho(p_{n}^{\mathrm{ms}})-\rho(p_{n-1}^{\mathrm{ms}}))v d\mathbf{x}-\Delta_{n}\int_{\omega_{i}}\tfrac{\kappa}{\mu}\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}}\cdot\nabla v d\mathbf{x},\]
_and the residual norm is defined by_
\[\|\widetilde{r}_{n}^{(i)}\|_{\mathrm{V}_{i}^{*}}=\sup_{v\in\mathrm{L}^{2}(t_{n},t_{n+1};\mathrm{H}_{0}^{1}(\omega_{i}))}\frac{\widetilde{r}_{n}^{(i)}(v)}{\|v\|_{\mathrm{V}_{i}}}.\]
Proof.: Subtracting (3.4) from (2.4), we get for \(p\in\mathrm{V}\) at \(t_{n}\)
\[(\phi(\rho(p_{n})-\rho(p_{n}^{\mathrm{ms}})),v)-(\phi(\rho(p_{n-1})-\rho(p_{n-1}^{\mathrm{ms}})),v)+\Delta_{n}\left(\tfrac{\kappa}{\mu}(\rho(p_{n})\nabla p_{n}-\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}}),\nabla v\right)=0, \tag{4.8}\]
for each \(v\in\mathrm{V}^{\mathrm{ms}}\). Putting \(v=p_{n}-p_{n}^{\mathrm{ms}}\) and using the fact that \(\rho\) is bounded below by a positive constant, we easily obtain that
\[(\phi(\rho(p_{n})-\rho(p_{n}^{\mathrm{ms}})),p_{n}-p_{n}^{\mathrm{ms}})\geq C \|p_{n}-p_{n}^{\mathrm{ms}}\|_{0}^{2}.\]
Similarly, for the second term of (4.8), we can use the boundedness of \(\rho\) and Young's inequality to yield
\[(\phi(\rho(p_{n-1})-\rho(p_{n-1}^{\mathrm{ms}})),p_{n}-p_{n}^{\mathrm{ms}})\leq C\|p_{n-1}-p_{n-1}^{\mathrm{ms}}\|_{0}^{2}+\epsilon\|p_{n}-p_{n}^{\mathrm{ms}}\|_{0}^{2}.\]
Gathering the above inequalities, for \(\epsilon\) small enough, we arrive at
\[(\phi(\rho(p_{n})-\rho(p_{n}^{\mathrm{ms}})),p_{n}-p_{n}^{\mathrm{ms}})-(\phi(\rho(p_{n-1})-\rho(p_{n-1}^{\mathrm{ms}})),p_{n}-p_{n}^{\mathrm{ms}})\geq C\left(\|p_{n}-p_{n}^{\mathrm{ms}}\|_{0}^{2}-\|p_{n-1}-p_{n-1}^{\mathrm{ms}}\|_{0}^{2}\right).\]
For the third term of (4.8), we have that
\[\left(\tfrac{\kappa}{\mu}\left(\rho(p_{n})\nabla p_{n}-\rho(p_{n}^{\mathrm{ms} })\nabla p_{n}^{\mathrm{ms}}\right),\nabla(p_{n}-p_{n}^{\mathrm{ms}})\right) \geq C\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p_{n}-p_{n}^{\mathrm{ms}})\|_{0}^{2}.\]
Thus, the above inequalities lead to the expression
\[\|p_{n}-p_{n}^{\mathrm{ms}}\|_{0}^{2}-\|p_{n-1}-p_{n-1}^{\mathrm{ ms}}\|_{0}^{2}+\Delta_{n}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p_{n}-p_{n}^{ \mathrm{ms}})\|_{0}^{2}\preceq(\phi(\rho(p_{n})-\rho(p_{n}^{\mathrm{ms}})),p_ {n}-p_{n}^{\mathrm{ms}})\] \[\qquad\qquad\qquad-(\phi(\rho(p_{n-1})-\rho(p_{n-1}^{\mathrm{ms} })),p_{n}-p_{n}^{\mathrm{ms}})\] \[\qquad\qquad\qquad+\Delta_{n}\left(\tfrac{\kappa}{\mu}\left(\rho (p_{n})\nabla p_{n}-\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}}\right),\nabla(p_{n}-p_{n}^{\mathrm{ms}})\right)\] \[=(\phi(\rho(p_{n})-\rho(p_{n}^{\mathrm{ms}})),p_{n}-p_{n}^{ \mathrm{ms}})-(\phi(\rho(p_{n-1})-\rho(p_{n-1}^{\mathrm{ms}})),p_{n}-p_{n}^{ \mathrm{ms}})\] \[\qquad\qquad+\Delta_{n}\left(\tfrac{\kappa}{\mu}\left(\rho(p_{n}) \nabla p_{n}-\rho(p_{n})\nabla p_{n}^{\mathrm{ms}}\right),\nabla(p_{n}-p_{n}^{ \mathrm{ms}})\right)\] \[\qquad\qquad+\Delta_{n}\left(\tfrac{\kappa}{\mu}\left(\rho(p_{n} )\nabla p_{n}^{\mathrm{ms}}-\rho(p_{n}^{\mathrm{ms}})\nabla p_{n}^{\mathrm{ms}} \right),\nabla(p_{n}-p_{n}^{\mathrm{ms}})\right).\]
Reorganizing the terms and using the definition of the weak formulation (2.1), we get that
\[\|p_{n}-p_{n}^{\mathrm{ms}}\|_{0}^{2}-\|p_{n-1}-p_{n-1}^{\mathrm{ ms}}\|_{0}^{2}+\Delta_{n}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p_{n}-p_{n}^{ \mathrm{ms}})\|_{0}^{2}\preceq\Delta_{n}(q^{n},p_{n}-p_{n}^{\mathrm{ms}}) \tag{4.9}\] \[\qquad\qquad-(\phi(\rho(p_{n}^{\mathrm{ms}})-\rho(p_{n-1}^{\mathrm{ ms}})),p_{n}-p_{n}^{\mathrm{ms}})-\Delta_{n}\left(\tfrac{\kappa}{\mu}\rho(p_{n}) \nabla p_{n}^{\mathrm{ms}},\nabla(p_{n}-p_{n}^{\mathrm{ms}})\right)\] \[\qquad\qquad+\Delta_{n}\left(\tfrac{\kappa}{\mu}\left(\rho(p_{n} )-\rho(p_{n}^{\mathrm{ms}})\right)\nabla p_{n}^{\mathrm{ms}},\nabla(p_{n}-p_{n}^{ \mathrm{ms}})\right).\]
We now bound the right-hand side of (4.9). In light of the residual expression (4.7), we have that
\[r_{n}^{\rm ms}(v)=0,\quad\text{for each $v\in{\rm V}^{\rm ms}$}.\]
Denote \(w=p_{n}^{\rm ms}-p_{n}\) and let \(\hat{w}\in{\rm V}^{\rm ms}\) be the elliptic projection of \(w\). Thus,
\[r_{n}^{\rm ms}(w)=r_{n}^{\rm ms}(w-\hat{w}) =\Delta_{n}(q_{n},w-\hat{w})-(\phi(\rho(p_{n}^{\rm ms})-\rho(p_{n -1}^{\rm ms})),w-\hat{w})\] \[\quad-\Delta_{n}\left(\tfrac{\kappa}{\mu}\rho(p_{n}^{\rm ms}) \nabla p_{n}^{\rm ms},\nabla(w-\hat{w})\right).\]
Let us rewrite \(r_{n}^{\rm ms}(w-\hat{w})=\sum_{i=1}^{N_{\rm coarse}}\widetilde{r}_{n}^{(i)}(\chi_{i}(w-\hat{w}))\) (see [10, 22]); then
\[\sum_{i=1}^{N_{\rm coarse}}\widetilde{r}_{n}^{(i)}(\chi_{i}(w- \hat{w})) =\Delta_{n}\sum_{i=1}^{N_{\rm coarse}}\left(\int_{\omega_{i}}q_{n} \chi_{i}(w-\hat{w})d\mathrm{x}-\int_{\omega_{i}}\phi\frac{\rho(p_{n}^{\rm ms} )-\rho(p_{n-1}^{\rm ms})}{\Delta_{n}}\chi_{i}(w-\hat{w})d\mathrm{x}\right.\] \[\quad-\left.\int_{\omega_{i}}\tfrac{\kappa}{\mu}\rho(p_{n}^{\rm ms })\nabla p_{n}^{\rm ms}\cdot\nabla\chi_{i}(w-\hat{w})d\mathrm{x}\right).\]
Note that
\[\sum_{i=1}^{N_{\rm coarse}}\widetilde{r}_{n}^{(i)}(\chi_{i}(w-\hat{w}))\leq\sum_{i=1}^{N_{\rm coarse}}\|\widetilde{r}_{n}^{(i)}\|_{{\rm V}_{i}^{*}}\|\chi_{i}(w-\hat{w})\|_{{\rm V}_{i}}, \tag{4.11}\]
where \(\|v\|_{{\rm V}_{i}}^{2}=\|v\|_{0,\omega_{i}}^{2}+\Delta_{n}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla v\|_{0,\omega_{i}}^{2}\), and \(\|\cdot\|_{0,\omega_{i}}\) denotes the \(\mathrm{L}^{2}\)-norm restricted to \(\omega_{i}\). Notice also that,
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla\chi_{i}(w-\hat{w})\|_{0,\omega_{i}}\preceq \left(\|w-\hat{w}\|_{s(\omega_{i})}^{2}+\|(\tfrac{\kappa}{\mu})^{1/2}\nabla( w-\hat{w})\|_{0,\omega_{i}}^{2}\right)^{1/2}, \tag{4.12}\]
where \(\|\cdot\|_{s(\omega_{i})}\) represents the \(s\)-norm restricted to \(\omega_{i}\). For the second term on the right-hand side, by using the orthogonality property, _i.e._, \((\tfrac{\kappa}{\mu}\rho(p_{0})\nabla(w-\hat{w}),\nabla v)=0\), for all \(v\in{\rm V}^{\rm ms}\), we get
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(w-\hat{w})\|_{0,\omega_{i}}^{2}\preceq\|( \tfrac{\kappa}{\mu})^{1/2}\nabla w\|_{0,\omega_{i}}^{2}. \tag{4.13}\]
Now, for the first term on the right-hand side of (4.12), we shall use a duality argument. Let \(g=\widetilde{\kappa}(w-\hat{w})\) and let \(z\in{\rm V}_{0}\) be the solution of the problem below
\[\int_{\omega_{i}}\kappa\rho(p_{0})\nabla z\cdot\nabla vd\mathrm{x}=\int_{ \omega_{i}}gvd\mathrm{x},\quad\text{for each $v\in{\rm V}_{0}$}.\]
Putting \(v=w-\hat{w}\), using the Cauchy-Schwarz inequality and equation (4.13), we arrive at
\[\|w-\hat{w}\|_{s(\omega_{i})}^{2} =\int_{\omega_{i}}g(w-\hat{w})d\mathrm{x}=\int_{\omega_{i}}\kappa \rho(p_{0})\nabla z\cdot\nabla(w-\hat{w})d\mathrm{x}\] \[=\int_{\omega_{i}}\kappa\rho(p_{0})\nabla(z-\hat{z})\cdot\nabla(w -\hat{w})d\mathrm{x}\] \[\leq\|(\kappa\rho(p_{0}))^{1/2}\nabla(z-\hat{z})\|_{0,\omega_{i}} \|(\kappa\rho(p_{0}))^{1/2}\nabla(w-\hat{w})\|_{0,\omega_{i}}\] \[\preceq\Lambda^{-1/2}\|\widetilde{\kappa}^{-1/2}g\|_{\mathrm{L}^ {2}(\omega_{i})}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(w-\hat{w})\|_{0,\omega_{i}}\] \[\preceq\Lambda^{-1/2}\|w-\hat{w}\|_{s(\omega_{i})}\|(\tfrac{\kappa }{\mu})^{1/2}\nabla w\|_{0,\omega_{i}}.\]
So, we have
\[\|w-\hat{w}\|_{s(\omega_{i})}\preceq\Lambda^{-1/2}\|(\tfrac{\kappa}{\mu})^{1/2} \nabla w\|_{0,\omega_{i}}. \tag{4.14}\]
Gathering (4.12)-(4.14), we arrive at
\[\|(\tfrac{\kappa}{\mu})^{1/2}\nabla\chi_{i}(w-\hat{w})\|_{0,\omega_{i}}\preceq( 1+\Lambda^{-1})^{1/2}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla w\|_{0,\omega_{i}}.\]
Analogously, we estimate \(\|\chi_{i}(w-\hat{w})\|_{0,\omega_{i}}\preceq\|(\tfrac{\kappa}{\mu})^{1/2} \nabla w\|_{0,\omega_{i}}\). Therefore,
\[\sum_{i=1}^{N_{\text{coarse}}}\|\widetilde{r}_{n}^{(i)}\|_{\text{V}_{i}^{*}} \|\chi_{i}(w-\hat{w})\|_{\text{V}_{i}}\preceq(1+\Lambda^{-1})^{1/2}\sum_{i=1}^ {N_{\text{coarse}}}\|\widetilde{r}_{n}^{(i)}\|_{\text{V}_{i}^{*}}\|(\tfrac{ \kappa}{\mu})^{1/2}\nabla w\|_{0,\omega_{i}}. \tag{4.15}\]
For the last term of the right-hand side of (4.9), we have
\[\Delta_{n}\left(\tfrac{\kappa}{\mu}\left(\rho(p_{n})-\rho(p_{n}^{\text{ms}}) \right)\nabla p_{n}^{\text{ms}},\nabla(p_{n}-p_{n}^{\text{ms}})\right)\preceq \Delta_{n}\|\rho(p_{n})-\rho(p_{n}^{\text{ms}})\|_{0}\|(\tfrac{\kappa}{\mu})^{ 1/2}\nabla(p_{n}-p_{n}^{\text{ms}})\|_{0}. \tag{4.16}\]
Combining (4.15) and (4.16), using Young's inequality, and summing (4.9) over all \(n\), one obtains
\[\|p_{N_{\text{time}}}-p_{N_{\text{time}}}^{\text{ms}}\|_{0}^{2}+\sum_{n=1}^{N_{\text{time}}}\Delta_{n}\|(\tfrac{\kappa}{\mu})^{1/2}\nabla(p_{n}-p_{n}^{\text{ms}})\|_{0}^{2} \preceq(1+\Lambda^{-1})\sum_{n=1}^{N_{\text{time}}}\sum_{i=1}^{N_{\text{coarse}}}\|\widetilde{r}_{n}^{(i)}\|_{\text{V}_{i}^{*}}^{2}\] \[\quad+\sum_{n=1}^{N_{\text{time}}}\Delta_{n}\|\rho(p_{n})-\rho(p_{n}^{\text{ms}})\|_{0}^{2}+\|p_{0}-p_{0}^{\text{ms}}\|_{0}^{2}.\]
The proof is completed by using the discrete Gronwall inequality.
## 5 Numerical results
We now present numerical results for the nonlinear single-phase compressible flow in highly heterogeneous porous media obtained with the CEM-GMsFEM, summarized in four separate experiments. All parameters of the flow model, boundary, and initial conditions in each numerical experiment are described in detail to allow their proper reproduction. The main aim of the simulations is to demonstrate the viability of the proposed numerical approximation and to confirm the convergence rates shown in Section 4. We implement the CEM-GMsFEM in the Matlab language and use the numerical experiments presented in [27, 15] as a reference guide for our three-dimensional experiments. We use the backward Euler scheme for the time discretization and a Newton-Raphson method with a tolerance of \(10^{-6}\) for the nonlinear problem. Only \(2\)-\(4\) Newton iterations are needed in the computations presented below.
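As a minimal sketch of the time marching just described (backward Euler in time with Newton-Raphson for the nonlinearity), the loop below drives the Newton correction to the stated tolerance of \(10^{-6}\); the residual and Jacobian callbacks stand in for the assembled CEM-GMsFEM system and are assumptions for illustration.

```python
import numpy as np

def newton_backward_euler_step(p_old, residual, jacobian, tol=1e-6, max_iter=20):
    """Advance one backward-Euler step p_{n-1} -> p_n with Newton's method.

    residual(p, p_old): nonlinear residual F(p) of the implicit system
    jacobian(p, p_old): Jacobian dF/dp (dense here for simplicity)
    """
    p = p_old.copy()  # initial guess: previous time level
    for iteration in range(1, max_iter + 1):
        F = residual(p, p_old)
        if np.linalg.norm(F) < tol:
            return p, iteration
        p += np.linalg.solve(jacobian(p, p_old), -F)  # Newton correction
    raise RuntimeError("Newton iteration did not converge")
```

In the experiments reported below, a loop of this form typically terminates after 2-4 iterations.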
We consider three high-contrast permeability fields, each the disjoint union of a background region with \(10^{5}\) millidarcys and other regions of \(10^{9}\) millidarcys (see Figure 2). We also consider a fractured porous medium; in this case, the permeability value in the fractures is much larger than in the surrounding medium. Finally, we employ the first \(30\) layers of the SPE10 3D dataset from [7], which is widely used in the reservoir simulation community to test multiscale approaches. All experiments employ a viscosity \(\mu=5\)cP, porosity \(\phi=500\), fluid compressibility \(c=1.0\times 10^{-8}\text{Pa}^{-1}\), reference pressure \(p_{\text{ref}}=2.00\times 10^{7}\text{Pa}\), and reference density \(\rho_{\text{ref}}=850\text{kg}/\text{m}^{3}\).
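For concreteness, a standard slightly-compressible equation of state consistent with the quoted parameters is \(\rho(p)=\rho_{\text{ref}}e^{c(p-p_{\text{ref}})}\); the snippet below evaluates it with the values above. The exponential form is an assumption here (the model's exact \(\rho(p)\) is fixed with the flow problem in Section 2).

```python
import numpy as np

# Quoted physical parameters; the exponential law itself is an assumption.
c_f     = 1.0e-8   # fluid compressibility [1/Pa]
p_ref   = 2.00e7   # reference pressure [Pa]
rho_ref = 850.0    # reference density [kg/m^3]

def rho(p):
    """Slightly-compressible density law rho(p) = rho_ref * exp(c (p - p_ref))."""
    return rho_ref * np.exp(c_f * (np.asarray(p, dtype=float) - p_ref))

print(rho(2.16e7))  # density at the initial pressure used in Example 5.1
```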
**Example 5.1**.: For our first example, we set a fine grid resolution of \(64^{3}\), with a size of \(h=20\)m, and a coarse grid resolution of \(8^{3}\) of size \(H=8h\). The coarsening factor is chosen because this coarse grid provides the most computationally efficient performance for the method. For the CEM-GMsFEM, we use 4 basis functions and 4 oversampling layers. It is known that this number of basis functions is sufficient to ensure the accuracy of CEM-GMsFEM; see [9]. Then, we have a coarse system with dimension \(2916\,(=729\times\text{number of basis functions})\), and the fine-scale system has a dimension of \(274625\). The permeability field \(\kappa_{1}\) used in this experiment is depicted in Figure 1(a). We define the model configuration as follows: four vertical injectors, one in each corner, and a unit sink in the center of the domain to drive the flow; we employ the full zero Neumann boundary condition and an initial pressure field \(p_{0}=2.16\times 10^{7}\text{Pa}\). The time step \(\Delta_{n}\) is 7 days, and the total simulation time is \(T=25\Delta_{n}(=175\text{ days})\). Figure 3 shows the pressure profiles with the sink term and zero Neumann boundary conditions at days \(t=77\) and \(t=147\). In this case, we obtain relative \(L^{2}\)- and \(H^{1}\)-errors of \(2.1138\)E-\(03\) and \(3.8058\)E-\(02\), respectively.
Figure 2: Different permeability fields.
**Example 5.2**.: We consider a combination of zero Neumann and nonzero Dirichlet boundary conditions as in [27, 15]. We set a fine grid resolution of \(64^{3}\), with a size of \(h=20\)m, and different coarse grid resolutions of \(4^{3},8^{3}\) and \(16^{3}\). The time step \(\Delta_{n}\) and total simulation time are the same as in Example 5.1. We impose a zero Neumann condition on the boundaries of the \(xy\) and \(xz\) planes and let \(p=2.16\times 10^{7}\)Pa in the first \(yz\) plane and \(p=2.00\times 10^{7}\)Pa in the last \(yz\) plane for all time instants; no additional source is imposed. The permeability field used is \(\kappa_{2}\) (Figure 1(b)). The pressure difference drives the flow, and the initial field \(p_{0}\) decreases linearly along the \(x\) axis and is fixed in the \(yz\) plane. Table 1 shows numerical results using 4 basis functions on each coarse block with different coarse grid sizes (\(H=4h,8h\) and \(16h\)), where \(\varepsilon_{0}\) and \(\varepsilon_{1}\) denote the relative \(L^{2}\) and energy errors between the reference solution and the CEM-GMsFEM
Figure 3: Numerical solution of Example 5.1 using full-zero Neumann boundary conditions and high-contrast permeability field \(\kappa_{1}\), see Figure 1(a). The fine-scale reference solution (left) and CEM-GMsFEM solution (right) with 4 basis functions and 4 oversampling layers at (a) \(t=77\) and (b) \(t=147\).
solution defined by
\[\varepsilon_{0}=\left(\frac{\sum_{i=1}^{N_{\text{time}}}\|p_{i}^{h}-p_{i}^{\text{ms}}\|_{0}^{2}}{\sum_{i=1}^{N_{\text{time}}}\|p_{i}^{h}\|_{0}^{2}}\right)^{1/2},\quad\varepsilon_{1}=\left(\frac{\sum_{i=1}^{N_{\text{time}}}\|(\tfrac{\kappa}{\mu}\rho(p_{0}))^{1/2}\nabla(p_{i}^{h}-p_{i}^{\text{ms}})\|_{0}^{2}}{\sum_{i=1}^{N_{\text{time}}}\|(\tfrac{\kappa}{\mu}\rho(p_{0}))^{1/2}\nabla p_{i}^{h}\|_{0}^{2}}\right)^{1/2},\]
where \(p_{i}^{h}\) denotes the reference solution and \(p_{i}^{\text{ms}}\) is the CEM-GMsFEM approximation for \(i=1,\ldots,N_{\text{time}}\). For instance, for a coarse grid size of \(H=8h\), we obtain the relative errors \(\varepsilon_{0}=4.5581\)E-04 and \(\varepsilon_{1}=2.5249\)E-01. In Figure 4, we depict the numerical solution profiles with a fine grid resolution of \(64^{3}\) and a coarse grid resolution of \(8^{3}\) at days \(t=105\) and \(t=140\); it is hard to find any difference between the reference solution and the CEM-GMsFEM solution. Therefore, we have good agreement.
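Given the time histories of reference and multiscale degrees of freedom, the relative errors above can be accumulated as in the following sketch; the mass matrix \(M\) and the weighted stiffness matrix \(K_{0}\) are assumed to be assembled elsewhere, and the normalization follows the (reconstructed) formulas above.

```python
import numpy as np

def relative_errors(P_h, P_ms, M, K0):
    """Compute (eps0, eps1) from dof histories, following the formulas above.

    P_h, P_ms : arrays of shape (N_time, n_dof), reference / CEM-GMsFEM dofs
    M         : mass matrix (L2 inner product)
    K0        : stiffness matrix weighted by (kappa/mu) * rho(p_0)
    """
    num0 = den0 = num1 = den1 = 0.0
    for ph, pm in zip(P_h, P_ms):
        e = ph - pm
        num0 += e @ (M @ e)    # L2 norm of the error, squared
        den0 += ph @ (M @ ph)
        num1 += e @ (K0 @ e)   # energy norm of the error, squared
        den1 += ph @ (K0 @ ph)
    return np.sqrt(num0 / den0), np.sqrt(num1 / den1)
```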
**Example 5.3**.: For the third experiment, we consider the combination of zero Neumann and nonzero Dirichlet boundary conditions as in Example 5.2. We set a fine grid resolution of \(32^{3}\) (fine-scale system with dimension \(35937\)), with a size of \(h=20\)m, and different coarse grid resolutions of \(4^{3},8^{3}\) and \(16^{3}\) (coarse-scale systems with dimensions \(500\), \(2916\) and \(19652\), respectively). The time step \(\Delta_{n}\) and total simulation time are the same as in Example 5.1. The fractured medium \(\kappa_{3}\) used is depicted in Figure 1(c). For this experiment, we employ the framework from [14] and apply the CEM-GMsFEM to the 3D model. The domain \(\Omega\) can be represented by
\[\Omega=\Omega_{0}\cup(\cup_{i}\Omega_{\text{frac},i}),\]
where \(\Omega_{0}\) represents the matrix and subscript frac denotes the fracture regions. Then, we can write the finite element discretization corresponding to equation (2.3)
\[(\phi\partial_{t}\rho(p^{h}),v)+\left(\frac{\kappa}{\mu}\rho(p^{h})\nabla p^{h},\nabla v\right)_{\Omega_{0}}+\sum_{i}\left(\frac{\kappa}{\mu}\rho(p^{h})\nabla p^{h},\nabla v\right)_{\Omega_{\text{frac},i}}=(q,v),\quad\text{for each }v\in\text{V}^{h}.\]
In Table 2, we report the convergence behaviour at \(T=25\Delta_{n}\) (=175 days) for different coarse-grid sizes \(H\). We notice that the error decreases significantly as the coarse grid is refined. Hence, the CEM-GMsFEM gives a good approximation of the solution in the case of the fractured medium. Figure 5 shows the numerical solutions at \(t=70\) and \(t=140\).
**Example 5.4**.: For the last experiment, we consider the same boundary conditions as in Example 5.2. The permeability field used is \(\kappa_{4}\) (see Figure 1(d)), and the time step \(\Delta_{n}\) is \(7\) days, with a total simulation time of \(T=26\Delta_{n}(=182\) days). We set a fine grid resolution of \(220\times 60\times 30\) (fine-scale system with a dimension of \(417911\)), with a size of \(h=20\)m, and a coarse grid resolution of \(10^{3}\) (coarse-scale system with dimension \(5324\)). We show the pressure profile comparison in Figure 6. This experiment yields error estimates of \(\varepsilon_{0}=\)1.8377E-03 and \(\varepsilon_{1}=\)3.4547E-01, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Number of basis & \(H\) & Number of oversampling layers \(m\) & \(\varepsilon_{0}\) & \(\varepsilon_{1}\) \\ \hline
4 & \(4h\) & 3 & 2.4004E-03 & 4.4227E-01 \\
4 & \(8h\) & 4 & 4.5581E-04 & 2.5249E-01 \\
4 & \(16h\) & 5 & 1.4257E-04 & 1.258E-01 \\ \hline \end{tabular}
\end{table}
Table 1: Convergence rate for Example 5.2 with different numbers of oversampling layers \((m)\) with a combination of zero Neumann and nonzero Dirichlet boundary conditions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Number of basis & \(H\) & Number of oversampling layers \(m\) & \(\varepsilon_{0}\) & \(\varepsilon_{1}\) \\ \hline
4 & \(4h\) & 3 & 1.6111E-03 & 2.9441E-01 \\
4 & \(8h\) & 4 & 1.9532E-04 & 1.1653E-01 \\
4 & \(16h\) & 5 & 1.0160E-04 & 5.1492E-02 \\ \hline \end{tabular}
\end{table}
Table 2: Convergence rate of Example 5.3 with different numbers of oversampling layers \((m)\) with a combination of zero Neumann and nonzero Dirichlet boundary conditions.
Figure 4: Numerical solution of Example 5.2 combining a zero Neumann boundary condition and nonzero Dirichlet boundary condition. High-contrast permeability field \(\kappa_{2}\), fine-scale reference solution (left), and CEM-GMsFEM solution (right) with 4 basis functions and 4 oversampling layers at (a) \(t=105\) and (b) \(t=140\).
## 6 Concluding remarks
This paper studies the convergence of numerical approximations to the highly heterogeneous nonlinear single-phase compressible flow computed by the CEM-GMsFEM. We first build an auxiliary space for the proposed method by solving local spectral problems. Then, we construct multiscale basis functions by solving constraint energy minimization problems in oversampled local regions, obtaining multiscale basis functions for the pressure. For the convergence analysis, we define the elliptic projection onto the multiscale space spanned by the CEM-GMsFEM basis functions, and we present the convergence of the semi-discrete formulation. The convergence depends on the coarse mesh size and on the decay of the eigenvalues of the local spectral problems; an _a posteriori_ error
Figure 5: Numerical solution of Example 5.3 combining a zero Neumann boundary condition and nonzero Dirichlet boundary condition. High-contrast permeability field \(\kappa_{3}\). The fine-scale reference solution (left) and CEM-GMsFEM solution (right) with 4 basis functions and 4 oversampling layers at (a) \(t=70\) and (b) \(t=140\).
estimate is derived for the underlying discretization. Some numerical examples have been presented to verify the feasibility of the proposed method with respect to convergence and stability. We observe that the CEM-GMsFEM attains a second-order convergence rate in the \(L^{2}\)-norm and a first-order convergence rate in the energy norm with respect to the coarse grid size.
A foreseeable direction of ongoing research is to boost the performance of the coarse-grid simulation, mainly where the source term is singular; one may need to further improve the accuracy of the approximation without additional refinement of the grid. For such a goal, we can enrich the multiscale space by adding basis functions in the online stage [10]. These new multiscale basis functions are constructed using the oversampling technique and information from the local residuals. Consequently, we could devise an adaptive enrichment algorithm to reduce the error in regions with large residuals.
Figure 6: Example 5.4. Mixed boundary conditions at (a) \(t=84\) and (b) \(t=126\).
## Acknowledgement
The research of Eric Chung is partially supported by the Hong Kong RGC General Research Fund (Projects: 14305222 and 14304021).
|
2302.00702 | High-redshift supermassive black hole mergers in simulations with
dynamical friction modelling | In the near future, projects like LISA and Pulsar Timing Arrays are expected
to detect gravitational waves from mergers between supermassive black holes,
and it is crucial to precisely model the underlying merger populations now to
maximize what we can learn from this new data. Here we characterize expected
high-redshift (z > 2) black hole mergers using the very large volume Astrid
cosmological simulation, which uses a range of seed masses to probe down to
low-mass BHs, and directly incorporates dynamical friction so as to accurately
model the dynamical processes which bring black holes to the galaxy center
where binary formation and coalescence will occur. The black hole populations
in Astrid include black holes down to 10$^{4.5}$ M$_\odot$, and remain broadly
consistent with the TNG simulations at scales > 10$^6$ M$_\odot$ (the seed mass
used in TNG). By resolving lower-mass black holes, the overall merger rate is
~5x higher than in TNG. However, incorporating dynamical friction delays
mergers compared to a recentering scheme, reducing the high-z merger rate for
mass-matched mergers by a factor of ~2x. We also calculate the expected LISA
Signal-to-Noise values, and show that the distribution peaks at high SNR
(>100), emphasizing the importance of implementing a seed mass well below
LISA's peak sensitivity (10$^6$ M$_\odot$) to resolve the majority of LISA's GW
detections. | Colin DeGraf, Nianyi Chen, Yueying Ni, Tiziana Di Matteo, Simeon Bird, Michael Tremmel, Rupert Croft | 2023-02-01T19:00:43Z | http://arxiv.org/abs/2302.00702v1 | # High-redshift supermassive black hole mergers in simulations with dynamical friction modelling
###### Abstract
In the near future, projects like LISA and Pulsar Timing Arrays are expected to detect gravitational waves from mergers between supermassive black holes, and it is crucial to precisely model the underlying merger populations now to maximize what we can learn from this new data. Here we characterize expected high-redshift (\(z>2\)) black hole mergers using the very large volume Astrid cosmological simulation, which uses a range of seed masses to probe down to low-mass BHs, and directly incorporates dynamical friction so as to accurately model the dynamical processes which bring black holes to the galaxy center where binary formation and coalescence will occur. The black hole populations in Astrid include black holes down to \(\sim 10^{4.5}M_{\odot}\), and remain broadly consistent with the TNG simulations at scales \(>10^{6}M_{\odot}\) (the seed mass used in TNG). By resolving lower-mass black holes, the overall merger rate is \(\sim 5\times\) higher than in TNG. However, incorporating dynamical friction delays mergers compared to a recentering scheme, reducing the high-z merger rate for mass-matched mergers by a factor of \(\sim 2\times\). We also calculate the expected LISA Signal-to-Noise values, and show that the distribution peaks at high SNR (\(>\)100), emphasizing the importance of implementing a seed mass well below LISA's peak sensitivity (\(\sim 10^{6}M_{\odot}\)) to resolve the majority of LISA's GW detections.
## 1 Introduction
Supermassive black holes (SMBHs) have been found to exist at the center of galaxies (Kormendy & Richstone, 1995), with a strong correlation to host galaxy properties (Magorrian et al., 1998; Gebhardt et al., 2000; Graham et al., 2001; Ferrarese, 2002; Tremaine et al., 2002; Haring & Rix, 2004; Gultekin et al., 2009; McConnell & Ma, 2013; Kormendy & Ho, 2013; Reines & Volonteri, 2015; Greene et al., 2016; Schutte et al., 2019). These correlations hold true across cosmic time, suggesting a coevolutionary growth between black holes and the galaxies which host them. These galaxies (and the dark matter halos in which they are located) are expected to merge (e.g. Fakhouri et al., 2010; Rodriguez-Gomez et al., 2015). Following a galaxy merger, the central black hole from each progenitor galaxy can migrate toward the galactic center of the newly merged galaxy, where they can form a binary and eventually merge together themselves (e.g. Mayer et al., 2007). Black hole mergers produce strong gravitational wave (GW) signals, and the coalescence of a pair of supermassive black holes found at the center of galaxies will produce the strongest GW signals in the Universe.
In the past several years, gravitational waves produced by black hole mergers have been detected using interferometers (e.g. Abbott et al., 2016), but size limitations due to the ground-based nature of these instruments mean that detections to this point have been limited to mergers between stellar mass black holes. Higher-mass mergers (i.e. between SMBHs) produce GWs with much longer wavelengths, beyond the sensitivity of ground-based interferometers. However, the upcoming Laser Interferometer Space Antenna (LISA) space mission will focus on lower-frequency GWs corresponding to higher-mass mergers, with sensitivities peaking at \(\sim 10^{4}-10^{7}M_{\odot}\)(Amaro-Seoane et al., 2017). Furthermore, Pulsar Timing Arrays should be capable of detecting even higher mass mergers, reaching black holes above \(10^{8}M_{\odot}\)(e.g. Verbiest et al., 2016; Desvignes et al., 2016; Reardon et al., 2016; Arzoumanian et al., 2018). The GWs detected by these observations should provide a new and
powerful mechanism to study SMBHs and their connection with their host galaxies.
SMBH mergers detected through GWs can provide a wide range of constraints on our understanding of black hole - galaxy coevolution, including estimating the rate at which SMBHs merge (e.g. Klein et al., 2016; Salcido et al., 2016; Kelley et al., 2017; Ricarte and Natarajan, 2018; Katz et al., 2020; Volonteri et al., 2020), the expected merger/coalescence timescale (e.g. Volonteri et al., 2020; Banks et al., 2022), how mergers influence the scaling relation between black holes and their host galaxies (e.g. Volonteri and Natarajan, 2009; Simon and Burke-Spolaor, 2016; Shankar et al., 2016), gas environment and accretion efficiencies (e.g. Kocsis et al., 2011; Barausse et al., 2014; Derdzinski et al., 2019), how SMBH seeds initially form (e.g. Sesana et al., 2007; Ricarte and Natarajan, 2018; DeGraf et al., 2021), and the potential connection with host galaxy morphologies (with multimessenger studies that combine GW and electromagnetic information, e.g. Volonteri et al., 2020; DeGraf et al., 2021). To maximize what we can learn from the initial SMBH merger detections, it is important to understand the underlying merging populations.
Cosmological simulations provide an ideal mechanism to characterise merging populations, as they self-consistently model both black holes and galaxy formation, encompass large volumes which provide robust statistical samples, and span a wide redshift range to investigate evolution over cosmic time. Current simulations (e.g. Vogelsberger et al., 2014; Dubois et al., 2014; Schaye et al., 2015; Feng et al., 2016; Pillepich et al., 2018; Henden et al., 2018; Dave et al., 2019; Chen et al., 2022b) resolve a wide range of scales, with BHs ranging from \(\sim 10^{4}-10^{10}M_{\odot}\), resolving black hole growth, mergers, and their host galaxy properties. The majority of cosmological simulations, however, tend to only resolve higher-mass black holes (on the order of \(10^{6}M_{\odot}\)), which misses the majority of LISA-detectable mergers. Additionally, simulations frequently do not physically model the infall/inspiral of the black holes leading to coalescence, which has the potential to significantly impact merging black hole populations (see, e.g. Volonteri et al., 2020; Banks et al., 2022). Here we study the Astrid simulation (Ni et al., 2022), which includes both low-mass seeds (down to \(10^{4.5}M_{\odot}\)), thereby resolving mergers at the peak of LISA's sensitivity; and it directly incorporates dynamical friction to model the orbital dynamics to small scales, providing a more accurate probe of inspiralling black holes, and prevents mergers from occurring during fly-by encounters (Chen et al., 2022a).
This paper is organized as follows: in Section 2 we provide an overview of the Astrid simulation, including black hole and dynamical friction models. In Section 3 we discuss the overall black hole population in Astrid, as well as the expected merging populations. In Section 4 we investigate the gravitational waves emitted by these mergers, and combine with LISA sensitivity to calculate the expected Signal-to-Noise (SNR) ratios for the full merger samples (Section 4.1). Finally, we summarize our conclusions in Section 5.
## 2 Method
In this work, we use Astrid (Bird et al., 2022), a cosmological simulation run using a version of the MP-Gadget smoothed-particle hydrodynamics (SPH) simulation code, a highly scalable version of the Gadget-3 code (Springel et al., 2005). The simulation consists of a (250 \(h^{-1}\) Mpc)\({}^{3}\) volume containing \(2\times 5500^{3}\) particles, resolving galactic halos down to \(10^{9}M_{\odot}\) from \(z=99\) to \(z=2\). The cosmological parameters for the simulation are based on measurements from Planck Collaboration et al. (2020) (\(\Omega_{0}=0.3089,\Omega_{\Lambda}=0.6911,\Omega_{b}=0.0486,\sigma_{8}=0.82\), and \(h=0.6774\)), and the simulation has an initial mass resolution of \(M_{DM}=6.74\times 10^{6}\)\(h^{-1}\)\(M_{\odot}\), \(M_{gas}=1.27\times 10^{6}\)\(h^{-1}\)\(M_{\odot}\) and a gravitational softening length of \(\epsilon=1.5\)\(h^{-1}\)\(kpc\).
The Astrid simulation includes detailed models for galaxy formation and evolution, including reionization (Battaglia et al., 2013; Faucher-Giguere, 2020) with self-shielding (Rahmati et al., 2013), star formation (Springel and Hernquist, 2003) with associated feedback (Okamoto et al., 2010) and metal return (Vogelsberger et al., 2013; Pillepich et al., 2018b). For a more detailed description of the physics modelled in this simulation, see Ni et al. (2022); Bird et al. (2022).
Of particular import for this analysis is the implementation of black holes in Astrid (Ni et al., 2022; Chen et al., 2022a). Black holes in the Astrid simulation are treated as collisionless sink particles, inserted into halos above a mass threshold of \(M_{\rm halo}=5\times 10^{9}\)\(h^{-1}\)\(M_{\odot}\) and \(M_{\star}=2\times 10^{6}\)\(h^{-1}\)\(M_{\odot}\) which do not already contain a black hole particle. Rather than using a fixed seed mass for black holes, Astrid selects a mass for each newly seeded black hole from a power law distribution (power law index \(=-1\)) from \(3\times 10^{4}-3\times 10^{5}\)\(h^{-1}\)\(M_{\odot}\), intended to remain broadly consistent with a variety of SMBH formation pathways and their subsequent growth (e.g. Begelman and Rees, 1978; Madau and Rees, 2001; Volonteri et al., 2003; Bromm and Loeb, 2003; Regan and Haehnelt, 2009; Katz et al., 2015; DeGraf and Sijacki, 2020). Once seeded, the black holes grow by merging with other black holes, and via mass accretion following a model based on a Bondi & Hoyle (1944)-like formalism applied to the SPH kernel of the black hole (periods of super-Eddington accretion are permitted, but capped at 2\(\times\) the Eddington rate). Accreting black holes are assumed to produce a bolometric luminosity proportional to the accretion rate (at 10% efficiency), and 5% of the radiated energy is assumed to couple thermally to the surrounding gas (this feedback energy is deposited isotropically among gas particles within the SPH kernel). Please see Ni et al. (2022) for additional details.
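For a power law of index \(-1\), the inverse-CDF draw is log-uniform, so the seed-mass sampling can be sketched as follows (a toy reimplementation for illustration, not the simulation's own code):

```python
import numpy as np

def sample_seed_masses(n, m_min=3e4, m_max=3e5, seed=None):
    """Draw BH seed masses from dN/dM ~ M^{-1} on [m_min, m_max] (Msun/h).

    For power-law index -1, inverting the CDF gives a log-uniform sample:
    M = m_min * (m_max / m_min)**u with u uniform on [0, 1).
    """
    rng = np.random.default_rng(seed)
    return m_min * (m_max / m_min) ** rng.random(n)

masses = sample_seed_masses(100000, seed=42)
```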
Rather than using a repositioning scheme to simply move all black holes toward nearby potential minima, Astrid implements a dynamical friction model for black holes (Tremmel et al., 2015; Chen et al., 2022b). We assume a Maxwellian distribution for the velocity distribution of the surrounding particles (both stars and dark matter), such that the dynamical friction force can be calculated (see Binney and Tremaine, 2008) as
\[\mathbf{F}_{\rm DF}=-4\pi\rho_{sph}\left(\frac{GM_{\rm BH}}{v_{\rm BH}}\right)^ {2}\log(\Lambda)\mathcal{F}\left(\frac{v_{\rm BH}}{\sigma_{v}}\right)\frac{ \mathbf{v}_{\rm BH}}{v_{\rm BH}}\,, \tag{1}\]
where \(\rho_{\rm sph}\) and \(\sigma_{v}\) are the density and velocity dispersion of the surrounding dark matter and star particles, \(\mathbf{v}_{\rm BH}\) is the velocity of the black hole relative to the surrounding medium, \(\Lambda\) is the Coulomb logarithm
\[\Lambda=\frac{b_{\rm max}}{(GM_{\rm BH})/v_{\rm BH}^{2}} \tag{2}\]
with \(b_{\rm max}=20kpc\), and \(\mathcal{F}\) is
\[\mathcal{F}=\mathrm{erf}(x)-\frac{2x}{\sqrt{\pi}}e^{-x^{2}},\qquad x=\frac{v_{\rm BH}}{\sigma_{v}} \tag{3}\]
from integrating the Maxwellian distribution. This dynamical friction implementation produces physically realistic motion for black holes due to small-scale interactions with the nearby matter, and stabilizes the black hole once it reaches the galactic center, providing more realistic information for the black holes leading up to mergers. For a more detailed discussion of this implementation and the black hole orbital information it produces, see Ni et al. (2022); Chen et al. (2022a).
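A direct transcription of Eqs. (1)-(3) into code looks like the sketch below; the unit system (pc, km/s, \(M_{\odot}\)) and the function name are illustrative choices, not the simulation's internals.

```python
import numpy as np
from scipy.special import erf

G = 4.3009e-3  # gravitational constant in pc (km/s)^2 / Msun (assumed units)

def dynamical_friction_force(m_bh, v_bh, rho_bg, sigma_v, b_max=2.0e4):
    """Dynamical friction force of Eqs. (1)-(3); valid for |v_bh| > 0.

    m_bh    : BH mass [Msun]
    v_bh    : BH velocity relative to the background, 3-vector [km/s]
    rho_bg  : local density of surrounding stars and dark matter [Msun/pc^3]
    sigma_v : velocity dispersion of the background [km/s]
    b_max   : maximum impact parameter [pc] (20 kpc in the simulation)
    """
    v = np.linalg.norm(v_bh)
    x = v / sigma_v
    F_x = erf(x) - 2.0 * x / np.sqrt(np.pi) * np.exp(-x * x)   # Eq. (3)
    log_lambda = np.log(b_max / (G * m_bh / v**2))             # Eq. (2)
    amp = -4.0 * np.pi * rho_bg * (G * m_bh / v) ** 2 * log_lambda * F_x
    return amp * np.asarray(v_bh) / v                          # Eq. (1)
```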
## 3 Black hole / merger populations
Before investigating mergers in the Astrid simulation, we first consider the overall populations of black holes over cosmic time. In Figure 1 we plot the black hole mass function (BHMF) at \(z=\)2, 3, 4, 5, 6, and 7 (solid, dashed, dotted, and dot-dashed, respectively) from the Astrid simulation (red), with a comparison to the TNG300 mass function (blue) at the same redshifts. We find the high end of the mass function (\(M_{\rm BH}>3\times 10^{6}M_{\odot}\)) tends to follow an approximate power law with a slope of \(\sim-1.05\) (for \(M_{\rm BH}>10^{6}M_{\odot}\)). The slope is slightly steeper at earlier times, but the primary evolution is the increase in normalization (increasing by more than 2 dex from z=7 to z=2; see Table 1 for best fitting power-law parameters at each redshift). The TNG300 BHMF is broadly consistent with these Astrid results, except that TNG300 produces more high-mass black holes at later times (\(\sim 3\times\) more \(\sim 10^{8}\)\(h^{-1}\)\(M_{\odot}\) black holes at z=4). As such, we can see that massive black holes are able to grow more efficiently in TNG than Astrid at late times (except for the most-massive end), which we can expect to have some impact on the merging populations as well. However, we note that this will only affect the largest black holes at the latest times, and thus will not influence the majority of mergers when comparing the two simulations.
At lower masses, we see that the two simulations diverge significantly, as a result of the seeding model. In the TNG300 simulation, we see a large spike at the lowest mass BHs (\(\sim 10^{6}M_{\odot}\)). This peak corresponds to the black hole seed mass used in the TNG simulations, and is a result of recently-seeded black holes tending to have very low accretion rates due to stellar feedback, especially at early times (Weinberger et al., 2017, 2018); hence a large number of black holes have not grown much beyond their seed masses. In contrast, the Astrid BHMF gets slightly steeper at lower masses, but does not have any qualitative shift until \(M_{\rm BH}\sim 10^{5.5}M_{\odot}\), below which we find a shallower power law. We recall that the seeding model used in Astrid initializes black holes with a starting mass selected from a power-law distribution with slope of -1 (see Section 2), which produces the behaviour we see here. We note that the best fitting slope for this plateau is slightly shallower than the slope for the seed mass selection (\(\sim-0.9\) rather than the \(-1\) used when seeding), and gets gradually shallower with time (from \(\sim-0.96\) at z=7 to \(\sim-0.5\) at z=2; see also Table 1). This is due to lower-mass seeds gradually growing into the higher mass range for seeding, and hence over time the number of black holes in the high-mass end of the seed range increases. As such the plateau we see in the BHMF is a result of the seed model used combined with a small amount of growth, while for higher-mass black holes (e.g. \(>10^{6}M_{\odot}\)) we find a well-behaved BHMF which is consistent with the TNG BHMF.
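The piecewise power-law fits reported in Table 1 can be reproduced with a simple least-squares fit in log-log space; a sketch (with an assumed catalogue of BH masses and box volume) is:

```python
import numpy as np

def bhmf_power_law_fit(masses, m_lo, m_hi, volume, n_bins=20):
    """Fit log10(Phi) = a + b*log10(M / 1e7 Msun) over [m_lo, m_hi] (cf. Table 1).

    masses : BH masses [Msun] at a snapshot
    volume : comoving volume of the box (sets the normalization of Phi)
    """
    edges = np.logspace(np.log10(m_lo), np.log10(m_hi), n_bins + 1)
    counts, _ = np.histogram(masses, bins=edges)
    centers = np.sqrt(edges[1:] * edges[:-1])        # geometric bin centers
    phi = counts / (volume * np.diff(np.log10(edges)))  # number / volume / dex
    good = counts > 0
    b, a = np.polyfit(np.log10(centers[good] / 1e7), np.log10(phi[good]), 1)
    return a, b
```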
Overall, we find that Astrid is similar to TNG300 (and comparable simulations), though with two significant differences. First, Astrid spans a larger range of black hole masses, extending to much smaller black holes resulting from the new seed model. Additionally, Astrid has an improved black hole treatment, including dynamical friction, which more accurately models black hole motion and mergers. As such, we expect the merger rates to be more strongly affected, and a potential impact on high-mass black holes (after longer time to grow), hence the discrepancy between Astrid and TNG300 in Figure 1 increases for late times and high masses.
Next we consider the mergers produced by this black hole population. In Figure 2 we plot the rate at which GW signals will reach the Earth from SMBH mergers, obtained by integrating the number of mergers in the simulation over redshifts, incorporating the cosmic volume at the given redshift:
\[\frac{{\rm d}N}{{\rm d}z\,{\rm d}t}=\frac{1}{z_{2}-z_{1}}\int_{z_{1}}^{z_{2}} \frac{{\rm d}^{2}n(z)}{{\rm d}z\,{\rm d}V_{\rm c}}\frac{{\rm d}z}{{\rm d}t} \frac{{\rm d}V_{\rm c}}{{\rm d}z}\frac{{\rm d}z}{1+z} \tag{4}\]
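Equation (4) can be evaluated from a simulated merger catalogue by binning mergers in redshift and weighting each bin by the full-sky comoving shell volume and the cosmic time dilation. A sketch using astropy is below; Planck15 stands in for the simulation's cosmology, and the catalogue format is an assumption.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck15 as cosmo  # stand-in cosmology

def observed_merger_rate(z_merge, box_volume, z_bins):
    """Observed GW event rate (per year) from a merger catalogue, cf. Eq. (4).

    z_merge    : array of merger redshifts in the box
    box_volume : comoving box volume [Mpc^3]; Astrid: (250 / 0.6774)**3
    z_bins     : redshift bin edges
    """
    counts, edges = np.histogram(z_merge, bins=z_bins)
    zc = 0.5 * (edges[1:] + edges[:-1])
    # comoving merger rate density per bin: mergers / Mpc^3 / yr
    dt = (cosmo.age(edges[:-1]) - cosmo.age(edges[1:])).to(u.yr)
    rate_density = counts / (box_volume * u.Mpc**3) / dt
    # full-sky comoving volume per unit redshift, with (1+z) time dilation
    dVc_dz = 4 * np.pi * u.sr * cosmo.differential_comoving_volume(zc)
    return np.sum(rate_density * dVc_dz * np.diff(edges) / (1 + zc)).to(1 / u.yr)
```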
We see that the TNG simulations (TNG300 - blue; TNG100 - yellow; TNG50 - purple) have broadly comparable behaviour: very rare mergers at high z, increasing with time to a peak at z\(\sim\)2, followed by a decrease in expected merger rates. The high-redshift behaviour in Astrid (red) is broadly similar, in that it starts with rare mergers at high z, and increases with time. However, we find an overall rate significantly higher in Astrid than any of the TNG
Figure 1: The black hole mass function for Astrid (red) and TNG300 (blue), at z=2-7. For each simulation, the black hole seed model is clearly seen at the lowest masses, but above the seed mass the two simulations broadly agree for \(z>4\), though TNG300 produces more high-mass black holes at late times.
Figure 2: Merger rate signal as a function of redshift in Astrid (red), TNG300 (blue), TNG100 (yellow), and TNG50 (purple). Since Astrid includes much lower mass black holes than the TNG simulations, the dashed red line shows the Astrid signal rate when limited to only mergers massive enough to be resolved in TNG300. We see that all five simulations have qualitatively similar behaviour, though with different normalization: Astrid has significantly more mergers (due to the lower-mass black holes). When considering only mergers between blackholes above \(2\times M_{\rm TNG,seed}\) (dotted lines), however, we find a _lower_ merger rate in Astrid.
simulations. Rather than a fundamental difference in mergers, however, this increased rate in Astrid is a result of the seed model initializing lower mass black holes in lower mass galaxies when compared to TNG. As shown in Figure 1, at high masses the simulations are consistent, but Astrid includes many low-mass BHs below TNG's seed mass, and hence will include a significant number of mergers which go unresolved in TNG.
To test this, the dashed lines in Figure 2 show the rates from the Astrid and TNG300 simulations after imposing a cut of \(2\times M_{\rm seed,TNG}\) (i.e. only mergers with \(M_{1}>M_{\rm seed,TNG}\) and \(M_{2}>M_{\rm seed,TNG}\), where \(M_{1}\) (\(M_{2}\)) is the larger (smaller) mass involved in a merger, and \(M_{\rm seed,TNG}\) is the black hole seed mass for the TNG simulations). A cut of \(M>2\times M_{\rm seed,TNG}\) removes black holes which are not yet seeded by TNG, and also ignores the strong peak in the TNG BH population caused by the seed criteria, and thus we are comparing equivalent black hole populations. Here we see that the majority of mergers involve low mass BHs (where at least one BH has \(M_{\rm BH}<2\times M_{\rm seed,TNG}\), hence dashed line well below the solid lines), and when limited only to the equivalent merger populations, Astrid actually has a _lower_ merger rate than the TNG simulations by a factor of \(\sim 3\).
There are several major differences between the simulations which have the potential to influence the merger rates. One factor is that Astrid seeds black holes into smaller halos, and the dependence of galaxy merger rates on galaxy mass will influence the rate at which BHs seeded in those galaxies end up merging. However, this will be limited primarily to low mass galaxies/BHs, and so would not explain the lower rate in Astrid among high-mass mergers. Another factor is the faster growth in TNG, which produces a larger population of high-mass black holes when compared to Astrid (see Figure 1); at \(z=4\) TNG has \(\sim 50\%\) more black holes above \(2\times M_{\rm seed,TNG}\). Hence we should expect a higher merger rate in TNG than Astrid, based solely on the population of black holes available to merge. Finally, there is the dynamical friction model: Astrid models black hole motion using a dynamical friction prescription (Chen et al., 2022) rather than a recentering scheme. As such, when a pair of galaxies merge together, satellite BHs (those found in the smaller of the two merging galaxies) can take longer to reach the galaxy center where they are then able to merge with the central BH, and the full dynamical friction model allows for flyby interactions rather than assuming an immediate merger when two black holes are sufficiently close together. Thus we expect black holes in simulations which incorporate dynamical friction to generally merge more slowly than those in simulations which use a recentering scheme. They should therefore have a lower overall merger rate (when controlling for resolved masses), precisely as we see when comparing Astrid to TNG300 (dotted lines in Figure 2).
To more directly consider merging masses, we plot the mass-redshift distribution of mergers in Figure 3 for both Astrid (left) and TNG300 (right). Consistent with Figures 1 & 2, we see that the lower seed mass in Astrid provides \(M_{1}\) more than an order of magnitude below what the TNG simulations resolve. Similarly, the lower halo mass threshold for seeding means that Astrid models black hole mergers out to earlier cosmic times, as a result of seeding black holes into smaller galaxies. Above the seed mass scales, however, we see that both Astrid and TNG300 produce remarkably similar distributions, with \(M_{1}>M_{\rm seed,TNG}\) mergers first occurring at \(z\sim 7\), and reaching \(M_{1}\sim 10^{10}M_{\odot}\) at z=2 (though Astrid's larger volume means there are a few unusually massive mergers at slightly earlier times). This is consistent with Figure 1, in that black holes well above the seed mass tend to grow at comparable rates in both Astrid and the TNG simulations at high-z, and thus the mass scale involved in the mergers tends to be similar. Combined with Figure 1, we see that the smaller seeds in Astrid (and the power-law seed mass distribution) are capable of growing to the masses necessary to match high-redshift black
\begin{table}
\begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{High-mass} & \multicolumn{2}{c}{Low-mass} \\ z & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline
2 & -2.7 & -1.0 & -1.6 & -0.49 \\
3 & -3.3 & -1.0 & -2.0 & -0.60 \\
4 & -3.9 & -1.1 & -2.5 & -0.79 \\
5 & -4.6 & -1.0 & -2.9 & -0.87 \\
6 & -5.4 & -1.1 & -3.3 & -0.93 \\
7 & -6.3 & -1.1 & -3.6 & -0.96 \\ \hline \end{tabular}
\end{table}
Table 1: Best fitting parameters for the black hole mass function (Figure 1), fitting \(M_{\rm BH}>10^{6}M_{\odot}\) to \(10^{a}(M/10^{7}M_{\odot})^{b}\), and \(M_{\rm BH}<10^{5.5}M_{\odot}\) (seed-mass range) to \(10^{c}(M/10^{7}M_{\odot})^{d}\).
Figure 3: Distribution of primary merger mass (\(M_{1}\)) and redshift for Astrid (left) and TNG300 (right; restricted to \(z>2\) to match Astrid). The different seed models allows Astrid to probe to lower masses and higher redshifts, but produce comparable results above \(M_{\rm seed,TNG}\).
hole observations and produce black hole populations fully consistent with current constraints. A complete analysis of typical black hole growth behaviours is beyond the scope of this paper, but will be discussed in an upcoming work.
To compare merger rates to black hole populations, in Figure 4 we plot the black hole mass function (solid lines), and compare to the merger mass function (dashed line) at \(z=2,3,4,5\). We define the merger mass function to be the mass function of \(M_{1}\) (the mass of the more massive BH involved in a merger) for all mergers which take place in the previous 75 Myr. Here we see that the merger mass function in each simulation is comparable to (though slightly lower than) the corresponding BHMF. In the case of TNG300, we see that above the seed mass both the BHMF and the merger mass function follow a rough power law, and both functions also show a peak at the seed mass (though that spike is smaller in the merger mass function). Similarly, in Astrid we see a rough power law above \(\sim 10^{6}M_{\odot}\), and a plateau below \(\sim 10^{5.5}\cdot M_{\odot}\) which corresponds to the power-law seeding model. Similar to TNG, we see that at the seed-mass scale the merger mass function is significantly below the BHMF: although there is a large population of recently seeded black holes, they are less likely to undergo a merger (as they necessarily need time for their host halos to merge, during which they are able to grow past their seed mass).
The ratio between the BHMF and the merger mass function is qualitatively similar between Astrid and TNG, suggesting a comparable merger rate when selecting BHs by mass. In Figure 5, we have divided the merger mass function by the BHMF to obtain a characteristic merger rate as a function of \(M_{\rm BH}\) (i.e. the typical number of mergers that a BH with a given mass would undergo in 1 Gyr). As discussed in reference to Figure 4, we expect the merger rate for BHs near the seed mass to generally be much lower than the merger rate for higher mass black holes: we see this explicitly in Figure 5, as both Astrid (red) and TNG300 (blue) show a significant dropoff near their respective seed masses. Above the seed masses, however, both simulations show an approximate power law relating merger rate to \(M_{\rm BH}\), such that high-mass black holes have a slightly higher merger rate than lower-mass black holes. Furthermore, we see that both Astrid and TNG300 have comparable merger rates; however, we note that this is somewhat coincidental, based on near seed-mass mergers. As seen in Figures 1-4, Astrid contains a large number of low-mass black holes which are unresolved in TNG, but which contribute to the merger rates shown in Figure 5. We show this explicitly with the dashed line, where we only consider mergers in which \(M_{2}>2\times M_{\rm seed,TNG}\), in which case we have roughly an order of magnitude fewer mergers. Nonetheless, after removing the low-mass (and thus less well-resolved) mergers, we find fairly comparable merger rates except at the highest masses, where Astrid predicts a slightly longer time between mergers.
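The characteristic merger rate of Figure 5 is simply the ratio of two mass histograms; a minimal sketch (array names assumed) is:

```python
import numpy as np

def merger_rate_per_bh(m1_recent, m_all, bins, window_gyr=0.075):
    """Mergers per BH per Gyr as a function of M_BH (cf. Figure 5).

    m1_recent : primary masses M_1 of mergers within the last window_gyr Gyr
    m_all     : masses of all BHs at the snapshot
    bins      : common mass-bin edges
    """
    n_merge, _ = np.histogram(m1_recent, bins=bins)
    n_bh, _ = np.histogram(m_all, bins=bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(n_bh > 0, n_merge / (n_bh * window_gyr), np.nan)
```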
## 4 Gravitational wave signals
In addition to the underlying black hole merger populations, we consider the gravitational waves emitted by calculating both the frequency and strain of the GW for each merger. We use the characteristic strain, \(h_{s}\), to model the binary signal which accounts for the time the binary spends in each frequency bin (Finn & Thorne, 2000). The characteristic strain is given by (e.g. Moore et al., 2015):
\[h_{s}^{2}(f)=4f^{2}|\tilde{h}(f)|^{2} \tag{5}\]
where \(\tilde{h}(f)\) represents the Fourier transform of a time domain signal. To generate the waveforms, we use the phenomenological waveform PhenomD (Husa et al., 2016; Khan et al., 2016) implemented within the gwsnrcalc Python package (Katz & Larson, 2019). The input parameters are the binary masses, merging redshift, and the dimensionless spins of the binary. For the SMBH masses, we do not account for mass growth after the numerical merger. However, we note that the SMBH can potentially gain a significant fraction of its mass during the \(>1\,\)Gyr of time in the dynamical friction (e.g. Banks et al., 2022) or loss-cone scattering phase. The dimensionless spin \(a\) characterizes the alignment of the spin angular momentum with the orbital angular momentum, and the value of \(a\) ranges from \(-1\) to \(1\). However, we do not have any information on the spin of the SMBHs in our simulation. Therefore, following the argument in Katz et al. (2020), we assume a constant dimensionless spin of \(a_{1}=a_{2}=0.8\) for all binaries (e.g. Miller, 2007; Reynolds, 2013).
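Once the characteristic-strain track \(h_{s}(f)\) of a binary is available, a sky-averaged SNR can be estimated against an analytic LISA sensitivity curve, e.g. that of Robson et al. (2019); the sketch below omits the galactic confusion foreground and is an approximation for illustration, not the pipeline used here.

```python
import numpy as np
from scipy.integrate import trapezoid

def lisa_sn(f):
    """Approximate sky-averaged LISA noise PSD S_n(f) (Robson et al. 2019)."""
    L, f_star = 2.5e9, 19.09e-3  # arm length [m], transfer frequency [Hz]
    p_oms = (1.5e-11) ** 2 * (1 + (2e-3 / f) ** 4)
    p_acc = (3e-15) ** 2 * (1 + (0.4e-3 / f) ** 2) * (1 + (f / 8e-3) ** 4)
    return (10 / (3 * L**2)) * (p_oms + 2 * (1 + np.cos(f / f_star) ** 2)
            * p_acc / (2 * np.pi * f) ** 4) * (1 + 0.6 * (f / f_star) ** 2)

def snr(f, h_s):
    """SNR from a characteristic-strain track: SNR^2 = int h_s^2/(f^2 S_n) df."""
    return np.sqrt(trapezoid(h_s**2 / (f**2 * lisa_sn(f)), f))
```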
Figure 4: BHMF (solid) for Astrid (red) and TNG300 (blue) at z=2 (top) to z=5 (bottom), compared to the merger mass function based on the previous 75 Myr (dashed; see text for details). In both simulations the merger mass functions are similar, though the Astrid merger mass function is smaller relative to the overall BHMF, suggesting a lower merger rate.
In Figure 6 we plot the range of frequencies and strains for GW signals emitted by mergers in Astrid (panel a), with each frequency-strain bin colour-coded by the mean redshift of the emitting mergers. We see that the majority of GW signals come from the lowest-redshift mergers (though note that the lowest redshift here means \(z\sim 2\), the latest time probed in this analysis), since the merger rate increases with time (at least at high-z; see Figure 2). We note that high-z does dominate the lowest-strain GW signals, which is not only a direct result of the merger being more distant (and hence a weaker signal), but also because high-z mergers tend to involve swallowing a lower-\(M_{2}\) BH (which necessarily produces a smaller strain compared to swallowing a higher-mass secondary BH). We also see an upper bound on the GW signals; however, we note that this is a result of the limited redshift window reached so far by Astrid (as the simulation continues to lower-z, the upper right area will continue to be filled in, which we see in panels \(b\) and \(d\) for TNG300 mergers).
In Figure 6b, we show the frequency-strain signals for the TNG300 mergers, again colour-coded by the merger redshift (though note the range for the colour bar is different, since TNG is complete to z=0). As in Figure 6a, we see that most signals come from the lowest redshifts (although here that means near \(z=0\)), and again high-z signals dominate the lowest strains. However, the much larger redshift range makes a direct comparison between these panels problematic; instead, in Figure 6d we plot the GW signals from TNG300, but restricted to the same redshift range as Astrid (\(z>2\)). Here we see a result which is more similar to the Astrid data (Figure 6a), but which spans a much more limited range of frequency and strain. The z\(\sim\)2 limit imposes a strict upper limit on the strain, and we see that both Astrid (Figure 6a) and TNG300 (Figure 6d) have similar distributions when limited to the same redshift range, with the exception that Astrid extends to higher frequencies and lower strains, which we expect since Astrid includes black holes below the resolved mass in TNG300. Overall, we see that the GW signals tend to be dominated by the lowest redshift mergers, with an exception for the highest frequency/lowest strain signals, which tend to be at higher redshifts.
Unlike Figure 6a, which spans the majority of the LISA frequency range, the frequency-strain distribution for TNG300 (Figure 6b & d) is limited only to LISA's low-frequency regime. This is a result of Astrid resolving much lower black hole masses, whose mergers produce higher frequency signals. We see this explicitly in Figure 6c, which shows the GW signals in Astrid, but limited to the same mass range as TNG300 (i.e. \(M_{2}>M_{\rm seed,TNG}\)), such that Figure 6c & d show the results from Astrid and TNG300 for the same mass and redshift ranges. Comparing these panels, we see that Astrid has fewer mergers (at fixed mass/redshift ranges) than TNG300, consistent with Figures 2 & 5, which also show fewer mergers. Since the mass scale is well above the seed mass, this is not a result of the seeding prescriptions. Rather it is a result of the dynamical friction modeling, which delays black hole mergers to later times relative to a simpler repositioning scheme. Overall, we see that the largest difference between the two simulations is that Astrid produces more high-frequency and low-strain signals, which correspond to low-mass mergers.
### SNR
Having calculated the GW frequency and strain emitted by merging black holes, we also consider the strength of the GW signal received by LISA. We estimate the signal-to-noise ratio (SNR) by integrating the ratio of the signal to the noise in the frequency domain. The sky-, orientation-, and polarization-averaged SNR is given by:
\[({\rm SNR})^{2}=\frac{16}{5}\int_{f_{\rm start}}^{f_{\rm end}}\frac{h_{s}^{2} }{h_{N}^{2}}f^{-1}df, \tag{6}\]
Figure 5: The typical merger rate for a BH with a given mass in the Astrid (red) and TNG300 (blue) simulations, as well as the merger rates limited to \(M_{1,2}>2\times M_{\rm seed,TNG}\) (dashed lines). Note that this is not the overall rate of mergers in the simulation volume; rather, it is the rate at which an individual BH of a given mass would be expected to undergo mergers (e.g., the lower panel suggests a z=5 BH with \(M\sim 10^{6.5}M_{\odot}\) would undergo about one merger per Gyr). Both simulations have comparable total merger rates, but the majority of mergers in Astrid involve a low-mass secondary BH below the TNG seed mass; the dashed lines only include mergers where both BHs are above \(2\times\) the TNG seed mass, providing a closer one-to-one comparison, in which case Astrid has a merger rate \(\sim 1\) dex lower than TNG300.
where \(f_{\rm start}=f(t_{\rm start})\) and \(f_{\rm end}=f(t_{\rm end})\), with \(t_{\rm start}\) and \(t_{\rm end}\) representing the starting and ending time when the signal is observed, and \(h_{N}\) denoting the amplitude of the LISA noise. Note that here we do not account for the eccentricity of the binaries, and assume circular orbits at the time of merger.
For the current configuration, we assume that the LISA observation lasts for 4 years. We further assume a most optimistic SNR for all mergers by taking \(t_{\rm end}=t_{\rm peak}\) and \(t_{\rm start}=t_{\rm peak}-4\)yrs. Under this assumption, we are always integrating the part of the waveform where the strain is maximized. However, as was discussed in Salcido et al. (2016) and Katz et al. (2020), the actual SNR may be smaller if there is an offset between the LISA observation window and the merger time of the binary.
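A sketch of how Eq. (6) can be evaluated numerically is shown below. Both the signal \(h_{s}(f)\) and the noise amplitude \(h_{N}(f)\) are toy analytic stand-ins (the real calculation uses the PhenomD strain and the LISA sensitivity curve), and the integration limits are placeholders for the 4-yr window described above.

```python
# Hedged sketch of the sky-averaged SNR integral, Eq. (6).
import numpy as np

f = np.logspace(-4, 0, 500)                     # frequency grid [Hz]
h_N = 1e-20 * np.sqrt(1e-7 / f + f / 1e-2)      # toy stand-in for the LISA noise
h_s = 1e-18 * (f / 1e-3) ** (-1.0 / 6.0)        # toy inspiral-like signal

f_start, f_end = 1e-4, 1e-1                     # placeholder observation window
m = (f >= f_start) & (f <= f_end)
y = (h_s[m] / h_N[m]) ** 2 / f[m]
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f[m]))   # trapezoid rule
snr = np.sqrt(16.0 / 5.0 * integral)
print(f"SNR = {snr:.1f}")
```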
In Figure 7, we plot the distribution of SNRs from Astrid (solid red) and TNG300 (solid blue) for \(z>2\) mergers, normalized by the simulation volume. In both simulations we find that the SNR peaks at very high values (\(10^{2}-10^{3}\)), with Astrid producing significantly more high-SNR mergers, while TNG300 has a flatter tail extending toward low-SNR. Since the majority of mergers tend to occur between low-\(M_{\rm BH}\) black holes (e.g., see Figures 3-4), these differences are expected due to the different seed criteria between the two simulations. For a more direct comparison, the dashed lines show the SNR distribution limited to mergers in which both merging black holes have \(M_{\rm BH}>2\times M_{\rm TNG,seed}\) (i.e. only including BHs more than double the TNG seed mass, to avoid issues related to black hole seeding). Here we see that the two simulations have a consistent peak at SNR just below \(10^{3}\), though we find that TNG300 has a nearly flat distribution for SNR\(>\)1, whereas Astrid predicts a moderate slope, producing fewer low-SNR mergers. This appears to be due to TNG300 producing more high-mass black holes at \(z\leq 4\) (see Figure 4) as
Figure 6: Frequency-strain signals for GWs emitted by BH mergers over cosmic time (binned data) against the LISA sensitivity curve (dashed black line). Each bin is colour-coded by the mean redshift of the source merger which emitted the given GWs. _(a):_ All signals from the Astrid simulation (limited to \(z>2\)). _(b):_ All signals from the TNG300 simulation, which is complete to z=0 (hence note the different colourscale range), but limited to higher masses. _(c):_ Signals from the Astrid simulation, but limited to the same mass scales as TNG300. _(d):_ Signals from the TNG300 simulation, but limited to the same redshift range (\(z>2\)) as the current state of the Astrid simulation.
Figure 7: Distribution of signal-to-noise ratios (SNR) for \(z>2\) mergers from Astrid and TNG300, normalized by simulation volume (solid lines). Both simulations show qualitatively similar behaviour, with the distribution peaking at high SNR. Because the simulations use different seed masses, we also compare the SNR distribution for mergers in which both merging black holes are above \(2\times\) the TNG seed mass (dashed lines), as well as the distribution of Astrid signals for mergers in which both merging black holes are above \(10^{5.5}\ h^{-1}\ M_{\odot}\) (the high-end of Astrid’s seed mass range).
Figure 8: Distribution of merging black hole masses for Astrid (left) and TNG300 (right), color coded by the mean log(SNR) for the mergers. We find that the strongest SNRs in Astrid are found for \(M_{1},M_{2}\sim 10^{6}M_{\odot}\); this is consistent with TNG300, although the higher seed mass in the TNG simulations means that the peak signals roughly correspond to the seed masses of the simulation. In both simulations, we also note that the SNR is more strongly correlated to \(M_{1}\) than \(M_{2}\).
Figure 9: Distribution of SNR vs. mass (left: \(M_{1}\); right: \(M_{2}\)) for \(z>3\) mergers in Astrid (top) and TNG300 (bottom), color-coded by density of mergers in the simulation volume. Consistent with Figure 8, we see that SNR is much more strongly correlated with \(M_{1}\) than \(M_{2}\), and we again find that Astrid and TNG300 are broadly consistent above the TNG seed mass.
a result of more efficient black hole growth. A detailed comparison of black hole growth efficiency between these simulations is beyond the scope of this paper, and is left for a future work. However, we note that the difference only affects the low-SNR end, and will thus have a comparatively minor impact on the LISA detection rate.
On the other hand, the high-SNR end is clearly significantly impacted by the seeding criteria. Although Astrid and TNG agree when limited to merging masses above \(2\times M_{\rm TNG,seed}\), the majority of mergers are removed when applying such a high mass cut. One of the major advantages of the Astrid simulation is that the lower seed masses allow us to effectively probe lower mass mergers. We show this with the dotted line in Figure 7, where we consider any mergers between BHs above the Astrid seed mass distribution (i.e. \(M_{\rm BH}>10^{5.5}\ h^{-1}\ M_{\odot}\), hence above the seed-mass dominated regime, as seen in Figure 1), which provides us with significantly more mergers. We note that including these lower-mass black holes produces a peak at the high-SNR end which is missed when imposing a higher mass cut (or higher seed masses). Furthermore, we find roughly triple the number of \({\rm SNR}<10^{2.5}\) mergers for \(M_{2}>10^{5.5}\ h^{-1}\ M_{\odot}\) than for \(M_{2}>2\times M_{\rm seed,TNG}\), emphasizing the importance of low-mass black holes. We also note that the slope of the \({\rm SNR}<10^{2.5}\) distribution remains roughly the same when using the lower mass cut. Overall, this suggests that the highest-SNR mergers are primarily between two low-mass BHs which are completely missed with a higher mass cut (hence the strong peak which the dashed line completely misses), while the lower-SNR mergers frequently involve at least one higher-mass black hole, so by including a smaller mass cut we increase the number of mergers at all SNR ranges without affecting the slope.
We investigate this in more detail in Figure 8, which shows the mass distribution of merging black holes (\(M_{1}\) vs. \(M_{2}\)), color coded by \(<\log({\rm SNR})>\) (i.e. the mean logarithmic SNR for mergers in the given mass bin), for \(z>3\) mergers in Astrid (left) and TNG300 (right). In the mass scales resolved in both simulations we find comparable SNR results (though we again see more high-mass mergers in TNG as a result of the more efficient growth). As seen in Figure 7, the larger mass range produced by Astrid's smaller seed mass results in significantly more mergers, especially for high SNRs. In particular, the Astrid simulation shows the SNR peaks at \(M_{1},M_{2}\sim 10^{6}M_{\odot}\), with the lower masses having weaker (though still strong) signals (they are below LISA's peak sensitivity, but still well within the detectable range).
Figure 8 also shows that SNR depends primarily on the larger mass (\(M_{1}\)), while the secondary black hole (\(M_{2}\)) has a relatively weak impact on SNR. We show this explicitly in Figure 9, which plots the correlation between SNR and the merging black hole masses (\(M_{1}\) - left; \(M_{2}\) - right) for Astrid (top) and TNG300 (bottom), color coded by the number density of mergers. The majority of mergers occur between two low-mass black holes (as seen, e.g., in Figure 4), and we again see that the highest SNRs occur when both \(M_{1}\) and \(M_{2}\) are \(\sim 10^{6}M_{\odot}\). We also see that SNR correlates quite strongly with \(M_{1}\), spanning \(\sim\)1 dex at \(M_{1}\sim 10^{6}M_{\odot}\) up to \(\sim\)2 dex at higher masses. In contrast, a low \(M_{2}\) spans a much wider range of SNRs, corresponding to the wide range of possible \(M_{1}\) (i.e. mass of the swallowing black hole), though most low-\(M_{2}\) mergers still have high SNR (since the majority of mergers will still have a low \(M_{1}\)). The relatively weak dependence on \(M_{2}\) means that the growth of the secondary BH does not play a significant role in the expected SNR for a given merger. However, it is nonetheless very important to include lower mass black holes to probe the full range of mergers which we expect LISA to be able to detect: mergers between \(\sim 10^{6}M_{\odot}\) black holes are both extremely common and produce the strongest gravitational wave signals for LISA, so limiting simulations to higher masses will necessarily miss the majority of mergers and their associated GW signals. On the other hand, we note that the LISA SNR peaks at both \(M_{1}\sim 10^{6}M_{\odot}\) and \(M_{2}\sim 10^{6}M_{\odot}\), so although incorporating smaller seed masses in simulations will provide more accurate estimates for the overall merger rates (which are dominated by low-mass mergers), mergers below \(\sim 10^{5}M_{\odot}\) will primarily produce lower-SNR signals.
## 5 Conclusions
In this work, we have investigated the black hole population in the Astrid simulation, which seeds black holes at masses as low as \(10^{4.5}\ h^{-1}\ M_{\odot}\), well below what is frequently used in comparable cosmological simulations. We have particularly focused on the mergers between black holes, looking at the overall merger rates, the merging masses, and the expected gravitational wave signals they produce, noting that Astrid directly incorporates a dynamical friction model which produces more realistic merger behaviour. Our main results are as follows:
* The Astrid simulation produces a comparable population of high-redshift black holes when compared to prior simulations (esp. TNG300), but with an alternate seed model which extends to much lower mass black holes. Above the masses resolved by both simulations, we see that Astrid and TNG have comparable mass functions (Figure 1), and also typical merging masses as a function of redshift (Figure 3).
* The overall merger rate is much higher in Astrid than in earlier TNG simulations, primarily due to mergers involving low-mass black holes which are not resolved in simulations with higher seed masses. When considering the same mass scales, Astrid has fewer mergers, likely a result of the added infall time resulting from the dynamical friction model incorporated in Astrid.
* The merger rate for a given black hole is mass-dependent, with massive black holes undergoing mergers more frequently than low-mass black holes. A typical black hole can be expected to undergo a merger every \(\sim\)1-10 Gyr.
* Including low-mass black holes (i.e. lower mass seeding prescriptions) is crucial for modeling LISA detections. High-mass seed models will only probe the low-frequency, high-strain regime within the LISA sensitivity band, while lower seed masses can extend across the full range of potential LISA detections.
* In addition to decreasing the overall merger rate, including dynamical friction can be expected to preferentially affect low-frequency, low-strain GW signals, which are primarily generated by low-\(M_{2}\) mergers. This further emphasizes the importance of including accurate models for both seeding and infall dynamics, as both have the potential to not only affect the expected signal rate, but also the frequency-strain distribution of GW signals that LISA will be expected to detect.
* The SNR distribution follows a rough power law until the highest signals (\({\rm SNR}>10^{2}\)), at which point there is a peak caused by \(M_{\rm BH}\sim 10^{6}M_{\odot}\) mergers. This peak is poorly resolved in simulations with high seed masses, but is 0.5-1.5 dex above the Astrid seed masses, providing a well resolved sample. Additionally, SNR is strongly correlated with the more massive black hole (\(M_{1}\)), and weakly correlated to the less massive black hole (\(M_{2}\); though note that \(M_{1}\) and \(M_{2}\) are correlated to each other).
* The LISA SNR has a weak dependence on redshift, at least in the early universe (\(z>2\)). There is evolution in the merging populations of black holes, with later times including higher-mass (and correspondingly smaller SNR) mergers. However, low mass mergers have stronger signals, are more common, and have relatively weak dependence on redshift (at least for \(z>3\), probed here).
In summary, we have shown that using a wide range of black hole seed masses extending down to \(10^{4.5}\ h^{-1}\ M_{\odot}\) produces a high-mass population of black holes comparable to simulations
which use higher seed masses, while simultaneously allowing us to investigate smaller black holes, mergers, and the associated gravitational waves they emit. In particular, we are able to provide a detailed investigation into mergers spanning much of the LISA sensitivity range, provide estimates for the expected GW signal detections, and the correlation between these detections and the underlying merger masses and redshifts, in a simulation which directly models dynamical friction to produce more accurate merger behaviour.
## Acknowledgements
Astrid was run on the Frontera facility at the Texas Advanced Computing Center. TDM acknowledges funding from the NSF AI Institute: Physics of the Future, NSF PHY-2020295, NASA ATP NNX17AK56G, and NASA ATP 80NSSC18K101. TDM acknowledges additional support from NSF ACT-1614853, NSF AST-1616168, NASA ATP 19-ATP19-0084, and NASA ATP 80NSSC20K0519. SPB was supported by NASA ATP 80NSSC22K1897.
## Data Availability
The code used for the simulation is available at [https://github.com/MP-Gadget/MP-Gadget](https://github.com/MP-Gadget/MP-Gadget). Halo catalogs and BH data are available upon reasonable request to the authors.
# Quantum Algorithm for Computing Distances Between Subspaces

Nhat A. Nghiem
###### Abstract
Geometry and topology have generated impacts far beyond their pure mathematical primitive, providing a solid foundation for many applicable tools. Typically, real-world data are represented as vectors, forming a linear subspace for a given data collection. Computing distances between different subspaces is generally a computationally challenging problem with both theoretical and applicable consequences, as, for example, the results can be used to classify data from different categories. Fueled by the fast-growing development of quantum algorithms, we consider such problems in the quantum context and provide a quantum algorithm for estimating two kinds of distance: Grassmann distance and ellipsoid distance. Under appropriate assumptions and conditions, the speedup of our quantum algorithm is exponential with respect to both the dimension of the given data and the number of data points. Some extensions regarding estimating different kinds of distance are then discussed as a corollary of our main quantum algorithmic method.
## I Introduction
Quantum computation has opened up a completely new frontier in computational science. A vast amount of difficult computational problems have been theoretically shown to be accelerated by quantum computation. Famous examples include integer factorization [1], unstructured database search [2], quantum simulation [3], the task of probing properties of a black-box function [4], solving linear system [5; 6], and approximating topological invariants [7]. More recently, the interplay between quantum science and machine learning, so-called quantum machine learning, has led to many fascinating works, such as quantum neural network [8; 9], quantum convolutional neural network [10], quantum support vector machine [11], etc. Under certain assumptions regarding input access, the exponential speedup is achievable, such as performing supervised learning and unsupervised learning using quantum processors [12] and performing fitting over large data set [13]. Unconditional proof of quantum advantage was provided in [14], where the authors showed that shallow circuits can completely outperform their classical counterparts.
Aside from the aforementioned instances, where the domain of investigation ranges from algebraic problems to data science & machine learning, the potential advantage of quantum computers has also been explored in the (computational) topology & geometry domain. Lloyd et al. [15] provided a quantum algorithm, the so-called LGZ algorithm, for computing Betti numbers of simplicial complexes, a classic problem arising from topological data analysis. Many follow-up works, such as [16; 17; 18; 19], have improved the running time, as well as the implementation costs, of the LGZ algorithm. In [20], the authors outlined a quantum algorithm for problems in computational geometry, showing significant speedup. In [21], a single-query, constant-time quantum algorithm is provided for detecting the homology class of closed curves, which is an interesting problem in computational topology. These examples suggest a potentially fruitful domain where quantum advantage could be further explored, since topology and geometry, while classical subjects within the pure mathematical domain, have provided a solid foundation for many applications. For example, the field of computational conformal geometry [22], which encompasses modern differential geometry, Riemann geometry, and algebraic topology, has provided many useful tools for challenging problems in computer vision and medical imaging, such as surface classification, registration, etc.
Motivated by these developments, particularly regarding geometry & topology, we tackle the following problem that arises from the same context: computing the distance between linear subspaces [23]. This problem has broad applications in machine learning, computer vision, and related areas, but is quite challenging from a computational point of view, since it requires the ability to evaluate singular values of possibly large matrices. We will show that quantum algorithmic techniques could enhance such tasks, yielding significant speedup compared to standard classical methods under certain conditions. Our work thus contributes another example where quantum algorithms could be beneficial for practical problems.
The structure of the paper is as follows. In Section II, we begin with a short introduction to topology and geometry, reviewing some preliminaries and building up intuition that is helpful for understanding the framework behind the two problems that we wish to solve: computing the distance between _linear subspaces_ and between _ellipsoids_. The formal description of, and the classical solution to, these problems are also presented accordingly. In Section III, we introduce some necessary recipes and tools that are crucial for our subsequent construction. Section IV is dedicated to our main result, an efficient quantum algorithm for computing the linear subspace distance, together with details on the error analysis and the final statement regarding its running time. The quantum solution to the second problem, computing the distance between ellipsoids, is presented in Section V; it is essentially an adaptive version of the method outlined in Section IV, combined with some well-known quantum algorithms, such as inverting a dense matrix [24]. In Section VI, we show how the main algorithms can be extended to estimate other kinds of distance between subspaces. In Section VII, we showcase how the Grassmann distance and the ellipsoid distance are estimated in the so-called memory model, which is a special type of quantum data structure that allows more subtle loading of classical data. We then conclude with some comments on our results in Section VIII and discuss future prospects.
## II Distance between subspaces
Intuitively, geometry and topology typically begin with a _set of points_ and other sets built from them, resulting in the so-called space. The most familiar space is probably the Euclidean space of a given dimension \(n\), denoted as \(R^{n}\), where each 'point' is associated with an \(n\)-tuple \((x_{1},x_{2},...,x_{n})\) (also called coordinates), where each \(x_{i}\in R\). The collection of such points, endowed with addition and scalar multiplication operations, forms a _linear vector space_. These concepts belong to a subject called linear algebra, which has found countless applications in science & engineering, due to its simplicity and versatility. A remarkable property of such Euclidean space is that the _distance_ between two points can be defined as some function of their coordinates. The Euclidean space is an instance of a more general concept called _manifold_, with the distance between two points now generalized as the _geodesic distance_.
_Linear Subspaces:_ Given a vector space \(R^{n}\), let \(\{m_{1},m_{2},...,m_{k}\}\) and \(\{n_{1},n_{2},...,n_{k}\}\) be two collections of linearly independent vectors. Without loss of generality, we can assume them to be orthonormal sets. Denote \(M_{A}\) and \(M_{B}\) as two \(k\)-dimensional vector subspaces spanned by \(\{m_{1},m_{2},...,m_{k}\}\) and \(\{n_{1},n_{2},...,n_{k}\}\), respectively. Our main problem is to compute the separation between \(M_{A}\) and \(M_{B}\). As suggested in [23], \(M_{A}\) and \(M_{B}\) could be regarded as two elements, or two points, of the so-called Grassmannian manifold, Gr(\(k\),\(n\)). The separation between \(M_{A}\) and \(M_{B}\) could be computed as the geodesic distance \(d_{M_{A},M_{B}}\) between two points on the manifold Gr(\(k\),\(n\)). Following [23], we now describe the approach for computing \(d_{M_{A},M_{B}}\).
First, we organize \(\{m\}:=\{m_{1},m_{2},...,m_{k}\}\) and \(\{n\}:=\{n_{1},n_{2},...,n_{k}\}\) as two column matrices \(M,N\in R^{n\times k}\) (which means that the \(i\)-th column of \(M,N\) are \(m_{i},n_{i}\) respectively). We now perform singular value decomposition (SVD) of \(M^{T}N\):
\[M^{T}N=U\cdot\Sigma\cdot V^{T}, \tag{1}\]
with \(\Sigma\in R^{k\times k}\) having diagonal entries \(\{\sigma_{i}\}_{i=1}^{k}\), the singular values of \(M^{T}N\). The so-called Grassmann distance between
Figure 1: **Left figure:** An elementary example of Euclidean space \(\mathbb{R}^{3}\) with two points A and B (and the origin O). The distance between A and B can be defined simply based on the corresponding coordinates of A and B. **Right figure:** Illustrative example of the distance between two subspaces. Instead of viewing A and B as two points, we consider two 1-dimensional subspaces, each spanned by \(O\vec{A}\) and \(O\vec{B}\). Intuitively, the angle between \(O\vec{A}\) and \(O\vec{B}\) can be used as a measure of ‘distance’ between the two 1-dimensional subspaces.
\(M_{A}\) and \(M_{B}\) can be computed as:
\[d_{M_{A},M_{B}}=\sqrt{\sum_{i=1}^{k}\theta_{i}^{2}}, \tag{2}\]
with \(\cos(\theta_{i})=\sigma_{i}\). We remark that since \(\{m\}\) and \(\{n\}\) are orthonormal bases, then \(0\leq\sigma_{i}\leq 1\) for all \(i\).
It is clear that the computational expense lies mostly in this SVD step. The running time of a classical algorithm that computes the above Grassmann distance is polynomial in \(n\) and \(k\). The key steps are multiplying \(M^{T}\) by \(N\) and performing the SVD of \(M^{T}N\). The multiplication of the \(k\times n\) matrix \(M^{T}\) and the \(n\times k\) matrix \(N\) takes time \(\mathcal{O}(nk^{2})\), while performing the SVD of a \(k\times k\) matrix using the so-called Jacobi rotations [25] takes \(\mathcal{O}(k^{3})\), yielding a total running time of \(\mathcal{O}(nk^{2}+k^{3})\). There are several works that provide quantum algorithms for performing SVD, such as [26; 27]. While there is some overlap, our subsequent quantum procedure employs somewhat different techniques, since the input data of our problem is different and, in particular, so is our objective.
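To make the classical baseline concrete, the following short sketch computes the Grassmann distance of Eq. (2) directly, for two randomly drawn subspaces (the random instances are only for illustration):

```python
# Hedged sketch: classical Grassmann distance via the SVD of M^T N, Eq. (2).
import numpy as np

n, k = 64, 8
rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal columns
N, _ = np.linalg.qr(rng.standard_normal((n, k)))

sigma = np.linalg.svd(M.T @ N, compute_uv=False)
sigma = np.clip(sigma, 0.0, 1.0)                  # guard against rounding past 1
theta = np.arccos(sigma)                          # principal angles
d = np.sqrt(np.sum(theta**2))                     # Grassmann distance
print(d)
```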
_Ellipsoids:_ Given a real symmetric positive definite matrix \(M\in R^{n\times n}\), an ellipsoid \(\mathcal{E}_{M}\) is defined as

\[\mathcal{E}_{M}=\{x\in R^{n}:x^{T}Mx\leq 1\}. \tag{3}\]
The problem is to find the separation, or distance, between two ellipsoids \(\mathcal{E}_{M}\) and \(\mathcal{E}_{N}\). A simple way to compute it is based on the **metric** defined on the cone of real symmetric positive definite matrices [23], which yields the distance \(\delta_{M,N}\) between \(\mathcal{E}_{M}\) and \(\mathcal{E}_{N}\):
\[\delta_{M,N}=\sqrt{\sum_{i=1}^{n}\log^{2}(\lambda_{i}(M^{-1}N))}, \tag{4}\]
where \(\lambda_{i}\) is the \(i\)-th eigenvalue of \(M^{-1}N\). Since both \(M\) and \(N\) are real positive definite, \(\{\lambda_{i}\}_{i=1}^{n}\) are real and positive. The running time of a classical algorithm that computes the above distance is similar to that of the Grassmann distance, namely \(\mathcal{O}(n^{3})\), as they share similar computational steps and in this case \(k=n\).
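Analogously, a minimal classical sketch of Eq. (4) is given below; it exploits the fact that the eigenvalues of \(M^{-1}N\) for symmetric positive definite \(M,N\) coincide with the generalized eigenvalues of the pair \((N,M)\), which are real and positive. The random SPD instances are for illustration only.

```python
# Hedged sketch: classical ellipsoid distance, Eq. (4).
import numpy as np
from scipy.linalg import eigh

n = 32
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)  # SPD
B = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)  # SPD

lam = eigh(N, M, eigvals_only=True)   # generalized eigenvalues = eig(M^{-1} N)
delta = np.sqrt(np.sum(np.log(lam) ** 2))
print(delta)
```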
## III Some Preliminaries
The previous section has provided some introduction and classical methods to compute Grassmann and ellipsoid distance. This section provides a review of necessary quantum tools that we would later use to develop our quantum algorithms.
**Definition 1** (Block Encoding Unitary): _Let \(A\) be some Hermitian matrix of size \(N\times N\). Let a unitary \(U\) have the following form:_
\[U=\begin{pmatrix}A&\cdot\\ \cdot&\cdot\end{pmatrix}.\]
_Then \(U\) is said to be a block encoding of matrix \(A\). Equivalently, we can write:_
\[U=\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\otimes A+\cdots\]
where \(\left|\mathbf{0}\right\rangle\) denotes the first computational basis state in some larger Hilbert space. It is quite obvious from the above definition that
\[A_{ij}=(\langle\mathbf{0}|\otimes\langle i|)U(\left|\mathbf{0}\right\rangle \otimes\left|j\right\rangle), \tag{5}\]
where \(A_{ij}\) refers to the entry of \(A\) at \(i\)-th row and \(j\)-th column.
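As a small numerical illustration of Definition 1 (and of Eq. (5)), one standard construction, assuming a Hermitian \(A\) with \(\|A\|\leq 1\), is the two-by-two block unitary with blocks \(A\), \(\sqrt{I-A^{2}}\), \(\sqrt{I-A^{2}}\), \(-A\), which has \(A\) as its top-left block:

```python
# Hedged sketch: a textbook block encoding of a Hermitian A with ||A|| < 1.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
H = rng.standard_normal((4, 4))
A = (H + H.T) / 2
A /= 1.1 * np.linalg.norm(A, 2)                 # enforce ||A|| < 1
S = np.real(sqrtm(np.eye(4) - A @ A))           # sqrt(I - A^2), commutes with A

U = np.block([[A, S], [S, -A]])
assert np.allclose(U @ U.T, np.eye(8))          # U is unitary (A real symmetric)
assert np.allclose(U[:4, :4], A)                # top-left block is A, cf. Eq. (5)
```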
The next one is a very powerful technique that is often dubbed as quantum signal processing [28; 29].
**Lemma 1** (Quantum Signal Processing [28]): _Let f be a polynomial of degree d with \(\left|f(x)\right|\leq 1/2\) for all \(x\in[-1,1]\). Let U be a block encoding of some Hermitian matrix A. Then the following transformation_
\[\begin{pmatrix}A&\cdot\\ \cdot&\cdot\end{pmatrix}\longrightarrow\begin{pmatrix}f(A)&\cdot\\ \cdot&\cdot\end{pmatrix}\]
_is the so-called block encoding of \(f(A)\) and can be realized using \(d\) applications of \(U\) and \(U^{\dagger}\), plus one controlled-\(U\) gate._
The above Lemma is the most general statement of the quantum signal processing technique, which shows the flexibility in how the transformation of a given matrix can be carried out. As pointed out in [29], by making use of the Jacobi-Anger expansion, one can efficiently transform a block encoding of a Hamiltonian \(H\) into a high-precision approximation of \(\exp(-iHt)\). The resulting cost turns out to be optimal, which demonstrates the surprising power of the quantum signal processing technique.
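To give a feel for this, the following sketch numerically checks the truncated Jacobi-Anger expansion \(e^{itx}\approx J_{0}(t)+2\sum_{k=1}^{d}i^{k}J_{k}(t)T_{k}(x)\), where \(J_{k}\) are Bessel functions and \(T_{k}\) Chebyshev polynomials; this is the kind of polynomial that quantum signal processing implements on a block-encoded Hamiltonian (with \(t\to-t\) for \(\exp(-iHt)\)). The degree and evolution time below are arbitrary choices.

```python
# Hedged sketch: truncated Jacobi-Anger expansion of exp(i t x) on [-1, 1].
import numpy as np
from scipy.special import jv
from numpy.polynomial.chebyshev import chebval

t, d = 5.0, 20
x = np.linspace(-1.0, 1.0, 201)

coeffs = np.zeros(d + 1, dtype=complex)
coeffs[0] = jv(0, t)
for k in range(1, d + 1):
    coeffs[k] = 2.0 * (1j ** k) * jv(k, t)      # Chebyshev coefficients

approx = chebval(x, coeffs)                      # sum_k coeffs[k] * T_k(x)
print(np.max(np.abs(approx - np.exp(1j * t * x))))  # decays rapidly once d > t
```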
Before proceeding further, we make the following remark. At the core of the quantum signal processing technique is the operation on the block-encoded operator (see Lemma 1). The transformed operator is again block encoded. If we wish to apply this operator to some state and extract measurement outcomes from the output state, how can it be done within the subspace of the encoded operator? The answer is simple: we do not need to worry about it. Let \(U\) denote the unitary block encoding of some operator \(A\), and let \(\left|\mathbf{0}\right\rangle\left|\Phi\right\rangle\) denote a state of the same dimension as \(U\), where \(\left|\Phi\right\rangle\) has the same dimension as \(A\) and \(\left|\mathbf{0}\right\rangle\) accounts for the remaining dimensions. As we shall also show later, if we apply \(U\) to \(\left|\mathbf{0}\right\rangle\left|\Phi\right\rangle\), then we have:
\[U\left|\mathbf{0}\right\rangle\left|\Phi\right\rangle=\left|\mathbf{0}\right\rangle A \left|\Phi\right\rangle+\sum_{j\neq 0}\left|j\right\rangle\left|Garbage_{j}\right\rangle \tag{6}\]
If we perform a measurement on the above state and condition on the extra register being \(\left|\mathbf{0}\right\rangle\), then the resulting state yields the measurement statistics of \(A\left|\Phi\right\rangle\), which is what we are interested in. Therefore, the effect of the higher-dimensional encoding space can be trivially removed by simply conditioning on the \(\left|\mathbf{0}\right\rangle\) subspace. Throughout the work, we simply work in the subspace of the encoded operator, for simplicity.
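The following toy sketch makes Eq. (6) explicit: acting with a block-encoding unitary on \(\left|\mathbf{0}\right\rangle\left|\Phi\right\rangle\) and keeping the \(\left|0\right\rangle\) branch of the ancilla recovers \(A\left|\Phi\right\rangle\). The encoding used here is the same illustrative construction as in the sketch after Definition 1, not the one built later in the paper.

```python
# Hedged sketch of Eq. (6): apply a block-encoded A and postselect on |0>.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)
H = rng.standard_normal((4, 4))
A = (H + H.T) / 2
A /= 1.1 * np.linalg.norm(A, 2)
S = np.real(sqrtm(np.eye(4) - A @ A))
U = np.block([[A, S], [S, -A]])                 # block encoding of A

phi = rng.standard_normal(4)
phi /= np.linalg.norm(phi)
out = U @ np.concatenate([phi, np.zeros(4)])    # U acting on |0>|phi>

assert np.allclose(out[:4], A @ phi)            # the |0> branch carries A|phi>
print(np.linalg.norm(out[:4]) ** 2)             # postselection success probability
```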
The last recipe we need to construct our quantum algorithm is an efficient matrix application, obtained from a simple adaptation of the quantum random walk method [6], and stated in the following lemma.
**Lemma 2** (Efficient Matrix Application): _Given coherent oracle access to some s-sparse, Hermitian matrix \(H\) of dimension \(n\times n\), and a given \(n\times 1\) state \(\left|b\right\rangle\). Then there is a unitary \(U_{H}\) that acts in the following way,_
\[U_{H}\left|0^{m}\right\rangle\left|b\right\rangle=\left|0^{m}\right\rangle \left(H/s\right)\left|b\right\rangle+\left|\Phi_{\perp}\right\rangle,\]
_where \(\left|\Phi_{\perp}\right\rangle\) is some unimportant state (not properly normalized) that is orthogonal to \(\left|0^{m}\right\rangle\left(H/s\right)\left|b\right\rangle\), i.e, \(\left|0^{m}\right\rangle\left\langle 0^{m}\right|\otimes\mathbf{1}\left|\Phi_{ \perp}\right\rangle=0\). The unitary \(U_{H}\) runs in time_
\[\mathcal{O}\Big{(}\log(n)\,poly\big{(}\log(\frac{1}{\epsilon})\big{)}\Big{)},\]
_where \(\epsilon\) is the error tolerance._
Since the above prior results were collected for our purpose, we refer the readers to their respective original works for more thorough treatment. Now, we proceed to apply them to construct our main quantum algorithms.
## IV Quantum algorithm for linear subspaces distance
This section is dedicated to the first problem that we described in a previous section: computing the Grassmann distance between linear subspaces, each spanned by an orthonormal set. We sketch the main idea behind our quantum algorithm as follows.
First, we recall that the goal is to perform the SVD (Eq. 1) on \(M^{T}N\) to obtain the singular values for subsequent algebraic operations. In order to achieve such a goal, we need to be able to simulate \(\exp(-iM^{T}Nt)\), followed by a phase estimation procedure (similar to the HHL algorithm [5]) to 'extract' the singular values and perform our desired
calculation from these values. However, \(M^{T}N\) is not necessarily symmetric, and hence might not be diagonalizable. We thus aim to simulate \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\) instead. The reason is that, while \(M^{T}N\) may not be symmetric, \((M^{T}N)^{T}M^{T}N\) is symmetric, and hence diagonalizable, and its eigenvalues are the squares of the singular values of \(M^{T}N\). Furthermore, as we mentioned in a previous section, the singular values of \(M^{T}N\) are guaranteed to be non-negative, and hence we will not suffer from a sign problem when taking the square roots of the eigenvalues of \((M^{T}N)^{T}M^{T}N\).
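A quick numerical check of this symmetrization trick (with random orthonormal frames, used purely for illustration) is as follows:

```python
# Hedged sketch: the eigenvalues of K = (M^T N)^T (M^T N) are the squared
# singular values of M^T N, and they are non-negative.
import numpy as np

rng = np.random.default_rng(5)
n, k = 32, 6
M, _ = np.linalg.qr(rng.standard_normal((n, k)))
N, _ = np.linalg.qr(rng.standard_normal((n, k)))

P = M.T @ N
K = P.T @ P                                     # symmetric, hence diagonalizable
lam = np.sort(np.linalg.eigvalsh(K))
sig2 = np.sort(np.linalg.svd(P, compute_uv=False) ** 2)
assert np.allclose(lam, sig2)
assert np.all(lam >= -1e-12)                    # no sign ambiguity in the sqrt
```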
The following section is devoted to simulating \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\).
### Encoding Matrix
We first construct a block encoding unitary of \((M^{T}N)^{T}M^{T}N\). For simplicity, let \(K\equiv(M^{T}N)^{T}M^{T}N\). It is worth noting that the entry \(K_{ij}\) is exactly the inner product of the \(i\)-th and \(j\)-th columns of \(M^{T}N\). Since \(M\) and \(N\) are not symmetric (indeed, not even square), we perform the same trick suggested in [5]: instead of working with \(M\) and \(N\), we work with their Hermitian embedding matrices \(\tilde{M}\) and \(\tilde{N}\) (both of dimension \((n+k)\times(n+k)\)) via
\[\tilde{M}=\begin{pmatrix}0&M\\ M^{T}&0\end{pmatrix},\quad\tilde{N}=\begin{pmatrix}0&N\\ N^{T}&0\end{pmatrix}. \tag{7}\]
It is straightforward to see that
\[\tilde{M}\tilde{N}=\begin{pmatrix}MN^{T}&0\\ 0&M^{T}N\end{pmatrix}.\]
Without loss of generality, we assume that both \((n+k)\) and \(k\) are powers of \(2\) (since we can always pad the matrices with extra zero entries), and write \((n+k)=2^{p}k\). Before proceeding further, we note the following: if \(\ket{i}\) is the \(i\)-th computational basis state, then \(A\ket{i}\) is the \(i\)-th column of the matrix \(A\). Note that Lemma 2 allows us to 'apply' \(A\) efficiently, resulting in \(A\ket{i}\) entangled with \(\ket{0^{m}}\). Therefore, if \(\ket{j}\) is the \(j\)-th basis state in the Hilbert space of dimension \(k\), then \(\tilde{M}\tilde{N}\ket{2^{p}}\ket{j}=\ket{2^{p}}M^{T}N\ket{j}\). Thus, by virtue of Lemma 2, we have an efficient procedure to implement the following unitary:
\[U_{K}\ket{\mathbf{0}}\ket{j}=\ket{0^{m}}\ket{0^{m}}\ket{2^{p}}(M^{T}N/s)\ket{j}+\ket{Garbage}, \tag{8}\]
where \(s=s_{M}s_{N}\), i.e., the product of the sparsity of \(M\) times the sparsity of \(N\). One may wonder why there are two registers \(\ket{0^{m}}\)'s. It comes from the fact that we consecutively use lemma 2 to apply \(\tilde{M}\) and \(\tilde{N}\). In order to construct the unitary encoding of \(K\), we need to slightly modify \(U_{K}\) to obtain different unitaries \(U_{K,1}\) and \(U_{K,2}\) in the following ways.
\(\bullet\) Beginning with \(U_{K}\ket{\mathbf{0}}\ket{j}\), we append another ancilla initialized in \(\ket{00}\):
\[\ket{0^{m}}\ket{0^{m}}\ket{2^{p}}(M^{T}N/s)\ket{j}\ket{00}+\ket{Garbage}\ket{00 }.\]
Use \(X\) gate on the last bit to flip \(\ket{0}\) to \(\ket{1}\), and apply \(C^{2m+1}X\) conditioned on the first two registers being \(\ket{0^{m}}\ket{0^{m}}\) and last bit being \(\ket{1}\) (as the control) to flip the next-to-last qubit (as the target) to obtain:
\[\ket{0^{m}}\ket{0^{m}}\ket{2^{p}}(M^{T}N/s)\ket{j}\ket{01}+\ket{Garbage}\ket{11 }.\]
We denote the above process as \(U_{K,1}\).
\(\bullet\) With the state above, we apply a CNOT using the next-to-last bit as the control bit to flip the last bit, and we obtain:
\[\ket{0^{m}}\ket{0^{m}}\ket{2^{p}}(M^{T}N\ket{j}/s)\ket{01}+\ket{Garbage}\ket{10 }.\]
We denote the combination of \(U_{K,1}\) plus the above extra step as \(U_{K,2}\). It is easy to see the following:
\[\bra{\mathbf{0}}\bra{i}U_{K,2}^{\dagger}U_{K,1}\ket{\mathbf{0}}\ket{j}=\frac{1}{s^{2}}\bra{i}(M^{T}N)^{T}M^{T}N\ket{j}=K_{ij}/s^{2}. \tag{9}\]
The above property matches the definition of block encoding (Definition 1). Therefore, \(U_{K,2}^{\dagger}U_{K,1}\) is the unitary block encoding of \(K\equiv(M^{T}N)^{T}M^{T}N\) divided by \(s^{2}\).
### Computing Distance \(d_{M_{A},M_{B}}\)
As we have already succeeded in producing the unitary block encoding of \(K\), it is quite straightforward to apply the tools in Lemma 1 to simulate \(\exp(-iKt)\). We have the following result.
**Lemma 3**: _The evolution \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\) can be simulated to accuracy \(\epsilon_{H}\) in time_
\[\mathcal{O}\Big{(}s^{2}\log(n+k)(t+\log(\frac{1}{\epsilon_{H}}))\Big{)}, \tag{10}\]
_where \(s=s_{M}s_{N}\)._
The ability to simulate \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\), combined with the QPE subroutine [30; 31] yields the ability to extract the eigenvalues of \((M^{T}N)^{T}M^{T}N\). This method has been used in the famous HHL algorithm [5] to invert a given matrix, hence yielding an efficient quantum algorithm for solving linear systems. In order to compute the Grassmann distance, we need to make some modifications, as we will work with mixed states instead of pure states as in [5]. However, the analysis regarding error provided in [5] still holds in the general case, and we will soon exploit it to prove the efficiency and error bound of our distance calculation procedure.
The distance calculation procedure begins with the following mixed state:
\[\rho=\frac{1}{k}\sum_{i=1}^{k}\left|i\right\rangle\left\langle i\right|, \tag{11}\]
which can be easily prepared by applying \(H^{\otimes\log(k)}\otimes I^{\otimes\log(k)}\) to \(\left|0\right\rangle^{\otimes\log(k)}\otimes\left|0\right\rangle^{\otimes\log(k)}\), followed by a layer of \(\log(k)\) CNOT gates, and then tracing out either register (a small numerical illustration of this preparation is given after Eq. (13) below). Since \(\rho\) is diagonal and proportional to the identity matrix in the computational basis, it is also diagonal in the basis of eigenvectors of \((M^{T}N)^{T}M^{T}N\). We recall that since \((M^{T}N)^{T}M^{T}N\) is symmetric, its eigenvectors are mutually orthogonal. Let \(\{\left|u_{i}\right\rangle,\lambda_{i}\}_{i=1}^{k}\) denote these eigenvectors and eigenvalues. We use the following important relations and formulas:
* \(\lambda_{i}=\sigma_{i}^{2}\) for all i, where \(\sigma_{i}\) is the singular value of \(M^{T}N\).
* \(0\leq\sigma_{i}\leq 1\) for all \(i\), i.e., \(\sigma_{i}\) is non-negative for all \(i\).
* Grassmann distance: \[d_{M_{A},M_{B}}=\sqrt{\sum_{i=1}^{k}\theta_{i}^{2}},\] (12)
where \(\cos(\theta_{i})=\sigma_{i}\). We remind the reader that since \(0\leq\sigma_{i}\leq 1\), \(\theta_{i}=\arccos(\sigma_{i})\) can be chosen to lie within the range \([0,\pi/2]\) for all \(i\). Therefore, the Grassmann distance can be rewritten as:
\[d_{M_{A},M_{B}}=\sqrt{\sum_{i=1}^{k}(\arccos(\sigma_{i}))^{2}}=\sqrt{\sum_{i =1}^{k}(\arccos(\sqrt{\lambda_{i}}))^{2}}. \tag{13}\]
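As promised above, here is a small numerical illustration of the state-preparation step: creating the maximally entangled state \((1/\sqrt{k})\sum_{i}\left|i\right\rangle\left|i\right\rangle\) and tracing out one register leaves exactly \(\rho=I/k\):

```python
# Hedged sketch: rho = I/k from a maximally entangled pair plus a partial trace.
import numpy as np

k = 4
Psi = np.eye(k) / np.sqrt(k)    # amplitudes Psi[i, j] of (1/sqrt(k)) sum_i |i>|i>

# For |psi> = sum_ij Psi[i, j] |i>|j>, the partial trace over the second
# register is rho = Psi @ Psi^dagger.
rho = Psi @ Psi.T
assert np.allclose(rho, np.eye(k) / k)
```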
In order to compute the above distance, we first run the QPE, with varied time unitary \(\exp(-i(M^{T}N)^{T}M^{T}Nt/C^{2})\) and \(\rho\) as the input, in a similar fashion to the first part of the HHL algorithm [5]. More precisely, we make use of lemma 3 to apply the controlled evolution (as in [5])
\[\sum_{\tau}\left|\tau\right\rangle\left\langle\tau\right|\otimes\exp(-i(M^{T }N)^{T}M^{T}N\tau t_{0}/T), \tag{14}\]
for varying \(\tau\) (the register that holds \(\left|\tau\right\rangle\) is called the phase register) and a fixed \(T\) to the following state as part of the QPE subroutine:
\[\frac{1}{T}\sum_{\tau,\tau^{\prime}=0}^{T-1}\left|\tau\right\rangle\left\langle \tau^{\prime}\right|\otimes\rho, \tag{15}\]
followed by an inverse quantum Fourier transform on the phase register. Ideally, if the QPE is exact, we would obtain the following state:
\[\frac{1}{k}\sum_{i=1}^{k}\left|\lambda_{i}\right\rangle\left\langle\lambda_{i} \right|\otimes\left|u_{i}\right\rangle\left\langle u_{i}\right|. \tag{16}\]
Now we append another ancilla initialized as \(\left|0\right\rangle\) (technically, it should be written \(\left|0\right\rangle\left\langle 0\right|\) as we are dealing with mixed states; however, this causes no issue), and perform the following rotation controlled by the phase register \(\{\left|\lambda_{i}\right\rangle\}\):
\[\left|0\right\rangle\rightarrow\Big{(}\frac{\arccos(\sqrt{\lambda_{i}})}{( \pi/2)}\left|0\right\rangle+\sqrt{1-(\frac{\arccos(\sqrt{\lambda_{i}})}{(\pi/2 )})^{2}}\left|1\right\rangle\Big{)}, \tag{17}\]
we obtain:
\[\frac{1}{k}\sum_{i=1}^{k}\left|\lambda_{i}\right\rangle\left\langle\lambda_{i} \right|\otimes\left|u_{i}\right\rangle\left\langle u_{i}\right|\otimes\Big{(} \frac{\arccos(\sqrt{\lambda_{i}})}{(\pi/2)}\left|0\right\rangle+\sqrt{1-( \frac{\arccos(\sqrt{\lambda_{i}})}{(\pi/2)})^{2}}\left|1\right\rangle\Big{)} \Big{(}\frac{\arccos(\sqrt{\lambda_{i}})}{(\pi/2)}\left\langle 0\right|+\sqrt{1-( \frac{\arccos(\sqrt{\lambda_{i}})}{(\pi/2)})^{2}}\left\langle 1\right| \Big{)}. \tag{18}\]
The above state seems somewhat complicated. However, we only need to pay attention to the part entangled with the state \(\left|0\right\rangle\left\langle 0\right|\) on the ancilla. If we make a measurement on the ancilla, the probability of measuring \(\left|0\right\rangle\left\langle 0\right|\) is:
\[p_{0}=\frac{4}{\pi^{2}k}\sum_{i=1}^{k}(\arccos(\sqrt{\lambda_{i}}))^{2}=\frac{ 4}{\pi^{2}k}d_{M_{A},M_{B}}^{2}. \tag{19}\]
Hence, once we can estimate \(p_{0}\), for example, by repeating the measurement and counting the frequency of seeing \(0\), then we can estimate \(d_{M_{A},M_{B}}\). In order to estimate \(p_{0}\) to accuracy \(\delta^{2}\), and \(d_{M_{A},M_{B}}\) to accuracy \(\mathcal{O}(\delta)\), we need to repeat the measurement \(\mathcal{O}(1/\delta^{2})\) times.
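An end-to-end toy of this readout (ideal QPE, simulated measurement counts; the instance is random and purely illustrative) looks as follows:

```python
# Hedged sketch: estimate d_{M_A, M_B} from simulated |0> counts via Eq. (19).
import numpy as np

rng = np.random.default_rng(6)
n, k = 64, 8
M, _ = np.linalg.qr(rng.standard_normal((n, k)))
N, _ = np.linalg.qr(rng.standard_normal((n, k)))

theta = np.arccos(np.clip(np.linalg.svd(M.T @ N, compute_uv=False), 0.0, 1.0))
d_true = np.sqrt(np.sum(theta**2))

p0 = 4.0 / (np.pi**2 * k) * np.sum(theta**2)    # ideal probability, Eq. (19)
shots = 100_000                                 # O(1/delta^2) repetitions
p0_hat = rng.binomial(shots, p0) / shots
d_hat = 0.5 * np.pi * np.sqrt(k) * np.sqrt(p0_hat)
print(d_true, d_hat)
```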
### Error Analysis
The previous section assumes an ideal case, as we have emphasized, namely that the QPE is exact. In reality, there is an error due to finite-bit precision in the QPE, which means that we only obtain the approximated phase, \(\hat{\lambda}_{i}\), instead of \(\lambda_{i}\). Fortunately, this critical issue has been analytically worked out and dealt with in [5]. We refer the readers to [5] for a complete treatment, as we will not repeat the calculation here. Rather, we directly apply their analysis to our context. Denote \(\kappa\) as the condition number of \(M^{T}N\) (which means that \(\kappa^{2}\) is the condition number of \((M^{T}N)^{T}M^{T}N\)); if we have
\[t_{0}=\mathcal{O}(\kappa^{2}/\epsilon_{p}),\]
where \(t_{0}\) is a fixed term appearing in the controlled evolution, i.e., in Eqn. 14, then the analysis from [5] applied here yields the following bound on the error:
\[\sqrt{\frac{4}{\pi^{2}k}\sum_{i=1}^{k}\Big{(}\arccos(\sqrt{\hat{\lambda}_{i}} )-\arccos(\sqrt{\lambda_{i}})\Big{)}^{2}}\leq\epsilon_{p}. \tag{20}\]
The finite-bit precision in the QPE further affects the actual measurement outcome. We summarize our result in the following lemma.
**Lemma 4**: _In the non-ideal case, the probability of measuring \(\left|0\right\rangle\left\langle 0\right|\) becomes:_
\[\tilde{p}_{0}=\frac{4}{\pi^{2}k}\sum_{i=1}^{k}(\arccos(\sqrt{\hat{\lambda}_{ i}}))^{2}, \tag{21}\]
_and_
\[\left|\tilde{p}_{0}-p_{0}\right|\leq\epsilon_{p}. \tag{22}\]
Proof of the above lemma is given in Appendix A. It essentially means that in the non-ideal case, we can only estimate an approximate value of \(p_{0}\), or equivalently, of \(d_{M_{A},M_{B}}\), since \(d_{M_{A},M_{B}}=\frac{1}{2}\pi\sqrt{k}\sqrt{p_{0}}\).
Having outlined a quantum algorithm for computing the linear subspace distance, we have seen several sources of error throughout the whole procedure. By way of summary, we revisit the main steps of our algorithm together with the corresponding error contributions before establishing the final running time with respect to the overall error tolerance, which we eventually set to \(\epsilon\):
\(\bullet\)_State Preparation:_ Error \(\epsilon_{S}\) induced by the matrix application step (Lemma 2). In principle, \(\epsilon_{S}\) would contribute to the simulation error of \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\). However, the running time of the preparation step scales as \(\mathcal{O}(\log(1/\epsilon_{S}))\), which is very efficient. Therefore, we can neglect the error contribution from this step, as we can make it very small at a modest cost.
\(\bullet\)_Simulating Evolution:_ Error \(\epsilon_{H}\) induced directly by the imperfect simulation of \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\), as a result of the density matrix exponentiation technique [32]. As we stated in Lemma 3, the running time scales as \(\mathcal{O}(\log(1/\epsilon_{H}))\), which is logarithmic and hence efficient. We remind the reader that our algorithm shares a similar routine with the HHL algorithm [5], where the error induced by simulating the evolution of a given matrix, say \(\exp(-iHt)\), is likewise negligible.
\(\bullet\)_Performing Quantum Phase Estimation:_ The error \(\epsilon_{p}\) results from the imperfect phase estimation. This is probably the most delicate factor in our algorithm: an inaccurate phase estimation results in an inaccurate distance estimate, as we point out in Eq. (21).
\(\bullet\)_Estimating Distance \(d_{M_{A},M_{B}}\) From Measurement Outcome:_ Its error \(\delta\) comes from the estimation of \(\tilde{p_{0}}\), which incurs a further factor of \(\mathcal{O}(1/\delta^{2})\) in the running time as a result of the Chernoff bound. We remark that quantum amplitude estimation [31] can improve this cost to \(\mathcal{O}(1/\delta)\). Alternatively, this step can be carried out in \(\mathcal{O}(1)\) wall-clock time by preparing \(\mathcal{O}(1/\delta^{2})\) identical quantum circuits, running them in parallel, and averaging the results statistically.
All in all, the most dominant error comes from the phase estimation step, similar to [5]. For simplicity, we can set the desired error tolerance to be \(\epsilon\). We establish our first main result.
**Theorem 1** (Estimation of Grassmann Distance): _Given access to matrices \(M\) and \(N\in R^{n\times k}\) as defined in Section II, let \(\kappa\) denote the condition number of \(M^{T}N\). Then the Grassmann distance between \(M_{A}\) and \(M_{B}\), which are spanned by the column vectors of \(M\) and \(N\), respectively, can be estimated to additive accuracy \(\epsilon\) in time_
\[\mathcal{O}\Big{(}\kappa^{2}s^{2}\log(n+k)\cdot\frac{1}{\epsilon^{2}}\Big{)}.\]
This is a general statement about the running time of the algorithm that we have outlined. Now, we will discuss some aspects of the algorithm, and particularly, we will show that there are specific scenarios that achieve a better running time.
### Comments
_Condition Number:_ The dependence on the condition number \(\kappa\) is worth examining further. The concern is: when will \(\kappa\) be small? Since \(M\) and \(N\) are formed by orthonormal vectors that could be drawn at random, it is generally hard to predict how \(\kappa\) would behave. However, in Section IV.1, we have described a geometric picture of the entries of \(M^{T}N\), which are essentially the inner products of the corresponding columns of \(M\) and \(N\). We now mention the following interesting result regarding the condition number of a matrix.
**Theorem 2** (Theorem 1 in [33] (Lower Bound on Singular Value)): _Let \(A\) be an \(n\times n\) matrix, assumed to be diagonally dominant by rows, and set \(\alpha=\min_{k}(|a_{kk}|-\sum_{j\neq k}|a_{kj}|)\). Then \(\sigma_{min}>\alpha\)._
The above result works equivalently if \(A\) is diagonally dominant by columns. Recall that the diagonal entry \((M^{T}N)_{ii}\) is the inner product of the \(i\)-th columns of \(M\) and \(N\). What does it mean for \(M^{T}N\) to be diagonally row/column dominant? If, for every \(i\), the vector \(m_{i}\) is 'very close' to \(n_{i}\) and quickly becomes almost orthogonal to the remaining vectors from \(\{n\}\) as the index runs away from \(i\), then we can guarantee that the matrix \(M^{T}N\) is diagonally row dominant. By virtue of the above theorem, the minimum singular value of \(M^{T}N\) is lower bounded by a constant (independent of the dimension), which directly implies that its condition number \(\kappa\) is upper bounded by a constant.
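The bound of Theorem 2 is easy to probe numerically; the sketch below builds a strongly diagonally dominant matrix (random, for illustration only) and compares \(\alpha\) against the smallest singular value:

```python
# Hedged sketch: diagonal dominance lower-bounds sigma_min (cf. Theorem 2).
import numpy as np

rng = np.random.default_rng(7)
n = 50
A = 0.01 * rng.standard_normal((n, n))
np.fill_diagonal(A, 1.0)                        # strongly dominant diagonal

row_off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
alpha = np.min(np.abs(np.diag(A)) - row_off)
sigma_min = np.linalg.svd(A, compute_uv=False).min()
assert 0 < alpha < sigma_min
print(alpha, sigma_min)
```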
_Advantage Over Classical Algorithm:_ We have mentioned in Section II that the seemingly best classical algorithm for computing Grassmann distance takes time
\[\mathcal{O}(nk^{2}+k^{3}). \tag{23}\]
Compared with the quantum running time (Theorem 1), we see that if \(\kappa,s\ll n,k\), for example of order \(\approx\mathcal{O}(\log(n,k))\), then there is an exponential speedup with respect to both \(n\) and \(k\). In the previous paragraph, we have mentioned a setting where \(\kappa\) can be small. Recall that \(s=s_{M}s_{N}\); therefore, our quantum algorithm performs best when both \(M\) and \(N\) are sparse. In the dense regime, i.e., \(s\approx\max(n,k)^{2}\), the quantum running time could be as large as \(\mathcal{O}(\max(n,k)^{4})\), in which case the advantage over the classical algorithm is lost. We note that the classical running time above holds in all cases: even when both \(M\) and \(N\) are sparse, their product is not guaranteed to be sparse, so classically one may still need to perform the SVD of a dense matrix.
## V Quantum algorithm for computing ellipsoid distance
Now we turn our attention to the second problem: computing the distance between two ellipsoids \(\mathcal{E}_{M}\) and \(\mathcal{E}_{N}\), each defined by a real symmetric positive definite matrix \(M\) and \(N\in R^{n\times n}\), respectively. Without loss of generality, we assume the eigenvalues of \(M\) and \(N\) to be bounded in the known range \((1/\kappa,1)\), which is always achievable by rescaling. To recall, the distance is calculated as follows:
\[\delta_{M,N}=\sqrt{\sum_{i=1}^{n}\log^{2}(\lambda_{i}(M^{-1}N))}. \tag{24}\]
At first, the symmetry of both \(M\) (and hence of \(M^{-1}\)) and \(N\) may seem useful; however, it does not make calculating the above distance trivial. The reason is that \(M^{-1}N\) is not necessarily symmetric, which can be seen by simply taking the transpose:
\[(M^{-1}N)^{T}=N^{T}(M^{-1})^{T}=NM^{-1},\]
which is different from \(M^{-1}N\) in general.
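A quick numerical check of this (with random SPD matrices, for illustration) also confirms that, despite the non-symmetry, the eigenvalues of \(M^{-1}N\) are real and positive, since \(M^{-1}N\) is similar to the SPD matrix \(M^{-1/2}NM^{-1/2}\):

```python
# Hedged sketch: P = M^{-1} N is generally non-symmetric, yet its eigenvalues
# are real and positive (P is similar to the SPD matrix M^{-1/2} N M^{-1/2}).
import numpy as np

rng = np.random.default_rng(8)
n = 16
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)

P = np.linalg.solve(M, N)                       # M^{-1} N without forming M^{-1}
assert not np.allclose(P, P.T)                  # non-symmetric in general
lam = np.linalg.eigvals(P)
assert np.allclose(lam.imag, 0.0) and np.all(lam.real > 0)
```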
### Encoding Matrix
For convenience, we first set \(P=M^{-1}N\). In order to resolve the non-symmetry issue, we use the same trick as in the last section, i.e., we simulate \(\exp(-iP^{T}Pt)\) and then perform algebraic operations on its eigenvalues. This problem is somewhat more challenging than the Grassmann distance, as it requires a matrix inversion. This is exactly the utility of the celebrated quantum linear solver algorithms [5; 6]. We remark that, in our case, \(M\) can be dense. The inversion of a dense matrix has been done in [24], with a particular quantum data structure. That kind of quantum data structure is not assumed in our case, as we only work with the familiar blackbox model. Therefore, we use the result of [6] to achieve the matrix inversion, since this method achieves better scaling on the error tolerance.
**Lemma 5** (Matrix Inversion [6]): _Given access to some Hermitian matrix M of size \(n\times n\), an initial state \(b\equiv\ket{b}\). There is a unitary that performs the following map:_
\[U_{M}\ket{0}\ket{b}=\ket{0}\ket{M^{-1}b}+\ket{1}\ket{Garbage}. \tag{25}\]
_The running time of \(U_{M}\) is \(\mathcal{O}\Big{(}s_{M}\log(n)\kappa_{M}\,poly(\log 1/\epsilon)\Big{)}\), where \(s_{M}\) is the sparsity of the matrix M and \(\epsilon\) is some tolerance error (the state \(\ket{M^{-1}b}\) is just an approximation of the exact state)._
With the above result, combined with Lemma 2, we are able to create the states corresponding to the columns of \(P\). More specifically, we use Lemma 2 to achieve the following:
\[U_{N}\ket{0^{m}}\ket{i}=\ket{0^{m}}(N/s_{N})\ket{i}+\ket{Garbage}, \tag{26}\]
where \(s_{N}\) is the sparsity of \(N\). For a reason that will become clear later, we append another ancilla \(\left|0\right\rangle\) and work in a larger Hilbert space. We would have instead:
\[\mathbb{I}\otimes U_{N}\left|0\right\rangle\left|0^{m}\right\rangle\left|i \right\rangle=\left|0\right\rangle\left|0^{m}\right\rangle\left(N/s_{N}\right) \left|i\right\rangle+\left|0\right\rangle\left|Garbage\right\rangle. \tag{27}\]
Next, we use \(\left|0^{m}\right\rangle\) as the control system to flip the first qubit, i.e., we transform the above state to:
\[\left|1\right\rangle\left|0^{m}\right\rangle\left(N/s_{N}\right)\left|i \right\rangle+\left|0\right\rangle\left|Garbage\right\rangle.\]
We then use the matrix inversion from Lemma 5, controlled by the first qubit being \(\left|1\right\rangle\), to invert \(M\) on the state \(\left(N/s_{N}\right)\left|i\right\rangle\). To be more precise, we need \(\left|0\right\rangle\left(N/s_{N}\right)\left|i\right\rangle\) in order for \(U_{M}\) to be effective. The extra \(\left|0\right\rangle\) can be borrowed from \(\left|0\right\rangle^{m}\), i.e., we have \(\left|0\right\rangle^{m}\left(N/s_{N}\right)\left|i\right\rangle\equiv \left|0\right\rangle^{m-1}\left|0\right\rangle\left(N/s_{N}\right)\left|i\right\rangle\). We further note that this time the unitary \(U_{M}\) of Lemma 5 is controlled by a qubit being \(\left|1\right\rangle\) (the first register in the above state). We obtain the following unitary, denoted as \(U\):
\[U\left|0\right\rangle\left|0^{m}\right\rangle\left|i\right\rangle=\left|1 \right\rangle\left|0\right\rangle^{m-1}\left(\left|0\right\rangle M^{-1}(N/s_{ N})\left|i\right\rangle+\left|1\right\rangle\left|Garbage_{1}\right\rangle\right)+\left|0 \right\rangle\left|Garbage\right\rangle. \tag{28}\]
Since the role of all garbage states is the same, as they do not contribute to the final result, we simplify the above representation as:
\[\left|1\right\rangle\left|0\right\rangle^{m}M^{-1}N/s_{N}\left|i \right\rangle+\left|Garbage\right\rangle. \tag{29}\]
Now, we again use the \(m\)-qubit register as a control and, conditioned on it being \(\left|0^{m}\right\rangle\), flip the first qubit back to \(\left|0\right\rangle\). The final state becomes
\[\left|0\right\rangle\left|0^{m}\right\rangle M^{-1}(N/s_{N})\left|i\right\rangle +\left|Garbage\right\rangle, \tag{30}\]
which has a similar form to Eqn. 8. Therefore, we can use the same procedure, with two extra ancilla qubits, as outlined in Section IV.1 to block-encode the matrix \(C^{2}P^{T}P/s_{N}^{2}\) (recall that \(M^{-1}N\left|i\right\rangle\) is the \(i\)-th column of \(M^{-1}N\) and we have set \(P\equiv M^{-1}N\)). The application of Lemma 1 then yields the following:
**Lemma 6**: _The simulation of \(\exp(-i(M^{-1}N)^{T}M^{-1}Nt)\) can be achieved up to accuracy \(\epsilon\) in_
\[\mathcal{O}\Big{(}s_{M}s_{N}^{2}poly\log(n,\frac{1}{\epsilon}) \cdot\kappa_{M}t\Big{)}.\]
### Computing Ellipsoid Distance
Once we can simulate \(\exp(-iP^{T}Pt)\), we run the QPE with the completely mixed state \(\rho=(1/n)\sum_{i=1}^{n}\left|i\right\rangle\left\langle i\right|\) as the input. After the QPE, we obtain a state similar to Eqn. 16. We then append an ancilla \(\left|0\right\rangle\) and rotate as follows (again, ignoring the mixed-state formalism):
\[\left|0\right\rangle\rightarrow\log(\tilde{\lambda_{i}})\left|0 \right\rangle+\sqrt{1-\log(\tilde{\lambda_{i}})^{2}}\left|1\right\rangle, \tag{31}\]
where \(\tilde{\lambda_{i}}\) is the approximated \(i\)-th eigenvalue of \(P\equiv M^{-1}N\). We then measure the ancilla; the probability of measuring \(\left|0\right\rangle\) can be shown to be:
\[p=\sum_{i=1}^{n}\log^{2}(\tilde{\lambda_{i}})/n=\tilde{\delta}_{M,N}^{2}/n, \tag{32}\]
where \(\tilde{\delta}_{M,N}\) denotes the approximated value of the real ellipsoid distance. Following the same analysis as in the previous section (which is based on the analysis of the HHL algorithm), if we choose \(t=\mathcal{O}(\kappa_{P}/\epsilon)\) (where \(\kappa_{P}\) is the condition number of \(P\)) in the simulation of \(\exp(-iP^{T}Pt)\), then we can guarantee an overall error \(\epsilon\), i.e., \(|\tilde{\delta}_{M,N}-\delta_{M,N}|\leq\epsilon\). The value of \(p\) itself can be estimated to accuracy \(\epsilon\) by repeating the measurement \(\mathcal{O}(1/\epsilon^{2})\) times, which can be improved to \(\mathcal{O}(1/\epsilon)\) by employing quantum amplitude estimation [31]. We now establish our result regarding the estimation of the ellipsoid distance.
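To make the measurement-based estimation concrete, the sketch below classically emulates the statistics of Eq. (32). The rescaling of the rotation amplitude by \(L=\max_{i}|\log\lambda_{i}|\) is our own assumption, introduced only to keep the amplitude in \([-1,1]\); in the quantum algorithm, the a priori bound \(\log\kappa_{P}\) would play this role.

```python
import numpy as np
from scipy.linalg import eigh

def emulated_distance_estimate(M, N, shots=200_000, seed=0):
    """Classical emulation of the ancilla-measurement estimate, cf. Eq. (32)."""
    rng = np.random.default_rng(seed)
    lam = eigh(N, M, eigvals_only=True)       # eigenvalues of M^{-1} N
    L = np.max(np.abs(np.log(lam)))           # stand-in for log(kappa_P)
    amp = np.log(lam) / L                     # rescaled rotation amplitudes
    p = np.mean(amp ** 2)                     # Pr(ancilla = |0>)
    p_hat = rng.binomial(shots, p) / shots    # O(1/epsilon^2) repetitions
    return np.sqrt(len(lam) * p_hat) * L      # back out delta_{M,N}
```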
**Theorem 3** (Estimation of Ellipsoids Distance): _Let \(\mathcal{E}_{M}\) and \(\mathcal{E}_{N}\) be ellipsoids defined by two real symmetric positive definite matrices \(M\) and \(N\). Given local access to entries of \(M\) and \(N\), the distance between \(\mathcal{E}_{M}\) and \(\mathcal{E}_{N}\), denoted as \(\delta_{M,N}\), can be estimated in time:_
\[\mathcal{O}\Big{(}s_{M}s_{N}^{2}\log(n)\frac{\kappa_{P}\kappa_{M}}{\epsilon^{2 }}\Big{)},\]
_where \(\kappa_{P}\) is the condition number of \(M^{-1}N\)._
One may wonder whether there is a missing factor of order \(\mathcal{O}(polylog(1/\epsilon))\), as it appears in the running time of Lemma 6. The reason is that we simply absorb that scaling into the \(\mathcal{O}(1/\epsilon^{2})\), which is a result of the Hadamard estimation plus an HHL-like procedure.
_Advantage Over Classical Algorithm:_ As mentioned in Sec. II, the best classical algorithm for computing the ellipsoid distance has running time \(\mathcal{O}(n^{3})\), dominated by the classical SVD. If both \(M\) and \(N\) are sparse, with low condition numbers (of order \(\log(n)\)), then there is an exponential speedup. Even when the matrix \(M\) is not sparse, i.e., \(s_{M}\in\mathcal{O}(n)\), but \(N\) is sparse and \(M^{-1}N\) has a low condition number, there is a polynomial speedup.
In the Grassmann distance case, we pointed out a specific scenario where the condition number of the corresponding matrix could be low; it is, however, quite difficult to find such a case for the ellipsoid distance. The reason is the inversion \(M^{-1}\). If \(M\) is orthogonal, i.e., \(M^{-1}=M^{T}\), then the problem becomes similar to the Grassmann distance case, where we can have a specific scenario with a low condition number.
## VI Extension to Different Kinds of Distances
The above two main quantum algorithms were devoted to estimating the so-called Grassmann distance and ellipsoid distance. Here we aim to extend the application of our quantum algorithm, specifically the Grassmannian case, by discussing some other related distances that could also be practically useful [23]. We note that we adopt the same notation as in Section IV.
The first one is called the _Asimov distance_, which is defined as follows:
\[d_{M,N}^{A}=\theta_{k}, \tag{33}\]
where \(\cos(\theta_{k})=\sigma_{k}\) is the smallest singular value of \(M^{T}N\). Therefore, we first need to find the value of \(\sigma_{k}\). Finding the maximum and minimum eigenvalues of a given Hermitian matrix has been done in [34], based on the (classical) power method. In our case, we are given the ability to simulate \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\). Still, the method of [34] can be adapted in a straightforward manner. It yields the approximated minimum eigenvalue of \((M^{T}N)^{T}M^{T}N\), which in turn gives an estimate of \(\sigma_{k}^{2}\). As analyzed in [34], the random initialization step is quite critical to the performance of the algorithm, as it might diminish the exponential speedup; in general, only a quadratic speedup is obtained.
The next one, very close to the Asimov distance, is called the _projection distance_, defined simply as:
\[d_{M,N}^{P}=\sin(\theta_{k}), \tag{34}\]
where the angle \(\theta_{k}\) is defined in the same way as in the Asimov distance. It is easy to see that \((d_{M,N}^{P})^{2}=1-\cos^{2}(\theta_{k})=1-\sigma_{k}^{2}\). Therefore, estimating \(\sigma_{k}^{2}\) is sufficient; such an estimate was just described above using the method of [34]. Thus, its efficiency is the same as that of the Asimov distance.
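A classical stand-in for this step (our sketch; the names and the shift trick are ours) is the power method applied to \(\mathbb{I}-H\) with \(H=(M^{T}N)^{T}M^{T}N\): since \(\sigma_{i}=\cos(\theta_{i})\leq 1\), we have \(\|H\|_{2}\leq 1\), so the dominant eigenvalue of \(\mathbb{I}-H\) is \(1-\lambda_{\min}(H)=1-\sigma_{k}^{2}\).

```python
import numpy as np
from scipy.linalg import orth

def min_singular_value_sq(M, N, iters=2000, seed=1):
    """Estimate sigma_k^2 = lambda_min((M^T N)^T (M^T N)) by power iteration."""
    rng = np.random.default_rng(seed)
    H = (M.T @ N).T @ (M.T @ N)
    x = rng.standard_normal(H.shape[0])   # random initialization (see text)
    for _ in range(iters):
        x = x - H @ x                     # apply (I - H)
        x /= np.linalg.norm(x)
    return 1.0 - x @ H @ x                # Rayleigh quotient gives lambda_min

# usage with subspaces given by orthonormal columns
M = orth(np.random.default_rng(2).standard_normal((50, 4)))
N = orth(np.random.default_rng(3).standard_normal((50, 4)))
sigma_k = np.sqrt(max(min_singular_value_sq(M, N), 0.0))
asimov = np.arccos(np.clip(sigma_k, 0.0, 1.0))    # Eq. (33)
```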
Now we discuss the _Chordal distance_, which is defined as follows:
\[d_{M,N}^{C}=\Big{(}\sum_{i=1}^{k}\sin(\theta_{i})^{2}\Big{)}^{1/2}. \tag{35}\]
We note the following:
\[(d_{M,N}^{C})^{2}= \sum_{i=1}^{k}\sin(\theta_{i})^{2} \tag{36}\] \[=k-\sum_{i=1}^{k}\cos(\theta_{i})^{2}\] (37) \[=k-\sum_{i=1}^{k}\sigma_{i}^{2}. \tag{38}\]
Therefore, it suffices to estimate \(\sum_{i=1}^{k}\sigma_{i}^{2}\). This can be done in a very similar manner to the Grassmann distance estimation, except that in the rotation step (Eqn. 17) no further arithmetic operation is needed, which makes it even simpler. Therefore, the Chordal distance can be estimated with the same running time as the Grassmann distance; see Theorem 1.
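Since all three quantities above derive from the singular values of \(M^{T}N\), a compact classical cross-check of Eqs. (33)-(38) (again our sketch, assuming \(M,N\) have orthonormal columns) is:

```python
import numpy as np

def grassmann_type_distances(M, N):
    """Asimov, projection and Chordal distances from one SVD of M^T N."""
    sigma = np.clip(np.linalg.svd(M.T @ N, compute_uv=False), 0.0, 1.0)
    theta_k = np.arccos(sigma[-1])            # largest principal angle
    asimov = theta_k                          # Eq. (33)
    projection = np.sin(theta_k)              # Eq. (34)
    chordal = np.sqrt(len(sigma) - np.sum(sigma ** 2))   # Eqs. (35)-(38)
    return asimov, projection, chordal
```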
## VII Estimating Grassmann distance and ellipsoid distance in the memory model
In the above sections, we worked in the standard blackbox model, which assumes coherent access to entries of the corresponding matrices. Here, we explore how the so-called memory model can potentially enhance the estimation of the Grassmann and ellipsoid distances. In particular, we shall see that the quantum running time in the memory model is sparsity-independent. This is within our expectation, as the authors of [24] also used such a model to construct a quantum linear solver whose running time is independent of the sparsity of the given matrix.
To begin with, the memory model was proposed in [35] as a novel quantum architecture that allows efficient loading of classical data. In the standard blackbox model, data entries are accessible individually, whereas in the memory model data is usually loaded column/row-wise. In [35], the authors showed how this model gives rise to an efficient quantum algorithm for the recommendation system. In particular, this model, combined with the famous quantum phase estimation algorithm, yields a sparsity-independent quantum linear solver [24]. While the recommendation system problem was "de-quantized" in the seminal work of Tang [36] (where the author showed that, under appropriate assumptions regarding input access, there exists a classical algorithm that solves the recommendation system problem with at most a polynomial slowdown compared to the quantum counterpart), the advantage of a dense quantum linear solver seems to hold due to the BQP-completeness of matrix inversion [5]. Throughout this work, we have seen that the blackbox model yields efficient quantum algorithms for estimating different kinds of geometric distances. It is thereby very interesting to expand the potential of the memory model in solving computational tasks other than dense linear systems [24].
Before diving into the algorithms, we first recall the features of the memory model.
**Lemma 7** (Data Structure [24; 35]): _Given a matrix \(A\in\mathbb{R}^{m\times n}\), there exists a quantum data structure that allows the following coherent mappings in \(\mathcal{O}(poly(\log(mn)))\) time:_

\[U_{M}\ket{i}\ket{0}\rightarrow\frac{1}{||A_{i}||} \sum_{j=1}^{n}A_{ij}\ket{j}\ket{i},\] \[U_{N}\ket{0}\ket{j}\rightarrow\frac{1}{||A||_{F}}\sum_{i=1}^{m} ||A_{i}||\ket{i}\ket{j},\]

_where \(i\) and \(j\) refer to the row and column index of \(A\), respectively, and \(||A_{i}||\) is the norm of the \(i\)-th row._
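The two mappings can be mimicked classically as amplitude vectors, which also makes transparent how their composition prepares the full matrix state. The following sketch (function names are ours) illustrates this:

```python
import numpy as np

def row_state(A, i):
    # first map of Lemma 7: amplitudes (1/||A_i||) * A_ij over the column index j
    return A[i] / np.linalg.norm(A[i])

def norm_state(A):
    # second map: amplitudes ||A_i|| / ||A||_F over the row index i
    return np.linalg.norm(A, axis=1) / np.linalg.norm(A)

# composing the two maps reproduces sum_ij A_ij |i>|j> / ||A||_F
A = np.arange(1.0, 7.0).reshape(2, 3)
full = np.concatenate([norm_state(A)[i] * row_state(A, i) for i in range(2)])
assert np.allclose(full, A.ravel() / np.linalg.norm(A))
```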
Similar to our previously described algorithm in the blackbox model, we will employ the above mapping to construct the block encoding of corresponding matrices, followed by the QPE and measurement to extract the desired distances.
### Grassmann Distance
Recall that in the Grassmann distance problem, the algorithm outlined in Section IV relies on the simulation of \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\). The question now is how to perform block-encoding of \(M^{T}N\) given the memory-model structure in Lemma 7. We first note that the matrix \(M^{T}\) is of size \(k\times n\) and the matrix \(N\) is of size \(n\times k\), where \(n\) is the dimension of the given data and \(k\) is the number of data points. WLOG, for simplicity, we embed \(M\) and \(N\) into bigger matrices of size \(\max(k,n)\times\max(k,n)\), with the additional entries set to \(0\). With this simplification, we work with square matrices instead, which is more convenient.
Now, we attempt to block-encode \(M^{T}N\) given the memory-model structure. We note that, as mentioned in [24], the memory model naturally provides the block-encodings of \(M/|M|_{F}\) and \(N/|N|_{F}\) (we derive this in the appendix), where \(|\cdot|_{F}\) refers to the Frobenius norm of these matrices. For convenience, we denote those block-encoding unitaries as \(U_{M}\) and \(U_{N}\), respectively.
We first recall from Definition 1 that if a matrix \(A\) is encoded in some unitary \(U\), then it can be written as:
\[U=\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\otimes A+\cdots \tag{39}\]
It is worth observing that the above unitary \(U\) acts on the state \(\left|\mathbf{0}\right\rangle\left|\phi\right\rangle\), where \(\left|\phi\right\rangle\) has the same dimension as \(A\), as follows:
\[U\left|\mathbf{0}\right\rangle\left|\phi\right\rangle=\left|\mathbf{0}\right\rangle A \left|\phi\right\rangle+\sum_{j\neq\mathbf{0}}\left|j\right\rangle\left|\phi _{j}\right\rangle, \tag{40}\]
where \(\left|\phi_{j}\right\rangle\) refers to some redundant state. For a reason that will become clear later, we add an ancilla \(\left|0\right\rangle\) and produce the following:
\[\mathbb{I}\otimes U\left|0\right\rangle\left|\mathbf{0}\right\rangle\left| \phi\right\rangle=\left|0\right\rangle\left|\mathbf{0}\right\rangle A\left| \phi\right\rangle+\left|0\right\rangle\sum_{j\neq\mathbf{0}}\left|j\right\rangle \left|\phi_{j}\right\rangle. \tag{41}\]
With the above observation, let \(\left|i\right\rangle,\left|k\right\rangle\) be arbitrary computational basis states (with dimensions corresponding to those of the matrices \(M,N\), respectively); we have the following:
\[\mathbb{I}\otimes U_{M}\left|0\right\rangle\left|\mathbf{0} \right\rangle\left|i\right\rangle =\left|0\right\rangle\left|\mathbf{0}\right\rangle\left(M/|M|_{F} \right)\left|i\right\rangle+\left|0\right\rangle\sum_{j\neq\mathbf{0}}\left|j \right\rangle\left|\phi_{j}\right\rangle\equiv\left|\Phi_{M}\right\rangle, \tag{42}\] \[\mathbb{I}\otimes U_{N}\left|0\right\rangle\left|\mathbf{0} \right\rangle\left|k\right\rangle =\left|0\right\rangle\left|\mathbf{0}\right\rangle\left(N/|N|_{F} \right)\left|k\right\rangle+\left|0\right\rangle\sum_{j\neq\mathbf{0}}\left|j \right\rangle\left|\phi_{j}\right\rangle\equiv\left|\Phi_{N}\right\rangle. \tag{43}\]
\(\bullet\) Now for the state \(\left|\Phi_{N}\right\rangle\), we use the \(X\) gate to flip the ancilla, i.e., the first qubit, to obtain
\[X\otimes\mathbb{I}\left|\Phi_{N}\right\rangle=\left|1\right\rangle\left| \mathbf{0}\right\rangle\left(N/|N|_{F}\right)\left|k\right\rangle+\left|1 \right\rangle\sum_{j\neq\mathbf{0}}\left|j\right\rangle\left|\phi_{j}\right \rangle\equiv\left|\Phi_{N}^{1}\right\rangle. \tag{44}\]
\(\bullet\) Now we use the register \(\left|\mathbf{0}\right\rangle\) as a control register to flip the first qubit back to \(\left|0\right\rangle\) (i.e., flip conditioned on the register being \(\mathbf{0}\)), and we obtain the state:
\[\left|0\right\rangle\left|\mathbf{0}\right\rangle\left(N/|N|_{F}\right)\left|k \right\rangle+\left|1\right\rangle\sum_{j\neq\mathbf{0}}\left|j\right\rangle \left|\phi_{j}\right\rangle\equiv\left|\Phi_{N}^{2}\right\rangle. \tag{45}\]
Denote by \(P_{N}\) the unitary \(\mathbb{I}\otimes U_{N}\) followed by the two additional steps above. As previously pointed out, for an arbitrary matrix \(A\), \(A\left|i\right\rangle\) is the \(i\)-th column of \(A\). Furthermore, the entries of the matrix \(M^{T}N\) are basically the inner products of columns of \(N\) and \(M\). It is then straightforward to observe that:
\[\left\langle\Phi_{M}|\Phi_{N}^{2}\right\rangle=\left\langle i\right|\frac{M^{ \dagger}}{|M|_{F}}\frac{N}{|N|_{F}}\left|k\right\rangle=\frac{(M^{T}N)_{ik}}{ |M|_{F}|N|_{F}}, \tag{46}\]
where we use that \(M,N\) are real, so \(M^{\dagger}=M^{T}\) is just the transpose. The above property matches perfectly the definition of block-encoding (Definition 1). Therefore, the unitary \((\mathbb{I}\otimes U_{M}^{\dagger})(P_{N})\) is exactly the unitary block-encoding of \((M^{T}N)/(|M|_{F}|N|_{F})\). Note that this unitary has a larger dimension than the initial unitary encodings of \(M\) and \(N\).
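This composition rule can be checked numerically on small matrices. The sketch below (ours; the dilation is a generic one-ancilla block-encoding, not the circuit constructed above) verifies that applying the two encodings on separate ancillas block-encodes \(M^{T}N/(|M|_{F}|N|_{F})\), i.e., the matrix form of Eq. (46):

```python
import numpy as np
from scipy.linalg import sqrtm

def dilation(A):
    # generic one-ancilla unitary with A as its top-left block (needs ||A||_2 <= 1)
    n = A.shape[0]
    S = np.real(sqrtm(np.eye(n) - A @ A.T))
    T = np.real(sqrtm(np.eye(n) - A.T @ A))
    return np.block([[A, S], [T, -A.T]])

rng = np.random.default_rng(1)
n = 3
M, N = rng.standard_normal((n, n)), rng.standard_normal((n, n))
UM = dilation(M / np.linalg.norm(M))   # block-encodes M/|M|_F (spectral <= Frobenius)
UN = dilation(N / np.linalg.norm(N))   # block-encodes N/|N|_F

# give each encoding its own ancilla; register ordering |a>|b>|system>
I2 = np.eye(2)
UtN = np.kron(I2, UN)                                 # acts on (b, system), a idle
UtM = np.einsum('asbt,cd->acsbdt',                    # acts on (a, system), b idle
                UM.reshape(2, n, 2, n), I2).reshape(4 * n, 4 * n)
W = UtM.T @ UtN                                       # cf. (I (x) U_M^dagger) P_N
target = M.T @ N / (np.linalg.norm(M) * np.linalg.norm(N))
assert np.allclose(W[:n, :n], target)                 # Eq. (46) in matrix form
```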
Given the block-encoding of \(M^{T}N/(|M|_{F}|N|_{F})\), it is very easy to perform the block-encoding of \((M^{T}N)^{T}(M^{T}N)/(|M|_{F}|N|_{F})^{2}\): one applies the same procedure as just outlined, so we do not repeat it here. We simply proceed with the following lemma:
**Lemma 8**: _In presence of the memory model with given data structure 7 of \(M\) and \(N\), the simulation of \(\exp(-i(M^{T}N)^{T}M^{T}Nt)\) can be achieved up to accuracy \(\epsilon\) in time_
\[\mathcal{O}\Big{(}polylog((n,k))|N|_{F}^{2}|M|_{F}^{2}(t+\log(1/\epsilon)) \Big{)},\]
where we remark that \((n,k)\) refers to \(\max(n,k)\). Using this ability, one can apply essentially the same algorithm as outlined in Section IV.2 to estimate the Grassmann distance. Therefore, we have the following straightforward statement:
**Theorem 4**: _Given access to matrices \(M\) and \(N\) in the memory model, the Grassmann distance between \(M_{A}\) and \(M_{B}\) can be estimated to additive accuracy \(\delta\) in time_
\[\mathcal{O}\Big{(}\kappa^{2}|M|_{F}^{2}|N|_{F}^{2}polylog((n,k)) \cdot\frac{1}{\delta^{2}}\Big{)}.\]
Now, we make the following comparison. Since both \(M\) and \(N\) are of size \(n\times k\) and are assumed to have columns of unit norm, their Frobenius norm is \(\sqrt{k}\). Therefore, the actual quantum running time is \(\mathcal{O}\Big{(}\kappa^{2}k^{2}polylog((n,k))\cdot\frac{1}{\delta^{2}}\Big{)}\). If the condition number \(\kappa\) does not grow faster than \(polylog((n,k))\), this running time is considerably shorter than the classical running time \(\mathcal{O}(n^{2}k+k^{3})\), as well as that of the quantum algorithm in the blackbox model (see Sec. IV.2) in the dense setting, which could be \(\mathcal{O}(k^{4})\).
### Ellipsoid Distance
Now, we discuss how the ellipsoid distance can be estimated in the memory model. Recall that in the ellipsoid distance problem, we are given two symmetric positive definite matrices \(M\) and \(N\), and we need to estimate the following quantity:
\[\delta_{M,N}=\sqrt{\sum_{i=1}^{n}\log^{2}(\lambda_{i}(M^{-1}N))}. \tag{47}\]
For completeness, we first recall a few key operations as well as important observations.
The first one is that, as pointed out in the last section, the memory model naturally produces the block-encoding unitaries of \(M/|M|_{F}\) and \(N/|N|_{F}\), respectively. We denote them here as \(U_{M}\) and \(U_{N}\) for simplicity.
The second one is the action of \(U_{M}\) (respectively \(U_{N}\)) on an arbitrary state \(\ket{\mathbf{0}}\ket{\phi}\), written generically for a unitary \(U\) encoding a matrix \(A\):
\[U\ket{\mathbf{0}}\ket{\phi}=\ket{\mathbf{0}}A\ket{\phi}+\sum_{j \neq\mathbf{0}}\ket{j}\ket{\phi_{j}}. \tag{48}\]
The last one is the matrix inversion quantum algorithm that was proposed in [24].
**Lemma 9**: _In the presence of the memory model (see Lemma 7), given some initial state \(\ket{b}\), the following unitary can be implemented:_
\[U_{invert}^{M}\ket{0}\ket{b}=\ket{0}CM^{-1}\ket{b}+\ket{1}\ket{Garbage}. \tag{49}\]
_The running time of \(U_{invert}^{M}\) is_
\[\mathcal{O}\Big{(}\kappa_{M}|M|_{F}\cdot polylog(n)\frac{1}{\epsilon}\Big{)},\]
_where \(\kappa_{M}\) is the conditional number of \(M\); \(|M|_{F}\) is the Frobenius norm of \(M\) and \(\epsilon\) is the error tolerance; and \(C\) is the factor that is required for the normalization purpose, e.g., \(C\leq 1/\kappa\)._
Now, we are ready to describe our quantum algorithm in the memory model. It turns out that the quantum algorithm in this case is essentially similar to that of the blackbox model (see Section V). As we shall see, the only difference is the matrix inversion step, for which the memory model supports a faster subroutine.
We begin with some computational basis state \(\ket{i}\) (which shares the same dimension as \(M,N\)) and act with \(U_{N}\) to obtain:

\[\mathbb{I}\otimes U_{N}\ket{0}\ket{\mathbf{0}}\ket{i}=\ket{0}\ket{\mathbf{0}} \left(N/|N|_{F}\right)\ket{i}+\ket{0}\sum_{j\neq\mathbf{0}}\ket{j}\ket{\phi_{j}}. \tag{50}\]
We observe that this state is very similar to Eqn. 26. Following the same procedure as in Section V (essentially the whole paragraph after Eqn. 26), we use \(\ket{\mathbf{0}}\) to flip the first qubit \(\ket{0}\) to \(\ket{1}\), and then use \(\ket{1}\) as a control qubit to apply Lemma 9. This yields the following state:
\[\ket{0}\ket{0^{m}}M^{-1}(N/|N|_{F})\ket{i}+\ket{Garbage}. \tag{51}\]
Therefore, we have the following:
**Lemma 10**: _In the presence of memory model, the simulation of \(\exp(-iP^{T}Pt)\) can be achieved up to accuracy \(\epsilon\) in_
\[\mathcal{O}\Big{(}\kappa_{M}|M|_{F}|N|_{F}^{2}\,polylog(n)\,\frac{t}{\epsilon}\Big{)}.\]
With this, following the same procedure as outlined in Section V, the ellipsoid distance can be estimated.
**Theorem 5** (Ellipsoid Distance in Memory Model): _In the presence of the memory model, the distance between two ellipsoids defined by two real symmetric positive definite matrices \(M\) and \(N\) can be estimated up to an additive accuracy \(\epsilon\) in time:_
\[\mathcal{O}\Big{(}|M|_{F}\cdot polylog(n)|N|_{F}^{2}\frac{\kappa_{P}\kappa_{M}} {\epsilon^{3}}\Big{)},\]
_where \(P\equiv M^{-1}N\)._
## VIII Conclusion
Motivated by the fast-paced development of quantum algorithms, as well as the prospect of quantum advantage, we have outlined two quantum algorithms for estimating the Grassmann distance and the ellipsoid distance between two subspaces formed by the corresponding data elements. We have constructed quantum algorithms in both data models: the standard blackbox model and the newly developed memory model. Our algorithms are built upon density matrix exponentiation [12], fast quantum matrix application [34], and quantum linear solvers [5; 6]. Under the corresponding assumptions regarding input access, as well as appropriate conditions on sparsity and condition number, our quantum algorithms yield significant speedups compared to classical algorithms that solve the same problems, i.e., estimating the Grassmann and ellipsoid distances. The novelty of our approach, or more specifically of our use of quantum phase estimation, lies in the way that we perform algebraic operations directly on the phase registers and combine them to estimate the corresponding distances. As the effort to expand the applicability of quantum computers goes on, we strongly believe that the techniques outlined here can find further benefits in devising novel quantum algorithms for challenging computational problems. As mentioned, our work adds to a few existing works, e.g., topological data analysis [15] and homology detection [21], that explore the potential of quantum computing methods in (computational) geometry and topology, a fundamental area with increasingly significant impact on real-world problems. What else can stem from this work remains an interesting avenue.
**Acknowledgement:** The author thanks Tzu-Chieh Wei for careful reading and thoughtful comments on the work. The author also thanks Phenikaa Institute for Advanced Study (PIAS), Phenikaa University for hospitality, where part of the work was done.
|
2304.13396 | Local alpha-removal strength in the mean-field approximation | The local alpha strength is proposed to quantify the possibility to form an
alpha particle at a specific location inside the nucleus. It also provides the
strength of ground and excited states in the residual nuclei after the removal
of the alpha particle. We use the Hartree-Fock-plus-BCS (HF+BCS) method in the
calculation of the local alpha strengths for Sn isotopes. The local alpha
strengths are easily calculable and the results are consistent with recent
experimental data for Sn isotopes. | Takashi Nakatsukasa, Nobuo Hinohara | 2023-04-26T09:14:17Z | http://arxiv.org/abs/2304.13396v3 | # Local alpha strength in the mean-field approximation
###### Abstract
**Background:** The alpha cluster is a prominent feature, not only in light nuclei but also in heavy nuclei. In order to study the alpha-particle formation in the mean-field calculation, the localization function has been extensively utilized. However, the localization function does not guarantee the proximity of four different nucleons which is required by the alpha-particle formation. A simple indicator of the proximity is desired. Recently, experimental measurement of the quasi-free alpha-knockout reaction for Sn isotopes reveals the cross sections with a monotonous decrease with increasing neutron number. [J. Tanaka et al., Science **371**, 260 (2021)]. This is interpreted as evidence of the surface alpha formation.
**Purpose:** We propose a simple and comprehensible quantity to assess the proximity of four nucleons with different spins and isospins. Using this, we examine the recent measurement of alpha-knockout cross sections in Sn isotopes.
**Methods:** The local alpha strength is proposed to quantify the possibility to form an alpha particle at a specific location inside the nucleus. In addition, it provides the strength of ground and excited states in the residual nuclei after the removal of the alpha particle. In order to make the calculation feasible, we introduce several approximations, such as point-alpha, mean-field, and no rearrangement approximations. We use the Hartree-Fock-plus-BCS (HF+BCS) method for the mean-field calculation for Sn isotopes. We also propose another measure, the local alpha probability, which should provide a better correlation with the alpha-knockout cross sections.
**Results:** The calculation of the local alpha strength is extremely easy in the mean-field model with no rearrangement. For even-even Sn isotopes, the local alpha strengths to the ground state of the residual nuclei are almost universal in the nuclear surface region. In contrast, the local alpha probability shows a strong neutron-number dependence consistent with the experiment.
**Conclusions:** The local alpha strength and the local alpha probability are easily calculable in the mean-field models. Recent experimental data for Sn isotopes may be explained by a simple model without explicit consideration of alpha correlation.
## I Introduction
Clustering is an intriguing phenomenon in nuclear structure. Correlations between nucleons result in the formation of subunits (clusters) inside the nucleus. The most typical cluster is the alpha particle, which is present not only in light nuclei but is also observed in heavy nuclei through the alpha-decay phenomena. In light nuclei, prominent clustering often takes place in excited states whose energy is close to the threshold of the corresponding cluster decomposition [1].
The microscopic theories of the clustering phenomena have a long history [1; 2; 3; 4; 5]. In fact, Gamow's theory of the alpha decay [6; 7] was published even before the discovery of the neutron [8]. Most theoretical studies of the cluster structure in the past have been performed under the assumption that a certain cluster structure exists in the nucleus. It is common to construct the cluster wave functions in terms of Gaussian wave packets [9]. For instance, the antisymmetrized molecular dynamics (AMD) [10] and the fermionic molecular dynamics (FMD) [11] have been extensively utilized in studies of nuclear cluster phenomena and heavy-ion reactions. In the AMD and FMD, the cluster structure is not assumed a priori, although the Gaussian wave packet is assumed for a single-particle state [12; 13; 14]. The configuration mixing, which is often treated with the projection and the generator coordinate method [15; 16], plays an important role in the studies of clustering in relatively light nuclei. In the AMD and FMD, since each Gaussian has parameters corresponding to the position of its center and the magnitude of its width, the clustering can be identified by the close location of the centers of many Gaussian wave packets.
In contrast, the mean-field (energy density functional) theory can provide the optimal single-particle wave functions that minimize the total energy of a Slater determinant. One of the advantageous features of the theory is the capability of describing almost all the nuclei in the nuclear chart using a single energy density functional, which is a functional of the normal and pair densities. Another advantage is the treatment of the pairing correlations, which become indispensable especially for heavy nuclei with open-shell configurations. The obtained (generalized) single-particle states have no "centers," and in most cases they are spread over the entire nucleus. Therefore, in the mean-field theory, it is difficult to find the clustering in terms of the single-particle wave functions. In relatively light nuclei, a prominent cluster structure can be observed in the nucleon density profile [17; 18; 19; 20; 21]. However, the identification of the cluster structure has some ambiguity and relies on one's intuition.
There exist some methods aiming to identify and quantify the clustering effect in the mean-field states. In case one can intuitively build a model cluster wave function, its overlap with the mean-field state gives a possible measure of the clustering [19]. Recently, another method, which does not require a model wave function, has been proposed to visualize possible cluster correlations using the mean-field wave functions [22]. However, the application of the method seemingly becomes more and more difficult as the nucleon number increases.
The localization function, introduced into nuclear physics by Reinhard and collaborators [23], is a possible measure of alpha-particle formation. Similar functions were introduced in molecular physics to investigate the shell structure and the chemical bonding [24]. Since it only needs one-body densities, such as kinetic and current densities, its calculation requires negligible computational cost. In addition, it is given as a function of the spatial coordinates; thus, one can identify the location of the alpha particles. Because of these advantageous properties, the localization function has been adopted in a number of studies of the cluster correlations within the mean-field theory [25; 26; 27; 28; 29; 30]. However, it should be noted that the localization function does not examine whether the four nucleons exist next to each other. It provides information on the conditional pair density for particles of the same kind, \(P_{q\sigma}(\mathbf{r},\mathbf{r}^{\prime})\) where \(q=n,p\) and \(\sigma=\pm 1/2\). As is clearly mentioned in Ref. [23], it is just a first step in identifying the region where four localized nucleons are located. The purpose of the present paper is to propose the next step, the "local alpha strength", as a measure of _four localized nucleons_ that can be easily estimated in the mean-field theory.
Experimentally, the alpha correlations in nuclei can be investigated by quasi-free alpha-knockout reactions [31; 32]. A recent experiment on the alpha-knockout reactions in Sn isotopes by Tanaka and collaborators [33] reveals that the cross section decreases monotonically as the neutron number increases. They interpret this trend as a tight interplay between the alpha formation and the neutron skin [34]. The distorted-wave impulse-approximation study shows that the reaction takes place in a peripheral region and probes the alpha particles in the nuclear surface [35]. Another purpose of the present paper is to examine the consistency between the calculated local alpha strength and the result of Ref. [33].
The paper is organized as follows: We propose a feasible measure of four-particle localization, local alpha strength, in Sec. II. In Sec. III, the local alpha strength is applied to Sn and other isotopes. The calculation is compared with the measurement of the alpha-knockout reaction. Concluding remarks are given in Sec. IV.
## II Local alpha strength
### Definition
Let us assume a single Slater-determinant description for the alpha particle and that the orbital parts of the single-particle wave functions are all the same and given by \(\phi_{\alpha}(\mathbf{r})\), where the center of mass of the alpha particle is located at the origin. Then, the alpha-particle annihilation operator \(\hat{\alpha}(\mathbf{R})\) at the position \(\mathbf{R}\) is given by
\[\hat{\alpha}(\mathbf{R}) \equiv\int d\mathbf{r}_{1}d\mathbf{r}_{2}d\mathbf{r}_{3}d\mathbf{ r}_{4}\phi_{\alpha}^{*}(\mathbf{r}_{1\mathbf{R}})\phi_{\alpha}^{*}(\mathbf{r}_{2 \mathbf{R}})\phi_{\alpha}^{*}(\mathbf{r}_{3\mathbf{R}})\phi_{\alpha}^{*}( \mathbf{r}_{4\mathbf{R}})\] \[\quad\times\hat{\psi}_{\uparrow}^{(n)}(\mathbf{r}_{1})\hat{\psi }_{\downarrow}^{(n)}(\mathbf{r}_{2})\hat{\psi}_{\uparrow}^{(p)}(\mathbf{r}_{3}) \hat{\psi}_{\downarrow}^{(p)}(\mathbf{r}_{4}) \tag{1}\]
where \(\mathbf{r}_{i\mathbf{R}}=\mathbf{r}_{i}-\mathbf{R}\) (\(i=1,\cdots,4\)) and \(\hat{\psi}_{\sigma}^{(q)}(\mathbf{r})\) indicates the field operator for the particle of the isospin \(q=n,p\) and the spin \(\sigma=\uparrow,\downarrow\). The wave function \(\phi_{\alpha}(\mathbf{r})\) is a well-localized function, normally assumed to be a Gaussian in the cluster model.
In order to investigate the alpha particle in the nucleus, we propose "local alpha strength" defined as
\[S_{\alpha}(\mathbf{r},E)\equiv\langle\Phi_{0}^{A}|\hat{\alpha}^{\dagger}( \mathbf{r})\delta(E-\hat{H})\hat{\alpha}(\mathbf{r})|\Phi_{0}^{A}\rangle, \tag{2}\]
where \(\hat{H}\) is the Hamiltonian, and \(|\Phi_{0}^{A}\rangle\) is the ground state of the nucleus (\(N,Z\)).
The meaning of this quantity is clear if we insert the unity expanded in terms of the complete set for the nucleus (\(N-2,Z-2\)), \(\{|\Phi_{k}^{A-4}\rangle\}\).
\[S_{\alpha}(\mathbf{r},E)=\sum_{k=0}^{\infty}\left|\langle\Phi_{k}^{A-4}|\hat{ \alpha}(\mathbf{r})|\Phi_{0}^{A}\rangle\right|^{2}\delta(E-E_{k}^{A-4}), \tag{3}\]
where \(\hat{H}|\Phi_{k}^{A-4}\rangle=E_{k}^{A-4}|\Phi_{k}^{A-4}\rangle\). Thus, the quantity
\[\mathcal{S}_{\alpha}(\mathbf{r})_{E,\Delta E} = \int_{E-\Delta E/2}^{E+\Delta E/2}S_{\alpha}(\mathbf{r},E^{\prime}) dE^{\prime} \tag{4}\] \[= \sum_{k}^{\Delta E}\left|\langle\Phi_{k}^{A-4}|\hat{\alpha}( \mathbf{r})|\Phi_{0}^{A}\rangle\right|^{2}.\]
provides the strength of the transition to states in the energy range \((E-\Delta E/2,E+\Delta E/2)\) of the residual nucleus, when the alpha particle is removed at the position \(\mathbf{r}\) in the nucleus \((N,Z)\). See also Fig. 1.
It is convenient to measure \(E\) with respect to the ground-state energy \(E_{0}^{A-4}\), i.e., to use the excitation energy \(E^{\prime}=E-E_{0}^{A-4}\). Hereafter, we denote \(E^{\prime}\) as \(E\) for simplicity. Appropriate smearing of the delta function \(\delta(E-E_{k}^{A-4})\) may be useful for visualizing the strength as a function of \(E\). The local alpha strength \(S_{\alpha}(\mathbf{r},E)\) then provides the transition strength to states at the excitation energy \(E\) when the alpha particle is removed at the position \(\mathbf{r}\).
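For instance, a Gaussian smearing of the discrete strengths (a minimal sketch; in Sec. III we use a width \(\gamma=100\) keV) reads:

```python
import numpy as np

def smeared_strength(E_grid, E_k, S_k, gamma=0.1):
    # replace delta(E - E_k) in Eq. (3) by normalized Gaussians of width gamma (MeV)
    g = np.exp(-(np.asarray(E_grid)[:, None] - np.asarray(E_k)[None, :]) ** 2
               / (2.0 * gamma ** 2)) / (gamma * np.sqrt(2.0 * np.pi))
    return g @ np.asarray(S_k)
```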
### Approximations
The calculation of Eq. (2) demands a large computational cost in general. We introduce here some approximations to make the computation feasible.
#### ii.2.1 Point-alpha approximation
First, in order to avoid the multiple integrations in Eq. (1), we approximate the wave function \(\phi_{\alpha}(\mathbf{r})\) by the delta function \(\delta(\mathbf{r}-\mathbf{R})\). Thus, in this paper, we use
\[\hat{\alpha}(\mathbf{r})=\hat{\psi}_{\uparrow}^{(n)}(\mathbf{r}) \hat{\psi}_{\downarrow}^{(n)}(\mathbf{r})\hat{\psi}_{\uparrow}^{(p)}(\mathbf{ r})\hat{\psi}_{\downarrow}^{(p)}(\mathbf{r}). \tag{5}\]
This approximation significantly reduces the computational cost. Since we use wave functions from mean-field calculations, we do not need to perform the multiple integrations with high reliability. Further approximations in the following subsections lead to products of pair densities \(\langle\hat{\psi}_{\uparrow}^{(q)}(\mathbf{r}_{1})\hat{\psi}_{\downarrow}^{(q)}(\mathbf{r}_{2})\rangle\). Without the point-alpha approximation, we would need non-local pair densities with \(\mathbf{r}_{1}\neq\mathbf{r}_{2}\), whose behavior is not well controlled in currently available pairing energy density functionals.
#### ii.2.2 Mean-field approximation
Next, we adopt the mean-field ground state for \(|\Phi_{0}^{A}\rangle\), and the Hamiltonian \(\hat{H}\) is approximated in the mean-field level. \(\hat{H}\) is truncated up to the second order in terms of the quasiparticle (qp) operators defined with respect to the ground state of the residual nucleus \((N-2,Z-2)\).
\[\hat{H}=\sum_{q=n,p}\sum_{i>0}E_{i}^{(q)}\hat{a}_{i}^{(q)\dagger}\hat{a}_{i}^{(q)}+\cdots, \tag{6}\]
where the ground-state energy of the nucleus \((N-2,Z-2)\), \(E_{0}^{A-4}=\langle\Phi_{0}^{A-4}|\hat{H}|\Phi_{0}^{A-4}\rangle\), is subtracted. \(\hat{a}_{i}^{(q)}\) and \(E_{i}^{(q)}\) are the qp annihilation operators and corresponding qp energies. The subscript \(i>0\) means the summation with respect to the qp states with positive qp energies \(E_{i}^{(q)}>0\). In Eq. (3), the excited states \((k>0)\) are given by an even number of qp excitations. Thus, the index \(k\) stands for 2qp, 4qp, \(\cdots\) and the excitation energy \(E_{k}^{A-4}\) (\(k>0\)) in Eq. (3) are given as
\[E_{ij,0}^{A-4} = E_{i}^{(n)}+E_{j}^{(n)}, \tag{7}\] \[E_{0,ij}^{A-4} = E_{i}^{(p)}+E_{j}^{(p)},\] (8) \[E_{ij,i^{\prime},j^{\prime}}^{A-4} = E_{i}^{(n)}+E_{j}^{(n)}+E_{i^{\prime}}^{(p)}+E_{j^{\prime}}^{(p)}, \tag{9}\]
and so on.
With the mean-field construction of the states, \(|\Phi_{0}^{A}\rangle\) and \(|\Phi_{0}^{A-4}\rangle\), one can calculate the transition matrix elements \(\langle\Phi_{k}^{A-4}|\hat{\alpha}(\mathbf{r})|\Phi_{0}^{A}\rangle\) in Eq. (3) as follows. Except for the cases with proton-neutron (pn) pairing [36; 37] and/or pn mixing [38; 39; 40], the states are normally described by product wave functions of protons and neutrons, \(|\Phi^{A}\rangle=|\Phi^{N}\rangle\otimes|\Phi^{Z}\rangle\). Therefore, the transition matrix elements can also be written in product form, where \(k\) (\(k^{\prime}\)) stands for the qp, 2qp, \(\cdots\) indices for neutrons (protons). Thus,
\[S_{\alpha}(\mathbf{r},E)=\sum_{k\geq 0}\sum_{k^{\prime}\geq 0}F_{k}^{(n)}( \mathbf{r})F_{k^{\prime}}^{(p)}(\mathbf{r})\delta(E-E_{kk^{\prime}}^{A-4}), \tag{10}\]
where \(E_{kk^{\prime}}^{A-4}\) are given by Eqs. (7)\(-\)(9), and
\[F_{k}^{(q)}(\mathbf{r})=\left|\langle\Phi_{k}^{N_{q}-2}|\hat{\psi }_{\uparrow}^{(q)}(\mathbf{r})\hat{\psi}_{\downarrow}^{(q)}(\mathbf{r})|\Phi_ {0}^{N_{q}}\rangle\right|^{2}, \tag{11}\]
with \(N_{q}=N\) and \(Z\) for \(q=n\) and \(p\), respectively.
#### ii.2.3 Neglect of rearrangement
We introduce a further approximation and neglect the rearrangement of the mean fields. Hence, we assume that the mean fields in the nuclei of mass number \(A\) and \(A-4\) (before and after the removal of an alpha particle) are identical. When the neutrons (protons) are in a superfluid phase, we also neglect the change of the chemical potential, which leads to \(|\Phi_{k}^{N_{q}-2}\rangle\approx|\Phi_{k}^{N_{q}}\rangle\) with \(q=n\) (\(p\)). With this approximation, the calculation of the residual states \(|\Phi_{k}^{A-4}\rangle\) is no longer required. The mean-field Hamiltonian (6) is now replaced by that of the nucleus \((N,Z)\), in which all the quasiparticle states are defined with respect to the mean-field ground state \(|\Phi_{0}^{A}\rangle\).
Assuming the Bogoliubov transformation [41],
\[\hat{a}_{i}^{\dagger}=\sum_{\sigma}\int d\mathbf{r}\left\{U_{i}( \mathbf{r}\sigma)\hat{\psi}_{\sigma}^{\dagger}(\mathbf{r})+V_{i}(\mathbf{r} \sigma)\hat{\psi}_{\sigma}(\mathbf{r})\right\}, \tag{12}\]
\[\hat{\psi}_{\sigma}^{\dagger}(\mathbf{r})=\sum_{i>0}\left\{U_{i}^{*}(\mathbf{r} \sigma)\hat{a}_{i}^{\dagger}+V_{i}(\mathbf{r}\sigma)\hat{a}_{i}\right\}, \tag{13}\]
the matrix elements \(F_{k}^{(q)}(\mathbf{r})\) of Eq. (11) are given as
\[F_{0}(\mathbf{r}) =\left|\sum_{i>0}U_{i}(\mathbf{r}\uparrow)V_{i}^{*}(\mathbf{r} \downarrow)\right|^{2}=\left|\kappa(\mathbf{r})\right|^{2}, \tag{14}\] \[F_{ij}(\mathbf{r}) =\left|V_{i}^{*}(\mathbf{r}\uparrow)V_{j}^{*}(\mathbf{r} \downarrow)-V_{j}^{*}(\mathbf{r}\uparrow)V_{i}^{*}(\mathbf{r}\downarrow) \right|^{2}, \tag{15}\]
where the superscript \((q)\) is omitted for simplicity. \(F_{0}(\mathbf{r})\) is nothing but a square of the local pair density, \(|\kappa(\mathbf{r})|^{2}\equiv|\langle\Phi_{0}^{A}|\hat{\psi}_{\uparrow}( \mathbf{r})\hat{\psi}_{\downarrow}(\mathbf{r})|\Phi_{0}^{A}\rangle|^{2}\). It should be also noted that, with this approximation, only the 0qp and 2qp excitations of neutrons and protons contribute to the summation with respect to \(k\) and \(k^{\prime}\) in Eq. (10).
For the transition to the ground state, which may be of most interest, the calculation is feasible with these approximations. Its relative values among different isotopes may be a useful indicator of the alpha-particle knockout probability. However, we should keep in mind that this relies on the approximations adopted, and we should be careful especially when comparing the values for nuclei in different mass regions.
#### ii.2.4 HF-plus-BCS approximation
Using the HF-plus-BCS (HF+BCS) approximation, the HFB wave functions are proportional to the HF single-particle states \(\{\phi_{i}(\mathbf{r}\sigma)\}\) as
\[\begin{split} U_{i}(\mathbf{r}\sigma)&=u_{i}\phi_{i}(\mathbf{r}\sigma),\quad V_{i}(\mathbf{r}\sigma)=-v_{i}\phi_{\bar{i}}^{*}(\mathbf{r}\sigma),\\ U_{\bar{i}}(\mathbf{r}\sigma)&=u_{i}\phi_{\bar{i}}(\mathbf{r}\sigma),\quad V_{\bar{i}}(\mathbf{r}\sigma)=v_{i}\phi_{i}^{*}(\mathbf{r}\sigma),\end{split} \tag{16}\]

where \(\phi_{\bar{i}}\) is the time-reversal partner of \(\phi_{i}\). The BCS uv factors, \((u_{i},v_{i})\), are all real and determined by the HF single-particle energies [41]. This recasts Eqs. (14) and (15) into
\[F_{0}(\mathbf{r}) =\left|\kappa(\mathbf{r})\right|^{2}=\left|\sum_{i}u_{i}v_{i} \phi_{i}(\mathbf{r}\uparrow)\phi_{\bar{i}}(\mathbf{r}\downarrow)\right|^{2} \tag{17}\] \[F_{ij}(\mathbf{r}) =v_{i}^{2}v_{j}^{2}\left|\phi_{i}(\mathbf{r}\uparrow)\phi_{j}( \mathbf{r}\downarrow)-\phi_{j}(\mathbf{r}\uparrow)\phi_{i}(\mathbf{r}\downarrow)\right|^{2}. \tag{18}\]

The summation in Eq. (17) is taken over both \(i\) and \(\bar{i}\) with \(\phi_{\bar{\bar{i}}}=-\phi_{i}\), \(u_{\bar{i}}=u_{i}\), and \(v_{\bar{i}}=v_{i}\).1 Since the indices \(ij\) and \(ji\) correspond to the same 2qp excitation, the summation in Eq. (10) is performed with respect to different combinations of 2qp indices, namely with the restriction \(i>j\).
Footnote 1: Explicitly denoting the time-reversal parts, Eq. (17) can be written as
\[F_{0}(\mathbf{r})=\left|\sum_{i\gg 0}u_{i}v_{i}\left\{\phi_{i}(\mathbf{r} \uparrow)\phi_{\bar{i}}(\mathbf{r}\downarrow)-\phi_{\bar{i}}(\mathbf{r}\uparrow)\phi_{i}(\mathbf{r}\downarrow)\right\}\right|^{2}. \tag{19}\]
Here, \(i\gg 0\) indicates the summation is not taken over \(\bar{i}\). Note that it is different from \(i>0\) in Eq. (6).
The pairing gap \(\Delta_{q}\) (\(q=n,p\)) is related to the monopole pairing strength \(G\) as \(\Delta_{q}=\frac{1}{2}G\sum_{i}u_{i}v_{i}\)[41]. In this paper, we adopt the value of \(\Delta_{q}\) as the experimental odd-even mass difference.
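As an illustration of this step, the following sketch (hypothetical level scheme; variable names are ours) determines the BCS occupations for doubly degenerate levels by fixing the chemical potential to the particle number, and then extracts the implied pairing strength \(G\) from the gap relation above:

```python
import numpy as np
from scipy.optimize import brentq

def bcs_uv(eps, delta, n_particles):
    """BCS u_i, v_i for Kramers-degenerate levels with fixed particle number.

    eps: single-particle energies (one entry per time-reversal pair, MeV)
    delta: pairing gap, e.g. from the odd-even mass difference (MeV)
    """
    def number(lmbda):
        E = np.sqrt((eps - lmbda) ** 2 + delta ** 2)   # quasiparticle energies
        return 2.0 * np.sum(0.5 * (1.0 - (eps - lmbda) / E)) - n_particles
    lmbda = brentq(number, eps.min() - 50.0, eps.max() + 50.0)
    E = np.sqrt((eps - lmbda) ** 2 + delta ** 2)
    v = np.sqrt(0.5 * (1.0 - (eps - lmbda) / E))
    return np.sqrt(1.0 - v ** 2), v, lmbda

eps = np.linspace(-20.0, 5.0, 26)                  # hypothetical level scheme
u, v, lam = bcs_uv(eps, delta=1.3, n_particles=30)
G = 1.3 / np.sum(u * v)    # Delta = (G/2) sum_i u_i v_i, summed over i and ibar
```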
#### ii.2.5 No pairing case
In the case that there is no pairing (normal phase) in the state \(|\Phi_{0}^{N_{q}}\rangle\), Eq. (14) does not give a transition to the ground state because the pair density trivially vanishes. This is due to the approximation \(|\Phi_{k}^{N_{q}-2}\rangle\approx|\Phi_{k}^{N_{q}}\rangle\) being invalid for the normal phase. In this case, \(|\Phi_{0}^{N_{q}}\rangle\) is the Hartree-Fock (HF) ground-state wave function. Then, we explicitly remove two particles from the occupied orbitals in \(|\Phi_{0}^{N_{q}}\rangle\) and identify the result as \(|\Phi_{k}^{N_{q}-2}\rangle\) in Eq. (11). This leads to
\[F_{ij}(\mathbf{r})=\left|\phi_{i}(\mathbf{r}\uparrow)\phi_{j}(\mathbf{r} \downarrow)-\phi_{j}(\mathbf{r}\uparrow)\phi_{i}(\mathbf{r}\downarrow)\right| ^{2}, \tag{20}\]
where \(\phi_{i}\) and \(\phi_{j}\) are the single-particle wave functions for the occupied (hole) states. The expression is equal to Eq. (15) by identifying \(V_{i}^{*}=\phi_{i}\) (\(V_{i}^{*}=0\)) for hole (particle) states. Note that \(ij\) are the two-hole indices which include not only the excited states (\(k>0\)) but also the ground state (\(k=0\)) in \(|\Phi_{k}^{N_{q}-2}\rangle\). For the ground state \(|\Phi_{0}^{N_{q}-2}\rangle\), two particles \(ij\) are removed from the highest occupied orbitals (HOO).
The ground state of the residual nucleus (\(N-2,Z-2\)) is unique when the ground state of the nucleus (\(N,Z\)) is superfluid both in protons and neutrons. However, we should remark here that, for the normal state (no pairing), there may be multiple ground states with the present approximations. In the case that the nucleus \(|\Phi_{0}^{A}\rangle\) is spherical, the HOO with the angular momentum \(j\) should have a degeneracy of \(2j+1\). For \(j>1/2\), since the degeneracy is more than two-fold, the ground state is not unique because different combinations of two holes can give identical energy. This is apparently an undesired consequence of no rearrangement. However, in order to keep the feasibility in the computation, we simply sum up all the possible two-hole indices to produce \(F_{0}\) for nuclei in the normal phase.
\[F_{0}(\mathbf{r})=\sum_{ij\in\text{HOO}}\left|\phi_{i}(\mathbf{r}\uparrow)\phi_ {j}(\mathbf{r}\downarrow)-\phi_{j}(\mathbf{r}\uparrow)\phi_{i}(\mathbf{r} \downarrow)\right|^{2}. \tag{21}\]
### Localization function
We will compare the local alpha strength with the localization function \(C_{\sigma}^{(q)}(\mathbf{r})\) in Sec. III. It may be useful to recapitulate the definition and the meaning of \(C_{\sigma}^{(q)}(\mathbf{r})\), according to Ref. [23].
The conditional probability of finding a nucleon with spin \(\sigma\) and isospin \(q\) at \(\mathbf{r}^{\prime}\) when another nucleon with the same spin and isospin exists at \(\mathbf{r}\) is given by
\[P_{\sigma}^{(q)}(\mathbf{r},\mathbf{r}^{\prime})=\rho_{\sigma}^{(q)}(\mathbf{r} ^{\prime})-\left|\rho_{\sigma\sigma}^{(q)}(\mathbf{r},\mathbf{r}^{\prime}) \right|^{2}/\rho_{\sigma}^{(q)}(\mathbf{r}), \tag{22}\]
where, in the mean-field calculations,
\[\rho_{\sigma\sigma^{\prime}}(\mathbf{r},\mathbf{r}^{\prime})\equiv\sum_{i>0} V_{i}^{*}(\mathbf{r}\sigma)V_{i}(\mathbf{r}^{\prime}\sigma^{\prime}), \tag{23}\]
and \(\rho_{\sigma}(\mathbf{r})=\rho_{\sigma\sigma}(\mathbf{r},\mathbf{r})\). Again, hereafter in this subsection, the superscript \((q)\) is omitted for simplicity. Let us rewrite \(P_{\sigma}(\mathbf{r},\mathbf{r}^{\prime})=P_{\sigma}(\mathbf{R},\mathbf{s})\) in terms of the average and the relative positions, \(\mathbf{R}=(\mathbf{r}+\mathbf{r}^{\prime})/2\) and \(\mathbf{s}=\mathbf{r}-\mathbf{r}^{\prime}\), then, perform a spherical averaging over the angles of \(\mathbf{s}\). Finally, we expand the \(P_{\sigma}(\mathbf{r},s)\) with respect to \(s\) as
\[P_{\sigma}(\mathbf{r},s) \approx\frac{1}{3}\left(\tau_{\sigma}-\frac{1}{4}\frac{(\nabla \rho_{\sigma})^{2}}{\rho_{\sigma}}-\frac{\mathbf{j}_{\sigma}^{2}}{\rho_{ \sigma}}\right)s^{2}\] \[\equiv\frac{1}{3}D_{\sigma}(\mathbf{r})s^{2} \tag{24}\]
Then, the localization function \(C_{\sigma}(\mathbf{r})\) is defined as \(C_{\sigma}(\mathbf{r})=[1+\{D_{\sigma}(\mathbf{r})/\tau_{\sigma}^{\mathrm{TF}}(\mathbf{r})\}^{2}]^{-1}\), where the Thomas-Fermi kinetic density \(\tau_{\sigma}^{\mathrm{TF}}(\mathbf{r})=\frac{3}{5}(6\pi^{2})^{2/3}\rho_{\sigma}^{5/3}(\mathbf{r})\) is introduced to make \(C_{\sigma}(\mathbf{r})\) dimensionless. \(P_{\sigma}(\mathbf{r},s)\to 0\) at \(s\to 0\) is guaranteed by the Pauli exclusion principle. The condition \(P_{\sigma}>0\) restricts the range of \(C_{\sigma}\) as \(0<C_{\sigma}(\mathbf{r})\leq 1\). It is apparent that the smaller the conditional probability \(P_{\sigma}(\mathbf{r},s)\) is, the larger the localization function \(C_{\sigma}(\mathbf{r})\) becomes. In other words, \(C_{\sigma}^{(q)}(\mathbf{r})\approx 1\) indicates very little probability of finding two nearby nucleons with the same spin \(\sigma\) and isospin \(q\) around the position \(\mathbf{r}\). We should emphasize that the localization function \(C_{\sigma}^{(q)}\) is, in fact, a "delocalization" measure of the same kind of nucleons. The presence of the alpha particle requires the localization of nucleons with different spins and isospins, which cannot be quantified by \(C_{\sigma}^{(q)}\).
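A minimal numerical sketch of \(C_{\sigma}(\mathbf{r})\) for one spin-isospin channel (ours; one-dimensional grid, current density set to zero for a static, time-even state):

```python
import numpy as np

def localization(rho, tau, grad_rho, j):
    D = tau - 0.25 * grad_rho ** 2 / rho - j ** 2 / rho          # cf. Eq. (24)
    tau_TF = 0.6 * (6.0 * np.pi ** 2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)
    return 1.0 / (1.0 + (D / tau_TF) ** 2)

# toy check with a single Gaussian orbital: D = 0 identically, hence C = 1
x = np.linspace(-5.0, 5.0, 201)
phi = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
rho = phi ** 2
tau = np.gradient(phi, x) ** 2                 # kinetic density of one orbital
grad_rho = np.abs(np.gradient(rho, x))
C = localization(rho, tau, grad_rho, np.zeros_like(x))   # ~1 everywhere
```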
## III Numerical results
### Numerical details
In the present paper, instead of the full Hartree-Fock-Bogoliubov (HFB) theory, we adopt the HF+BCS theory. We truncate the model space for the pairing correlations; the truncation is specified by the number of single-particle orbitals. For instance, for Sn isotopes, 82 neutron orbitals obtained in the HF+BCS calculation are adopted for the neutron sector, while the protons are in the normal phase (\(\Delta_{p}=0\)) with 50 fully occupied orbitals. The neutron pairing gaps are determined by the third-order mass difference using the atomic mass evaluation [42; 43]: \(\Delta_{n}=1.4\) MeV, 1.2 MeV, 1.4 MeV, and 1.3 MeV for \(A=112\), 116, 120, and 124, respectively.
We use the Skyrme energy density functional with the SkM* parameter set [44]. We adopt the 3D Cartesian grid representation of the square box, using the computer code developed in Refs. [45; 46; 47]. The 3D grid size is set to be \((1.0\ \mathrm{fm})^{3}\). We adopt all the grid points inside a sphere of the radius of \(R=12\ \mathrm{fm}\). The differentiation is evaluated with the nine-point finite difference. The center-of-mass correction is taken into account by modifying the nucleon's mass as \(m\to m\times A/(A-1)\). The Coulomb potential is calculated by solving the Poisson equation with the conjugate-gradient method, in which the boundary values are constructed with the multipole expansion [48]. The single-particle orbitals are calculated with the imaginary-time method [49]. The iteration is carried out until the self-consistent solution is obtained.
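As a one-dimensional toy version of the last two ingredients (our sketch: harmonic potential, \(\hbar=m=1\), periodic edges via np.roll), the nine-point finite-difference Laplacian and the imaginary-time iteration with Gram-Schmidt re-orthonormalization can be written as:

```python
import numpy as np

# nine-point (8th-order) central coefficients for the second derivative
c = np.array([-1/560, 8/315, -1/5, 8/5, -205/72, 8/5, -1/5, 8/315, -1/560])

def laplacian(psi, h):
    out = np.zeros_like(psi)
    for k, ck in enumerate(c, start=-4):
        out += ck * np.roll(psi, -k)
    return out / h ** 2

n, h, dtau = 200, 0.1, 1e-3
x = (np.arange(n) - n // 2) * h
V = 0.5 * x ** 2
orbitals = np.random.default_rng(0).standard_normal((3, n))

for _ in range(5000):
    for i in range(3):
        hpsi = -0.5 * laplacian(orbitals[i], h) + V * orbitals[i]
        orbitals[i] -= dtau * hpsi                    # imaginary-time step
    for i in range(3):                                # Gram-Schmidt
        for j in range(i):
            orbitals[i] -= (orbitals[j] @ orbitals[i]) * h * orbitals[j]
        orbitals[i] /= np.sqrt((orbitals[i] @ orbitals[i]) * h)

E = [(o @ (-0.5 * laplacian(o, h) + V * o)) * h for o in orbitals]
# E should approach 0.5, 1.5, 2.5 (harmonic-oscillator spectrum)
```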
### Even-even Sn isotopes
Since we neglect the rearrangement of the mean fields, the method is suitable for heavy nuclei in which the mean-field potentials are relatively stable against the removal of an alpha particle (two protons and two neutrons). Since the ground states of Sn isotopes (\(Z=50\)) represent a typical example of pair-rotational bands in spherical nuclei [50], the mean fields should be stable with respect to the two-neutron removal. In contrast, the two-proton removal is expected to have a certain impact on the mean fields, because \(Z=50\) is a spherical magic number for protons. Nonetheless, Cd isotopes (\(Z=48\)) exhibit typical excitation spectra of a spherical vibrator [51]. Thus, it is meaningful to compare the magnitude of the local alpha strengths for different Sn isotopes (\({}^{A}\)Sn\(\rightarrow^{A-4}\)Cd).

Figure 2: Nucleon density distributions for neutrons (solid lines) and protons (dashed) for Sn isotopes (\(A=112\), 116, 120, and 124).

Figure 3: Neutron pair density distributions for Sn isotopes (\(A=112\), 116, 120, and 124). The proton pair density vanishes for these isotopes.
#### iii.1.1 Normal density and pair density
First, let us show the density distributions for \({}^{112,116,120,124}\)Sn in Fig. 2. The neutron radius increases as a function of the neutron number, while the proton radius stays almost constant. A dip in the central proton density can be understood as a shell effect due to the full occupation of the high-\(j\) (\(g_{9/2}\)) orbital. As expected, the neutron skin develops as the neutron number increases. The neutron skin should have an impact on the alpha-particle formation properties. Reference [34], using the Thomas-Fermi approximation, obtained an alpha-particle density in the surface region that decreases as the neutron skin increases. The alpha-cluster formation is also predicted to have a negative impact on the neutron-skin thickness.
In Fig. 3, the neutron pair densities are shown. In the present calculation, a central peak at \(r\approx 0\) exists, which may be due to the BCS treatment of the pairing and may change in an HFB calculation. The surface peaks are located at \(r\approx 5\) fm, with shapes similar to each other. Since the proton number \(Z=50\) is magic, the pair density vanishes for protons.
#### iii.1.2 Local alpha strengths
Since the numerical calculation is performed with the vanishing boundary condition, all the quasiparticle energies are discrete. In order to visualize the local alpha strength \(S_{\alpha}(r,E)\) as a function of the excitation energy \(E\), we replace the delta function in Eq. (10) by a Gaussian function with a width of \(\gamma=100\) keV. The calculated local alpha strengths for Sn isotopes are shown in Fig. 4. For each isotope, there is an isolated peak corresponding to the ground-ground transition (\(E=0\)). This alpha strength to the ground state is located near the surface region. At excitation energies of \(E\gtrsim 3\) MeV, there are peaks whose magnitude is comparable to or even larger than the transitions to the ground state. In contrast to the ground-ground transition, the strengths appear not only in the surface region, but also in the interior region with \(r<3\) fm. This indicates that the alpha particle may exist deep inside the nucleus. In the alpha-knockout reaction, however, it is difficult for these alpha particles to come out of the nucleus because of the strong absorption. No strength appears at \(r=0\) in Fig. 4, which shows \(S_{\alpha}(r,E)\) in the range of \(E<10\) MeV. This is because the proton amplitude vanishes at \(r=0\), \(F_{k}^{(p)}(0)=0\). The binding energy of the proton \(s_{1/2}\) state is larger than that of the \(g_{9/2}\) state by more than 20 MeV. Thus, a non-zero proton amplitude at the center, \(F_{k}^{(p)}(0)\neq 0\), appears only for \(E>40\) MeV.

Figure 4: Local alpha strengths \(S_{\alpha}(r,E)\) for Sn isotopes (\(A=112,\,116,\,120,\,124\)). The discrete strengths are smeared by Gaussians with a width of 100 keV.
#### iii.1.3 Residual nuclei in the ground state
In order to examine the structure of the local alpha strengths to the ground state of the residual nuclei, in Fig. 5, we show the strength of Eq. (4) with \(E=0\) and \(\Delta E\to 0+\),
\[\mathcal{S}_{\alpha}^{0}(\mathbf{r}) \equiv\mathcal{S}_{\alpha}(\mathbf{r})_{E=0,\Delta E=2\epsilon}\] \[=\int_{-\epsilon}^{\epsilon}S_{\alpha}(\mathbf{r},E)dE=F_{0}^{(n )}(\mathbf{r})F_{0}^{(p)}(\mathbf{r}), \tag{25}\]
where \(\epsilon\) is a positive infinitesimal. When we remove an alpha particle at the position \(\mathbf{r}\), \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) can be regarded as a quantity proportional to the probability that the residual nucleus ends up in the ground state. The shapes of the peaks are almost identical among these isotopes and are located at \(r=|\mathbf{r}|\approx 4.7-4.8\) fm. This position approximately corresponds to the radius \(r\) at which \(\rho(\mathbf{r})=(2/3)\times\rho(\mathbf{0})\) (Fig. 2). It is near the surface; however, the radial value \(r\) is significantly smaller than the peak position of the alpha density \(n_{\alpha}(\mathbf{r})\) predicted in Ref. [34]. In fact, the peak position of the alpha density \(n_{\alpha}(\mathbf{r})\) in Ref. [34] is located at \(6.5<r<7.5\) fm, which roughly corresponds to the radius \(r\) with \(\rho_{q}(\mathbf{r})\approx\rho_{q}(\mathbf{0})/10\). The alpha density \(n_{\alpha}(\mathbf{r})\) is predicted to vanish in the region of \(r<6\) fm for Sn isotopes [34].
The peak height is similar for \({}^{112,116,120}\)Sn, while it is apparently smaller for \({}^{124}\)Sn. This is naturally understood from the pair density in Fig. 3. The proton matrix elements \(F_{0}^{(p)}(\mathbf{r})\), given by Eq. (21), are determined by the HOO, namely the \(g_{9/2}\) orbitals. They are surface-peaked and approximately identical among all the isotopes. The neutron matrix element \(F_{0}^{(n)}(\mathbf{r})\) is given by \(|\kappa_{n}(\mathbf{r})|^{2}\), according to Eq. (14). Therefore, variations in \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})=F_{0}^{(n)}(\mathbf{r})F_{0}^{(p)}(\mathbf{r})\) come from those in \(F_{0}^{(n)}(\mathbf{r})=|\kappa_{n}(\mathbf{r})|^{2}\). A reduction in \(\kappa_{n}(\mathbf{r})\) at \(r\approx 5\) fm is the reason for the reduced peak height of \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) in \({}^{124}\)Sn. This is easily confirmed by artificially increasing the neutron pairing gap: we have found that the peak height of \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) increases by about 50 % when we double the pairing gap \(\Delta_{n}\).
The alpha-knockout experimental data in Ref. [33] clearly indicate a monotonic decrease as a function of the neutron number. The experiment measures the missing-mass spectra to extract the cross section in which the residual nucleus is in the ground state. Therefore, this isotopic dependence should be related to \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) at the nuclear surface. The peak heights of \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) shown in Fig. 5 are similar to one another except for \({}^{124}\)Sn. Furthermore, they are almost identical at \(r\gtrsim 5.5\) fm where the alpha knockout mainly takes place, namely,
\[\mathcal{S}_{\alpha}^{0}(\mathbf{r})_{A=112}\approx\mathcal{S}_{\alpha}^{0}( \mathbf{r})_{A=116}\approx\mathcal{S}_{\alpha}^{0}(\mathbf{r})_{A=120}\approx \mathcal{S}_{\alpha}^{0}(\mathbf{r})_{A=124}, \tag{26}\]
at \(r\gtrsim 5.5\) fm. In other words, \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) is universal for these isotopes in the surface region. At first sight, this seems inconsistent with the experimental observation.
However, we need to further examine the relationship between the cross section and the local alpha strength \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\). Since there is a strong absorption of the alpha particle inside the nucleus, the cross section may not be correlated with the values at the same \(r\); instead, we should compare the values at a fixed nucleon density for each isotope. The nuclear radii apparently increase with the neutron number because of the neutron-skin effect (Fig. 2), namely \(R_{112}<R_{116}<R_{120}<R_{124}\). Thus, the \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) values at the surface (fixed density) decrease as a function of the neutron number:
\[\mathcal{S}_{\alpha}^{0}(\mathbf{R}_{112})>\mathcal{S}_{\alpha}^{0}(\mathbf{R }_{116})>\mathcal{S}_{\alpha}^{0}(\mathbf{R}_{120})>\mathcal{S}_{\alpha}^{0}( \mathbf{R}_{124}). \tag{27}\]
Therefore, the universal behavior of \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) may be consistent with the experimental observation.
In order to visualize this neutron-number dependence, we define a dimensionless quantity, the "local alpha probability," as the \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) value relative to the density:
\[P_{\alpha}^{0}(\mathbf{r})\equiv\frac{\mathcal{S}_{\alpha}^{0}(\mathbf{r})}{ \rho_{n\uparrow}(\mathbf{r})\rho_{n\downarrow}(\mathbf{r})\rho_{p\uparrow}( \mathbf{r})\rho_{p\downarrow}(\mathbf{r})}. \tag{28}\]
\(P_{\alpha}^{0}(\mathbf{r})\) can be regarded as the probability of finding an alpha particle at the position \(\mathbf{r}\), under the condition that the residual nucleus is in the ground state, normalized by the probability of finding the four kinds of nucleons. The local alpha probability is plotted in Fig. 6. \(P_{\alpha}^{0}(\mathbf{r})\) clearly shows a monotonic decrease with the neutron number, which is consistent with the experiment [33].
#### iii.1.4 Excited residual nuclei
The local alpha strength, in principle, contains information on alpha knockout to excited residual nuclei. Since the excited states are simply given by neutron 2qp states and proton particle-hole excitations, we should keep in mind that it is a qualitative measure. In Fig. 7, \(S_{\alpha}(\mathbf{r},E)\) integrated over the space \(\mathbf{r}\) is shown for the Sn isotopes. The small peak next to the ground state (\(E\approx 2\) MeV) corresponds to a proton excitation in which one of the protons is removed from the \(g_{9/2}\) orbit and the other from \(p_{1/2}\). The alpha strengths to some excited states of the residual nuclei are as strong as those to the ground state.

Figure 5: Local alpha strength to the ground state \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) for Sn isotopes (\(A=112\), 116, 120, and 124).
It may be of interest to investigate the structure of the local alpha probability when the residual nuclei are excited. Since there are two prominent peaks in Fig. 7, one around \(E\approx 4\) MeV and the other around 9 MeV, we set \(E=4\) (9) MeV and \(\Delta E=2\) MeV, to calculate the local alpha probability as
\[P_{\alpha}^{\rm ex}(\mathbf{r})_{E,\Delta E}\equiv\frac{\mathcal{S}_{\alpha}( \mathbf{r})_{E,\Delta E}}{\rho_{n\uparrow}(\mathbf{r})\rho_{n\downarrow}( \mathbf{r})\rho_{p\uparrow}(\mathbf{r})\rho_{p\downarrow}(\mathbf{r})}, \tag{29}\]
where \(\mathcal{S}_{\alpha}(\mathbf{r})_{E,\Delta E}\) is given by Eq. (4). These are shown in Fig. 8. The local alpha probabilities for excited residual nuclei are enhanced in the low-density region. The monotonic increase as a function of \(r\) is the same as that of the ground-state probability \(P_{\alpha}^{0}(\mathbf{r})\), and seems to be universal. However, their isotopic dependence is not as prominent as that of \(P_{\alpha}^{0}(\mathbf{r})\): the \(P_{\alpha}^{\rm ex}(\mathbf{r})_{E,\Delta E}\) with \(E=9\) MeV and \(\Delta E=2\) MeV are similar to one another among the isotopes. Since we neglect the effects of the rearrangement and of the collective states, these numbers should not be taken quantitatively. Nevertheless, this may suggest that the alpha-knockout reaction with excited residual nuclei may not show the prominent neutron-number dependence, in contrast to that for the ground-state residual nuclei.
Integrating over the entire energy range, the total local alpha strength can be easily estimated in the mean-field approximation as
\[S_{\alpha}^{\rm tot}(\mathbf{r}) = \int_{-\infty}^{\infty}S_{\alpha}(\mathbf{r},E)dE=S_{\uparrow \downarrow}^{(n)}(\mathbf{r})S_{\uparrow\downarrow}^{(p)}(\mathbf{r}), \tag{30}\]
where
\[S_{\uparrow\downarrow}^{(q)}(\mathbf{r}) = \langle\Psi_{0}^{N_{q}}|\psi_{q\downarrow}^{\dagger}(\mathbf{r}) \psi_{q\uparrow}^{\dagger}(\mathbf{r})\psi_{q\uparrow}(\mathbf{r})\psi_{q \downarrow}(\mathbf{r})|\Phi_{0}^{N_{q}}\rangle \tag{31}\] \[= \rho_{\uparrow}^{(q)}(\mathbf{r})\rho_{\downarrow}^{(q)}(\mathbf{ r})-\left|\rho_{\uparrow\downarrow}^{(q)}(\mathbf{r})\right|^{2}+\left|\kappa^{(q)} (\mathbf{r})\right|^{2}.\]
This is shown in Fig. 9. The major contribution to \(S_{\alpha}^{\rm tot}(\mathbf{r})\) is the first term of Eq. (31), which is a local density product of nucleons with spin up and down. Thus, \(S_{\alpha}^{\rm tot}(\mathbf{r})\) of Eq. (30) mainly comes from a trivial density product of four kinds of nucleons. This is nothing but the denominator of Eqs. (28) and (29). If we normalize \(S_{\alpha}^{\rm tot}(\mathbf{r})\) with respect to this trivial density product factor,
\[P_{\alpha}^{\rm tot}(\mathbf{r})\equiv\frac{S_{\alpha}^{\rm tot}(\mathbf{r})} {\rho_{n\uparrow}(\mathbf{r})\rho_{n\downarrow}(\mathbf{r})\rho_{p\uparrow}( \mathbf{r})\rho_{p\downarrow}(\mathbf{r})}, \tag{32}\]
we obtain the results shown in the inset of Fig. 9. Again, in the surface region, we observe a clear neutron-number dependence, the same as that of \(P_{\alpha}^{0}(\mathbf{r})\) in Fig. 6.
If we neglect the second and the third terms in Eq. (31), we trivially have \(P_{\alpha}^{\rm tot}(\mathbf{r})=1\). Since the second term of Eq. (31) vanishes for the time-even ground state, the enhancement is due to the third term, namely, the effect of neutron pairing. Therefore, the surface alpha formation may be understood as a consequence of the fact that the pair density distribution is more extended than the normal density.

Figure 8: Local alpha probability for excited residual nuclei in Sn isotopes (\(A=112\), \(116\), \(120\), and \(124\)), defined as Eq. (29) with \(E=4\) MeV and \(\Delta E=2\) MeV. Those with \(E=9\) MeV are shifted upwards by \(0.1\).
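To make the role of the pairing term concrete, the following minimal sketch evaluates the decomposition of Eq. (31) for a time-even state (\(\rho_{\uparrow\downarrow}^{(q)}=0\)) and the resulting normalized ratio of Eq. (32); the Fermi-type radial profiles and all parameters are illustrative assumptions, not the calculated densities of this work.

```python
import numpy as np

r = np.linspace(0.0, 10.0, 501)  # radius (fm)

def fermi(r, R, a):
    """Schematic Woods-Saxon (Fermi) radial profile."""
    return 1.0 / (1.0 + np.exp((r - R) / a))

# Illustrative spin-up/down densities and pair density; kappa is taken
# more spatially extended (larger radius and diffuseness), mimicking
# the qualitative behavior of the pair densities in Fig. 3.
rho_up = 0.08 * fermi(r, 5.4, 0.5)   # rho_{q,up}(r), per spin
rho_dn = rho_up.copy()               # time-even: rho_{q,dn} = rho_{q,up}
kappa  = 0.02 * fermi(r, 5.9, 0.7)   # kappa^{(q)}(r)

# Eq. (31) with rho_{up,dn}^{(q)} = 0 for the time-even ground state:
S_q = rho_up * rho_dn + kappa**2

# Eq. (32), assuming identical profiles for neutrons and protons:
P_tot = (S_q / (rho_up * rho_dn)) ** 2  # = (1 + kappa^2/(rho_up*rho_dn))^2

# P_tot -> 1 in the interior and grows toward the surface, where
# kappa^2/(rho_up*rho_dn) becomes large.
```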
#### iii.1.5 Localization function
Before closing this section, we examine the validity of the localization function. The calculated localization function \(C_{\sigma}\) for the Sn isotopes is presented in Fig. 10. The \(C_{\sigma}\) are approximately identical for all the isotopes. We observe a bump in \(C_{\sigma}\) at \(r\approx 5\) fm for protons and at \(r\approx 5.5\) fm for neutrons. These peak positions are larger than those of \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) (Fig. 5). Moreover, the profile of the function is significantly different between \(C_{\sigma}(\mathbf{r})\) and \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\). In comparison with the summed local alpha strength \(S_{\alpha}^{\mathrm{tot}}(\mathbf{r})\) in Fig. 9, we again observe significantly different peak positions and profiles: there is no surface peak, and the peak structure almost disappears in Fig. 9. Therefore, it could be misleading to identify the localization function \(C_{\sigma}(\mathbf{r})\) as an indicator of alpha-particle formation in the mean-field theory.
## IV Conclusion
In order to quantify the alpha-particle formation, the local alpha strength \(S_{\alpha}(\mathbf{r},E)\) is proposed. When we remove an alpha particle at the position \(\mathbf{r}\) from a nucleus, the final state in the residual nucleus can be expanded in the energy eigenstates. The local alpha strength \(S_{\alpha}(\mathbf{r},E)\) corresponds to the strength to produce the state at an energy \(E\) in the residual nucleus. This quantity is defined with respect to a many-body wave function; thus, in principle, it can be calculated using various quantum many-body techniques. The calculation becomes manageable when we adopt some approximations, such as the mean-field approximation (energy density functional theory). Furthermore, if we neglect the rearrangement of the mean fields after the removal of an alpha particle, the numerical cost becomes very small. We use these approximations in the present paper.
We calculate the local alpha strengths for Sn isotopes with \(A=112\), \(116\), \(120\), and \(124\). These nuclei were studied by a recent alpha-knockout experiment, in which the cross sections with the residual nuclei in the ground state clearly indicate a monotonically decreasing trend as a function of the neutron number. This prominent neutron-number dependence is not found in the alpha strength to the ground state, \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\), of Eq. (25). In fact, the function \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) is almost universal at the surface for all these isotopes. Nevertheless, the observed neutron-number dependence is well reproduced by the local alpha probability, \(P_{\alpha}^{0}(\mathbf{r})\), of Eq. (28). The monotonic decrease as a function of the neutron number is especially evident at the nuclear surface of \(r\gtrsim 6\) fm, where the alpha-knockout reaction is supposed to take place.
Instead of using \(P_{\alpha}^{0}(\mathbf{r})\), we can also interpret the experimental trend using \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) together with the development of the neutron skin. Since knocking out the alpha particle is allowed only in the low-density region, the local alpha strength \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) with \(r>R_{c}\) is relevant to the cross section. The critical radius \(R_{c}\) is determined by the critical density \(\rho_{c}\) as \(\rho(R_{c})=\rho_{c}\). \(R_{c}\) must be an increasing function of the neutron number because the neutron radius increases. Therefore, although \(\mathcal{S}_{\alpha}^{0}(\mathbf{r})\) is universal in the Sn isotopes, \(\mathcal{S}_{\alpha}^{0}(R_{c})\) decreases as a function of the neutron number.
The local alpha strength in the present approximations can be calculated with a single state. Therefore, with a proper choice of the mean-field Hamiltonian, it can be evaluated in a time-dependent manner with the time-dependent density functional theory (TDDFT). Recently, the nuclear TDDFT calculations have been extended to include the pair density [46; 47; 52; 53; 54; 55; 56]. It is of significant interest to investigate the alpha-particle formation probability during nuclear reactions, such as heavy-ion reactions, fusion, and fission.

Figure 9: Energy-integrated local alpha strength for Sn isotopes (\(A=112\), \(116\), \(120\), and \(124\)).

Figure 10: Localization functions for neutrons (solid lines) and protons (dashed lines) for Sn isotopes (\(A=112\), \(116\), \(120\), and \(124\)). The values with spin up (\(\sigma=+1/2\)) \(C_{+1/2}\) are shown in the figure; those for spin down (\(\sigma=-1/2\)) are identical.
In the present paper, we neglect the rearrangement of the mean fields before and after the removal of an alpha particle. This approximation makes the numerical calculation inexpensive. However, it is a drastic approximation even for heavy nuclei. Especially near the doubly closed nuclei, the nuclear shape may change, and the approximation may not be justified. In order to improve on this, a calculation with a proper treatment of the rearrangement is currently in progress. Furthermore, the inclusion of proton-neutron pairing is an interesting subject for the future.
###### Acknowledgements.
This work is supported in part by JSPS KAKENHI Grants No. JP18H01209, No. JP19H05142, No. JP23H01167, No. JP20K03964, and No. JP19KK0343. This research in part used computational resources provided by Multidisciplinary Cooperative Research Program in the Center for Computational Sciences, University of Tsukuba.
|
2310.16365 | G-Invariant Representations using Coorbits: Injectivity Properties | Consider a real vector space $\mathcal{V}$ and a finite group $G$ acting
unitarily on $\mathcal{V}$. We study the general problem of constructing a
stable embedding whose domain is the quotient of the vector space modulo the
group action, and whose target space is a Euclidean space.
We construct an embedding $\Psi$ and we study under which assumptions $\Psi$
is injective in the quotient vector space. The embedding scheme we introduce is
based on selecting a fixed subset from the sorted orbit
$\downarrow\langle{U_gw_i},{x}\rangle_{g \in G}$, where $w_i$ are appropriate
vectors. | Radu Balan, Efstratios Tsoukanis | 2023-10-25T05:08:08Z | http://arxiv.org/abs/2310.16365v1 | # G-Invariant Representations using Coorbits: Injectivity Properties
###### Abstract
Consider a real vector space \(\mathscr{V}\) and a finite group \(G\) acting unitarily on \(\mathscr{V}\). We study the general problem of constructing a stable embedding whose domain is the quotient of the vector space modulo the group action, and whose target space is a Euclidean space. We construct an embedding \(\Psi\) and we study under which assumptions \(\Psi\) is injective in the quotient vector space. The embedding scheme we introduce is based on selecting a fixed subset from the sorted orbit \(\downarrow\langle U_{g}w_{i},x\rangle_{g\in G}\), where \(w_{i}\) are appropriate vectors.
## 1 Introduction
Machine learning techniques achieve impressive results when we feed them large sets of data. In some cases, our training set may be small, but we know that there are underlying symmetries in the data structure. For example, in graph theory problems each graph is represented as an adjacency matrix of the labeled nodes of the graph; any relabeling of the nodes should not change the output of our classification or regression algorithm.
A possible solution for this problem is to enlarge our training set by adding, for each data point of the set, the whole orbit generated by the group action. One problem that arises is that it is computationally costly to find such a highly symmetric function.
Another solution is to embed our data into a Euclidean space \(\mathbb{R}^{m}\) with a symmetry-invariant embedding \(\Psi\) and then use \(\mathbb{R}^{m}\) as our feature space. It is not enough for our embedding to be symmetry invariant; it should also separate data orbits. Finally, we require certain stability conditions so that small perturbations do not affect our predictions. This problem is an instance of _invariant machine learning_ [19, 3, 15, 10, 20, 28, 14, 16, 21].
The most common group actions in invariant machine learning are permutations [25, 11, 7], reflections [22] and translations [18]. Also, there are very interesting results in the case of equivariant machine learning [24, 20, 27, 26, 9].
Our work is influenced by [15], where it is shown that \(m\approx 2d\) separating invariants are enough for an orbit-separating embedding, and by [12, 23], where the _max filter_ is introduced. We work with a generalization of the _max filter_: instead of choosing the maximum element of the orbit, we choose other subsets of the orbit. The problem of finding permutation-invariant embeddings is closely connected to the phase retrieval problem, where there are already many important results [5, 6, 2, 1, 4, 17].
In the first section, we introduce our embedding scheme. In the second section, we investigate and construct an injective embedding for the case of a finite subset of a vector space \(\mathscr{V}\). Finally, in the third section, we present an injective coorbit embedding for a \(d\)-dimensional vector space \(\mathscr{V}\).
### Notation
Let \((\mathscr{V},\langle\cdot,\cdot\rangle)\) be a \(d\)-dimensional real vector space, where \(d\geq 2\). Assume \((G,\cdot)\) is a finite group of order \(|G|=N\) acting unitarily on \(\mathscr{V}\). For every \(g\in G\), we denote by \(U_{g}x\) the group action. On \(\hat{\mathscr{V}}=\mathscr{V}/\sim\), the quotient space with respect to action of group \(G\), we denote by \([x]\) the orbit of vector \(x\), i.e. \([x]=\{U_{g}x:g\in G\}\). Consider now the natural metric, \(\mathrm{d}:\hat{\mathscr{V}}\times\hat{\mathscr{V}}\to\mathbb{R}\), where
\[\mathrm{d}([x],[y])=\min_{h_{1},h_{2}\in G}\lVert U_{h_{1}}x-U_{h_{2}}y \rVert=\min_{g\in G}\lVert x-U_{g}y\rVert.\]
Our goal is to construct a bi-Lipschitz Euclidean embedding on the metric space \((\hat{\mathscr{V}},d)\). Specifically, we want to construct a function \(\Psi:\mathscr{V}\to\mathbb{R}^{m}\) such that
1. \(\Psi(U_{g}x)=\Psi(x),\ \forall x\in\mathscr{V},\ \forall g\in G,\)
2. If \(x,y\in\mathscr{V}\) are such that \(\Psi(x)=\Psi(y)\), then there exist \(g\in G\) such that \(y=U_{g}x\),
3. There are \(0<a<b<\infty\) such that for any \(x,y\in\mathscr{V}\) \[a\,\mathrm{d}([x],[y])^{2}\leq\|\Psi(x)-\Psi(y)\|^{2}\leq b(\mathrm{d}([x],[y ]))^{2}.\]
The invariance property (1) lifts \(\Psi\) to a map \(\hat{\Psi}\) acting on the quotient space \(\hat{\mathscr{V}}=\mathscr{V}/\sim\), where \(x\sim y\) if and only if \(y=U_{g}x\) for some \(g\in G\):
\[\hat{\Psi}:\hat{\mathscr{V}}\to\mathbb{R}^{m},\quad\hat{\Psi}([x])=\Psi(x), \quad\forall[x]\in\hat{\mathscr{V}}.\]
If a \(G\)-invariant map \(\Psi\) satisfies property (2) we say that \(\Psi\) separates the \(G\)-orbits in \(\mathbb{R}^{d}\).
Our construction for the embedding \(\Psi\) is based on a non-linear sorting map.
**Definition 1.1**.: _Let \(\downarrow:\mathbb{R}^{r}\to\mathbb{R}^{r}\) be the operator that takes as input a vector in \(\mathbb{R}^{r}\) and returns a sorted, in decreasing order, vector of length \(r\) with same entries as input vector._
For a number \(p\in\mathbb{N}\), fix a \(p\)-tuple of vectors \(\mathbf{w}=(w_{1},\ldots,w_{p})\in\mathscr{V}^{p}\). For any \(i\in[p]\) and \(j\in[N]\) we define the operator \(\Phi_{w_{i},j}:\mathscr{V}\to\mathbb{R}\) so that \(\Phi_{w_{i},j}(x)\) is the \(j\)-th coordinate of vector \(\downarrow\left\langle U_{g}w_{i},x\right\rangle_{g\in G}\). Now fix a set \(S\subset[N]\times[p]\) such that \(|S|=m\), and for \(i\in[p]\), set \(S_{i}=\{k\in[N]:(k,i)\in S\}\). We denote by \(m_{i}\) the cardinality of the set \(S_{i}\), thus \(m=\sum_{i=1}^{p}m_{i}\). Let \(\ell:\mathbb{R}^{m}\to\mathbb{R}^{2d}\) be a linear transformation and consider the map,
\[\Psi=\Psi_{\mathbf{w},S,\ell}=\ell\circ\Phi_{\mathbf{w},S}:\mathscr{V}\to \mathbb{R}^{2d}\]
with
\[\Phi_{\mathbf{w},S}(x)=[\downarrow\{\Phi_{w_{1},j}(x)\}_{j\in S_{1}},\ldots, \downarrow\{\Phi_{w_{p},j}(x)\}_{j\in S_{p}}]\in\mathbb{R}^{m}. \tag{1}\]
Therefore, our proposal for constructing a stable embedding is the function \(\Psi\) of the form
\[\Psi(x)=\Psi_{\mathbf{w},S,\ell}(x)=\ell(\Phi_{\mathbf{w},S}(x)).\]
For the rest of the paper, when the \(p\)-tuple of vectors \(\mathbf{w}\) is clearly implied, we will denote \(\Phi_{w_{i},j}\) by \(\Phi_{i,j}\). Also, we will denote by \(\{g_{1},\ldots,g_{N}\}\) an arbitrary, but fixed, enumeration of the group \(G\).
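As a concrete illustration of the above construction, the following Python sketch implements the sorting operator \(\downarrow\), the map \(\Phi_{\mathbf{w},S}\) of Eq. (1) (without the linear map \(\ell\)), and the quotient metric \(\mathrm{d}\); the group of cyclic coordinate shifts on \(\mathbb{R}^{4}\) is a hypothetical choice made only for this example.

```python
import numpy as np

d = 4
# Hypothetical group G: cyclic shifts of coordinates on R^d, realized
# as permutation matrices U_g (a unitary action); here N = |G| = d.
G = [np.roll(np.eye(d), k, axis=0) for k in range(d)]
N = len(G)

def coorbit(w, x):
    """The sorted vector (down-arrow) of <U_g w, x>_{g in G}, decreasing."""
    return np.sort([np.dot(U @ w, x) for U in G])[::-1]

def Phi_wS(ws, S, x):
    """Phi_{w,S}(x) of Eq. (1): for each i, the coordinates k in S_i of
    the sorted coorbit of w_i, each block again sorted decreasingly."""
    out = []
    for i, w in enumerate(ws, start=1):
        c = coorbit(w, x)
        block = sorted((c[k - 1] for (k, ii) in S if ii == i), reverse=True)
        out.extend(block)
    return np.array(out)

def qdist(x, y):
    """The quotient metric d([x],[y]) = min_g ||x - U_g y||."""
    return min(np.linalg.norm(x - U @ y) for U in G)

rng = np.random.default_rng(0)
ws = [rng.standard_normal(d) for _ in range(3)]   # w_1, w_2, w_3
S = {(1, 1), (2, 1), (1, 2), (1, 3)}              # S_1={1,2}, S_2=S_3={1}
x = rng.standard_normal(d)

# G-invariance: the embedding agrees on x and U_g x.
assert np.allclose(Phi_wS(ws, S, x), Phi_wS(ws, S, G[2] @ x))
assert qdist(x, G[2] @ x) < 1e-12
```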
### Semialgebraic geometry notation
In this section we will follow the notation of [13].
**Definition 1.2**.: _An affine algebraic variety is the set of common zeros over an algebraically closed field \(k\) of some family of polynomials._
**Remark 1.3**.: _In the literature, the definition of an affine variety sometimes requires the ideal generated by the defining polynomials to be prime. In this paper we will call that case an irreducible variety._
A generalization of algebraic sets is found in semialgebraic sets, which encompass polynomial inequalities in addition to algebraic equations.
**Definition 1.4**.: _Let \(\mathbb{F}\) be a real closed field. A subset \(S\) of \(\mathbb{F}^{n}\) is a "semialgebraic set" if it is a finite union of sets defined by polynomial equalities of the form \(\{(x_{1},...,x_{n})\in\mathbb{F}^{n}\mid P(x_{1},...,x_{n})=0\}\) and of sets defined by polynomial inequalities of the form \(\{(x_{1},...,x_{n})\in\mathbb{F}^{n}\mid Q(x_{1},...,x_{n})>0\}\)._
**Definition 1.5**.: _Let \(X,Y\) be two varieties. A continuous map \(f:X\to Y\) is called a morphism if \(\forall p\in X\) there is a Zariski open set \(U\) containing \(p\) and polynomial functions \(g\) and \(h\) such that \(\forall q\in U\), \(f(q)=\frac{g(q)}{h(q)}\) and \(h(q)\neq 0\)._
Now we will state some results from [13] without proof.
**Proposition 1.6** (Proposition 2.15 in [13]).: _A semialgebraic set \(A\) can be decomposed as the disjoint union of finitely many pieces which are semialgebraically homeomorphic to open hypercubes \((0,1)^{d_{i}}\) of different dimensions._
**Definition 1.7**.: _Let \(A\) be decomposed as the disjoint union of finitely many pieces which are semialgebraically homeomorphic to open hypercubes \(\{(0,1)^{d_{i}}\}_{i\in I}\). Then we define the dimension of \(A\) to be the maximum dimension of the hypercubes \((0,1)^{d_{i}}\), i.e. \(\dim(A)=\max_{i\in I}d_{i}\)._
Two corollaries of the Tarski-Seidenberg theorem are the following:
**Corollary 1.8** (Corollary 2.4 in [13]).: _If \(A\) is a semialgebraic subset of \(\mathbb{R}^{n+k}\), its image by the projection on the space of the first \(n\) coordinates is a semialgebraic subset of \(\mathbb{R}^{n}\)._
**Corollary 1.9** (Corollary 2.5 in [13]).: _If \(A\) is a semialgebraic subset of \(\mathbb{R}^{n}\), its closure in \(\mathbb{R}^{n}\) is again semialgebraic._
Let \(A\subset\mathbb{R}^{m}\) and \(B\subset\mathbb{R}^{n}\) be semialgebraic sets. A mapping \(f:A\to B\) is called semialgebraic if its graph:
\[\Gamma_{f}=\{(x,y)\in A\times B:y=f(x)\}\]
is a semialgebraic set of \(\mathbb{R}^{m}\times\mathbb{R}^{n}\).
**Proposition 1.10** (Corollary 2.9 and 2.2.1 in [13]).:
1. _If_ \(f:A\to B\) _is a morphism, then it is also semialgebraic._
2. _The direct image and the inverse image of a semialgebraic set by a semialgebraic mapping are semialgebraic._
3. _The composition of two semialgebraic mappings is semialgebraic._
A simple corollary of Corollary 1.8 and Proposition 1.10(1),(2) is the following:
**Corollary 1.11**.: _Let \(A\subset\mathbb{R}^{m}\) and \(B\subset\mathbb{R}^{n}\) be semialgebraic sets and \(f:A\to B\) be a morphism. Then \(f(A)\) is also a semialgebraic set._
Finally two very important theorems of semialgebraic geometry are the following:
**Theorem 1.12** (Theorem 3.18 in [13]).: _Let \(A\) be a semialgebraic subset of \(\mathbb{R}^{n}\), and \(f:A\to R^{k}\) a semialgebraic mapping (not necessarily continuous). Then \(\dim f(A)\leq\dim A\)._
**Theorem 1.13** (Theorem 3.20 in [13]).: _Let \(A\subset\mathbb{R}^{n}\) be a semialgebraic set. Its dimension as a semialgebraic set is equal to the dimension, as an algebraic set, of its Zariski closure \(\bar{A}^{S}\)._
A simple corollary of Theorem 1.13 is the following:
**Corollary 1.14**.: _Let \(A\subset\mathbb{R}^{n}\) be a semialgebraic set. If \(\dim(A)<n\), then \(A\) is nowhere dense._
Moreover, note that any semialgebraic set consists of finitely many connected components.
**Theorem 1.15** (Theorem 2.23 in [13]).: _Every semialgebraic set has finitely many connected components which are semialgebraic. Every semialgebraic set is locally connected._
Finally, a very useful corollary of _"Hardt's semialgebraic triviality"_ is the following.
**Corollary 1.16** (Corollary 4.2 in [13]).: _Let \(A\subset\mathbb{R}^{n}\) be a semialgebraic set and \(f:A\to\mathbb{R}^{k}\) a continuous semialgebraic mapping. For \(d\in\mathbb{N}\), the set_
\[\{b\in\mathbb{R}^{k}:\dim(f^{-1}(b))=d\}\]
_is a semialgebraic subset of \(\mathbb{R}^{k}\) of dimension not greater than \(\dim(A)-d\)._
## 2 Representations of finite subsets of inner product spaces
The first case we examine is when \(\mathscr{A}\) is a finite subset of a real vector space \(\mathscr{V}\). We also assume that \(\mathscr{A}\) is \(G\)-invariant, meaning that for every \(x\in\mathscr{A}\) and for every \(g\in G\), \(U_{g}x\) is also in \(\mathscr{A}\).
**Theorem 2.1**.: _Let \(G\) be a finite subgroup of \(O(d)\) and \(\mathscr{A}\) a finite \(G\)-invariant subset of an inner product space \(\mathscr{V}\). Then, for a generic \(w\in\mathscr{V}\) (with respect to the Zariski topology) and any fixed \(j\in[N]\), the map \(\Phi_{w,j}\) is injective on the quotient space \(\hat{\mathscr{A}}\) and bi-Lipschitz._
Proof.: For fixed \(x,y\in\mathscr{A}\), let
\[\mathscr{W}_{x,y}=\bigcup_{h_{1},h_{2}\in G}\{U_{h_{1}}x-U_{h_{2}}y\}^{\perp}\]
and
\[\mathscr{W}=\bigcup_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\nsim y\end{subarray}}\mathscr{W}_{x,y}=\bigcup_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\nsim y\end{subarray}}\bigcup_{h_{1},h_{2}\in G}\{U_{h_{1}}x-U_{h_{2}}y\}^{\perp}.\]
Given \(j\in[N]\) and \(w\in\mathscr{V}\), recall that \(\Phi_{w,j}(x)\) is the \(j\)-th coordinate of the vector \(\downarrow\left\langle U_{g}w,x\right\rangle_{g\in G}\). From the definition of the set \(\mathscr{W}\), we notice that for any vector \(w\in\mathscr{W}^{c}\) the operator \(\Phi_{w,j}\) separates different orbits of elements of \(\mathscr{A}\).
Notice that \(\mathscr{W}\) is a finite union of \((d-1)\)-dimensional subspaces, making it a closed, nowhere dense set of zero Lebesgue measure in \(\mathscr{V}\). Consequently, a generic element \(w\in\mathscr{V}\) with respect to the Zariski topology provides an injective map \(\Phi_{w,j}\). However, we still need to demonstrate that if the map \(\Phi_{w,j}\) is injective, it is also bi-Lipschitz, that is, to find \(a_{w},b_{w}\in\mathbb{R}\) with \(0<a_{w}\leq b_{w}\) such that for all \(x,y\in\mathscr{A}\)

\[a_{w}\operatorname{d}(x,y)\leq|\Phi_{w,j}(x)-\Phi_{w,j}(y)|\leq b_{w}\operatorname{d}(x,y).\]
As the set \(\mathscr{A}\) is finite, so is \(\mathscr{A}\times\mathscr{A}\). Hence, \(\{\operatorname{d}(x,y):x,y\in\mathscr{A},\ x\nsim y\}\) is a finite set of positive numbers.
The optimal "bi-Lipschitz constants" are
\[a_{w}=\min_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\approx y\end{subarray}}\frac{|\phi_{w}^{j}(x)-\phi_{w}^{j}(y)|}{ \operatorname{d}(x,y)}=\min_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\approx y\end{subarray}}\frac{\left|\max_{g\in G}\langle U_{g}w,x\rangle- \max_{g\in G}\left\langle U_{g}w,y\right\rangle\right|}{\min_{g\in G}\lVert U _{g}x-y\rVert}\]
and
\[b_{w}=\max_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\approx y\end{subarray}}\frac{|\phi_{w}^{j}(x)-\phi_{w}^{j}(x)|}{ \operatorname{d}(x,y)}=\max_{\begin{subarray}{c}x,y\in\mathscr{A}\\ x\approx y\end{subarray}}\frac{\left|\max_{g\in G}\langle U_{g}w,x\rangle- \max_{g\in G}\left\langle U_{g}w,y\right\rangle\right|}{\min_{g\in G}\lVert U _{g}x-y\rVert}.\]
Notice that the upper Lipschitz bound above is sharp. However, if we don't require sharpness, there is a way to find an easily computable upper Lipschitz bound in the following manner:
Without loss of generality, suppose that
\[\Phi_{w,j}(x)\geq\Phi_{w,j}(y).\]
Let \(g_{j}^{x},g_{j}^{y}\in G\) be such that \(\Phi_{w,j}(x)=\langle w,U_{g_{j}^{x}}x\rangle\) and \(\Phi_{w,j}(y)=\langle w,U_{g_{j}^{y}}y\rangle\), respectively, and take \(g_{0}\in G\) satisfying \(\operatorname{d}(x,y)=\lVert x-U_{g_{0}}y\rVert\). Then, by the pigeonhole principle, there exists \(k\leq j\) such that \(\langle w,U_{g_{k}^{x}g_{0}}y\rangle\leq\langle w,U_{g_{j}^{y}}y\rangle\). Then, we have
\[|\Phi_{w,j}(x)-\Phi_{w,j}(y)| =\langle w,U_{g_{j}^{x}}x\rangle-\langle w,U_{g_{j}^{y}}y\rangle\] \[\leq\langle w,U_{g_{k}^{x}}x\rangle-\langle w,U_{g_{k}^{x}}U_{g_{ 0}}y\rangle\] \[=\langle w,U_{g_{k}^{x}}(x-U_{g_{0}}y)\rangle\] \[\leq\lVert w\rVert\lVert x-U_{g_{0}}y\rVert\] \[=\lVert w\rVert\operatorname{d}(x,y).\]
Therefore, \(b_{w}=\lVert w\rVert\) is also an upper Lipschitz bound.
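For a concrete finite set \(\mathscr{A}\), the optimal constants \(a_{w},b_{w}\) can be computed by brute force over all non-equivalent pairs; a minimal self-contained sketch, again using a hypothetical cyclic shift group, is given below.

```python
import numpy as np

d, rng = 4, np.random.default_rng(0)
G = [np.roll(np.eye(d), k, axis=0) for k in range(d)]  # cyclic shift group

def Phi(w, j, x):
    """Phi_{w,j}(x): j-th largest value of <U_g w, x> over g in G."""
    return np.sort([np.dot(U @ w, x) for U in G])[::-1][j - 1]

def qdist(x, y):
    """d([x],[y]) = min_g ||x - U_g y||."""
    return min(np.linalg.norm(x - U @ y) for U in G)

# Hypothetical finite G-invariant set: the orbits of a few random points.
pts = [rng.standard_normal(d) for _ in range(3)]
A = [U @ p for U in G for p in pts]

w, j = rng.standard_normal(d), 1
ratios = []
for x in A:
    for y in A:
        dist = qdist(x, y)
        if dist > 1e-12:  # skip equivalent pairs, where d([x],[y]) = 0
            ratios.append(abs(Phi(w, j, x) - Phi(w, j, y)) / dist)

a_w, b_w = min(ratios), max(ratios)
assert b_w <= np.linalg.norm(w) + 1e-9  # the bound b_w <= ||w||
```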
## 3 Representation of inner product spaces
Fix \(\mathbf{w}=(w_{1},\ldots,w_{p})\) and take \(S=\{(1,1),\ldots,(1,p)\}\subset[N]\times[p]\). Recall that \(S_{i}=\{k\in[N]:(k,i)\in S\}\). In that case, \(\forall i\in[p]\), \(S_{i}=\{1\}\), so \(\Phi_{\mathbf{w},S}\) is the _max filter_ map \((\langle\langle w_{1},x\rangle\rangle,\ldots,\langle\langle w_{p},x\rangle\rangle)^{T}\), where
\[\langle\langle w_{i},x\rangle\rangle =\sup_{g_{1},g_{2}\in G}\,\langle U_{g_{1}}w_{i},U_{g_{2}}x \rangle=\max_{g_{1},g_{2}\in G}\,\langle U_{g_{1}}w_{i},U_{g_{2}}x\rangle\] \[=\max_{g\in G}\,\langle U_{g}w_{i},x\rangle=\max_{g\in G}\, \langle w_{i},U_{g}x\rangle.\]
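In code, the collapse of the double supremum to a single maximum is easy to check numerically; a minimal sketch with a hypothetical shift group follows.

```python
import numpy as np

d = 4
G = [np.roll(np.eye(d), k, axis=0) for k in range(d)]  # cyclic shift group
rng = np.random.default_rng(1)
w, x = rng.standard_normal(d), rng.standard_normal(d)

# The double maximum over (g1, g2) collapses to a single maximum over G,
# since U_{g1}^T U_{g2} again ranges over the group:
lhs = max(np.dot(U @ w, V @ x) for U in G for V in G)
rhs = max(np.dot(U @ w, x) for U in G)
assert np.isclose(lhs, rhs)
```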
In [11], it is shown that \(2d\) vectors are enough for the construction of an injective embedding.
**Theorem 3.1** ([11, Lemma 12]).: _Consider any finite subgroup \(G\leq O(d)\). For a generic \(\textbf{w}\in\mathscr{V}^{p}\) and for \(S=\{(1,1),\ldots,(1,p)\}\), the map \(\Phi_{\textbf{w},S}\) separates \(G\)-orbits in \(\mathbb{R}^{d}\) provided that \(p\geq 2d\)._
Our goal is to examine the pairs \((\mathbf{w},S)\), where \(\mathbf{w}\in\mathscr{V}^{p}\) and \(S\) is a subset of \([N]\times[p]\) with \(m=|S|\), such that \(\hat{\Phi}_{\mathbf{w},S}:\hat{\mathscr{V}}\to\mathbb{R}^{m}\) is injective. In other words, we are interested in all the pairs \((\mathbf{w},S)\) for which the following equivalence holds for all \(x,y\in\mathscr{V}\):
\[\Phi_{\mathbf{w},S}(x)=\Phi_{\mathbf{w},S}(y)\iff[x]=[y]. \tag{2}\]
In our next Theorem 3.2, we generalize Theorem 3.1; we show that one can replace the maximum element of the orbit \(\langle U_{g}w,\cdot\rangle\) with any other fixed element of that same orbit.
**Theorem 3.2**.: _Let \(p\geq 2d\) and \(S\subset[N]\times[p]\). Suppose that \(\forall i\in[p]\), \(S_{i}\neq\emptyset\). Then, for a generic (with respect to the Zariski topology) \(\textbf{w}\in\mathscr{V}^{p}\), the map \(\Phi_{\textbf{w},S}\) is injective._
Before we are able to prove Theorem 3.2 we need some additional notation and certain lemmas. Let \(\mathscr{V}\) be an inner product space of dimension \(d\), and \(G\leq O(d)\) a finite subgroup of the group of orthogonal transformations on \(\mathscr{V}\). For a fixed \(w\in\mathscr{V}\) and \(j\in[N]\), recall that \(\Phi_{w,j}(x)\) represents the \(j\)-th coordinate of \(\downarrow(\langle U_{g}w,x\rangle)_{g\in G}\). It is important to note that \(\Phi_{w,j}\) satisfies specific scaling and symmetry properties, which we state in the form of a lemma:
**Lemma 3.3**.: _For any \(j\in[N]\),_
\[\Phi_{\lambda w,j}(x)=\Phi_{w,j}(\lambda x)=\lambda\Phi_{w,j}(x), \forall w,x\in\mathscr{V},\quad\lambda>0, \tag{3}\] \[\Phi_{w,j}(x)=\Phi_{x,j}(w), \forall w,x\in\mathscr{V}. \tag{4}\]
For \(x,y\in\mathscr{V}\) and \(j\in[N]\), define
\[\mathscr{F}_{x,y,j}=\{w\in\mathscr{V}:\Phi_{w,j}(x)=\Phi_{w,j}(y)\}.\]
If \(x\sim y\), then clearly \(\mathscr{F}_{x,y,j}=\mathscr{V}\). For \(x,y\in\mathscr{V}\) with \(x\nsim y\) we want to give a geometrical description of \(\mathscr{F}_{x,y,j}\). Let \(w\in\mathscr{F}_{x,y,j}\), and let \(r_{1}\in G\) be such that \(\Phi_{w,j}(x)=\langle U_{r_{1}}w,x\rangle\) and \(r_{2}\in G\) such that \(\Phi_{w,j}(y)=\langle U_{r_{2}}w,y\rangle\). Then \(\langle w,U_{r_{1}^{-1}}x-U_{r_{2}^{-1}}y\rangle=0\), which implies that
\[\mathscr{F}_{x,y,j}\subset\bigcup_{h_{1},h_{2}\in G}\{U_{h_{1}}x-U_{h_{2}}y \}^{\perp}. \tag{5}\]
On the other hand, each \(\{U_{h_{1}}x-U_{h_{2}}y\}^{\perp}\subset\mathscr{V}\) is a proper hyperplane because \(U_{h_{1}}x-U_{h_{2}}y\neq 0\) for any \(h_{1},h_{2}\in G\) whenever \(x\nsim y\). As a result, we conclude that \(\mathscr{F}_{x,y,j}\) is contained within a finite union of \((d-1)\)-dimensional hyperplanes.
For a filter bank \(\mathbf{w}=(w_{1},\ldots,w_{p})\) and a set \(S\subset[N]\times[p]\), we denote by \(\mathscr{F}_{S}\) the collection of all \(p\)-tuples \(\mathbf{w}=(w_{1},\ldots,w_{p})\) such that the filter bank \(\{\Phi_{i,j}:i\in[p],\ j\in S_{i}\}\) fails to separate some pair of non-equivalent points \(x,y\in\mathscr{V}\). This means that
\[\mathscr{F}_{S}=\Big{\{}\mathbf{w}\in\mathscr{V}^{p} :\exists x,y\in\mathscr{V}\text{ with }x\nsim y\] \[\text{ and }\Phi_{i,j}(x)=\Phi_{i,j}(y),\forall i\in[p],\;\forall j \in S_{i}\Big{\}}.\]
Following the notation in [15], we will refer to the set \(\mathscr{F}_{S}\) as the "bad set" because it contains the set of \(p\)-tuples \(\mathbf{w}\in\mathscr{V}^{p}\) that fail to construct an injective embedding \(\Phi_{\mathbf{w},S}\). We will establish requirements for the set \(S\) so that the "bad set" \(\mathscr{F}_{S}\) is a subset of a Zariski-closed, proper subset of \(\mathscr{V}^{p}\).
Let
\[\Gamma=\{(x,y)\in\mathscr{V}^{2}:x\nsim y\} \tag{6}\]
be the set of all non-equivalent pairs of vectors. It is important to notice that \(\Gamma\) is an open set, with its complement being a finite union of closed linear subspaces of dimension \(d=\dim(\mathscr{V})\). If the assumptions of Theorem 3.2 are satisfied, we can observe that

\[\mathscr{F}_{S} \subset\bigcup_{(x,y)\in\Gamma}\bigcup_{h_{1},\ldots,h_{2p}\in G}\left(\{U_{h_{1}}x-U_{h_{2}}y\}^{\perp}\times\cdots\times\{U_{h_{2p-1}}x-U_{h_{2p}}y\}^{\perp}\right)\] \[=\bigcup_{h_{1},\ldots,h_{2p}\in G}\bigcup_{(x,y)\in\Gamma}\left(\{U_{h_{1}}x-U_{h_{2}}y\}^{\perp}\times\cdots\times\{U_{h_{2p-1}}x-U_{h_{2p}}y\}^{\perp}\right).\]
For fixed \(h_{1},\ldots,h_{2p}\in G\), set
\[\mathscr{F}_{h_{1},\ldots,h_{2p}}=\bigcup_{(x,y)\in\Gamma}\{U_{h_{1}}x-U_{h_{ 2}}y\}^{\perp}\times\cdots\times\{U_{h_{2p-1}}x-U_{h_{2p}}y\}^{\perp}.\]
Notice that, because \(G\) is a finite group, in order to prove Theorem 3.2 it is enough to show that for any choice of \(h_{1},\ldots,h_{2p}\in G\) the set \((\mathscr{F}_{h_{1},\ldots,h_{2p}})^{c}\) contains a nonempty Zariski open subset of \(\mathscr{V}^{p}\).
Recall that the group \(G\) has size \(N=|G|\). For fixed \(2p\) elements \(h_{1},\ldots,h_{2p}\in G\), we denote by \(f_{h_{1},\ldots,h_{2p}}:\mathscr{V}\times\mathscr{V}\to\mathscr{V}^{p}\) the linear map

\[f_{h_{1},\ldots,h_{2p}}(x,y)=(U_{h_{1}}x-U_{h_{2}}y,\ldots,U_{h_{2p-1}}x-U_{h_{2p}}y).\]

Observe that

\[\dim(\operatorname{Ran}(f_{h_{1},\ldots,h_{2p}}))=\dim(\mathscr{V}\times\mathscr{V})-\dim(\ker(f_{h_{1},\ldots,h_{2p}}))=2d-r,\]

where \(r=\dim(\ker(f_{h_{1},\ldots,h_{2p}}))\).
Next, let \(C=f_{h_{1},\ldots,h_{2p}}(\Gamma)\) denote the image of the set \(\Gamma\) under the linear map \(f_{h_{1},\ldots,h_{2p}}\). Note that \(C\) is a semialgebraic subset of \(\mathscr{V}^{p}\) of dimension \(2d-r\).
Consider the set
\[B=C\cap S_{1}(\mathscr{V}^{p}).\]
We have already shown that \(C\) is an open subset of a \((2d-r)\)-dimensional subspace of \(\mathscr{V}^{p}\). Thus, \(B\) is an open subset of the unit sphere of that subspace, and hence a smooth manifold of dimension \(2d-r-1\). Moreover, \(B\) is semialgebraic, so it is a semialgebraic set of dimension \(2d-r-1\).
Now, let \(\mathbf{w}=(w_{1},\ldots,w_{p})\in(\mathscr{V}\setminus\{0\})^{p}\). For ease of notation, we define \(\{0\oplus x\oplus 0\}_{p}^{i}\in\mathscr{V}^{p}\) to be the element of the vector space \(\mathscr{V}^{p}\) whose \(i\)-th entry is the vector \(x\) and whose other \(p-1\) entries are equal to the zero vector. Throughout the rest of this paper, and for each such \(\mathbf{w}\), we fix a choice of \(p(d-1)\) vectors \(h_{1},\ldots,h_{p(d-1)}\in\mathscr{V}^{p}\) so that the set
\[\left\{\{0\oplus\frac{w_{1}}{\|w_{1}\|}\oplus 0\}_{p}^{1},\ldots,\{0\oplus \frac{w_{p}}{\|w_{p}\|}\oplus 0\}_{p}^{p},h_{1},\ldots,h_{p(d-1)}\right\}\]
forms a basis in \(\mathscr{V}^{p}\). The choice of \(h_{i}\)'s need not change continuously with \(\mathbf{w}\). Using Gram-Schmidt, we turn this set into an orthonormal basis in \(\mathscr{V}^{p}\) of the form
\[\left\{\{0\oplus\frac{w_{1}}{\|w_{1}\|}\oplus 0\}_{p}^{1},\ldots,\{0\oplus\frac{w_{p}}{\|w_{p}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w}}\right\}.\]
Of course, the vectors \(\mathbf{e}_{1}^{\mathbf{w}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w}}\) depend on \(w\) as well as on the choices of the auxiliary \(p(d-1)\) vectors \(h_{1},\ldots,h_{p(d-1)}\). However, we shall discard the implicit dependency on these auxiliary vectors \(h_{i}\)'s from our notation.
For each \(\mathbf{w}=(w_{1},\ldots,w_{p})\in(\mathscr{V}\setminus\{0\})^{p}\) there is a ball \(U_{\mathbf{w}}:=B(\rho_{\mathbf{w}},\mathbf{w})\subset\mathscr{V}^{p}\) of radius \(\rho_{\mathbf{w}}>0\), open in the ambient space and centered at \(\mathbf{w}\), such that for all \(\mathbf{v}=(v_{1},\ldots v_{p})\in B(2\rho_{\mathbf{w}},\mathbf{w})\) we have that the \(pd\) vectors
\[\left\{\{0\oplus\frac{v_{1}}{\|v_{1}\|}\oplus 0\}_{p}^{1},\ldots,\{0\oplus \frac{v_{p}}{\|v_{p}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}}, \ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w}}\right\}\]
still span \(\mathscr{V}^{p}\). Note that \(\mathbf{e}_{1}^{\mathbf{w}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w}}\) depend on \(\mathbf{w}\) but are independent of \(\mathbf{v}\). Using the Gram-Schmidt process, we transform this, not necessarily orthonormal, basis into the orthonormal basis
\[\left\{\{0\oplus\frac{v_{1}}{\|v_{1}\|}\oplus 0\}_{p}^{1},\ldots,\{0\oplus \frac{v_{p}}{\|v_{p}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}, \mathbf{v}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w},\mathbf{v}}\right\}.\]
Note that each element of the orthonormal basis we constructed depends continuously on \(\mathbf{v}\).
For fixed \(\mathbf{x}=(x_{1},\ldots,x_{p})\in\mathscr{V}^{p}\), denote by \(F_{\mathbf{x}}\) the linear subspace
\[F_{\mathbf{x}}=\{\mathbf{y}=(y_{1},\ldots,y_{p})\in\mathscr{V}^{p}\ :\ \langle y_{1},x_{1} \rangle=\cdots=\langle y_{p},x_{p}\rangle=0\}.\]
Note that for each \(\mathbf{w}\in(\mathscr{V}\setminus\{0\})^{p}\) and \(\mathbf{v}\in U_{\mathbf{w}}\), the orthonormal set \(\mathbf{e}_{1}^{\mathbf{w},\mathbf{v}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w},\mathbf{v}}\) is an orthonormal basis for the linear space \(F_{\mathbf{v}}\).
Now for \(M\subset(\mathscr{V}\setminus\{0\})^{p}\), let \(E_{M}=\{(\mathbf{x},\mathbf{y}):\mathbf{x}\in M,\mathbf{y}\in F_{\mathbf{x}}\}\) denote a subset of \(\mathscr{V}^{2p}\) and \(\pi:E_{M}\to M\) be the projection on the first component, i.e. \(\pi(\mathbf{x},\mathbf{y})=\mathbf{x}\).
**Proposition 3.4**.: _Suppose that \(M\subset(\mathscr{V}\setminus\{0\})^{p}\) is a \(k\)-dimensional algebraic variety. Then, \((E_{M},\pi,M)\) is a real analytic vector bundle with \(k\)-dimensional base \(M\), bundle projection \(\pi\), \((k+p(d-1))\)-dimensional total space \(E_{M}\), and linear fibers of dimension \(p(d-1)\)._
Proof.: For each \(\mathbf{w}=(w_{1},\ldots,w_{p})\in M\), consider the map \(\psi_{\mathbf{w}}:\pi^{-1}(U_{\mathbf{w}})\to U_{\mathbf{w}}\times\mathbb{R}^{p (d-1)}\) defined by
\[\psi_{\mathbf{w}}(\mathbf{v},z)=(\mathbf{v},(\langle z,e_{1}^{\mathbf{w},\mathbf{v}}\rangle,\ldots,\langle z,e_{p(d-1)}^{\mathbf{w},\mathbf{v}}\rangle))\]
where \(\mathbf{v}=(v_{1},\ldots,v_{p})\in U_{\mathbf{w}}\), and the map \(\phi_{\mathbf{w}}:U_{\mathbf{w}}\times\mathbb{R}^{p(d-1)}\to\pi^{-1}(U_{ \mathbf{w}})\) defined by
\[\phi_{\mathbf{w}}(\mathbf{v},(c_{1},\ldots,c_{p(d-1)}))=(\mathbf{v},\sum_{i=1 }^{p(d-1)}c_{i}e_{i}^{w,v}).\]
It is clear that \(\phi_{\mathbf{w}}\circ\psi_{\mathbf{w}}=\mathrm{id}\) and \(\psi_{\mathbf{w}}\circ\phi_{\mathbf{w}}=\mathrm{id}\), and hence both maps are bijections. Additionally, both \(\phi_{\mathbf{w}}\) and \(\psi_{\mathbf{w}}\) are continuous and, therefore, homeomorphisms. This shows that \((E_{M},\pi,M)\) is a topological vector bundle.
**Proposition 3.5**.: _Recall that \(B=f_{h_{1},\ldots,h_{2p}}(\Gamma)\cap S_{1}(\mathscr{V}^{p})\subset(\mathscr{V}\setminus\{0\})^{p}\) is a semialgebraic set of dimension \(2d-r-1\), where \(r=\dim(\ker f_{h_{1},\ldots,h_{2p}})\). There exists a finite collection of trivial vector bundles \((E_{j},\pi_{j},B_{j})\) with base manifolds \(B_{j}\) of the same dimension, bundle projections \(\pi_{j}\), total spaces \(E_{j}=E_{B_{j}}\) (compatible with the definition of \(E_{M}\) introduced earlier), and linear fibers of dimension \(p(d-1)\), such that \(\bigcup_{j}B_{j}=B\) and \(\bigcup_{j}E_{j}=E_{B}\). Thus, the \((E_{j},\pi_{j},B_{j})\) provide a finite cover for the vector bundle \((E_{B},\pi,B)\)._
Proof.: We want to find a finite cover, \(\{B_{j}\}_{j=1}^{L}\), of \(B\) so that each \((E_{j},\pi_{j},B_{j})\) is a trivial vector bundle.
The product of unit spheres \(S_{1}(\mathscr{V})^{p}\) is compact, and hence we can find a finite collection \(\{\mathbf{w}_{i}\}_{i=1}^{L}\), \(\mathbf{w}_{i}\in S_{1}(\mathscr{V})^{p}\) such that \(\{U_{\mathbf{w}_{i}}\}_{i=1}^{L}\) is a cover of \(S_{1}(\mathscr{V})^{p}\), where each \(U_{\mathbf{w}}\) is some ball centred at \(\mathbf{w}\). Next, define
\[\tilde{U}_{\mathbf{w}}=\{\mathbf{v}=(v_{1},\ldots,v_{p})\in(\mathscr{V} \setminus\{0\})^{p}:(\frac{v_{1}}{\|v_{1}\|},\ldots,\frac{v_{p}}{\|v_{p}\|}) \in U_{\mathbf{w}}\},\]
and note that the sets \(B_{i}=\tilde{U}_{\mathbf{w}_{i}}\cap B\), for \(i\in[L]\), form a finite cover of \(B\).
Now, we will show that the triple \((E_{j},\pi,B_{j})\) is a trivial vector bundle. For this, we have to find \(p(d-1)\) independent global sections. For any
\((v_{1},\ldots,v_{p})\in B_{j}\), recall that the following set of vectors forms an orthonormal basis:
\[\Big{\{}\{0\oplus\frac{v_{1}}{\|v_{1}\|}\oplus 0\}_{p}^{1},\ldots,\{0\oplus\frac{v_{p}}{\|v_{p}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}_{j},\mathbf{v}},\ldots,\mathbf{e}_{p(d-1)}^{\mathbf{w}_{j},\mathbf{v}}\Big{\}}.\]
Now, if we define the maps \(s_{l,j}:B_{j}\to E_{j}\) by

\[s_{l,j}(\mathbf{v})=(\mathbf{v},e_{l}^{\mathbf{w}_{j},\mathbf{v}}),\]

it is clear that \(\{s_{l,j}\}_{l=1}^{p(d-1)}\) forms a set of \(p(d-1)\) independent global sections on \(B_{j}\). We conclude that \((E_{j},\pi_{j},B_{j})\) is a trivial vector bundle.
Now we can complete the proof of Theorem 3.2.
Proof of Theorem 3.2.: Now, define the map \(\mathrm{P}_{j}:B_{j}\times\mathbb{R}^{p(d-1)}\to\mathscr{V}^{p}\) by
\[\mathrm{P}_{j}(\mathbf{v},\mathbf{c})=\sum_{i=1}^{p(d-1)}c_{i}e_{i}^{\mathbf{w}_{j},\mathbf{v}}.\]
We have already shown that, for a fixed \(\mathbf{w}\in\mathscr{V}^{p}\), the mapping \(\mathbf{v}\mapsto e_{i}^{\mathbf{w},\mathbf{v}}\) is semialgebraic; hence \(\mathrm{P}_{j}\) is also semialgebraic, being a linear combination of semialgebraic maps. Observe that
\[\bigcup_{j}\mathrm{P}_{j}(B_{j}\times\mathbb{R}^{p(d-1)})=\mathscr{F}_{h_{1},\ldots,h_{2p}}.\]
Notice that \(B_{j}\) is a semialgebraic set, as an intersection of two semialgebraic sets, so \(B_{j}\times\mathbb{R}^{p(d-1)}\) is a semialgebraic set of dimension
\[2d-r-1+p(d-1)\leq 2d-1+p(d-1)\]
and also that
\[p\geq 2d\implies 2d-1+p(d-1)<pd.\]
For every \(j\), \(\mathrm{P}_{j}\) is semialgebraic, and \(B_{j}\times\mathbb{R}^{p(d-1)}\) is a semialgebraic set of dimension at most \(2d-1+p(d-1)\), so from Theorem 1.12, \(\mathrm{P}_{j}(B_{j}\times\mathbb{R}^{p(d-1)})\) is a semialgebraic set of dimension at most \(2d-1+p(d-1)<pd\), and from Corollary 1.14 it is a nowhere dense set with zero Lebesgue measure. Hence \(\mathscr{F}_{h_{1},\ldots,h_{2p}}\), being a finite union of such images, is nowhere dense, and its complement contains a nonempty Zariski open subset of \(\mathscr{V}^{p}\), which completes the proof.
### Coorbit embedding
Up to this point, we have focused on the scenario where only one element from each set \(S_{i}\) is used for the construction of the embedding \(\Phi_{\mathbf{w},S}\). Now, we aim to explore the situation where we are permitted to use more than one element from each \(S_{i}\).
We will demonstrate that in this case, one can find \(p\) smaller than \(2d\) such that for almost every \(\mathbf{w}\in\mathscr{V}^{p}\), the mapping \(\Phi_{\mathbf{w},S}\) is injective in the quotient space \(\hat{\mathscr{V}}\).
**Theorem 3.6**.: _Let \(G\) be a finite group acting unitarily on \(\mathscr{V}\cong\mathbb{R}^{d}\). For \(1\leq n\leq N-1\), let \(\gamma_{n}\) be the \(n\)-th entry of the vector, sorted in decreasing order,_

\[\gamma=\downarrow\{\min_{\lambda\in\operatorname{Sp}(g)}\operatorname{rank}[U_{g}-\lambda\operatorname{I}]\}_{U_{g}\neq\operatorname{I}_{d}}\]

_where_

\[\operatorname{Sp}(g)=\{\lambda\in\mathbb{R}:\det(U_{g}-\lambda\operatorname{I})=0\}\]

_and_

\[p_{n}=2d-\gamma_{N-n+1}.\]

_Notice that \(p_{n}\geq d+1\). Choose an integer \(p\) such that \(p_{n}\leq p\leq 2d\) and a set \(S\subset[N]\times[p]\) such that \(|S_{i}|=n\) for \(1\leq i\leq 2d-p\) and \(|S_{i}|=1\) for \(2d-p+1\leq i\leq p\). Note that \(S\) has cardinality \(m=(2d-p)n+2p-2d\). Then, for a generic (with respect to the Zariski topology) \(\textbf{w}\in\mathscr{V}^{p}\), the map \(\Phi_{\textbf{w},S}\) is injective, i.e. for all \(x,y\in\mathscr{V}\) it holds that_
\[\Phi_{\textbf{w},S}(x)=\Phi_{\textbf{w},S}(y)\iff x\sim y.\]
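The quantities \(\gamma\) and \(p_{n}\) are directly computable from the matrices \(U_{g}\). The sketch below does this for a hypothetical cyclic shift group; since the sorted vector \(\gamma\) has \(N-1\) entries (the identity is excluded), the index \(N-n+1\) is evaluated here for \(2\leq n\leq N-1\).

```python
import numpy as np

d = 4
G = [np.roll(np.eye(d), k, axis=0) for k in range(d)]  # cyclic shift group
N = len(G)

def gamma_vector(G, d, tol=1e-9):
    """Sorted (decreasing) list over g != identity of
    min over real eigenvalues lambda of rank(U_g - lambda * I)."""
    out = []
    for U in G:
        if np.allclose(U, np.eye(d)):
            continue  # exclude the identity element
        lams = [z.real for z in np.linalg.eigvals(U) if abs(z.imag) < tol]
        # every permutation matrix has the real eigenvalue 1,
        # so lams is nonempty for this group
        out.append(min(np.linalg.matrix_rank(U - lam * np.eye(d))
                       for lam in lams))
    return sorted(out, reverse=True)

gamma = gamma_vector(G, d)                       # N - 1 entries
p = {n: 2 * d - gamma[N - n] for n in range(2, N)}
# For this group: gamma = [3, 3, 2], so p[2] = 6 and p[3] = 5,
# consistent with p_n >= d + 1 = 5.
```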
To prove Theorem 3.6, we will employ a procedure similar to the one used for Theorem 3.2, and thus, our notation will also be analogous.
Recall that
\[\mathscr{F}_{S}=\{\textbf{w}\in\mathscr{V}^{p}:\exists(x,y)\in \Gamma\text{ such that }\Phi_{i,j}(x)=\Phi_{i,j}(y)\ \forall(i,j)\in S\}\]
where \(\Gamma\) has been defined in (6).
To establish the proof of Theorem 3.6, it suffices to demonstrate that for every \(S\) satisfying the assumptions of the theorem, the set \((\mathscr{F}_{S})^{c}\) contains a nonempty Zariski open subset of \(\mathscr{V}^{p}\).
For fixed \(r\in[N]\), we define the set of \(r\)-tuples of group elements

\[H_{r}^{*}=\{(g_{i_{1}},\ldots,g_{i_{r}})\in G^{r}:1\leq i_{1}<\cdots<i_{r}\leq N\}.\]
Notice that
\[\mathscr{F}_{S} =\{\mathbf{w}\in\mathscr{V}^{p}:\exists x,y\in\mathscr{V},\ x\nsim y:\Phi_{i,j}(x)=\Phi_{i,j}(y),\ \forall(i,j)\in S\}\] \[\subset\bigcup_{(x,y)\in\Gamma}\bigcup_{\begin{subarray}{c}\pi_{1},\ldots,\pi_{p}\in S_{N}\\ \sigma_{1},\ldots,\sigma_{p}\in S_{N}\end{subarray}}\{\mathbf{w}\in\mathscr{V}^{p}:\langle x,U_{g_{\pi_{i}(j)}}w_{i}\rangle=\langle y,U_{g_{\sigma_{i}(j)}}w_{i}\rangle,\ \forall(i,j)\in S\}\] \[=\bigcup_{\begin{subarray}{c}\pi_{1},\ldots,\pi_{p}\in S_{N}\\ \sigma_{1},\ldots,\sigma_{p}\in S_{N}\end{subarray}}\bigcup_{(x,y)\in\Gamma}\{\mathbf{w}\in\mathscr{V}^{p}:\langle U_{g_{\pi_{i}(j)}}^{-1}x-U_{g_{\sigma_{i}(j)}}^{-1}y,w_{i}\rangle=0,\ \forall(i,j)\in S\}\] \[=\bigcup_{a_{i},b_{i}\in H_{m_{i}}^{*}}\bigcup_{(x,y)\in\Gamma}\bigotimes_{i=1}^{p}\Big{(}\bigcap_{j=1}^{m_{i}}\{U_{a_{i}(j)}x-U_{b_{i}(j)}y\}^{\perp}\Big{)}.\]
For fixed \(a_{i},b_{i}\in H_{m_{i}}^{*}\), \(i\in[p]\), we introduce the set
\[\mathscr{F}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}=\bigcup_{(x,y)\in\Gamma}\Big{(}\bigotimes_{i =1}^{p}\bigcap_{j=1}^{m_{i}}\{U_{a_{i}(j)}x-U_{b_{i}(j)}y\}^{\perp}\Big{)}.\]
Notice that, because the group \(G\) is finite, it suffices to show that for any choice of \(a_{i},b_{i}\in H_{m_{i}}^{*},\ i\in[p]\), where the \(m_{i}\) and \(p\) satisfy the requirements of Theorem 3.6, the set \((\mathscr{F}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})^{c}\) contains a nonempty Zariski open subset of \(\mathscr{V}^{p}\).
**Definition 3.7**.: _For fixed \(r\in[N]\), \(q\in\{0,1,\ldots,r-1\}\) and \(a,b\in H_{r}^{*}\), we define the following set:_

\[\Gamma_{q}^{a,b}=\{(x,y):\dim(\operatorname{span}(U_{a(1)}x-U_{b(1)}y,\ldots,U_{a(r)}x-U_{b(r)}y)^{\perp})\geq d-r+q\}.\]

Furthermore, for \(q\in\{0,1,\ldots,r-2\}\), let

\[\Delta_{q}^{a,b} =\Gamma_{q}^{a,b}\setminus\Gamma_{q+1}^{a,b}\] \[=\{(x,y)\in\mathscr{V}^{2}:\dim(\operatorname{span}(U_{a(1)}x-U_{b(1)}y,\ldots,U_{a(r)}x-U_{b(r)}y)^{\perp})=d-r+q\}.\]
Also, for \(a_{i},b_{i}\in H_{m_{i}}^{*}\), \(i\in[p]\) and \(x,y\in\mathscr{V}\), let
\[q_{i}=m_{i}-\dim(\operatorname{span}(U_{a_{i}(1)}x-U_{b_{i}(1)}y,\ldots,U_{a_ {i}(m_{i})}x-U_{b_{i}(m_{i})}y)).\]
Notice that
\[\Gamma\subset\bigcup_{\begin{subarray}{c}q_{1},\ldots,q_{p}\\ 0\leq q_{i}\leq m_{i}-1\end{subarray}}\bigcap_{k=1}^{p}\Delta_{q_{k}}^{a_{k},b_{k}}.\]
Therefore,
\[\mathscr{F}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\subset\bigcup_{\begin{subarray}{c}q_{1}, \ldots,q_{p}\\ q_{i}\leq m_{i}-1\end{subarray}}\bigcup_{(x,y)\in\bigcap_{k=1}^{p}\Delta_{q_{k }}^{a_{k},b_{k}}}\big{(}\bigotimes_{i=1}^{p}\operatorname{span}(U_{a_{i}(1)}x- U_{b_{i}(1)}y,\ldots,U_{a_{i}(m_{i})}x-U_{b_{i}(m_{i})}y)^{\perp}\Big{)}.\]
Recall that Theorem 3.6 assumes that \(m_{1}=\cdots=m_{q}=n\) and \(m_{q+1}=\cdots=m_{p}=1\), where \(q=2d-p\). Let
\[\mathscr{F}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}^{1}=\bigcup_{(x,y)\in\bigcap_{k=1}^{p} \Delta_{m_{k}-1}^{a_{k},b_{k}}}\big{(}\bigotimes_{i=1}^{p}\operatorname{span}( U_{a_{i}(1)}x-U_{b_{i}(1)}y,\ldots,U_{a_{i}(m_{i})}x-U_{b_{i}(m_{i})}y)^{\perp}\big{)}\]
and
\[\mathscr{F}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}^{2}=\bigcup_{\begin{subarray}{c}q_{1}, \ldots,q_{p}\\ q_{i}\leq m_{i}-2,i\in[q]\end{subarray}}\bigcup_{(x,y)\in\bigcap_{k=1}^{p} \Delta_{q_{k}}^{a_{k},b_{k}}}\big{(}\bigotimes_{i=1}^{p}\operatorname{span}(U_ {a_{i}(1)}x-U_{b_{i}(1)}y,\ldots,U_{a_{i}(m_{i})}x-U_{b_{i}(m_{i})}y)^{\perp} \big{)}.\]
Notice that for \(a,b\in H_{n}^{*}\)
\[\Gamma_{n-1}^{a,b}=\Delta_{n-1}^{a,b} =\{(x,y)\in\mathscr{V}^{2}:\exists\textbf{c}=(c_{1},\ldots,c_{n-1})\in\mathbb{R}^{n-1}\] \[\quad:U_{a(1)}x-U_{b(1)}y-c_{i}(U_{a(i+1)}x-U_{b(i+1)}y)=0,\ \forall i\in[n-1]\}.\]
We define \(\Lambda_{i}=(\lambda_{i-1},\lambda_{i})\) for \(i\in[k+1]\), where by a slight abuse of notation we let \(\lambda_{0}=-\infty\) and \(\lambda_{k+1}=+\infty\). For fixed \(a,b\in H_{n}^{*}\), we define the map \(\ell_{a,b}:\mathbb{R}^{n-1}\times\mathscr{V}^{2}\to\mathscr{V}^{n-1}\) by

\[\ell_{a,b}(c_{1},\ldots,c_{n-1},x,y)=\big{(}U_{a(1)}x-U_{b(1)}y-c_{1}(U_{a(2)}x-U_{b(2)}y),\ldots,U_{a(1)}x-U_{b(1)}y-c_{n-1}(U_{a(n)}x-U_{b(n)}y)\big{)}.\]
We also define the following auxiliary set:
\[\Gamma_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}^{1}= \{(\textbf{C},x,y)\in\mathbb{R}^{q\times(n-1)}\times\Gamma:\ell _{a_{i},b_{i}}(\textbf{C_{i}},x,y)=0,\ \forall i\in[q]\}.\]
Notice that
\[\mathscr{F}^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\subset\bigcup_{(\boldsymbol{C},x,y)\in\Gamma^ {1}_{a_{1},\ldots,a_{p}}\atop b_{1},\ldots,b_{p}}\{U_{a_{1}(1)}x-U_{b_{1}(1)}y \}^{\perp}\times\cdots\times\{U_{a_{p}(1)}x-U_{b_{p}(1)}y\}^{\perp}\]
and
\[\mathscr{F}^{2}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\subset\bigcup_{\begin{subarray}{c}q_{1}, \ldots,q_{p}\\ q_{i}\leq m_{i}-2,i\in[q]\end{subarray}}\bigcup_{(x,y)\in\cap_{k=1}^{p}\Delta ^{a_{k},b_{k}}_{q_{k}}}\big{(}\bigotimes_{i=1}^{p}\operatorname{span}(U_{a_{i} (1)}x-U_{b_{i}(1)}y,\ldots,U_{a_{i}(m_{i})}x-U_{b_{i}(m_{i})}y)^{\perp}\big{)}.\]
Now we will prove some helpful lemmas before showing that \(\mathscr{F}^{2}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is a zero-measure subset of \(\mathscr{V}^{p}\).
For fixed \(h_{1},\ldots,h_{2m}\in G\), let \(f_{h_{1},\ldots,h_{2m}}:\mathscr{V}\times\mathscr{V}\to\mathscr{V}^{m}\) denote the linear map
\[f_{h_{1},\ldots,h_{2m}}(x,y)=(U_{h_{1}}x-U_{h_{2}}y,\ldots,U_{h_{2m-1}}x-U_{h _{2m}}y).\]
Now, let
\[C_{2}=f_{h_{1},\ldots,h_{2m}}(\bigcup_{\begin{subarray}{c}q_{1},\ldots,q_{p} \\ q_{i}\leq m_{i}-2,i\in[q]\end{subarray}}\bigcap_{k=1}^{p}\Delta^{a_{k},b_{k}}_{ q_{k}})\]
denote the image of the semialgebraic set
\[\bigcup_{\begin{subarray}{c}q_{1},\ldots,q_{p}\\ q_{i}\leq m_{i}-2,i\in[q]\end{subarray}}\bigcap_{k=1}^{p}\Delta^{a_{k},b_{k}}_{ q_{k}}\subset\mathscr{V}^{2}\]
through the linear map \(f_{h_{1},\ldots,h_{2m}}\). Note that \(C_{2}\) is a semialgebraic set of dimension \(2d-r\leq 2d\).
Following the notation of the previous section, we define the set

\[B_{2}=C_{2}\cap S_{1}(\mathscr{V}^{m}).\]

We have already shown that \(C_{2}\) is a semialgebraic set of dimension \(2d-r\leq 2d\); therefore, \(B_{2}\) is an open subset of a \((2d-r-1)\)-dimensional unit sphere, and hence a semialgebraic set in \(\mathscr{V}^{m}\) of dimension \(2d-r-1\leq 2d-1\).
For fixed \(\mathbf{w}=(w_{1},\ldots,w_{m})\in B_{2}\), notice that for every \(k\in[q]\) there exist \((k-1)n+1\leq i_{k},j_{k}\leq kn\) such that \(w_{i_{k}}\) and \(w_{j_{k}}\) are linearly independent vectors. After performing a permutation of the elements \(\{w_{(k-1)n+1},\ldots,w_{kn}\}\), \(k\in[q]\), we can always assume that \(i_{k}=(k-1)n+1\) and \(j_{k}=(k-1)n+2\). Also, for any pair \((w_{i_{k}},w_{j_{k}})\), let \((e(w_{i_{k}}),e(w_{j_{k}}))\) be the corresponding vectors after we apply the Gram-Schmidt process to the pair \((w_{i_{k}},w_{j_{k}})\). Notice that we can choose vectors \(f_{1},\ldots,f_{d(p-2)}\in\mathscr{V}^{p}\) so that
\[\Big{\{}\{0\oplus w_{1}\oplus 0\}_{p}^{1},\{0\oplus w_{2} \oplus 0\}_{p}^{1},\{0\oplus w_{n+1}\oplus 0\}_{p}^{2},\] \[\{0\oplus w_{n+2}\oplus 0\}_{p}^{2},\ldots,\{0\oplus w_{(q-1)n+1} \oplus 0\}_{p}^{q},\{0\oplus w_{(q-1)n+2}\oplus 0\}_{p}^{q},\] \[\{0\oplus w_{nq+1}\oplus 0\}_{p}^{q+1},\ldots,\{0\oplus w_{m} \oplus 0\}_{p}^{p},f_{1},\ldots,f_{d(p-2)}\Big{\}}\]
forms a basis in \(\mathscr{V}^{p}\). Use Gram-Schmidt to turn this set into an orthonormal basis in \(\mathscr{V}^{p}\) of the form
\[\Big{\{}\{0\oplus e(w_{1})\oplus 0\}_{p}^{1},\{0\oplus e(w_{2})\oplus 0\}_{p}^{1},\{0\oplus e(w_{n+1})\oplus 0\}_{p}^{2},\] \[\{0\oplus e(w_{n+2})\oplus 0\}_{p}^{2},\ldots,\{0\oplus e(w_{(q-1)n+1})\oplus 0\}_{p}^{q},\{0\oplus e(w_{(q-1)n+2})\oplus 0\}_{p}^{q},\] \[\{0\oplus\frac{w_{nq+1}}{\|w_{nq+1}\|}\oplus 0\}_{p}^{q+1},\ldots,\{0\oplus\frac{w_{m}}{\|w_{m}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w}}\Big{\}}\]
As before, there is a radius \(\rho_{\mathbf{w}}>0\) such that for all \(\mathbf{v}=(v_{1},\ldots v_{m})\in B(2\rho_{\mathbf{w}},\mathbf{w})\) the \(pd\) vectors
\[\Big{\{}\{0\oplus e(v_{1})\oplus 0\}_{p}^{1},\{0\oplus e(v_{2})\oplus 0\}_{p}^{1},\{0\oplus e(v_{n+1})\oplus 0\}_{p}^{2},\] \[\{0\oplus e(v_{n+2})\oplus 0\}_{p}^{2},\ldots,\{0\oplus e(v_{(q-1)n+1})\oplus 0\}_{p}^{q},\{0\oplus e(v_{(q-1)n+2})\oplus 0\}_{p}^{q},\] \[\{0\oplus\frac{v_{nq+1}}{\|v_{nq+1}\|}\oplus 0\}_{p}^{q+1},\ldots,\{0\oplus\frac{v_{m}}{\|v_{m}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w}}\Big{\}}\]
still span \(\mathscr{V}^{p}\). Using the Gram-Schmidt process, we transform this basis into an orthonormal one:
\[\Big{\{}\{0\oplus e(v_{1})\oplus 0\}_{p}^{1},\{0\oplus e(v_{2}) \oplus 0\}_{p}^{1},\{0\oplus e(v_{n+1})\oplus 0\}_{p}^{2},\] \[\{0\oplus e(v_{n+2})\oplus 0\}_{p}^{2},\ldots,\{0\oplus e(v_{(q- 1)n+1})\oplus 0\}_{p}^{q},\{0\oplus e(v_{(q-1)n+2})\oplus 0\}_{p}^{q},\] \[\{0\oplus\frac{v_{nq+1}}{\|v_{nq+1}\|}\oplus 0\}_{p}^{q+1}, \ldots,\{0\oplus\frac{v_{m}}{\|v_{m}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{ \mathbf{w},\mathbf{v}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w},\mathbf{v}}\Big{\}}\]
Following the notation of Theorem 3.2, for each \(\mathbf{x}=(x_{1},\ldots,x_{m})\in\mathscr{V}^{m}\) we denote by \(F_{\mathbf{x}}\) the linear subspace \(F_{\mathbf{x}}=(\{x_{1}\}^{\perp}\cap\{x_{2}\}^{\perp})\times\cdots\times(\{x_{(q-1)n+1}\}^{\perp}\cap\{x_{(q-1)n+2}\}^{\perp})\times\{x_{qn+1}\}^{\perp}\times\cdots\times\{x_{m}\}^{\perp}\subset\mathscr{V}^{p}\). Note that for each \(\mathbf{w}\in B_{2}\) and \(\mathbf{v}\in U_{\mathbf{w}}\), the orthonormal set \(\mathbf{e}_{1}^{\mathbf{w},\mathbf{v}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w},\mathbf{v}}\) is an orthonormal basis for the linear space \(F_{\mathbf{v}}\). Finally, for any subset \(M\) of \(B_{2}\), let \(E_{M}=\{(\mathbf{x},\mathbf{y}):\mathbf{x}\in M,\mathbf{y}\in F_{\mathbf{x}}\}\subset\mathscr{V}^{m+p}\) and let \(\pi:E_{M}\to M\) be the projection onto the first component, that is, \(\pi(\mathbf{x},\mathbf{y})=\mathbf{x}\).
**Proposition 3.8**.: _Suppose that \(M\subset B_{2}\) is an \(l\)-dimensional manifold. Then \((E_{M},\pi,M)\) is a vector bundle, with \(l\)-dimensional base \(M\), bundle projection \(\pi\), \((l+d(p-2))\)-dimensional total space \(E_{M}\), and linear fibers of dimension \(d(p-2)\)._
Proof.: For each \(\mathbf{w}=(w_{1},\ldots,w_{m})\in M\), let \(\psi_{\mathbf{w}}:\pi^{-1}(U_{\mathbf{w}})\to U_{\mathbf{w}}\times\mathbb{R}^{d(p-2)}\) be defined by
\[\psi_{\mathbf{w}}(\mathbf{v},\mathbf{z})=(\mathbf{v},(\langle\mathbf{z}, \mathbf{e}_{1}^{\mathbf{w},\mathbf{v}}\rangle,\ldots,\langle\mathbf{z}, \mathbf{e}_{d(p-2)}^{\mathbf{w},\mathbf{v}}\rangle))\]
and \(\phi_{\mathbf{w}}:U_{\mathbf{w}}\times\mathbb{R}^{d(p-2)}\to\pi^{-1}(U_{ \mathbf{w}})\) where
\[\phi_{\mathbf{w}}(\mathbf{v},(c_{1},\ldots,c_{d(p-2)}))=(\mathbf{v},\sum_{i= 1}^{d(p-2)}c_{i}\mathbf{e}_{i}^{\mathbf{w},\mathbf{v}})\]
It is clear that \(\phi_{\mathbf{w}}\) and \(\psi_{\mathbf{w}}\) are inverse to each other, and hence both are bijections. Furthermore, both \(\phi_{\mathbf{w}}\) and \(\psi_{\mathbf{w}}\) are continuous; hence \(\phi_{\mathbf{w}}\) and \(\psi_{\mathbf{w}}\) are homeomorphisms. This proves that \((E_{M},\pi,M)\) is a topological vector bundle.
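For readability, we record the one-line verification: by orthonormality of \(\mathbf{e}_{1}^{\mathbf{w},\mathbf{v}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w},\mathbf{v}}\),

\[\psi_{\mathbf{w}}\big{(}\phi_{\mathbf{w}}(\mathbf{v},(c_{1},\ldots,c_{d(p-2)}))\big{)}=\Big{(}\mathbf{v},\Big{(}\Big{\langle}\sum_{i}c_{i}\mathbf{e}_{i}^{\mathbf{w},\mathbf{v}},\mathbf{e}_{1}^{\mathbf{w},\mathbf{v}}\Big{\rangle},\ldots,\Big{\langle}\sum_{i}c_{i}\mathbf{e}_{i}^{\mathbf{w},\mathbf{v}},\mathbf{e}_{d(p-2)}^{\mathbf{w},\mathbf{v}}\Big{\rangle}\Big{)}\Big{)}=(\mathbf{v},(c_{1},\ldots,c_{d(p-2)})),\]

and similarly \(\phi_{\mathbf{w}}\circ\psi_{\mathbf{w}}\) is the identity on \(\pi^{-1}(U_{\mathbf{w}})\).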
**Proposition 3.9**.: _There exists a finite collection of trivial vector bundles \((E_{2,j},\pi_{j},B_{2,j})\), with base manifolds \(B_{2,j}\), bundle projections \(\pi_{j}\), total spaces \(E_{2,j}=E_{B_{2,j}}\) (compatible with the definition of \(E_{M}\) introduced earlier), and linear fibers of dimension \(d(p-2)\), such that \(\bigcup_{j}B_{2,j}=B_{2}\) and \(\bigcup_{j}E_{2,j}=E_{2}\). They provide a finite cover of the vector bundle \((E_{2},\pi,B_{2})\)._
Proof.: We want to show that we can find a finite cover \(\{B_{2,j}\}_{j=1}^{L}\) of \(B_{2}\) such that each \((E_{2,j},\pi_{j},B_{2,j})\) is a trivial vector bundle. Note that the set
\[D=\{(x_{1},\ldots,x_{m})\in\mathscr{V}^{m}\text{ such that }\|x_{i}\|=1,\ \forall i\in[m],\] \[\text{ and }\langle x_{n(k-1)+1},x_{n(k-1)+2}\rangle=0,\text{ for every }k\in[q]\}\]
is compact; hence we can find a finite collection \(\{\mathbf{w}_{i}\}_{i=1}^{K}\subset D\) such that \(\{U_{\mathbf{w}_{i}}\}_{i=1}^{K}\) is a cover of \(D\).
We also define the sets
\[Z=\{(x_{1},\ldots,x_{m})\in\mathscr{V}^{m}:x_{n(k-1)+1}\neq\lambda x_{n(k-1)+2},\ \forall k\in[q],\ \forall\lambda\in\mathbb{R}\}\]
and
\[\tilde{U}_{\mathbf{w}}= \{(u_{1},\ldots,u_{m})\in Z:\] \[(e(u_{1}),e(u_{2}),u_{3},\ldots,u_{n},e(u_{n+1}),e(u_{n+2}),u_{n+ 3},\ldots,u_{m})\in D\}\]
i.e. \(\tilde{U}_{\mathbf{w}}\) contains all \((v_{1},\ldots,v_{m})\in Z\) such that if we replace \(v_{n(k-1)+1}\) and \(v_{n(k-1)+2}\) with \(e(v_{n(k-1)+1})\) and \(e(v_{n(k-1)+2})\) respectively, the transformed vector belongs to \(D\).
Note that the sets \(B_{2,i}=\tilde{U}_{\mathbf{w}_{i}}\cap B_{2}\), \(1\leq i\leq K\), collectively form a finite cover of \(B_{2}\).
To demonstrate that the triple \((E_{2,j},\pi_{j},B_{2,j})\) is a trivial vector bundle, it suffices to find \(d(p-2)\) independent sections. For any \(\mathbf{v}=(v_{1},\ldots,v_{m})\in B_{2,j}\), recall that the following set of vectors forms an orthonormal basis:
\[\Big{\{} \{0\oplus e(v_{1})\oplus 0\}_{p}^{1},\{0\oplus e(v_{2})\oplus 0\}_{p}^{1},\{0\oplus e(v_{n+1})\oplus 0\}_{p}^{2},\] \[\{0\oplus e(v_{n+2})\oplus 0\}_{p}^{2},\ldots,\{0\oplus e(v_{(q-1)n+1})\oplus 0\}_{p}^{q},\{0\oplus e(v_{(q-1)n+2})\oplus 0\}_{p}^{q},\] \[\{0\oplus\frac{v_{nq+1}}{\|v_{nq+1}\|}\oplus 0\}_{p}^{q+1},\ldots,\{0\oplus\frac{v_{m}}{\|v_{m}\|}\oplus 0\}_{p}^{p},\mathbf{e}_{1}^{\mathbf{w}_{j},\mathbf{v}},\ldots,\mathbf{e}_{d(p-2)}^{\mathbf{w}_{j},\mathbf{v}}\Big{\}}.\]
Now let \(s_{l}:B_{2,j}\to E_{2,j}\) be defined by

\[s_{l}(\mathbf{v})=(\mathbf{v},\mathbf{e}_{l}^{\mathbf{w}_{j},\mathbf{v}}).\]
Then \(\{s_{l}\}_{l=1}^{d(p-2)}\) form a set of \(d(p-2)\) independent global sections of \(E_{2,j}\), so \((E_{2,j},\pi_{j},B_{2,j})\) is a trivial vector bundle.
**Proposition 3.10**.: _For any \(a_{i},b_{i}\in H^{*}_{m_{i}}\), \(i\in[p]\), the set \(\mathscr{F}^{2}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is a nowhere dense set with zero Lebesgue measure._
Proof.: Let \(\mathrm{P}_{2,j}:B_{2,j}\times\mathbb{R}^{d(p-2)}\to\mathscr{V}^{p}\) be defined by

\[\mathrm{P}_{2,j}(\mathbf{v},\mathbf{c})=\sum_{i=1}^{d(p-2)}c_{i}\mathbf{e}_{i}^{\mathbf{w}_{j},\mathbf{v}}.\]
We have already shown that \(\mathrm{P}_{2,j}\) is a semialgebraic map, being a linear combination of semialgebraic maps. We notice that
\[\bigcup_{j}\mathrm{P}_{2,j}(B_{2,j}\times\mathbb{R}^{d(p-2)})\supset\mathscr{F} ^{2}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}.\]
Because \(\mathrm{P}_{2,j}\) is semialgebraic and, for every \(j\), \(B_{2,j}\times\mathbb{R}^{d(p-2)}\) is a semialgebraic set of dimension \(2d-1-r+d(p-2)\), Theorem 1.12 implies that \(\mathrm{P}_{2,j}(B_{2,j}\times\mathbb{R}^{d(p-2)})\) is a semialgebraic set of dimension \(\leq 2d-1-r+d(p-2)\), and from Corollary 1.14 it is a nowhere dense set with zero Lebesgue measure.
Now, we still need to estimate the algebraic dimension of \(\mathscr{F}^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\).
**Lemma 3.11**.: _For fixed \(a_{i},b_{i}\in H^{*}_{m_{i}}\), the set \(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is a closed subset of \(\mathbb{R}^{q\times(n-1)}\times\mathscr{V}^{2}\)._
Proof.: Let \(\{(\boldsymbol{C}_{n},x_{n},y_{n})\}\to(\boldsymbol{C},x,y)\) be a convergent sequence in \(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\). In order to prove our lemma, we need to show that \((\boldsymbol{C},x,y)\) is an element of \(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\). Because \((\boldsymbol{C}_{n},x_{n},y_{n})\in\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\),
\[\ell_{a_{i},b_{i}}((\boldsymbol{C}_{n})_{i},x_{n},y_{n})=0\,\forall i\in[q], \forall n\in\mathbb{N}.\]
But each \(\ell_{a_{i},b_{i}}\) is a continuous function, so \(\ell_{a_{i},b_{i}}(\boldsymbol{C}_{i},x,y)=0\) for all \(i\in[q]\); therefore
\[(\boldsymbol{C},x,y)\in\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}.\]
**Lemma 3.12**.: _For fixed group elements \(h_{1},\ldots,h_{2p}\in G\), let \(f_{h_{1},\ldots,h_{2p}}:\mathbb{R}^{q\times(n-1)}\times\mathscr{V}^{2}\to\mathscr{V}^{p}\) be the map defined by \(f_{h_{1},\ldots,h_{2p}}(\boldsymbol{C},x,y)=(U_{h_{1}}x-U_{h_{2}}y,\ldots,U_{h_{2p-1}}x-U_{h_{2p}}y)\). Then, the set \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\) is a semialgebraic set of dimension at most \(2d-\gamma_{n}\)._
In order to prove Lemma 3.12 we need to create a suitable partition of the set \(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\).
Note that the set \(\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) can be expressed as the disjoint union of the following auxiliary sets.
\[\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}=\Gamma^{1}_{\begin{subarray}{c}a_{1},\ldots,a _{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\cap(E^{q\times(n-1)}_{G}\times\mathscr{V}^{2})\]
\[\Gamma^{4}_{a_{1},\ldots,a_{p}}=\Gamma^{1}_{a_{1},\ldots,a_{p}}\setminus\Gamma^{3} _{a_{1},\ldots,a_{p}}.\]
From the Tarski-Seidenberg theorem and Corollary 4.2 of [13], we have that \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a_{1},\ldots,a_{p}})\) is a semialgebraic set of dimension at most \(d+1\).
Note that \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a_{1},\ldots,a_{p}})\) is homogeneous, i.e. if
\[(x_{1},\ldots,x_{p})\in f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a_{1},\ldots,a_{p} })\]
then
\[(\lambda x_{1},\ldots,\lambda x_{p})\in f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a _{1},\ldots,a_{p}}),\ \forall\lambda\in\mathbb{R}\]
Thus we conclude that \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a_{1},\ldots,a_{p}})\cap S^{1}(\mathscr{V} ^{p})\) is a semialgebraic set of dimension at most \(d\).
**Proposition 3.14**.: \(\Gamma^{3}_{a_{1},\ldots,a_{p}}\) _is a finite union of linear subspaces of \(\mathscr{V}^{2}\) of dimension at most \(p_{n}=2d-\gamma_{N-n+1}\)._
Proof.: From the fact that \(E_{G}\) is a finite set, we conclude that the dimension of \(\Gamma^{3}_{a_{1},\ldots,a_{p}}\) is at most
\[\max_{\begin{subarray}{c}\lambda\in E_{G}\\ a,b\in H^{*}\end{subarray}}\dim\{(x,y)\in\mathscr{V}\times\mathscr{V}:(U_{h_{1}}-\lambda U_{h_{2}})x-(U_{h_{2k+1}}-\lambda U_{h_{2k+2}})y=0,\ \forall k\in[n-1]\}.\]
Notice, however, that whenever \((U_{h_{1}}-\lambda U_{h_{2}})x-(U_{h_{2k+1}}-\lambda U_{h_{2k+2}})y=0\), the vector \((x,y)\) lies inside the kernel \(\ker\{U_{h_{1}}-\lambda U_{h_{2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}\}\), for all \(k\in[n-1]\).
Therefore, we get that
\[\max_{\begin{subarray}{c}\lambda\in E_{G}\\ a,b\in H^{*}\end{subarray}}\min_{k\in[n-1]}\dim(\ker\{U_{h_{1}}-\lambda U_{h_ {2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}\})\] \[=\max_{\begin{subarray}{c}\lambda\in E_{G}\\ a,b\in H^{*}\end{subarray}}\min_{k\in[n-1]}\{2d-\text{rank}[U_{h_{1}}-\lambda U _{h_{2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}]\}\] \[=2d-\min_{\begin{subarray}{c}\lambda\in E_{G}\\ a,b\in H^{*}\end{subarray}}\max_{k\in[n-1]}\text{rank}[U_{h_{1}}-\lambda U_{h_ {2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}].\]
Next, we make the following two observations:
1. If we choose \(h_{1}=h_{2k+1}\) and \(h_{2}=h_{2k+2}\), then \[\text{rank}[U_{h_{1}}-\lambda U_{h_{2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}]=\text{rank}[U_{h_{1}}-\lambda U_{h_{2}}].\]
2. \(\operatorname{rank}[U_{h_{1}}-\lambda U_{h_{2}}]=\operatorname{rank}[U_{h_{1}h_{2}^{ -1}}-\lambda U_{h_{2}h_{2}^{-1}}]=\operatorname{rank}[U_{h_{1}h_{2}^{-1}}- \lambda\operatorname{I}]\).
So, we conclude that
\[\min_{\lambda\in E_{G}}\max_{k\in[n-1]}\operatorname{rank}[U_{h_{1}}-\lambda U_{h_{2}}\mid U_{h_{2k+1}}-\lambda U_{h_{2k+2}}]=\] \[\min_{\begin{subarray}{c}H\subset G\\ |H|=n-1,\ \operatorname{I}_{d}\notin H\end{subarray}}\max_{h\in H}\operatorname{rank}[U_{h}-\lambda\operatorname{I}_{d}]=\gamma_{N-n+1}\]
Therefore, \(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is a finite union of linear subspaces of dimension at most
\[p_{n}=2d-\gamma_{N-n+1}.\]
**Lemma 3.15**.: _For fixed \(h_{1},\ldots,h_{2p}\in G\) and fixed \(a_{i},b_{i}\), the set_
\[f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\]
_is a semialgebraic set of dimension at most \(p_{n}\)._
Proof.: Recall that the set
\[f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\]
is a finite union of linear subspaces of \(\mathscr{V}^{p}\) of dimension at most \(2d-\gamma_{N-n+1}=p_{n}\); indeed, \(f_{h_{1},\ldots,h_{2p}}\) is linear and, by Proposition 3.14, \(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is itself a finite union of linear subspaces of dimension at most \(p_{n}\). Since a finite union of linear subspaces is semialgebraic, we conclude that
\[f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\]
is a semialgebraic set of dimension at most \(p_{n}\).
We have shown that \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\) is a semialgebraic set of dimension at most \(p_{n}\). Notice now that each of these sets is homogeneous, i.e. if
\[(x_{1},\ldots,x_{p})\in f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\]
then
\[(\lambda x_{1},\ldots,\lambda x_{p})\in f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}),\ \forall\lambda\in\mathbb{R}\]
Thus we conclude that \(f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}})\cap S^{1}(\mathscr{V}^{p})\) is a semialgebraic set of dimension at most \(p_{n}-1\).
**Proposition 3.16**.: _For any \(a_{i},b_{i}\in H^{*}_{m_{i}}\), \(i\in[p]\), the set \(\mathscr{F}^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}\) is a nowhere dense set with zero Lebesgue measure._
Proof.: Let \(B_{3}=\big{(}f_{h_{1},\ldots,h_{2p}}(\Gamma^{3}_{a_{1},\ldots,a_{p}})\cup f_{h_{1},\ldots,h_{2p}}(\Gamma^{4}_{a_{1},\ldots,a_{p}})\big{)}\cap S^{1}(\mathscr{V}^{p})\). We showed that \(B_{3}\) is a semialgebraic set of dimension at most \(p_{n}-1\).
Following the proof of Proposition 3.5, we construct a finite set \(\{\mathbf{w}^{j}\}_{j=1}^{M}\), a finite cover \(\{B_{3,j}\}_{j=1}^{M}\) of \(B_{3}\), and a map \(\mathrm{P}_{3,j}:B_{3,j}\times\mathbb{R}^{p(d-1)}\to\mathscr{V}^{p}\) defined by
\[\mathrm{P}_{3,j}(\mathbf{v},\mathbf{c})=\sum_{i=1}^{p(d-1)}c_{i}e_{i}^{ \mathbf{w}^{j},\mathbf{v}}.\]
Observe that
\[\bigcup_{j\in[M]}\mathrm{P}_{3,j}(B_{3,j}\times\mathbb{R}^{p(d-1)})\supset\mathscr{F}^{1}_{\begin{subarray}{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{p}\end{subarray}}.\]
Notice that \(B_{3,j}\times\mathbb{R}^{p(d-1)}\) is a semialgebraic set of dimension at most \(p_{n}-1+p(d-1)\). Because \(\mathrm{P}_{3,j}\) is semialgebraic, from Theorem 1.12 we conclude that \(\mathrm{P}_{3,j}(B_{3,j}\times\mathbb{R}^{p(d-1)})\) is a semialgebraic set of dimension at most \(p_{n}-1+p(d-1)\), and because \(p\geq p_{n}\) implies \(p_{n}-1+p(d-1)<pd\), from Corollary 1.14 it is a nowhere dense set with zero Lebesgue measure.
Proof.: (Theorem 3.6.) For fixed \(p\geq p_{n}\) and \(S\in[p]\times[N]\), recall that the set of \(p\)-tuples of vectors \(\mathbf{w}=(w_{1},\ldots,w_{p})\) such that the pair \((\mathbf{w},S)\) fails to induce an injective embedding \(\Phi_{\mathbf{w},S}\) is denoted by \(\mathscr{F}_{S}\).
Recall also that in order to prove that \(\mathscr{F}_{S}\) has zero Lebesgue measure and is nowhere dense, it suffices to show the same for the set \(\mathscr{F}_{a_{1},\ldots,a_{p}}\) for any \(a_{i},b_{i}\in H^{*}_{m_{i}}\), \(i\in[p]\).
In Section 3.1, we showed that \(\mathscr{F}_{a_{1},\ldots,a_{p}}=\mathscr{F}^{1}_{a_{1},\ldots,a_{p}}\cup \mathscr{F}^{2}_{a_{1},\ldots,a_{p}}\). But if \(p\geq p_{n}\), Proposition 3.16 demonstrates that \(\mathscr{F}^{1}_{a_{1},\ldots,a_{p}}\) has zero measure and is nowhere dense, and Proposition 3.10 demonstrates the same for \(\mathscr{F}^{2}_{a_{1},\ldots,a_{p}}\). Therefore, Theorem 3.6 is proved.
**Remark 3.17**.: _In Theorem 3.6 we demonstrated that if we use more than one element per coorbit, we need fewer than \(2d\) windows for the construction of an injective embedding. Unfortunately, the dimension of the target space can be greater than \(2d\), but in [8] we showed that a generic linear projection onto \(\mathbb{R}^{2d}\) preserves both the injectivity and stability properties._
2306.04087 | Accelerating 128-bit Floating-Point Matrix Multiplication on FPGAs | General Matrix Multiplication (GEMM) is a fundamental operation widely used
in scientific computations. Its performance and accuracy significantly impact
the performance and accuracy of applications that depend on it. One such
application is semidefinite programming (SDP), and it often requires binary128
or higher precision arithmetic to solve problems involving SDP stably. However,
only some processors support binary128 arithmetic, which makes SDP solvers
generally slow. In this study, we focused on accelerating GEMM with binary128
arithmetic on field-programmable gate arrays (FPGAs) to enable the flexible
design of accelerators for the desired computations. Our binary128 GEMM designs
on a recent high-performance FPGA achieved approximately 90GFlops, 147x faster
than the computation executed on a recent CPU with 20 threads for large
matrices. Using our binary128 GEMM design on the FPGA, we successfully
accelerated two numerical applications: LU decomposition and SDP problems, for
the first time. | Fumiya Kono, Naohito Nakasato, Maho Nakata | 2023-06-07T01:16:50Z | http://arxiv.org/abs/2306.04087v1 | # Accelerating 128-bit Floating-Point Matrix Multiplication on FPGAs
###### Abstract
General Matrix Multiplication (GEMM) is a fundamental operation widely used in scientific computations. Its performance and accuracy significantly impact the performance and accuracy of applications that depend on it. One such application is semidefinite programming (SDP), and it often requires binary128 or higher precision arithmetic to solve problems involving SDP stably. However, only some processors support binary128 arithmetic, which makes SDP solvers generally slow. In this study, we focused on accelerating GEMM with binary128 arithmetic on field-programmable gate arrays (FPGAs) to enable the flexible design of accelerators for the desired computations. Our binary128 GEMM designs on a recent high-performance FPGA achieved approximately 90GFlops, 147x faster than the computation executed on a recent CPU with 20 threads for large matrices. Using our binary128 GEMM design on the FPGA, we successfully accelerated two numerical applications: LU decomposition and SDP problems, for the first time.
Matrix Multiplication, binary128, Systolic Arrays, Intel FPGA SDK for OpenCL, Performance Benchmarking, LU Decomposition, Semidefinite Programming
## I Introduction
General Matrix Multiplication (GEMM) is a crucial computation in various scientific and engineering algorithms. Its precision plays a significant role in determining the accuracy of the target applications. Different applications have different precision requirements for the number of bits used to represent floating-point (FP) numbers. As defined by the IEEE 754 standard [1], FP formats and arithmetic are available in various precisions, including binary16 (also known as half-precision), binary32 (single-precision), binary64 (double-precision), and binary128 (quadruple-precision). The suffix in each format indicates the number of FP bits supported by the respective format, with higher numbers indicating higher precision.
In machine learning (ML) using artificial neural networks, it has been shown that binary16 is sufficient for storing the weights of these networks. This has led to the development of hardware architectures that support highly parallel computation using binary16 arithmetic. One example is the TensorCore on recent NVIDIA graphics processing units (GPUs), designed for matrix multiplication with lower precision and has multiplication and accumulation performed in binary16 and binary32 arithmetics, respectively. Other ML accelerators, such as Google's TPUv3 [2], also support the bfloat16 format, an extended half-precision FP format.
On the other hand, operations with higher precision, such as binary128, are also required by specific applications. One example is Semidefinite Programming (SDP), a natural extension of linear programming that aims to minimize linear functions subject to certain constraints. In semidefinite programming, it is common to solve given problems using the Primal-Dual Interior-Point Methods (PDIPM) [3]. However, with these methods, SDP is numerically unstable near the optimal solution because the variable matrices become singular [4, 5]. Therefore, Nakata [5] proposed using higher-precision numbers to solve optimization problems with SDP while maintaining the desired numerical accuracy.
However, since few processors, such as the IBM z13 processor [6], support binary128 in hardware, the performance of applications relying on binary128 arithmetic is typically 100 to 1000x slower than that of applications relying only on binary64. Therefore, the acceleration of binary128 arithmetic is crucial for accelerating SDP.
In this research, we implemented GEMM in binary128 arithmetic on Field Programmable Gate Arrays (FPGAs). The advantage of targeting FPGAs is their flexibility in optimizing accelerators for target computations. Additionally, while GPUs are designed with many parallel processors and fast memories, FPGAs are simply an array of logic gates that allow us to reconfigure designs and how they work during computation. This characteristic of FPGA enables us to create a suitable design for specific calculations while minimizing the use of hardware resources. As a result, energy consumption during computation on FPGAs is typically much lower than on GPUs.
Nagasu _et al_. [7] compared the energy consumption of FPGA and GPU computations for the same tsunami modeling application and demonstrated the effectiveness of FPGAs. They showed that their implementation on the Arria10 FPGA consumed approximately 5x less energy than the initial implementation on an AMD Radeon GPU.
Implementing logic designs on FPGAs is typically more challenging than parallel programming on GPUs because logic designs must be written in Hardware Description Language (HDL). To alleviate this difficulty, we adopt Intel's OpenCL-based high-level synthesis (HLS) techniques for our binary128 GEMM designs in this research.
To design high-performance GEMM operations on FPGAs, it is essential to utilize pipeline parallelism and create a
systolic array [8]. Matteis _et al_. [9] developed FBLAS, a numerical library inspired by the open-source implementation of the Basic Linear Algebra Subroutines (BLAS) for Intel FPGAs. FBLAS also provides a systolic array version of its GEMM implementation. In this research, we extended it to support various FP precisions.
The OpenCL standard supports arithmetic operations only in binary32 and binary64 [1]; it supports neither binary128 nor higher-precision arithmetic. While a recent version of the OpenCL SDK for Intel FPGAs supports specific FP precisions, its main target is binary16 and bfloat16 for machine learning.
In this research, we adopted customized FP units developed by Nakasato _et al_. [10] that support various FP formats, including the binary128 format. Nevertheless, this paper focuses on developing and evaluating FP addition and multiplication units in the binary128 format and on accelerating binary128 GEMM operations.
The main contributions of this research are as follows:
* We implemented fast GEMM designs in the binary128 format on FPGAs.
* We developed an application interface compatible with the standard BLAS library.
* We evaluated the performance of our binary128 GEMM designs with practical applications.
While this research builds upon the preceding work, we successfully integrated our binary128 GEMM designs into MPLAPACK [11], an extension of all BLAS and LAPACK (Linear Algebra PACKage) routines to support multi-precision FP operations, including binary128. Therefore, the designs can also be immediately used in numerical applications that utilize MPLAPACK as a backend.
Our binary128 GEMM design implemented on the Terasic DE10a-Net Agilex FPGA achieved 90.9GFlops by utilizing the maximum hardware resources. Furthermore, integrating it into the practical applications of blocked LU decomposition and SDP yielded speed-ups of up to 5.3x and 2x, respectively, compared with computation on a recent Intel i9-10900 CPU parallelized with 20 OpenMP threads.
This paper first presents a brief specification of our binary128 GEMM designs. Then, to inspect the fundamental characteristics of the designs, we evaluate their performance on the Terasic DE5a-Net Arria10 FPGA. Based on the analysis obtained by this evaluation, we focus on more practical benchmarking using the Nallatech (BittWare) 520N Stratix10 FPGA, which is installed in a supercomputer system in operation, and the Agilex FPGA, the latest high-end Intel FPGA. Finally, we discuss the applications of our binary128 GEMM design by integrating it into blocked LU decomposition and SDP problems.
## II Related Works
The study of GEMM in high-precision arithmetic is a popular topic in multiple-precision research, but previous studies have mainly focused on CPU or GPU implementations.
Nakasato [12] accelerated the GEMM routines for binary32, binary64, and 128-bit double-double (DD) [13, 14] precision on the AMD Cypress GPU. Nakata _et al_. [15] presented a fast GEMM implementation in DD precision on NVIDIA GPUs and applied it to the algorithm in SDP. Kouya [16] implemented LU decomposition supporting multi-precision floating-point numbers such as DD, triple-double (TD), and quad-double (QD); with AVX vectorization, the implementation successfully accelerated the LU decomposition on Intel and AMD CPUs.
Joldes _et al_. [17] developed CAMPARY, a multi-precision arithmetic library for NVIDIA GPUs based on the CUDA programming model, which supports DD, TD, and QD precision. Isupov and Knyazkov have been working on MPRES-BLAS for NVIDIA GPUs [18], which uses an interval evaluation of the fractional representation of numbers in the Residue Number System (RNS) [19] to represent arbitrary-precision numbers. Among the CAMPARY, CUMP [20], and MPRES-BLAS GEMM implementations, MPRES-BLAS was the fastest at 424-bit precision.
Mukunoki _et al_. [21] also proposed a fast GEMM implementation in binary128 or lower precision based on the Ozaki scheme [22], an accurate GEMM algorithm that represents FP numbers as non-overlapping sums of FP numbers. They evaluated the performance of their method on CPUs and discussed prospects for extending it to GPUs.
However, research on GEMM in high-precision arithmetic on FPGAs has yet to be seen. Licht _et al_. [23] targeted Xilinx FPGAs to implement GEMM by using systolic array designs. Afterward, they extended their GEMM to support various FP precisions of up to 1024 bits [24], building on the implementation of the Multiple Precision Floating-Point Reliable (MPFR) [25] library. Although their motivation lies in the acceleration of an SDP solver, a practical evaluation of their designs has yet to be done.
## III Matrix Multiplication for FPGA
### _Implementation_
The GEMM routine in BLAS performs matrix multiplication for matrices \(A\) and \(B\) as follows:
\[C=\alpha AB+\beta C, \tag{1}\]
where \(\alpha\) and \(\beta\) are scalar parameters. Listing 1 presents the API in C language to the GEMM routine for multi-precision FP numbers called _Rgemm_ provided by MPLAPACK [11]. Note that _Float128 is the standard data type in C language for binary128, as defined in ISO/IEC TS 18661-3:2015 [26]. MPLAPACK utilizes _Float128 through the GNU C++ compiler via GNU extensions. The first two arguments specify the transpose operation of matrices \(A\) and \(B\). The three arguments _lda_, _ldb_, and _ldc_ represent the leading dimensions of matrices \(A\), \(B\), and \(C\), respectively.
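Listing 1 itself is not reproduced here; the following prototype is a sketch of the 13-argument, BLAS-style interface described above, with argument names and the integer type chosen by us for illustration (MPLAPACK uses its own integer typedef):

```c
/* Sketch of the Rgemm interface: C = alpha*op(A)*op(B) + beta*C,
   where op() is the optional transpose selected by transa/transb
   ("N" or "T"); _Float128 is GCC's binary128 type. */
void Rgemm(const char *transa, const char *transb,
           long m, long n, long k,
           _Float128 alpha, _Float128 *a, long lda,
           _Float128 *b, long ldb,
           _Float128 beta, _Float128 *c, long ldc);
```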
In the practical implementation of the GEMM routine, calculating the matrix multiplication \(AB\) is a critical part of its computation. Assume that we have two matrices \(A\) and
\(B\) with sizes \(m\times k\) and \(k\times n\), respectively. Then, an element of the resulting matrix \(C^{\prime}=AB\) is computed by the summation as follows:
\[C^{\prime}_{ij}=\sum_{p=0}^{k-1}A_{ip}\times B_{pj}, \tag{2}\]
where \(i,j\), and \(p\) are indices ranging \(0\leq i<m\), \(0\leq j<n\), and \(0\leq p<k\), respectively. The calculation of the whole matrix \(C^{\prime}\) involves a 3-level nested loop.
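For reference, a minimal C sketch of this 3-level nested loop in binary128 (our illustration; column-major storage with leading dimensions equal to the row counts is assumed):

```c
/* Naive reference for C' = A * B in binary128 (Eq. (2)).
   A is m x k, B is k x n, and C' is m x n, all column-major. */
void matmul_ref(long m, long n, long k,
                const _Float128 *A, const _Float128 *B, _Float128 *C)
{
    for (long j = 0; j < n; j++)
        for (long i = 0; i < m; i++) {
            _Float128 acc = 0;
            for (long p = 0; p < k; p++)
                acc += A[i + p * m] * B[p + j * k];  /* A_ip * B_pj */
            C[i + j * m] = acc;
        }
}
```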
Fig. 1 illustrates the design of a systolic array for our binary128 GEMM design derived from FBLAS [9]. This design is characterized by a 2-D array of processing elements (PEs) arranged as \(P_{R}\times P_{C}\). Each PE calculates Eq. (2) for assigned sub-matrices of \(A\) and \(B\). The sizes of the sub-matrices of \(A\) and \(B\) and the value of \(P_{R}\times P_{C}\) determine how the input matrices are partitioned.
In the computation flow, the input matrices \(A\) and \(B\) are read from main memory via the Read module and sent to the PEs through the Feed module. \(A\) is sent by column, and \(B\) is sent by row, assuming that both matrices are not transposed. They are first received by PEs with IDs \((P_{R}-1,0)\) or \((0,P_{C}-1)\) and forwarded to the adjacent PEs in the systolic array on each clock cycle. Each PE accumulates the result of a multiply-add operation for the same element in \(C^{\prime}\) and sends it to the Drain module, which is eventually collected by the Store module to be written back to the main memory.
More specifically, FBLAS is a generator of OpenCL kernels for the systolic array. The generated systolic array consists of four OpenCL kernels: two kernels that combine the Read and Feed modules for \(A\) and \(B\), one Store kernel for \(C\), and a main kernel for the array of PEs and the Drain module. The main kernel explicitly calls a function for one PE in a loop. By fully unrolling the loop, the main kernel defines the systolic array. Because the computation task of a PE is just a multiply-add operation, we can replace the multiply-add operation in the original design with any multiply-add unit for a desired FP format. This enables us to create a systolic array design corresponding to the designated precision.
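A schematic of this structure is sketched below in plain C standing in for the FBLAS-generated OpenCL kernels; all names and the \(8\times 8\) layout are illustrative, not the actual kernel code:

```c
#define P_R 8
#define P_C 8

/* One PE's task is a single multiply-add; in the real design this is
   replaced by the customized binary128 multiply-add unit. */
static _Float128 pe_madd(_Float128 acc, _Float128 a, _Float128 b)
{
    return acc + a * b;
}

/* One cycle of the main kernel: fully unrolling both loops instantiates
   P_R x P_C multiply-add units, which defines the systolic array.
   a_feed and b_feed model the values forwarded by the Feed pipelines. */
void systolic_step(_Float128 acc[P_R][P_C],
                   const _Float128 a_feed[P_R],
                   const _Float128 b_feed[P_C])
{
    for (int r = 0; r < P_R; r++)      /* fully unrolled in OpenCL */
        for (int c = 0; c < P_C; c++)  /* fully unrolled in OpenCL */
            acc[r][c] = pe_madd(acc[r][c], a_feed[r], b_feed[c]);
}
```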
In addition, to replace the multiply-add operation, we modify and extend the other three kernels for the Read, Feed, and Store modules to support a wider memory bus for binary128 arithmetic. We also extend the original kernels to optimize load and store operations from DRAM. The Read and Feed kernels are equipped with a memory buffer in front of the Feed module. In the original design, the memory buffer is called a memory tile and explicitly instantiated as a 1-D array. The memory tile acts as a cache memory to store a sub-matrix of \(A\) and reuse the sub-matrix many times. The exploitation of the memory tile reduces the pressure on the memory bandwidth of DRAM and improves the performance of our binary128 GEMM designs, as shown in the later section.
The number of PEs in the present systolic array is \(P_{R}\times P_{C}\). We instantiate \(P_{R}\times P_{C}\) binary128 multiply-add units. The additional computations in the definition of the GEMM, as shown in Eq. (1), require two scalar-matrix multiplications and one matrix addition, which are very costly in a GEMM design on an FPGA. In the present systolic array, we would need an additional \(P_{C}\) multiply units for \(\alpha A\), a load unit for \(C\), \(P_{C}\) multiply units for \(\beta C\), and \(P_{C}\) add units for the summation of \(\alpha A\) and \(\beta C\). Except for the multiply units for \(\alpha A\), which can be merged with the Feed module, the other units would only be activated in the final stage of the GEMM operation at the Store module. Therefore, in this research, we only calculate Eq. (2) on an FPGA, while the host CPU handles the transpose operations and the other additional operations involving \(\alpha\) and \(\beta\). By supporting those additional operations on the host, we develop an API that is compatible with the standard _Rgemm_ provided by MPLAPACK. It enables us to use our binary128 GEMM designs immediately in numerical applications with minimal changes.
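A sketch of how such a wrapper can split the work follows; fpga_matmul is a hypothetical name for the device entry point computing only Eq. (2), transposes and leading dimensions are omitted, and matrices are assumed dense and contiguous:

```c
#include <stdlib.h>

/* Hypothetical FPGA entry point computing Cp = A * B only (Eq. (2)). */
void fpga_matmul(long m, long n, long k,
                 const _Float128 *A, const _Float128 *B, _Float128 *Cp);

/* Rgemm-compatible wrapper for the non-transposed case: the FPGA
   computes A*B, and the host applies alpha and beta to finish Eq. (1). */
void Rgemm_fpga(long m, long n, long k,
                _Float128 alpha, const _Float128 *A, const _Float128 *B,
                _Float128 beta, _Float128 *C)
{
    _Float128 *Cp = malloc(sizeof(_Float128) * (size_t)m * (size_t)n);
    fpga_matmul(m, n, k, A, B, Cp);           /* offloaded part */
    for (long idx = 0; idx < m * n; idx++)    /* host-side epilogue */
        C[idx] = alpha * Cp[idx] + beta * C[idx];
    free(Cp);
}
```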
### _Performance Models_
Here, we summarize the performance models for our binary128 GEMM design. In this section, \(f\) represents the clock frequency of the logic circuit design in MHz.
#### III-B1 Performance of GEMM
The peak performance of the designs depends on the layout of the systolic array, as shown in Fig. 1. When we use \(P_{R}\times P_{C}\) PEs, the peak performance \(F_{\rm peak}\) (GFlops) is given by Eq. (3).
Fig. 1: Systolic Array Design for the GEMM operation
\[F_{\rm peak}=\frac{2\times P_{R}\times P_{C}\times f\times 10^{6}}{10^{9}} \tag{3}\]
The measured performance \(F_{\rm perf}\) of the designs in GFlops is calculated by Eq. (4), where \(T_{\rm exec}\) is the execution time in seconds.
\[F_{\rm perf}=\frac{2mnk}{T_{\rm exec}\times 10^{9}} \tag{4}\]
As in Eq. (2), \(m\), \(n\), and \(k\) denote the matrix size parameters. For the multiplication of \(n\times n\) square matrices, the number of FP operations is \(2n^{3}\).
#### III-B2 Memory Bandwidth Requirement
The performance of the designs is also affected by the memory bandwidth of an FPGA board. A \(P_{R}\times P_{C}\) systolic array takes \(P_{R}+P_{C}\) inputs, conveyed by the vertical and horizontal Feed pipelines, at every cycle. Thus, the required memory bandwidth \(B_{\rm req}\) (GB/s) is given by Eq. (5).
\[B_{\rm req}=\frac{(P_{R}+P_{C})\times f\times 10^{6}\times N_{\rm Byte}}{10^{9}} \tag{5}\]
\(N_{\rm Byte}\) represents the word size, which is 16 bytes in the present work. If the systolic array consists of \(8\times 8\) PEs, \(B_{\rm req}\) equals \(256f\times 10^{-3}\) GB/s. For example, the requirement \(B_{\rm req}\) becomes 51.2GB/s for a design whose clock frequency \(f\) is 200MHz. To fully utilize all PEs in the designs, \(B_{\rm req}\) must be smaller than the memory bandwidth of the target FPGA board.
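Both models reduce to a few lines of arithmetic; the following helper (ours) reproduces the numbers quoted above:

```c
#include <stdio.h>

/* Eq. (3): peak performance in GFlops of a P_R x P_C array at f MHz. */
static double f_peak(int pr, int pc, double f_mhz)
{
    return 2.0 * pr * pc * f_mhz * 1e6 / 1e9;
}

/* Eq. (5): required memory bandwidth in GB/s with 16-byte words. */
static double b_req(int pr, int pc, double f_mhz)
{
    return (pr + pc) * f_mhz * 1e6 * 16.0 / 1e9;
}

int main(void)
{
    /* 8 x 8 PEs at 200 MHz: 25.6 GFlops peak and 51.2 GB/s required. */
    printf("F_peak = %.1f GFlops\n", f_peak(8, 8, 200.0));
    printf("B_req  = %.1f GB/s\n", b_req(8, 8, 200.0));
    return 0;
}
```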
## IV Performance Evaluation
This section presents the performance evaluation of our binary128 GEMM designs on three FPGA systems.
### _Benchmarking Conditions_
#### IV-A1 Target FPGA Systems
Table I shows the specifications of the FPGAs used in this benchmarking: Terasic DE5a-Net Arria10, Nallatech (BittWare) 520N Stratix10, and Terasic DE10a-Net Agilex. The Stratix10 FPGA is installed in a computation node of Cygnus, a supercomputer system operating at the University of Tsukuba in Japan since 2019. We use the Intel FPGA SDK for OpenCL to design and implement our binary128 GEMM designs. A different host system hosts each FPGA, as specified in the bottom rows of Table I.
#### IV-A2 Evaluation Method
We first evaluate our binary128 GEMM designs for square matrices by scaling \(n\). We also evaluate the performance of multiplying non-square matrices with sizes \(m\times k\) and \(k\times n\) as a more realistic and practical evaluation. To calculate the performance in GFlops, Eqs. (3) and (4) are used. The computation time \(T_{\rm exec}\) in Eq. (4) is the average of three trials in each benchmark. As a target of comparison, we use a baseline of the _Rgemm_ routine executed on the host system of Agilex (i9-10900 CPU) with 20 threads by OpenMP parallelization.
Besides, we compare numerical accuracy with the _Regemm_ routine provided by MPLAPACK on a CPU. As shown in Eq. (6), we calculate the average L1 norm of the difference between two \(n\times n\) matrices as \(E_{\rm L1}\) throughout the evaluation.
\[E_{\rm L1}=\frac{\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}\left|C_{ij}^{F}-C_{ij}^{R} \right|}{n^{2}}, \tag{6}\]
In Eq. (6), \(C^{F}\) and \(C^{R}\) denote the result matrices computed by our FPGA implementation and by _Rgemm_, respectively. \(E_{\rm L1}\) allows us to determine how accurately our binary128 GEMM designs match the results of the reference implementation.
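For reference, the metric of Eq. (6) is a straightforward elementwise comparison; a sketch in C:

```c
/* Eq. (6): average L1 difference between the FPGA result CF and the
   Rgemm reference CR, both n x n and stored contiguously. */
_Float128 l1_error(long n, const _Float128 *CF, const _Float128 *CR)
{
    _Float128 sum = 0;
    for (long idx = 0; idx < n * n; idx++) {
        _Float128 d = CF[idx] - CR[idx];
        sum += (d < 0) ? -d : d;   /* |CF_ij - CR_ij| */
    }
    return sum / (_Float128)(n * n);
}
```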
To highlight the main characteristics of computational performance, we begin by evaluating the designs on the Arria10 FPGA in this section. The following section covers the performance evaluation of the designs on newer FPGAs, including Stratix10 and Agilex.
### _Benchmarking Results on Arria10_
#### IV-B1 Evaluation for Square Matrices
We present benchmarking results for our binary128 GEMM designs. The systolic array consists of PEs arranged in a square with \(P_{R}=P_{C}=2,4,\) and 8. Table II shows the logic synthesis results on the Arria10 FPGA system.
Our binary128 GEMM design requires more DSP blocks for larger PE arrays. Therefore, the number of available DSP blocks is the primary constraint for the design. The row labeled Fmax shows the clock frequency of each design, and the last row shows the corresponding peak performance \(F_{\rm peak}\) based on Eq. (3).
Fig. 2 shows the performance of each design on Arria10. The matrix size \(n\) ranges from 64 to 4096. The measured performance \(F_{\rm perf}\) of the designs with \(2\times 2\), \(4\times 4\), and \(8\times 8\) PEs reaches maxima of 1.88, 7.1, and 15.0GFlops, respectively. Since each PE can work independently for data streaming and operations on the systolic array, the performance is proportional to the number of PEs in the design.
However, with a small \(n\), the computation load for each PE is not sufficiently high to reach the maximum performance of the designs. The performance reaches its peak at a specific \(n\), such as \(n=2048\) for \(8\times 8\) PEs, and the scaling becomes flat at larger \(n\).

Fig. 2: Performance of our binary128 GEMM designs for square matrices on Arria10 FPGA
We then evaluate the numerical error \(E_{\mathrm{L1}}\) of computation results between our binary128 GEMM designs and the _Rgemm_ routine based on Eq. (6). \(E_{\mathrm{L1}}\) for \(n<512\) is distributed between \(10^{-31}\) and \(10^{-30}\). As we set \(n\) to 4096, \(E_{\mathrm{L1}}\) increases to \(2.0\times 10^{-28}\). The layout of PEs does not make a significant difference in \(E_{\mathrm{L1}}\).
Regarding the comparison between \(F_{\mathrm{perf}}\) and \(F_{\mathrm{peak}}\), the ratio for the designs of \(2\times 2\), \(4\times 4\), and \(8\times 8\) PEs is 99.5%, 97.3%, and 58.2%, respectively. Recall that the memory bandwidth requirement \(B_{\mathrm{req}}\) is given by Eq. (5). Substituting the Fmax of each design (Table II) for \(f\) in Eq. (5), we find \(B_{\mathrm{req}}\) to be 15.1GB/s, 29.2GB/s, and 51.5GB/s for \(2\times 2\), \(4\times 4\), and \(8\times 8\) PEs, respectively.
Our Arria10 system has two DDR3 memories that provide 34.2GB/s of total bandwidth. This is sufficient for the designs of \(2\times 2\) and \(4\times 4\) PEs; as a result, their \(F_{\mathrm{perf}}\) is close to the peak. However, the design of \(8\times 8\) PEs requires 51.5GB/s, which is 1.5x larger than the available bandwidth. Therefore, the design of \(8\times 8\) PEs is limited by memory transfer from DRAM, and we see that its ratio between \(F_{\mathrm{perf}}\) and \(F_{\mathrm{peak}}\) is much lower than that of the designs with fewer PEs.
#### IV-B2 Effects of Memory Buffer for the Systolic Array
To enhance performance, we instantiate more PEs in our binary128 GEMM design. However, the memory bandwidth of the FPGA board poses a limitation. Therefore, the systolic array generated by FBLAS has a module called a memory tile in front of the Feed module. It is a local memory buffer working as a cache memory for each PE to mitigate the memory bandwidth requirement given in Eq. (5). As the systolic array incorporates a larger number of PEs, the size of \(M_{\texttt{Tile}}\) must be increased to provide a larger buffer in our binary128 GEMM designs.
The results presented in Sec. IV-B1 were all obtained by the designs with \(M_{\texttt{Tile}}=32\). We then conduct additional benchmarking to further investigate the potential performance improvement by adopting a larger value of \(M_{\texttt{Tile}}\). Fig. 3 illustrates the performance of the GEMM by using the designs of \(4\times 4\) and \(8\times 8\) PEs where \(M_{\texttt{Tile}}\) ranges from 24 to 256.
The figure shows the performance of each design for four matrices where \((k,n)=(4096,512),(4096,2048)\), \((2048,2048)\), \((4096,4096)\) assuming \(m=k\). Computations using the design of \(4\times 4\) PEs are not affected by the change of \(M_{\texttt{Tile}}\) since their \(B_{\texttt{req}}\) (30.25GB/s) is within the board memory bandwidth (34.2GB/s).
On the other hand, we see that using a larger \(M_{\texttt{Tile}}\geq 64\) improves the performance of the \(8\times 8\) PEs. In those cases, the performance increases by 1.5 to 2x compared to the design with \(M_{\texttt{Tile}}=32\) and reaches its peak at \(M_{\texttt{Tile}}=128\). In contrast, the smaller \(M_{\texttt{Tile}}\leq 24\) causes even lower performance. For the square matrix with \(n=4096\), we achieved 21.6GFlops at \(M_{\texttt{Tile}}=128\), 84% of \(F_{\texttt{peak}}\) in Table II. We also see that this \(M_{\texttt{Tile}}\) scaling is effective in multiplying tall-skinny matrices where \(n\) is relatively much smaller than \(k\). The larger \(M_{\texttt{Tile}}\) reduces a bottleneck of the current implementation to some extent.
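To make the role of \(M_{\texttt{Tile}}\) concrete, the following plain-C caricature (ours, not the FBLAS code) shows the reuse pattern: each word of a block of \(A\) is fetched from DRAM once into an on-chip buffer and then streamed to the PEs repeatedly, so that on-chip reads replace most DRAM reads and the demand of Eq. (5) is relaxed:

```c
#define M_TILE 128

/* Caricature of the Read/Feed path with a memory tile: each DRAM word
   of the A block is read once but fed to the systolic array `reuse`
   times, reducing DRAM traffic by roughly that factor. */
void feed_with_tile(long len, long reuse,
                    const _Float128 *a_dram, void (*feed_pe)(_Float128))
{
    _Float128 tile[M_TILE];                    /* on-chip buffer */
    for (long t = 0; t < len; t += M_TILE) {
        for (long i = 0; i < M_TILE; i++)
            tile[i] = a_dram[t + i];           /* one DRAM read per word */
        for (long r = 0; r < reuse; r++)
            for (long i = 0; i < M_TILE; i++)
                feed_pe(tile[i]);              /* on-chip reads only */
    }
}
```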
#### IV-B3 Evaluation for Non-square Matrices
In the computation of square matrices, we found that the performance of our binary128 GEMM designs was ideal, except for the memory bandwidth constraint caused by the large PE layout. We then evaluate the performance for non-square matrices. Fig. 4 shows the results of multiplying \(m\times k\) and \(k\times n\) matrices, where \(m\) and \(k\) are fixed at \(m=k=4096\) and only \(n\) is varied between 32 and 4096. In this evaluation, we set \(M_{\texttt{Tile}}=128\) in all designs.
In the case of multiplication with rectangular matrices, the current systolic array design is ineffective due to load imbalance among PEs. However, when the layout of PEs is small, such as \(2\times 2\) PEs, the performance does not drop even for multiplication with \(4096\times 128\) compared to \(4096\times 4096\).
In contrast, the multiplication on the design of \(8\times 8\) PEs clearly shows a performance degradation for any \(n\). In particular, for the computations of tall-skinny matrices where \(n\) is much smaller than \(k\), the design of \(8\times 8\) PEs performs far from its maximum capacity; the performance is as low as that of the \(2\times 2\) PEs. When we similarly fix \(m\) and \(n\) to \(m=n=4096\) and scale \(k\) between 32 and 4096, the computation of each design shows the same result as in Fig. 4.
### _Benchmarking Results on Stratix10 and Agilex_
We then evaluate our binary128 GEMM designs on the Stratix10 and Agilex FPGAs under the same benchmarking conditions. Based on the previous evaluation on Arria10, the designs targeted in this section are \(8\times 8\) PEs with \(M_{\texttt{Tile}}=128\). Additionally, we implemented a design of \(8\times 16\) PEs with \(M_{\texttt{Tile}}=256\) and 512 to utilize the abundant hardware resources on Stratix10 and Agilex. However, their resources are still insufficient to implement \(16\times 16\) PEs due to the limited number of available logic cells.
Table III summarizes the logic synthesis results of our designs implemented on each FPGA. As we increase the size of the memory buffer on each PE by scaling \(M_{\texttt{Tile}}\), the utilization of memory bits and RAM blocks on the FPGAs accordingly increases. However, this does not cause problems on the Stratix10 and Agilex FPGA systems when we set \(M_{\texttt{Tile}}=512\) for \(8\times 16\) PEs. As a result, Fmax and \(F_{\texttt{peak}}\) for our binary128 GEMM designs on Stratix10 and Agilex are much higher than those on Arria10.
Fig. 5 shows the performance of our binary128 GEMM designs on the two FPGAs. On the Stratix10 and Agilex FPGA systems, we could execute GEMM with sizes up to \(n=24576\) thanks to their large board memory. For comparison, we plot the performance on the host CPU (i9-10900) of the Agilex FPGA system.
We first focus on the results for Stratix10. The design of \(8\times 8\) PEs with \(M_{\texttt{Tile}}=128\) almost reached its peak performance at \(n=4096\). The performance for larger \(n\) settles at 32.8GFlops, 99% of the peak. The \(8\times 16\) PEs with \(M_{\texttt{Tile}}=256\) similarly reached a peak of 45.0GFlops at around \(n=12000\).
Fig. 4: Performance of our binary128 GEMM designs on Arria10 FPGA for non-square matrices where \(n\) ranges from 32 to 4096
Fig. 3: Performance of our binary128 GEMM designs on Arria10 FPGA with \(M_{\texttt{Tile}}=24\) to 256
However, compared to the design of \(8\times 8\) PEs, its performance improvement is sluggish because the Fmax of the \(8\times 16\) PEs significantly dropped and led to a low \(F_{\mathrm{peak}}\) of the design.
Examining the performance of the designs on Agilex, the optimization of the PE layout and \(M_{\mathtt{Tile}}\) successfully contributed to the performance improvement. While the design of \(8\times 8\) PEs with \(M_{\mathtt{Tile}}=128\) certainly performs effectively, that of \(8\times 16\) PEs with \(M_{\mathtt{Tile}}=512\) is much better. The computation by the \(8\times 16\) PEs achieved 90.9GFlops, 91% of the peak, for the largest matrix size of \(n=24576\), in contrast to the \(8\times 8\) PEs, which yield 50.4GFlops at \(n=18000\), about 96% of their peak.
The importance of the size of \(M_{\mathtt{Tile}}\) can be easily understood by comparing it with a reference plot for the design of \(8\times 16\) PEs with \(M_{\mathtt{Tile}}=128\) on Agilex. If we set \(M_{\mathtt{Tile}}=128\), the performance of the design is at most 77GFlops, which is only 77% of the peak. In particular, a trench in the plot at \(n=16384\) results in a significant performance drop to 54.1GFlops around that point. One reason may be that those specific large matrices accidentally cause accesses that stride over different memory banks on the four independent DIMMs on the Agilex FPGA board. However, the memory buffer exploited by the larger \(M_{\mathtt{Tile}}\) (e.g., 512) helps to alleviate problems related to unexpected memory access patterns and facilitates steady performance improvement.
Finally, our binary128 GEMM design achieves very high performance compared to the _Rgemm_ routine executed on the CPU with 20 threads, whose performance settles at 650MFlops for \(n>1024\). Therefore, we have a significant advantage in processing large matrices. The design of \(8\times 16\) PEs with \(M_{\mathtt{Tile}}=512\) on Agilex is 145x faster than the computation on a recent CPU with the maximum number of threads.
In addition, we show the performance of our binary128 GEMM designs for non-square matrices on the Stratix10 and Agilex FPGAs. Fig. 6 shows the benchmarking result when \(m\) and \(n\) are fixed to \(m=n=16384\) and \(k\) is scaled between 32 and 16384. As in the benchmarking on Arria10, the performance drop for ratios of \(n:k<2:1\) is not significant. However, for tall-skinny matrices where \(k\) is particularly small, like \(k\leq 128\), even the performance on Agilex is just a few GFlops. As a result, the advantage of our binary128 GEMM designs compared to computation on CPUs is lost.
## V Application of binary128 Matrix Multiplication
Once we have our binary128 GEMM designs based on the systolic array architecture, we can accelerate practical applications that require binary128 GEMM operations. Here we describe two applications of our implementation with performance evaluation. In this section, \(\mathbb{R}^{n\times n}\) denotes the \(n\times n\) real matrices.
### _Blocked LU Decomposition_
#### V-A1 Problem Specification of LU Decomposition
LU decomposition is a fundamental operation in numerical analysis that factorizes a given square matrix \(A\) as a product of lower and upper triangular matrices, \(A=LU\), where \(L\) and \(U\) are lower and upper triangular matrices, respectively. Based on BLAS routines, the LU decomposition in binary64 precision is implemented as a routine called _dgetrf_ in LAPACK. The _dgetrf_ routine adopts a blocked LU decomposition algorithm thoroughly investigated and implemented for every supercomputer in the last four decades; its variation is the most famous parallel benchmarking program, LINPACK. The blocked LU decomposition algorithm effectively solves dense linear equations on accelerator architectures like GPUs since its computation is mainly processed as GEMM operations.
Let us consider the LU decomposition for a matrix \(A\in\mathbb{R}^{n\times n}\) with the block size \(b\), as shown in Fig. 7. Then, we obtain \(L\) and \(U\) on \(A\) by repeating the following procedure recursively.
1. Divide \(A\) into 4 sub-matrices: \(A_{11}\in\mathbb{R}^{b\times b}\), \(A_{12}\in\mathbb{R}^{b\times(n-b)}\), \(A_{21}\in\mathbb{R}^{(n-b)\times b}\), and \(A_{22}\in\mathbb{R}^{(n-b)\times(n-b)}\).
2. Perform decomposition \(A_{11}=L_{11}U_{11}\).
3. Solve \(U_{12}\) that satisfies \(L_{11}U_{12}=A_{12}\).
4. Solve \(L_{21}\) that satisfies \(L_{21}U_{11}=A_{21}\).
5. Update \(A_{22}\) by \(A_{22}=A_{22}-L_{21}U_{12}\).
6. If \(n-b>0\) still holds, go back to step 1 after substituting \(A\) with \(A_{22}\).
Fig. 5: Performance of our binary128 GEMM designs for square matrices on Stratix10 and Agilex FPGAs
Fig. 6: Performance of our binary128 GEMM designs on Stratix10 and Agilex FPGAs for non-square matrices where \(k\) ranges from 32 to 16384
In step 5, we have matrix multiplication \(L_{21}U_{12}\). When \(b=1\), the blocked LU decomposition is reduced to a non-blocked routine called _dgetrf2_ in LAPACK. When \(b\) is large enough, the computation of _dgetrf_ is dominated by GEMM operations in step 5. Accordingly, it can be accelerated by GEMM routines on GPUs or FPGAs.
In MPLAPACK [11], all BLAS and LAPACK routines are extended to support multi-precision FP operations, including binary128. We modify an extended version of _dgetrf_ in MPLAPACK called _Rgetrf_, which calls the _Rgemm_ routine. In this paper, we replace calls to _Rgemm_ with our binary128 GEMM operations executed on FPGAs.
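A condensed C sketch of steps 1-6 follows; panel_lu, trsm_lower, trsm_upper, and gemm_update are hypothetical stand-ins for the panel factorization (_Rgetrf2_), the triangular solves, and our FPGA-backed GEMM update, and pivoting is omitted for brevity:

```c
/* Hypothetical helpers; signatures are illustrative only. */
void panel_lu(long b, _Float128 *A11, long lda);                  /* step 2 */
void trsm_lower(long b, long r, const _Float128 *L11,
                _Float128 *A12, long lda);                        /* step 3 */
void trsm_upper(long r, long b, const _Float128 *U11,
                _Float128 *A21, long lda);                        /* step 4 */
void gemm_update(long m, long n, long k, const _Float128 *L21,
                 const _Float128 *U12, _Float128 *A22, long lda); /* step 5 */

/* Blocked LU of an n x n column-major matrix A with block size b. */
void lu_blocked_sketch(long n, long b, _Float128 *A, long lda)
{
    for (long p0 = 0; p0 < n; p0 += b) {          /* steps 1 and 6 */
        long bs = (n - p0 < b) ? (n - p0) : b;    /* panel width */
        long r  = n - p0 - bs;                    /* trailing size */
        panel_lu(bs, &A[p0 + p0 * lda], lda);
        if (r > 0) {
            trsm_lower(bs, r, &A[p0 + p0 * lda],
                       &A[p0 + (p0 + bs) * lda], lda);
            trsm_upper(r, bs, &A[p0 + p0 * lda],
                       &A[(p0 + bs) + p0 * lda], lda);
            /* A22 = A22 - L21 * U12: the GEMM offloaded to the FPGA
               (alpha = -1 and beta = 1 are handled on the host). */
            gemm_update(r, r, bs,
                        &A[(p0 + bs) + p0 * lda],
                        &A[p0 + (p0 + bs) * lda],
                        &A[(p0 + bs) + (p0 + bs) * lda], lda);
        }
    }
}
```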
The number of FP operations in the LU decomposition algorithm is \(\frac{2n^{3}}{3}-\frac{n^{2}}{2}+\frac{5n}{6}\)[27]. Here, we regard it as \(\frac{2n^{3}}{3}\). Therefore, \({F_{\rm perf}^{\prime}}\) as shown in Eq. (7) gives the computation performance for the following evaluation.
\[{F_{\rm perf}^{\prime}}=\frac{2n^{3}}{3\times T_{\rm exec}\times 10^{9}} \tag{7}\]
#### V-A2 Evaluation of GEMM for LU Decomposition
We assume input \(n\times n\) matrices whose elements are given by random numbers in the range \([0.0,1.0)\); such matrices can be factorized by LU decomposition. We decompose the square matrices by applying our binary128 GEMM designs in the algorithm.
Based on the evaluation in the previous section, we measure the performance of blocked LU decomposition with the design of \(8\times 16\) PEs on the Agilex FPGA. We scale the size of matrices \(n\) and apply different block sizes \(b\) to find the optimal \(b\). As a comparison, we present a result on the design of \(8\times 16\) PEs on Stratix10 where \(b=128\). We also give another comparison with a result obtained through computation using only the host CPU (Intel Core i9-10900). In that computation, the _Rgetrf_ routine in MPLAPACK takes charge of the LU decomposition with 20 threads by OpenMP parallelization.
Fig. 8 summarizes our results of the LU decomposition. For Agilex FPGA, we present the performance in each case of \(b=108,128,144\). The black line shows the performance scaling obtained by the computation on the CPU.
We observe that \(b=108\) yields the best performance on the Agilex FPGA as represented by 2.5GFlops at \(n=20000\). However, with a large matrix of \(n=24576\), a higher \(b\) yields the peak. We can see in the figure that the highest performance is 2.6GFlops obtained with \(b=144\) for the matrix of \(n=24576\). On the other hand, the performance deteriorates when we apply even larger values of \(b\) such as \(b=192\) and \(256\), yielding 2.3GFlops and 2.1GFlops, respectively. Similarly, the design on the Stratix10 FPGA is superior to the CPU computation for \(n>3000\). Although it is slower than the computation on the Agilex FPGA, it finally reaches 2.2GFlops at \(n=20000\), which is 4.7x faster than that of the CPU.
Since the performance on FPGAs improves slowly by scaling \(n\) until the computation data saturate every PE, the performance on the CPU for small \(n\) is superior to that of FPGAs. When the matrix size is \(n=512\), the smallest size in this evaluation, the performance on the CPU is 278MFlops, which is 2 to 3x faster than that of FPGAs. We see that the intersection of the performance scaling between the CPU and FPGAs is around \(n=1536\). The performance of the CPU execution does not improve for \(n>2000\); it is 458MFlops at \(n=24576\). In contrast, the performance of the LU decomposition using our binary128 GEMM designs on the Agilex FPGA is up to 5.3x faster than that of the CPU.
We compare the decomposed matrices \(L\) and \(U\) calculated by the designs on FPGAs with the reference result calculated by the CPU by using Eq. (6). In the case of \(n\leq 1536\), where the CPU computation is still faster than FPGAs, we find \(E_{\rm L1}\sim 10^{-31}\). On the other hand, as we test the matrix of \(n=24576\), we find \(E_{\rm L1}\sim 10^{-28}\). This is consistent with what we expected, considering the previous evaluation of our binary128 GEMM design.
Finally, we compare our results with those of previous work by Kouya [16], who presented optimizations of LU decomposition using DD arithmetic. Specifically, they applied memory blocking and vectorization using AVX2 instructions and evaluated the performance on an Intel Core i9-10900X
Fig. 8: Performance of LU decomposition on Stratix10 and Agilex FPGAs
Fig. 7: Blocked LU Decomposition of a matrix \(A\) where the block size is \(b\)
CPU. According to their benchmarking for \(n=1024\), the performance of a conventional blocked LU decomposition code with \(b=64\) was 132MFlops. Similarly, the performance of a vectorized LU decomposition code with \(b=32\) was 363MFlops. In contrast, our result with the design of \(8\times 16\) PEs achieved 324.5MFlops for \(n=1024\) and \(b=108\) on an Agilex FPGA. Even the fastest design on the high-end FPGA is not significantly beneficial for small matrices. As a result, from a performance perspective for small matrices, our binary128 GEMM designs are inferior to the vectorized LU decomposition code on a CPU.
However, we emphasize that our designs on recent FPGAs are much more effective for large \(n\). With the current best performance of our LU decomposition being 2.5GFlops, our FPGA designs are superior for large matrices. It is also worth noting that our work and the work by Kouya [16] use different FP formats. DD arithmetic is well suited for recent high-end CPUs equipped with vector arithmetic units such as AVX2 and AVX512 instructions on the x86-64 ISA, Neon, and SVE instructions on the ARM ISA.
### _Semidefinite Programming (SDP)_
SDP is an optimization problem to minimize or maximize a given linear function under the constraint of symmetric semidefinite matrices. It has vast possible applications in engineering [28], finance [29], quantum chemistry [30], and physics [31], which have been investigated for a long time.
SDPA [32] is a numerical implementation and software package for SDP written in C++ [33]. The algorithm used in the SDPA is called the PDIPM, one of the iteration methods for SDP. Previous research [5] has extended the SDPA to support FP operations in various precisions, such as SDPA-GMP, -DD, and -QD [5]. The GMP version uses arbitrary-precision arithmetic, so a user must specify the precision beforehand. These extended versions of the SDPA use a part of MPLAPACK [11] as a back-end, mainly through calling the _Rgemm_ routine.
To determine which parameters are used in the GEMM routines called from the SDPA, we run the 92 problems provided by SDPLIB [34] using SDPA-binary128 with MPLAPACK. As we are currently focusing on accelerating GEMM routines in our work, we have modified the code to record the 13 arguments specified in Listing 1 for the _Rgemm_ routine during the execution of all problems.
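The instrumentation amounts to a thin shim around the routine; a sketch, where the log file name and format are ours and the two binary128 scalars are omitted from the record (printing them would require quadmath formatting):

```c
#include <stdio.h>

/* Shim recording Rgemm invocations: the size and layout arguments of
   Listing 1 are appended to a log before calling through to the real
   routine (prototype as sketched in Sec. III-A). */
void Rgemm_logged(const char *transa, const char *transb,
                  long m, long n, long k,
                  _Float128 alpha, _Float128 *a, long lda,
                  _Float128 *b, long ldb,
                  _Float128 beta, _Float128 *c, long ldc)
{
    FILE *fp = fopen("rgemm_args.log", "a");
    if (fp) {
        fprintf(fp, "%s %s %ld %ld %ld %ld %ld %ld\n",
                transa, transb, m, n, k, lda, ldb, ldc);
        fclose(fp);
    }
    Rgemm(transa, transb, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);
}
```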
Analysis of the collected data reveals that the SDPA frequently calls the _Rgemm_ routine with non-square matrices, and none of the leading dimensions of the matrices in the _Rgemm_ routine equal \(m\), \(n\), or \(k\). Of the over 800 combinations of arguments recorded in the collected data, we find only 50 combinations where the condition \(n=m=k=lda=ldb=ldc\) holds. As shown in Sec. IV-B3, the performance of our binary128 GEMM designs on FPGAs for non-square matrices is inferior to that for square matrices.
Based on our analysis, we evaluate the performance of the SDPA calling the _Rgemm_ operation accelerated by an FPGA only when either of two conditions is satisfied: (1) \(m\) equals \(n\), or (2) \(m\times n\times k\) is larger than a predefined parameter \(N_{\min}=10^{6}\). We test different values of \(N_{\min}\) and find that \(N_{\min}=10^{6}\) to \(10^{7}\) is optimal for the SDPA. We present performance benchmarking of the SDPA on the Agilex FPGA only for selected problems from SDPLIB, shown in Table IV. We present the elapsed time per iteration of the SDPA-binary128 on three systems: CPU-A (Intel Xeon Gold 5122, 4 cores @ 3.60GHz), CPU-B (Intel i9-10900 CPU, 10 cores @ 2.80GHz), and CPU-B using our binary128 GEMM design of \(8\times 16\) PEs on Agilex. The performance with the FPGA is 2 to 4x faster than that of CPU-A and roughly 1.5x faster than that of CPU-B. Note that the performance of SDPA-binary128 on CPUs is proportional to the number of cores on a given CPU.
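As a concrete illustration, this dispatch rule amounts to a few lines of host-side logic. The following Python sketch is illustrative only -- the function name is invented and our actual host code is not Python -- but it captures the decision exactly as described above.

```python
# Illustrative sketch (not the actual SDPA host code): decide whether a
# given Rgemm call should be offloaded to the FPGA.
N_MIN = 10**6  # threshold; values of 1e6 to 1e7 worked best in our tests

def offload_to_fpga(m: int, n: int, k: int, n_min: int = N_MIN) -> bool:
    """Offload only when the matrix is square (m == n) or the total work
    m*n*k is large enough to amortize the FPGA transfer overhead."""
    return m == n or m * n * k > n_min
```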
We verify that each solution computed by our binary128 GEMM design improves upon the solution obtained via double-precision calculations. As illustrated in Table V, we present the relative gaps, primal/dual feasible errors, and the numbers of iterations for problems theta2, theta3, theta4, theta6, and control11 from SDPLIB, as computed on CPU-B using binary128, FPGA (Agilex) using our design, the DD precision version [5], and the double precision version [32]. As smaller errors indicate better results, the solutions obtained via our binary128 GEMM design exhibit an improvement over those obtained via double precision calculations and are of comparable or slightly superior quality to those obtained via DD arithmetic. Our binary128 _Rgemm_ accelerated by FPGAs effectively accelerates the PDIPM for SDP problems.
### _Discussions on Application Performance_
The blocked LU decomposition algorithm _Rgetrf_ outlined in Sec. V-A employs the _Rgemm_ operation to compute \(A_{22}=L_{21}U_{12}\), where both matrices are non-square and skinny. \(L_{21}\) and \(U_{12}\) are matrices of dimensions \(b\times k\) and \(k\times b\), respectively. During the loop from step 2 to step 6, \(k\) is reduced as \(k=n-pb\), where \(p\) represents the iteration number starting from \(p=1\). At an initial phase of the algorithm, \(k\) is
large enough such that our binary128 GEMM designs on the Agilex FPGA effectively accelerate the performance of _Rgetrf_. However, as \(k\) becomes much smaller than \(n\) at a later phase of the algorithm, the acceleration by the Agilex FPGA becomes ineffective. The blocking size \(b\) also impacts the performance of the GEMM on FPGAs. For instance, if \(b\) is too small, the performance of _Rgemm_ on FPGAs is significantly reduced, as depicted in Figs. 2 and 5.
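For readers who want the structure of _Rgetrf_ at a glance, here is a minimal NumPy sketch of a right-looking blocked LU factorization without pivoting; the names are illustrative and this is not the MPLAPACK code. The last line of the loop is the trailing GEMM update that our FPGA design accelerates.

```python
import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A: np.ndarray, b: int) -> np.ndarray:
    """Right-looking blocked LU without pivoting, in place.
    The trailing update A22 -= L21 @ U12 is the GEMM that dominates
    the cost for large n."""
    n = A.shape[0]
    for p in range(0, n, b):
        e = min(p + b, n)
        # Unblocked LU of the diagonal block.
        for j in range(p, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            # U12 = L11^{-1} A12 (unit lower-triangular solve).
            A[p:e, e:] = solve_triangular(A[p:e, p:e], A[p:e, e:],
                                          lower=True, unit_diagonal=True)
            # L21 = A21 U11^{-1} (solve X @ U11 = A21 via the transpose).
            A[e:, p:e] = solve_triangular(A[p:e, p:e], A[e:, p:e].T,
                                          trans='T', lower=False).T
            # Trailing-submatrix GEMM update (the accelerated call).
            A[e:, e:] -= A[e:, p:e] @ A[p:e, e:]
    return A
```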
On the other hand, the PDIPM frequently calls the _Rgemm_ operation for small non-square matrices with a wide range of combinations of matrix sizes \(n\), \(k\), and \(m\). The largest matrix size in all problems presented in Table IV is only \(n=k=m=2000\), at which the performance of _Rgemm_ on FPGAs is half the peak performance. In most cases, the algorithm calls the _Rgemm_ operation for much smaller matrices, which are not executed on the FPGA. In a previous evaluation of a fast GEMM in DD arithmetic on GPUs by Nakata _et al_. [15], it was shown that the PDIPM in DD arithmetic accelerated by a GPU is more than 10x faster than on a CPU with four cores. According to their results, the size of matrices does not significantly affect the performance of _Rgemm_ on GPU. Therefore, they have always utilized the GPU, except for very small matrices.
Despite the superior performance of our accelerated _Rgemm_ implementation on the Agilex FPGA, which is more than 100x faster than the reference _Rgemm_ on a 10-core CPU, the two applications evaluated in this section are not substantially accelerated by the FPGA. Therefore, to make our binary128 GEMM designs on FPGAs more practical for real-world applications, we will need to extensively modify the systolic array design generated by rBLAS to address the performance degradation for small matrices and non-square matrices. A potential solution is to develop an extended version of _Rgemm_ that incorporates another level of blocking in the host code. Specifically, we could develop a new _Rgemm_ API based on a batched GEMM algorithm [35]. It would allow us to instantiate multiple systolic arrays on an FPGA to handle the batched GEMM algorithm. A hardware implementation of a batched GEMM algorithm, focusing on FP numbers of 64 bits and smaller, was reported by Ledoux _et al_. [36]. Their systolic array design leverages a stalling-free output scheme for the output matrix \(C\) to maximize the overlap of host data transfers with GEMM computations.
## VI Conclusion
In this paper, we presented our binary128 GEMM implementation, its evaluation on different Intel FPGAs, and its integration into numerical applications such as blocked LU decomposition and SDP. Our GEMM designs on FPGAs are based on the 2-D systolic array generated by the rBLAS library. Furthermore, by optimizing the memory buffer size, which stores reused data in fast on-chip memory, we successfully implemented \(8\times 16\) PEs to accelerate GEMM in binary128 arithmetic on FPGAs.
The benchmarking in this paper showed that our implementation is particularly advantageous when computing large matrices of size \(n>10^{4}\). For example, in our evaluation of our binary128 GEMM implementation on the Agilex FPGA, the performance was 90.9GFlops, 91% of the estimated peak performance of the design. This resulted in a 147x speed-up compared to the _Rgemm_ routine provided by MPLAPACK on an i9-10900 CPU with 20 threads.
Further benchmarking of various matrix multiplications showed that our designs are quite effective at accelerating GEMM operations for square and almost-square matrices. Consequently, LU decomposition can be solved faster using our implementation than with existing CPU routines. However, our design was not effective at handling tall-skinny matrices, which commonly arise in solving semidefinite programming problems.
Our current systolic array designs for GEMM operations are based on the OpenCL kernels generated by the latest version of rBLAS [37]. The rBLAS is designed to be flexible and accommodate various kernel configurations for different BLAS routines, such as General Matrix-Vector Multiplication (GEMV) and Triangular Solve with Multiple Right-Hand Sides (TRSM). However, in this study, we extracted only the systolic array kernels of GEMM for our work. Extending our work to other BLAS routines would be an interesting area for future research.
There is still room for optimization to improve the performance of our GEMM design when we use it to calculate tall-skinny matrix multiplications. Further optimizations are necessary to achieve the desired performance, especially for SDP problems. In future work, we will compare such optimized GEMM designs with other high-precision GEMM implementations on accelerators. Another area of future work will be to explore other FP formats in our GEMM designs by replacing the current binary128 multiply-add units with multiply-add units in different arithmetic.
## Acknowledgment
A part of this paper is based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was partly supported by MEXT as "Feasibility studies for the next-generation computing infrastructure" and KAKENHI Grant Number JP23K11133.
This research in part used computational resources of Cygnus provided by Multidisciplinary Cooperative Research Program in Center for Computational Sciences, University of Tsukuba.
We thank Prof. Ishikawa, High Energy Accelerator Research Organization, and Prof. Daisaka, Hitotsubashi University, Japan, for their help evaluating our designs on Stratix10.
|
2302.09054 | Hanging cables and spider threads | It has been known for more than 300 years that the shape of an inelastic
hanging cable, chain, or rope of uniform linear mass density is the graph of
the hyperbolic cosine, up to scaling and shifting coordinates. But given two
points at which the ends of the cable are attached, how exactly should we scale
and shift the coordinates? Many otherwise excellent expositions of the problem
are a little vague about that. They might for instance give the answer in terms
of the tension at the lowest point, but without explaining how to compute that
tension. Here we discuss how to obtain all necessary parameters. To obtain the
tension at the lowest point, one has to solve a nonlinear equation numerically.
When the two ends of the cable are attached at different heights, a second
nonlinear equation must be solved to determine the location of the lowest
point. When the cable is elastic, think of a thread in a spider's web for
instance, the two equations can no longer be decoupled, but they can be solved
using two-dimensional Newton iteration. | Christoph Börgers | 2023-02-16T16:51:07Z | http://arxiv.org/abs/2302.09054v1 | # Hanging cables and spider threads
###### Abstract
It has been known for more than 300 years that the shape of an inelastic hanging cable, chain, or rope of uniform linear mass density is the graph of the hyperbolic cosine, up to scaling and shifting coordinates. But given two points at which the ends of the cable are attached, _how_ exactly should we scale and shift the coordinates? Many otherwise excellent expositions of the problem are a little vague about that. They might for instance give the answer in terms of the tension at the lowest point, but without explaining how to compute that tension. Here we discuss how to obtain all necessary parameters. To obtain the tension at the lowest point, one has to solve a nonlinear equation numerically. When the two ends of the cable are attached at different heights, a second nonlinear equation must be solved to determine the location of the lowest point. When the cable is elastic, think of a thread in a spider's web for instance, the two equations can no longer be decoupled, but they can be solved using two-dimensional Newton iteration.
## Introduction
Figure 1: Examples of catenaries.

The shape of a hanging cable (or chain or rope) is called a _catenary_; see Fig. 1 for examples. In 1691, in three papers published back-to-back in the same journal [2, 9, 11], the _inelastic_ catenary was found to be described, up to shifting coordinates, by an equation of the form
\[\frac{y}{\lambda}=\cosh\frac{x}{\lambda}. \tag{1}\]
(The notation was different back then; the notion of hyperbolic cosine did not exist yet.) The parameter \(\lambda>0\) is a length, and we will refer to it as the _shape parameter_. It is also sometimes called the _catenary parameter_. Note that \(x\) and \(y\) must be scaled the same way. No hanging chain is described by \(y=\cosh(2x)\), if the same length unit is used for \(x\) and for \(y\).
Many excellent presentations of the derivation of (1) are available. I'll give my own below. To find \(\lambda\), one must (numerically) solve a nonlinear equation. This can be done using Newton's method, and with a suitably chosen initial guess, convergence is guaranteed for convexity reasons.
Countless variations have been studied. Perhaps the simplest is the question of what happens when the two ends are not anchored at the same height. For an example, see the left lower panel of Fig. 1, which depicts the Queshuachaca Rope Bridge in Peru.
The shape is still a hyperbolic cosine, but now the location of the lowest point is no longer obvious by symmetry. Two coupled nonlinear equations determine the shape parameter and the location of the lowest point. There is an algebraic trick by which the system can be decoupled, making it possible to solve first for the shape parameter, then for the location of the lowest point. Each requires the solution of a (scalar) nonlinear equation. For both equations, convergence of Newton's method is guaranteed, if the initial guess is chosen judiciously, again for convexity reasons.
An elastic cable, for instance a spider thread, is not described by a hyperbolic cosine, and in fact it is no longer possible to write \(y\) as a function of \(x\) explicitly at all. However, both \(x\) and \(y\) can still be written explicitly as functions of the arc length \(s\) in the absence of tension. This, too, has been known for centuries [5]. Again there is a system of two coupled nonlinear equations in two unknowns determining the shape parameter and the lowest point, but there is no longer an algebraic trick decoupling the equations. One must solve for the shape parameter and the lowest point simultaneously. Newton's method in two dimensions, starting with the parameter values for the inelastic case, does this reliably and with great efficiency.
### Inelastic cable with both ends at the same height
We think of a cable hanging in an \((x,y)\) plane that is perpendicular to the ground. The ends are attached at \((x,y)=(A,H)\) and \((x,y)=(B,H)\), and the length \(L\) of the cable is greater than \(B-A\), so the cable sags. We call the coordinates of the bottom point \(x_{\min}\) and \(y_{\min}\). This is the most standard catenary problem.
The conventional derivation of the hyperbolic cosine.Focus on a segment of the cable between the bottom point \((x_{\min},y_{\min})\) and some point on the right, \((x,y)\) with \(x>x_{\min}\) and \(y>y_{\min}\); see Fig. 2. We could similarly discuss a segment between \((x_{\min},y_{\min})\) and some point on the left, \((x,y)\) with \(x<x_{\min}\) and \(y>y_{\min}\), with analogous conclusions. The part of the cable to the right of \((x,y)\) pulls on this segment with a certain force tangential to the cable. We call the magnitude of this force the _tension_ at \((x,y)\), and denote it by \(T\).
If you are like me and feel a slight discomfort now, perhaps not being _entirely_ sure that you know what "tension" really means, and in what sense parts of the cable pull on other parts of the cable, then read the discussion of the _elastic_ cable below for a better explanation of the inelastic one.
The part of the cable to the left of \((x_{\min},y_{\min})\) pulls on the segment with a force of magnitude \(T_{\min}\), the tension at the bottom point \((x_{\min},y_{\min})\). Since the cable is stationary, the horizontal components of the two tension forces must balance:
\[T\cos\alpha=T_{\min} \tag{2}\]
where the definition of \(\alpha\in[0,\frac{\pi}{2})\) is indicated in Fig. 2. The notation \(T_{\min}\) is doubly appropriate; not only is it the tension at the lowest point, it is also the minimal tension, as eq. (2) shows.
Similarly, the weight of the cable segment between \((x_{\min},y_{\min})\) and \((x,y)\) must balance the vertical tension forces. Denote by \(\Delta s\) the length of the segment between \((x_{\min},y_{\min})\) and \((x,y)\). Also, denote by \(\rho\) the _linear mass density_, that is the mass per unit length, of the cable. We assume \(\rho\) to be constant. The mass of our segment of length \(\Delta s\) is \(\rho\Delta s\). Therefore the weight of the segment between \((x_{\min},y_{\min})\) and \((x,y)\) is \(\rho g\Delta s\), where \(g\) is the gravitational acceleration. This weight must be balanced by the vertical component of the tension force at \((x,y)\), since at \((x_{\min},y_{\min})\), the vertical component of the tension force is zero:
\[T\sin\alpha=\rho g\Delta s. \tag{3}\]
We divide (3) by (2) to obtain
\[\tan\alpha=\frac{\rho g\Delta s}{T_{\min}}.\]
If we think of \(y\) as a function of \(x\) in Fig. 2, then \(\tan\alpha\) is the derivative of \(y\) with respect to \(x\). We'll denote this derivative by \(y^{\prime}(x)\). So
\[y^{\prime}(x)=\frac{\rho g\Delta s}{T_{\min}}. \tag{4}\]
Figure 2: Tension forces on a segment of the hanging cable.
The arc length \(\Delta s\) is a function of \(x\):
\[\Delta s=\int_{x_{\rm min}}^{x}\sqrt{1+y^{\prime}(u)^{2}}du,\]
where again \(y^{\prime}\) denotes the derivative of \(y\), and we use the letter \(u\) for no better reason than that it isn't \(x\), \(y\), or \(s\) (which we reserve for arc length). Therefore
\[y^{\prime}(x)=\frac{\rho g}{T_{\rm min}}\int_{x_{\rm min}}^{x}\sqrt{1+y^{ \prime}(u)^{2}}du.\]
Differentiating both sides, we get the second-order differential equation
\[y^{\prime\prime}(x)=\frac{\rho g}{T_{\rm min}}\sqrt{1+y^{\prime}(x)^{2}}. \tag{5}\]
To simplify the notation, we write
\[\lambda=\frac{T_{\rm min}}{\rho g}. \tag{6}\]
Note that \(\lambda\) is a length, since \(T_{\rm min}\) is a force, and \(\rho g\) is a force per unit length. This is the parameter that we will call the _shape parameter_. Equation (5) implies that \(z(x)=y^{\prime}(x)\) satisfies the first-order differential equation
\[z^{\prime}(x)=\frac{1}{\lambda}\sqrt{1+z(x)^{2}}.\]
By separation of variables, remembering that \(\int\frac{1}{\sqrt{1+z^{2}}}dz=\sinh^{-1}z+C\), and using that \(z(x_{\rm min})=y^{\prime}(x_{\rm min})=0\), we find
\[z(x)=y^{\prime}(x)=\sinh\left(\frac{x-x_{\rm min}}{\lambda}\right).\]
We integrate one more time to obtain
\[y(x)=\lambda\cosh\left(\frac{x-x_{\rm min}}{\lambda}\right)-\lambda+y_{\rm min}. \tag{7}\]
We picked the constant of integration so that \(y\) comes out to be \(y_{\rm min}\) when \(x=x_{\rm min}\). We re-write eq. (7) as
\[\frac{y-y_{\rm min}}{\lambda}=\cosh\left(\frac{x-x_{\rm min}}{\lambda}\right) -1. \tag{8}\]
The most remarkable thing about the catenary has now been said: With appropriate shifting and scaling of the coordinates (with \(x\) and \(y\) scaled exactly the same way -- that is, using the same length units for \(x\) and \(y\)), it is a hyperbolic cosine. But what are \(x_{\rm min}\), \(y_{\rm min}\), and \(\lambda\)?
By symmetry,
\[x_{\rm min}=\frac{A+B}{2}. \tag{9}\]
If we knew \(\lambda\) as well, then \(y_{\min}\) could be obtained from (7), using that \(y=H\) when \(x=B\):
\[y_{\min}=H-\lambda\cosh\left(\frac{B-A}{2\lambda}\right)+\lambda. \tag{10}\]
However, we still have to determine the shape parameter \(\lambda\), or equivalently (see eq. (6)) the tension \(T_{\min}\) at the lowest point.
The equation for \(\lambda\), or equivalently, for the tension at the lowest point.We obtain \(\lambda\) from the fact that the cable has length \(L\). By eq. (7), this means:
\[\int_{A}^{B}\sqrt{1+\sinh^{2}\left(\frac{x-x_{\min}}{\lambda} \right)}\;dx=L \tag{11}\] \[\Leftrightarrow \int_{A}^{B}\cosh\left(\frac{x-x_{\min}}{\lambda}\right)\;dx=L\] \[\Leftrightarrow \lambda\sinh\left(\frac{x-x_{\min}}{\lambda}\right)\bigg{|}_{A} ^{B}=L\] \[\Leftrightarrow \sinh\left(\frac{B-A}{2\lambda}\right)=\frac{L}{2\lambda}. \tag{12}\]
Equation (12) is the nonlinear equation that determines \(\lambda\). It is convenient here to make a minor change of coordinates:
\[\xi=\frac{B-A}{2\lambda}, \tag{13}\]
so eq. (12) becomes
\[\sinh\xi-\frac{L}{B-A}\xi=0. \tag{14}\]
Finding \(\lambda\).To find \(\lambda\), we solve eq. (14) for \(\xi\). The following proposition provides details.
**Proposition 1**.: _Equation (14) has exactly one positive solution \(\xi\), and consequently eq. (12) has exactly one positive solution \(\lambda\). Newton's method, applied to (14) with initial guess \(\sqrt{6\frac{L}{B-A}}\), is assured to converge to the positive solution._
Proof.: Existence and uniqueness of a positive solution follow from the convexity of \(\sinh\xi\) for \(\xi\geq 0\), and from \(\sinh(0)=0\), \(\sinh^{\prime}(0)=1\), and \(\frac{L}{B-A}>1\) (which holds by assumption -- the cable sags); see Fig. 3A.
To show that Newton's method, when starting at \(\sqrt{6\frac{L}{B-A}}\), converges to the positive solution, it suffices, because of the convexity of the graph of \(\sinh\xi-\frac{L}{B-A}\xi\), to prove that \(\sqrt{6\frac{L}{B-A}}\) is an upper bound for the positive solution; see Fig. 3B.
The following argument proves that \(\sqrt{6\frac{L}{B-A}}\) is indeed an upper bound for the positive solution of eq. (14). Since \(\sinh\xi=\xi+\frac{\xi^{3}}{3!}+\frac{\xi^{5}}{5!}+\ldots\) we have for \(\xi>0\):
\[\sinh\xi>\frac{L}{B-A}\xi\;\;\Leftarrow\;\;\frac{\xi^{3}}{6}\geq\frac{L}{B-A }\xi\;\;\Leftrightarrow\;\;\frac{\xi^{2}}{6}\geq\frac{L}{B-A}\;\; \Leftrightarrow\;\;\xi\geq\sqrt{6\frac{L}{B-A}}.\]
So if \(\xi\geq\sqrt{6\frac{L}{B-A}}\), then \(\xi\) does not solve eq. (14). In other words, any solution of eq. (14) is smaller than \(\sqrt{6\frac{L}{B-A}}\).
Summary.The shape of the inelastic cable, hung up so that both ends are at the same height, is found as follows.
1. Find the positive solution of eq. (14) using Newton's method with initial guess \(\sqrt{6\frac{L}{B-A}}\), and define \(\lambda=\frac{B-A}{2\xi}\).
2. Define \(x_{\min}\) and \(y_{\min}\) according to eqs. (9) and (10).
3. The shape of the hanging cable is given by eq. (8).
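For concreteness, here is a short Python implementation of these three steps. The function name and tolerance are my own choices; nothing else is assumed beyond eqs. (8)-(10) and (14).

```python
import numpy as np

def catenary_same_height(A, B, H, L, tol=1e-12):
    """Shape of an inelastic cable hung at (A, H) and (B, H) with
    length L > B - A.  Returns (lam, x_min, y_min)."""
    r = L / (B - A)                        # > 1, since the cable sags
    xi = np.sqrt(6.0 * r)                  # upper bound from Proposition 1
    for _ in range(100):                   # Newton's method for eq. (14)
        step = (np.sinh(xi) - r * xi) / (np.cosh(xi) - r)
        xi -= step
        if abs(step) < tol:
            break
    lam = (B - A) / (2.0 * xi)
    x_min = 0.5 * (A + B)                                    # eq. (9)
    y_min = H - lam * (np.cosh((B - A) / (2 * lam)) - 1.0)   # eq. (10)
    return lam, x_min, y_min

# The cable is then y(x) = y_min + lam * (cosh((x - x_min) / lam) - 1).
```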
### Afterthoughts.
1. The equation is derived by considering the balance of vertical and horizontal forces on a segment between the lowest point and another point. However, this implies balance of vertical and horizontal forces on any segment along the cable.
2. The shape parameter is computed without knowledge of \(\rho\). The weight of the cable is irrelevant to its shape.
3. The _tension_ in the cable, of course, does depend on the weight. The tension at the lowest point, for instance, is \(T_{\min}=\rho g\lambda\) (compare eq. (6)).
4. As \(L\) tends to \(B-A\), the positive solution \(\xi\) of (14) tends to \(0\). Therefore \(\lambda\) tends to \(\infty\), and so does \(T_{\min}=\rho g\lambda\). Therefore it is impossible for the cable not to sag at all; that would require infinite tension.
Inverted catenaries in architecture.There are countless examples of arches approximately in the shape of (upside-down) catenaries in architecture, as well as domes approximately in the shape of _catenary rotation surfaces_[6, Chapter 7]. Such a surface is obtained by rotating a catenary around its vertical axis of symmetry [14].1
Footnote 1: It is not to be confused with the _catenoid_ obtained by rotating a catenary around the _horizontal_ (\(x\)–)axis; see [4] for a recent fascinating discussion of catenoids.
Catenary arches have a special stability property, the mirror image of the force balance that leads to the equation of the catenary; the horizontal and vertical forces on
any segment are in balance. A catenary rotation surface does not have the analogous property but is not far from a surface that does [3, 14].
Figure 4 shows examples of catenary arches and domes in panels A-C. Panel D of the figure shows the Gateway Arch in St. Louis, and it is _not_ an inverted catenary; it is instead a curve of the form
\[\frac{y}{\lambda_{y}}=-\cosh\frac{x}{\lambda_{x}}\]
(after shifting the coordinates appropriately), with \(\lambda_{x}\approx 1.45\lambda_{y}\)[15]. This is called a _weighted catenary_.
### Inelastic cable with ends at different heights
Now we assume that the ends are attached at \((x,y)=(A,H)\) and \((x,y)=(B,K)\), and without loss of generality \(H\leq K\). We assume that the length \(L\) of the cable is greater than the distance between \((A,H)\) and \((B,K)\), so the cable sags:
\[L>\sqrt{(B-A)^{2}+(K-H)^{2}}. \tag{15}\]
Figure 4: Examples of catenary arches and catenary domes in architecture. A: The Arch of Ctesiphon, a Persian monument in present-day Iraq, about 1500 years old. B: The dome of the cathedral of Florence, built between 1296 and 1436. C: Traditional houses of the Musgum people in Cameroon. D: The Gateway Arch in St. Louis. It is a _weighted catenary_. The black curves are catenaries in panels A–C, and a weighted catenary in panel D.
It's still a hyperbolic cosine.Our previous arguments still show that the solution is of the form given by eq. (8), repeated here for convenience:
\[\frac{y-y_{\min}}{\lambda}=\cosh\left(\frac{x-x_{\min}}{\lambda}\right)-1. \tag{8}\]
The complication is that there is no symmetry argument telling us the value of \(x_{\min}\) any longer. In fact, \(x_{\min}\) could even be to the left of \(A\).
Two equations for the two unknowns \(\boldsymbol{\lambda}\) and \(\boldsymbol{x_{\min}}\).The three parameters \(\lambda\), \(x_{\min}\), and \(y_{\min}\) must be chosen so that three conditions hold:
\[y(A)=H,\;\;\;y(B)=K,\;\;\;\mbox{length of cable}=L.\]
However, we can easily derive two equations for the two parameters \(\lambda\) and \(x_{\min}\):
\[y(B)-y(A)=K-H,\;\;\;\mbox{length of cable}=L,\]
or explicitly, using (7) and (11),
\[\cosh\left(\frac{B-x_{\min}}{\lambda}\right)-\cosh\left(\frac{A- x_{\min}}{\lambda}\right) = \frac{K-H}{\lambda}, \tag{16}\] \[\sinh\left(\frac{B-x_{\min}}{\lambda}\right)-\sinh\left(\frac{A- x_{\min}}{\lambda}\right) = \frac{L}{\lambda}. \tag{17}\]
Once \(\lambda\) and \(x_{\min}\) are known, \(y_{\min}\) can be obtained from \(y(B)=K\) using (7):
\[y_{\min}=K-\lambda\cosh\left(\frac{B-x_{\min}}{\lambda}\right)+\lambda. \tag{18}\]
Finding \(\boldsymbol{\lambda}\).Now there is an algebraic trick. We square (16) and (17), subtract them from each other, and use \(\cosh^{2}u-\sinh^{2}u=1\) and \(\cosh u\cosh v-\sinh u\sinh v=\cosh(u-v)\) for all \(u\) and \(v\). We thereby get this:
\[2-2\cosh\left(\frac{B-A}{\lambda}\right)=\frac{(K-H)^{2}-L^{2}}{\lambda^{2}},\]
or equivalently,
\[\cosh\left(\frac{B-A}{\lambda}\right)-1=\frac{L^{2}-(K-H)^{2}}{2\lambda^{2}}. \tag{19}\]
This is a single equation for \(\lambda\). It can be written in a more appealing way by using one more hyperbolic trigonometric formula: \(\cosh u=1+2\sinh^{2}\frac{u}{2}\). With that (19) becomes
\[\sinh^{2}\left(\frac{B-A}{2\lambda}\right)=\frac{L^{2}-(K-H)^{2}}{4\lambda^{2}},\]
or equivalently,
\[\sinh\left(\frac{B-A}{2\lambda}\right)=\frac{\sqrt{L^{2}-(K-H)^{2}}}{2\lambda}. \tag{20}\]
Equation (20) is precisely the same as eq. (12), except that \(L\) has been replaced by \(\sqrt{L^{2}-(K-H)^{2}}\). Notice also that (15) implies that \(\sqrt{L^{2}-(K-H)^{2}}>B-A\). Proposition 1 therefore applies, with \(L\) replaced by \(\sqrt{L^{2}-(K-H)^{2}}\). Equation (20) has a unique positive solution \(\lambda\), and we can compute \(\xi=\frac{B-A}{2\lambda}\) using Newton's method, applied to
\[\sinh\xi-\frac{\sqrt{L^{2}-(K-H)^{2}}}{B-A}\;\xi=0, \tag{21}\]
with initial guess \(\sqrt{\frac{6\sqrt{L^{2}-(K-H)^{2}}}{B-A}}\).
Finding \(x_{\min}\).Once \(\lambda\) is known, \(x_{\min}\) can be obtained by solving eq. (16) for \(x_{\min}\). The following proposition provides details.
**Proposition 2**.: _Let \(A<B\), \(H\leq K\), and \(\lambda>0\). Let, for \(x\in\mathbb{R}\),_
\[g(x)=\cosh\left(\frac{B-x}{\lambda}\right)-\cosh\left(\frac{A-x}{\lambda} \right)-\frac{K-H}{\lambda} \tag{22}\]
_so eq. (16) becomes \(g(x_{\min})=0\)._
1. \(g\) _is strictly decreasing with_ \(\lim_{x\to-\infty}=\infty\) _and_ \(\lim_{x\to\infty}g(x)=-\infty\)_, so there is a unique solution,_ \(x_{\min}\)_, of_ \(g(x)=0\)_,_
2. \(g\left(\frac{A+B}{2}\right)\leq 0\)_, so_ \(x_{\min}\leq\frac{A+B}{2}\)_,_
3. \(g^{\prime\prime}(x)>0\) _for_ \(-\infty<x<\frac{A+B}{2}\)_, and_
4. _Newton's method for_ \(g(x)=0\)_, starting with the initial guess_ \(\frac{A+B}{2}\)_, converges to the unique solution,_ \(x_{\min}\)_, of_ \(g(x)=0\)_._
See Fig. 5 for illustration.
Proof.: (a) For all \(x\),
\[g^{\prime}(x)=\frac{1}{\lambda}\left(\sinh\left(\frac{A-x}{\lambda}\right)- \sinh\left(\frac{B-x}{\lambda}\right)\right)<0\]
because \(\lambda>0\), \(A<B\), and \(\sinh\) is a strictly increasing function. Using the definition of \(\cosh\),
\[g(x)=\frac{e^{(B-x)/\lambda}+e^{-(B-x)/\lambda}}{2}-\frac{e^{(A-x)/\lambda}+e ^{-(A-x)/\lambda}}{2}-\frac{K-H}{\lambda} \tag{23}\]
As \(x\rightarrow-\infty\), (23) equals
\[\frac{e^{(B-x)/\lambda}-e^{(A-x)/\lambda}}{2}+O(1)\]
(the notation \(O(1)\) means "terms that remain bounded in the limit"), and
\[\frac{e^{(B-x)/\lambda}-e^{(A-x)/\lambda}}{2}=e^{(A-x)/\lambda}\;\frac{e^{(B-A )/\lambda}-1}{2}\rightarrow\infty\]
as \(x\rightarrow-\infty\). One sees in a similar way that \(g(x)\rightarrow-\infty\) as \(x\rightarrow\infty\).
(b)
\[g\left(\frac{A+B}{2}\right)=-\frac{K-H}{\lambda}\leq 0\]
because \(H\leq K\).
(c)
\[g^{\prime\prime}(x)=\frac{1}{\lambda^{2}}\left(\cosh\left(\frac{B-x}{\lambda} \right)-\cosh\left(\frac{A-x}{\lambda}\right)\right)=\frac{1}{\lambda^{2}} \left(g(x)+\frac{K-H}{\lambda}\right)\]
is strictly decreasing since \(g\) is known to be strictly decreasing by (a). Since \(g^{\prime\prime}\left(\frac{A+B}{2}\right)=0\), (c) follows.
(d) After (a)-(c) have been proved, this is so clear pictorially (see Fig. 5) that we'll refrain from proving it analytically.
Summary.The shape of the inelastic cable, hung up so that the two ends are at different heights, is found as follows.
1. Find the positive solution of eq. (21) using Newton's method with initial guess \(\sqrt{6\frac{\sqrt{L^{2}-(K-H)^{2}}}{B-A}}\), and define \(\lambda=\frac{B-A}{2\xi}\).
2. With \(g\) defined as in (22), solve \(g(x_{\min})=0\) for \(x_{\min}\), using Newton's method with initial guess \(\frac{A+B}{2}\).
3. Compute \(y_{\min}\) from eq. (18).
4. The shape of the hanging cable is given by eq. (8).
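Here is the corresponding Python sketch; again the names and tolerances are my own choices.

```python
import numpy as np

def catenary_two_heights(A, B, H, K, L, tol=1e-12):
    """Inelastic cable attached at (A, H) and (B, K), H <= K, with
    length L > sqrt((B-A)**2 + (K-H)**2).  Returns (lam, x_min, y_min)."""
    r = np.sqrt(L**2 - (K - H)**2) / (B - A)
    xi = np.sqrt(6.0 * r)                     # Newton for eq. (21)
    for _ in range(100):
        step = (np.sinh(xi) - r * xi) / (np.cosh(xi) - r)
        xi -= step
        if abs(step) < tol:
            break
    lam = (B - A) / (2.0 * xi)

    def g(x):                                 # eq. (22)
        return (np.cosh((B - x) / lam) - np.cosh((A - x) / lam)
                - (K - H) / lam)

    def gp(x):                                # g', from the proof of Prop. 2
        return (np.sinh((A - x) / lam) - np.sinh((B - x) / lam)) / lam

    x_min = 0.5 * (A + B)                     # Newton for g(x) = 0
    for _ in range(100):
        step = g(x_min) / gp(x_min)
        x_min -= step
        if abs(step) < tol:
            break
    y_min = K - lam * (np.cosh((B - x_min) / lam) - 1.0)     # eq. (18)
    return lam, x_min, y_min
```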
Two examples are shown in Fig. 6.
Figure 5: Illustration of Newton’s method applied to \(g(x)=0\).
Afterthought.As \(L\) tends to \(\sqrt{(A-B)^{2}+(K-H)^{2}}\), the solution of (21) tends to zero, and therefore \(\lambda\) tends to \(\infty\). Again we see that it is impossible for the cable to have no sag at all.
Arc length parametrization.We will parametrize the hanging cable with respect to arc length. That's entirely unnecessary, but it will make the analogy with the elastic case discussed later more transparent.
From eq. (8), we see that the arc length \(s\) between the left end point of the cable and the point at \(x\in[A,B]\) is
\[s=\int_{A}^{x}\sqrt{1+\sinh^{2}\left(\frac{u-x_{\min}}{\lambda}\right)}\,du.\]
Using \(\sqrt{1+\sinh^{2}}=\cosh\) we evaluate the integral and find
\[s=\lambda\sinh\left(\frac{x-x_{\min}}{\lambda}\right)+\lambda\sinh\left(\frac {x_{\min}-A}{\lambda}\right). \tag{24}\]
The arc length parameter associated with \(x_{\min}\), in particular, is
\[s_{\min}=\lambda\sinh\left(\frac{x_{\min}-A}{\lambda}\right). \tag{25}\]
Solving (24) for \(x\) and using (25), we find the relation between \(x\) and the arc length \(s\):
\[\frac{x-x_{\min}}{\lambda}=\sinh^{-1}\left(\frac{s-s_{\min}}{\lambda}\right). \tag{26}\]
With that, (8) becomes
\[\frac{y-y_{\min}}{\lambda}=\cosh\sinh^{-1}\left(\frac{s-s_{\min}}{\lambda} \right)-1. \tag{27}\]
For any \(u\in\mathbb{R}\), \(\cosh\sinh^{-1}(u)=\sqrt{1+u^{2}}\). (This follows from \(\cosh^{2}-\sinh^{2}=1\).) Therefore eq. (27) can also be written like this:
\[\frac{y-y_{\min}}{\lambda}=\sqrt{1+\left(\frac{s-s_{\min}}{\lambda}\right)^{2 }}-1. \tag{28}\]
Equations (26) and (28) describe the hanging cable parametrized by arc length.
Figure 6: A long and a short hanging cable, with \(A=2\), \(B=5\), \(H=3\), \(K=7\).
### Elastic cable or spider thread
Now we consider a cable that can stretch. The threads of a spider web are an example. By passing to the limit of zero compliance, this discussion will also yield an alternative derivation of the standard hyperbolic cosine formula discussed in the preceding sections. This derivation is less straightforward than the standard one explained earlier; however, it explains the tangential tension forces more clearly.
Background on Hooke's constant, compliance, and springs in series.A linear spring with resting length \(h\), extended to length \(\ell\), contracts with force \(F=\kappa(\ell-h),\) where the constant of proportionality \(\kappa\) is called _Hooke's constant_. Its reciprocal \(c=1/\kappa\) is called the _compliance_ of the spring, so \(F=\frac{\ell-h}{c}\). The physical dimension of \(c\) is length per force. The greater the compliance, the easier is it to extend the spring.
Consider now two springs in series, with compliances \(c_{1}\) and \(c_{2}\) and resting lengths \(h_{1}\) and \(h_{2}\), attached on one end to a wall as in Fig. 7. Suppose you extend the springs from their combined resting length \(h=h_{1}+h_{2}\) to some length \(\ell\). The springs' lengths will be \(\ell_{1}\) and \(\ell_{2}\), and Newton's third law implies that the springs pull on each other with precisely the overall stretching force:
\[\frac{\ell_{1}-h_{1}}{c_{1}}=\frac{\ell_{2}-h_{2}}{c_{2}}=\frac{\ell-h}{c} \tag{29}\]
where \(c\) is the compliance of the combined spring made up of springs 1 and 2. From (29),
\[\ell_{1}-h_{1}=\frac{c_{1}}{c}(\ell-h)\quad\text{and}\quad\ell_{2}-h_{2}=\frac {c_{2}}{c}(\ell-h).\]
Summing these two equations, we find
\[\ell-h=\frac{c_{1}+c_{2}}{c}(\ell-h)\]
and therefore
\[c=c_{1}+c_{2}. \tag{30}\]
The conclusion is that compliances add when springs are connected in series.
Figure 7: Two springs in series being stretched.
String of mass points connected by springs.Think about a string of finitely many mass points connected by massless springs. Later we will pass to a continuum limit. I'll use the word "cable" after passing to the continuum limit, but "string" for the finitely many mass points connected by springs.
So consider a string of \(N+1\) mass points, connected by \(N\) identical massless springs. Assume that the resting lengths of the springs are all the same; we denote them by \(h\). Assume that each spring has compliance \(qh\), where \(q>0\) is a fixed constant, called the _linear compliance density_ (compliance per unit length), a reciprocal force. Since compliances sum when the springs are arranged in series, the compliance of the string becomes \(qhN=qL\), where \(L\) is the length of the string when it is not under any tension.
Assume similarly that each mass point has mass \(\rho h\), except for the two end points, which have mass \(\rho h/2\), where \(\rho>0\) is the linear mass density. So altogether the mass of the string is \(\rho hN=\rho L\). Since \(q\) is a reciprocal force, the quantity
\[\gamma=q\cdot\rho Lg=\text{linear compliance density}\cdot\text{weight of cable} \tag{31}\]
is non-dimensional. It quantifies the importance of elasticity for the cable, and will play an important role in our analysis.
String attached at both ends.Suppose now that we attach the string, as before, at \((x,y)=(A,H)\) and \((x,y)=(B,K)\), with \(H\leq K\); see Fig. 9. The position of the \(i\)-th mass point is \((x_{i},y_{i})\), with
\[(x_{0},y_{0})=(A,H)\quad\text{ and }\quad(x_{N},y_{N})=(B,K).\]
We write
\[\ell_{i}=\sqrt{(x_{i}-x_{i-1})^{2}+(y_{i}-y_{i-1})^{2}}\]
for the extended length of the \(i\)-th spring, and denote by \(\alpha_{i}\) the angle between the \(x\)-axis and the \(i\)-th spring segment, \(-\pi/2<\alpha_{i}<\pi/2\); see Fig. 9. We have
\[\sin\alpha_{i}=\frac{y_{i}-y_{i-1}}{\ell_{i}}\quad\text{and}\quad\cos\alpha_{i }=\frac{x_{i}-x_{i-1}}{\ell_{i}}. \tag{32}\]
For \(1\leq i\leq N-1\), the total vertical force on mass point \(i\) equals
\[-\rho gh-\frac{\ell_{i}-h}{qh}\sin\alpha_{i}+\frac{\ell_{i+1}-h}{qh}\sin \alpha_{i+1}.\]
This expression has to be zero, so we arrive at \(N-1\) equations that must be satisfied when the cable hangs at rest:
\[-\rho gh-\frac{\ell_{i}-h}{qh}\sin\alpha_{i}+\frac{\ell_{i+1}-h}{qh}\sin \alpha_{i+1}=0,\ \ \ 1\leq i\leq N-1. \tag{33}\]
Figure 8: The string of springs resting on the ground, under no tension.
We will now transform these equations in such a way that difference quotients approximating derivatives appear, since we are planning to let \(h\to 0\) so that a differential equation emerges.
Using (32), we re-write (33) as
\[-\rho hg-\frac{1}{qh}\frac{\ell_{i}-h}{\ell_{i}}(y_{i}-y_{i-1})+\frac{1}{qh} \frac{\ell_{i+1}-h}{\ell_{i+1}}(y_{i+1}-y_{i})=0.\]
Multiplying both sides by \(\frac{q}{h}\), and with a little bit of algebra:
\[\frac{1}{h}\left(\frac{y_{i+1}-y_{i}}{h}\left(1-\frac{h}{\ell_{i+1}}\right)- \frac{y_{i}-y_{i-1}}{h}\left(1-\frac{h}{\ell_{i}}\right)\right)=\rho qg,\]
so
\[\frac{1}{h}\left(\frac{y_{i+1}-y_{i}}{h}\left(1-\frac{1}{\sqrt{\left(\frac{x_{i+1}-x_{i}}{h}\right)^{2}+\left(\frac{y_{i+1}-y_{i}}{h}\right)^{2}}}\right)-\right.\]

\[\left.\frac{y_{i}-y_{i-1}}{h}\left(1-\frac{1}{\sqrt{\left(\frac{x_{i}-x_{i-1}}{h}\right)^{2}+\left(\frac{y_{i}-y_{i-1}}{h}\right)^{2}}}\right)\right)=\rho qg. \tag{34}\]
The balance of horizontal forces is expressed by the analogous equation
\[\frac{1}{h}\left(\frac{x_{i+1}-x_{i}}{h}\left(1-\frac{1}{\sqrt{\left(\frac{x_{i+1}-x_{i}}{h}\right)^{2}+\left(\frac{y_{i+1}-y_{i}}{h}\right)^{2}}}\right)-\right.\]
\[\left.\frac{x_{i}-x_{i-1}}{h}\left(1-\frac{1}{\sqrt{\left(\frac{x_{i}-x_{i-1}} {h}\right)^{2}+\left(\frac{y_{i}-y_{i-1}}{h}\right)^{2}}}\right)\right)=0. \tag{35}\]
The right-hand side is zero here because there is no horizontal gravitational force.
Figure 9: The string of springs attached at both end points. The definition of the angles \(\alpha_{i}\) (see text) is also indicated here.
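Before passing to the continuum limit, one can compute the equilibrium of this finite string directly on a computer, which is a useful check on the formulas to come. The following Python sketch does this not by solving the force-balance equations (33)-(35) directly, but by minimizing the total potential energy (elastic plus gravitational), which has the same stationary points; all names are mine, and BFGS is just one reasonable choice of minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def hang_string(A, H, B, K, L, N, rho, q, g=9.81):
    """Equilibrium of N+1 mass points joined by N linear springs
    (the discrete model behind eqs. (33)-(35)), found by minimizing
    elastic plus gravitational potential energy."""
    h = L / N                                  # resting length per spring

    def energy(z):
        x = np.concatenate(([A], z[:N - 1], [B]))
        y = np.concatenate(([H], z[N - 1:], [K]))
        ell = np.hypot(np.diff(x), np.diff(y))        # stretched lengths
        elastic = np.sum((ell - h) ** 2) / (2.0 * q * h)
        gravity = rho * h * g * np.sum(y[1:-1])       # interior masses
        return elastic + gravity

    t = np.linspace(0.0, 1.0, N + 1)[1:-1]     # start on the chord
    z0 = np.concatenate((A + t * (B - A), H + t * (K - H)))
    res = minimize(energy, z0, method='BFGS')
    x = np.concatenate(([A], res.x[:N - 1], [B]))
    y = np.concatenate(([H], res.x[N - 1:], [K]))
    return x, y
```

Here each spring of compliance \(qh\) stores energy \((\ell-h)^{2}/(2qh)\), and the fixed end points contribute only a constant, so they are omitted from the gravity term.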
Continuum limit.We use arc length in the rest state, under no tension, as the independent variable, and denote it by \(s\). At the left end, \(s=0\), and at the right end, \(s=L\). Equation (34) is a finite difference discretization of
\[\frac{d}{ds}\left(\frac{dy}{ds}\left(1-\frac{1}{\sqrt{\left(\frac{dx}{ds}\right) ^{2}+\left(\frac{dy}{ds}\right)^{2}}}\right)\right)=\rho qg.\]
The right-hand side of this equation equals \(\gamma/L\) (see eq. (31)). Integrating once,
\[\frac{dy}{ds}\left(1-\frac{1}{\sqrt{\left(\frac{dx}{ds}\right)^{2}+\left(\frac {dy}{ds}\right)^{2}}}\right)=\frac{\gamma}{L}\left(s-s_{\min}\right) \tag{36}\]
where \(s_{\min}\) is the parameter corresponding to the lowest point, at which \(\frac{dy}{ds}=0\). Similarly, eq. (35) is a discretization of
\[\frac{d}{ds}\left(\frac{dx}{ds}\left(1-\frac{1}{\sqrt{\left(\frac{dx}{ds} \right)^{2}+\left(\frac{dy}{ds}\right)^{2}}}\right)\right)=0.\]
Integrating once:
\[\frac{dx}{ds}\left(1-\frac{1}{\sqrt{\left(\frac{dx}{ds}\right)^{2}+\left( \frac{dy}{ds}\right)^{2}}}\right)=\mu \tag{37}\]
for some non-dimensional constant \(\mu\) yet to be discussed.
Solving the differential equations.We simplify eqs. (36) and (37) by solving for \(dx/ds\) and \(dy/ds\). First, write
\[R=\sqrt{\left(\frac{dx}{ds}\right)^{2}+\left(\frac{dy}{ds}\right)^{2}}.\]
You may now say "Wait, since \(s\) is arc length, \(ds^{2}=dx^{2}+dy^{2}\), and therefore wouldn't \(R\) always be equal to \(1\)?" However, you have to remember that \(s\) is arclength of the _unstretched_ cable, before it is hung up. Now, however, we are thinking of the cable as it hangs, and it is stretched; therefore \(R>1\).
With this notation, (36) and (37) become
\[\frac{dx}{ds}=\frac{\mu}{1-\frac{1}{R}}\ \ \text{and}\ \ \frac{dy}{ds}=\frac{\frac{ \gamma}{L}\left(s-s_{\min}\right)}{1-\frac{1}{R}}. \tag{38}\]
Therefore
\[R=\sqrt{\left(\frac{dx}{ds}\right)^{2}+\left(\frac{dy}{ds}\right)^{2}}=\sqrt{\frac{\mu^{2}+\frac{\gamma^{2}}{L^{2}}\left(s-s_{\min}\right)^{2}}{\left(1-\frac{1}{R}\right)^{2}}}.\]
Multiplying by \(1-\frac{1}{R}\), we find:
\[R-1=\sqrt{\mu^{2}+\frac{\gamma^{2}}{L^{2}}\left(s-s_{\min}\right)^{2}}\]
and therefore
\[\frac{1}{1-\frac{1}{R}}=\frac{R}{R-1}=1+\frac{1}{R-1}=1+\frac{1}{\sqrt{\mu^{2}+ \frac{\gamma^{2}}{L^{2}}\left(s-s_{\min}\right)^{2}}}.\]
Using this in (38), we obtain
\[\frac{dx}{ds}=\mu+\frac{1}{\sqrt{1+\frac{\gamma^{2}}{\mu^{2}L^{2}}\left(s-s_{ \min}\right)^{2}}} \tag{39}\]
and
\[\frac{dy}{ds}=\left(\mu+\frac{1}{\sqrt{1+\frac{\gamma^{2}}{\mu^{2}L^{2}}\left( s-s_{\min}\right)^{2}}}\right)\frac{\gamma}{\mu L}\left(s-s_{\min}\right). \tag{40}\]
The combination \(\frac{\gamma}{\mu L}\) and its square appear three times in eqs. (39) and (40). We simplify the notation by defining
\[\lambda=\frac{\mu L}{\gamma}.\]
Since \(\mu\) and \(\gamma\) are non-dimensional, \(\lambda\) is a length. It will turn out to be the natural analogue in the elastic case of the shape parameter. (This isn't obvious at this point, or at least it wasn't to me. I realized it only after having done the calculation that's about to follow.) With this notation, (39) and (40) become
\[\frac{dx}{ds}=\gamma\frac{\lambda}{L}+\frac{1}{\sqrt{1+\left(\frac{s-s_{\min} }{\lambda}\right)^{2}}}\ \ \text{and}\ \ \frac{dy}{ds}=\left(\gamma\frac{\lambda}{L}+\frac{1}{\sqrt{1+\left(\frac{s-s_{ \min}}{\lambda}\right)^{2}}}\right)\frac{s-s_{\min}}{\lambda}.\]
We integrate again to obtain formulas for \(x\) and \(y\):
\[\frac{x-x_{\min}}{\lambda}=\ \sinh^{-1}\left(\frac{s-s_{\min}}{\lambda}\right)+ \ \gamma\frac{\lambda}{L}\frac{s-s_{\min}}{\lambda}, \tag{41}\]
and
\[\frac{y-y_{\min}}{\lambda}=\sqrt{1+\left(\frac{s-s_{\min}}{\lambda}\right)^{ 2}}-1+\gamma\frac{\lambda}{L}\frac{1}{2}\left(\frac{s-s_{\min}}{\lambda} \right)^{2} \tag{42}\]
where \(x_{\min}\) and \(y_{\min}\) are the values of \(x\) and \(y\) when \(s=s_{\min}\). The constants of integration were chosen to make sure that \(s=s_{\min}\) corresponds to \(x=x_{\min}\) and \(y=y_{\min}\).
Equations (41) and (42) are very similar to eqs. (26) and (28), the arc length parametrization of the inelastic catenary. The only difference is the extra summands proportional to \(\gamma\). These terms disappear as the compliance density \(q\) tends to zero (recall \(\gamma=q\rho Lg\)), so in the limit of vanishing compliance density, we obtain the standard description of the inelastic catenary, which we have thereby re-derived.
Although this derivation is more involved than the standard one, I prefer it because it paints a microscopic picture of the "tension forces" -- even if that microscopic picture is, of course, an idealization.
Equations for the parameters.The four parameters \(\lambda\), \(s_{\rm min}\), \(x_{\rm min}\), and \(y_{\rm min}\) must be chosen so that the conditions
\[x(0)=A,\ \ y(0)=H,\ \ x(L)=B,\ \ y(L)=K\]
are satisfied. Using eqs. (41) and (42), \(x(0)=A\) and \(y(0)=H\) mean
\[x_{\rm min}=\gamma\frac{\lambda}{L}\,s_{\rm min}\ +\ A+\lambda\sinh^{-1}\left( \frac{s_{\rm min}}{\lambda}\right) \tag{43}\]
and
\[y_{\rm min}=-\gamma\left(\frac{\lambda}{L}\right)^{2}\frac{L}{2}\left(\frac{s _{\rm min}}{\lambda}\right)^{2}\ +\ H-\lambda\left(\sqrt{1+\left(\frac{s_{\rm min}}{ \lambda}\right)^{2}}-1\right). \tag{44}\]
Given that \(x(0)=A\) and \(y(0)=H\), the remaining two conditions can equivalently be written as \(x(L)-x(0)=B-A\), and \(y(L)-y(0)=K-H\). Using eqs. (41) and (42), these equations become
\[\gamma+\ \sinh^{-1}\left(\frac{L-s_{\rm min}}{\lambda}\right)+\sinh^{-1} \left(\frac{s_{\rm min}}{\lambda}\right)=\frac{B-A}{\lambda}, \tag{45}\]
\[\gamma\frac{\lambda}{L}\frac{1}{2}\left(\left(\frac{L-s_{\rm min}}{\lambda} \right)^{2}-\left(\frac{s_{\rm min}}{\lambda}\right)^{2}\right)\ +\]
\[\sqrt{1+\left(\frac{L-s_{\rm min}}{\lambda}\right)^{2}}-\sqrt{1+\left(\frac{s_ {\rm min}}{\lambda}\right)^{2}}=\frac{K-H}{\lambda}. \tag{46}\]
Unfortunately, the extra terms proportional to \(\gamma\) in eqs. (45) and (46) undermine the algebra that uncoupled the equations earlier, at least as far as I can see. We must solve eqs. (45) and (46) jointly for \((\lambda,s_{\rm min})\) now.
Numerical computation of shape parameter and lowest point.When discussing the inelastic case, I found it convenient to solve for
\[\xi=\frac{B-A}{2\lambda}\]
instead of directly for \(\lambda\). I did the same thing here, replacing \(\lambda\) by \(\frac{B-A}{2\xi}\) in (45) and (46). Then I solved the resulting equations using two-dimensional Newton iteration. Figure 10 shows examples, with \(\gamma\) rising from 0 to 2 in steps of 0.2. For each new value of \(\gamma\), I used the parameters \(\xi\) and \(s_{\rm min}\) computed for the preceding value as starting values for the Newton iteration. The parameters for the elastic catenaries in Fig. 10 are computed in six or fewer Newton iterations with 15-digit accuracy. The parameter \(\xi\) in the inelastic case takes a bit longer, 9 Newton iterations.
It does not appear necessary to use this "continuation" approach. I have not encountered a single example in which Newton's method, starting with the values \(\xi\) and \(s_{\rm min}\) computed for the inelastic case, did not converge rapidly, even when \(\gamma\) is taken to be very large.
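Here is a Python sketch of the two-dimensional Newton iteration for eqs. (45) and (46), using a finite-difference Jacobian for simplicity and reusing catenary_two_heights from the earlier sketch for the starting values; as always, the names are mine.

```python
import numpy as np

def elastic_catenary_params(A, B, H, K, L, gamma, tol=1e-10):
    """Solve eqs. (45)-(46) for (lam, s_min) by 2-D Newton iteration
    with a finite-difference Jacobian."""
    def F(p):
        lam, s = p
        u, v = (L - s) / lam, s / lam
        f1 = gamma + np.arcsinh(u) + np.arcsinh(v) - (B - A) / lam
        f2 = (gamma * lam / (2 * L) * (u**2 - v**2)
              + np.sqrt(1 + u**2) - np.sqrt(1 + v**2) - (K - H) / lam)
        return np.array([f1, f2])

    # Start from the inelastic solution (gamma = 0), as described above.
    lam, x_min, _ = catenary_two_heights(A, B, H, K, L)
    p = np.array([lam, lam * np.sinh((x_min - A) / lam)])  # eq. (25)
    for _ in range(50):
        J = np.empty((2, 2))
        for j in range(2):                 # finite-difference Jacobian
            dp = np.zeros(2)
            dp[j] = 1e-7 * max(1.0, abs(p[j]))
            J[:, j] = (F(p + dp) - F(p)) / dp[j]
        step = np.linalg.solve(J, F(p))
        p -= step
        if np.max(np.abs(step)) < tol:
            break
    return p[0], p[1]                      # (lam, s_min)
```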
Loose ends.Several questions remain unanswered here. First, can we prove that eqs. (45) and (46) have a unique solution \((\lambda,s_{\min})\) with \(\lambda>0\) and \(s_{\min}\leq\frac{L}{2}\), for any choice of \(A<B\), \(H\leq K\), and \(L>0\)? That ought to be the case, but I haven't proved it.
Second, why does Newton's method for (45) and (46), starting with the parameters for the inelastic cable, always seem to work so well, even when the compliance is large? Reassuringly, if there were a case in which it didn't converge rapidly, one _could_ always use continuation, raising the compliance gradually, and that would certainly work. However, in my experience it never seems necessary.
What if the springs were not linear? In that case, analogues of the differential equations (36) and (37) can still be written down, but they cannot in general be solved explicitly for \(x\) and \(y\), so there are no analogues of (41) and (42) any more.
Engineering literature on hanging cables.None of what I have presented here could conceivably be new. In fact there is an extensive, sophisticated, engineering-oriented literature of which the catenary problem is merely the starting point; see [7, 8, 16, 17, 18] for a few examples. However, what I have presented here seems difficult if not impossible to extract from that literature.
Spider webs.One could consider multiple elastic cables attached to each other, as in a spider web. The "discrete" model, thinking of spider threads as composed of mass points connected by massless springs, is straightforward to formulate. There is a substantial literature on the mathematical and computational modeling of spider webs; see [1, 10, 13] for a few examples.
This article was inspired by Mark Levi's beautiful discussion of some of the astonishing properties of catenaries in the May 2021 issue of _SIAM News_[12]. I would like to thank the anonymous reviewer for reading my paper so thoughtfully, correcting typos and suggesting improvements.
|
2308.13057 | Data-Side Efficiencies for Lightweight Convolutional Neural Networks | We examine how the choice of data-side attributes for two important visual
tasks of image classification and object detection can aid in the choice or
design of lightweight convolutional neural networks. We show by experimentation
how four data attributes - number of classes, object color, image resolution,
and object scale affect neural network model size and efficiency. Intra- and
inter-class similarity metrics, based on metric learning, are defined to guide
the evaluation of these attributes toward achieving lightweight models.
Evaluations made using these metrics are shown to require 30x less computation
than running full inference tests. We provide, as an example, applying the
metrics and methods to choose a lightweight model for a robot path planning
application and achieve computation reduction of 66% and accuracy gain of 3.5%
over the pre-method model. | Bryan Bo Cao, Lawrence O'Gorman, Michael Coss, Shubham Jain | 2023-08-24T19:50:25Z | http://arxiv.org/abs/2308.13057v1 | # Data-Side Efficiencies for Lightweight Convolutional Neural Networks
###### Abstract
We examine how the choice of data-side attributes for two important visual tasks of image classification and object detection can aid in the choice or design of lightweight convolutional neural networks. We show by experimentation how four data attributes - number of classes, object color, image resolution, and object scale affect neural network model size and efficiency. Intra- and inter-class similarity metrics, based on metric learning, are defined to guide the evaluation of these attributes toward achieving lightweight models. Evaluations made using these metrics are shown to require \(30\times\) less computation than running full inference tests. We provide, as an example, applying the metrics and methods to choose a lightweight model for a robot path planning application and achieve computation reduction of \(66\%\) and accuracy gain of \(3.5\%\) over the pre-method model.
Efficient Neural Network, Convolutional Neural Network, Image Classification, Object Detection
## 1 Introduction
Traditionally for computer vision applications, an algorithm designer with domain expertise would begin by identifying handcrafted features to help recognize objects of interest. More recently, end-to-end learning (E2E) has supplanted that expert by training a deep neural network to learn important features on its own. Besides the little forethought required about data features, there is usually only basic preprocessing done on the input data; an image is often downsampled and converted to a vector, and an audio signal is often transformed to a spectrogram. In this paper, we use the term "data-side" to include operations that are performed on the data before input to the neural network. Our proposal is that a one-time analysis of data-side attributes can aid the design of more efficient convolutional neural networks (CNNs) for the many times that they are used to perform inferences.
On the data side of the neural network, we examine four independent image attributes and two dependent attributes, the latter of which we use as metrics. The independent attributes are **number of classes, object color, image resolution** and **object scale**. The metrics are **intra-** and **inter-class similarity**. Our goal is to optimize the metrics by choice of the independent variables - specifically to maximize intra-class similarity and minimize inter-class similarity - to obtain the most computationally efficient model.
Unlike benchmark competitions such as ImageNet [1], practical applications involve a design stage that can include adjustment of input specifications. In Section 2, we tabulate a selection of applications. The "wildlife" application reduced the number of animal and bird classes from 18 to 6 in the Wildlife Spotter dataset [2]. In the "driving" application [3], the 10 classes of the BDD dataset [4] were reduced to 7 by eliminating the "train" class due to few labeled instances and combining the similar classes of rider, motor, and bike into rider.
The main contributions of this paper are:
1. Four data-side attributes are identified, and experiments are run to show their effects on the computational efficiency of lightweight CNNs.
2. Intra- and inter-class similarity metrics are defined to aid evaluation of the four independent attribute values. Use of these metrics is shown to be about \(30\times\) faster than evaluation by full inference testing.
3. Procedures are described using the similarity metrics to evaluate how changing attribute values can reduce model computation while maintaining accuracy.
4. Starting with the EfficientNet-B0 model, we show how our methods can guide the application designer to smaller "sub-EfficientNets" with greater efficiency and similar or higher accuracy.
We describe related work in Section 2. Each of the attributes is defined in Section 3, procedures are described to apply these toward more efficient models in Section 4, and experimental evidence is shown in Section 5. We conclude in Section 6.
## 2 Related Work
The post-AlexNet [5] era (2012-) of convolutional neural networks brought larger and larger networks with the understanding that a larger model yielded higher accuracy (e.g., VGGNet-16 [6] in 2014 with 144M parameters). But the need for more efficiency, especially for embedded systems [7], encouraged the design of lightweight neural networks [8], such as SqueezeNet [9] in 2017 with 1.2M parameters. Models were reduced in size by such architectures as depthwise separable convolution filters [10]. More efficient handling of data was incorporated by using quantization [11, 12, 13], pruning [14, 15, 16, 17], and data saliency [18]. This model-side efficiency quest is a common research trend in which new models are evaluated for general-purpose classification and object detection on public benchmarks such as ImageNet [1], COCO [19], and VOC [20]. Orthogonal and complementary to these model-side efficiencies [21, 22, 23], we examine efficiencies that can be gained before the model by understanding and adjusting the data attributes within the confines of the application specifications. Early work [24] optimizes models specialized to the target video, but only for binary classification. Our work extends to multi-class classification and object detection.
In Table 1, we list a selection of 9 applications whose data attributes are far less complex than common benchmarks. For these applications, class number is often just 2. The largest number of classes, 7, is for the "driving" [4] application. Compare these with 80 classes for the COCO dataset and 1000 for ImageNet. For 2 applications, color is not used. For the "crowd" application, it is not deemed useful and for the "ship, SAR" application, the input data is inherently not color. The resolution range is not broad in this sampling, likely due to matching image size to model input width. Many papers did not describe the scale range; for these, we approximated from the given information or images in the paper. The broadest scale range (as a fraction of image size) is the "driving" application (\(1/32\) to \(1/2\)), and the narrowest is for the "mammals" application, using aerial image capture, with scale from \(1/30\) to \(1/20\).
We use a measure of class similarity to efficiently examine data attributes, based on neural network metric learning. This term describes the use of learned, low-dimensional representations of discrete variables (images in our case). The distance between two instances in the latent space can be measured by L1 [32], L2, or cosine similarity [33]. Previous studies [34, 35, 36] focus on learning a single similarity latent space. Differences between classification [37] and ranking based losses [38] have been studied in [39]. PAN incorporates attributes to visual similarity learning [40]. In Sections 3.6 and 3.7 we extend this line of research by adapting the metric from [33] to measure intra- and inter-class similarity to serve efficiency purposes.
_In contrast to research to improve model performance on public benchmarks, our goal is to develop an empirical understanding of the effects of these attributes on common CNNs, and from this to provide practical guidelines to obtain lightweight CNNs in real-world scenarios._ Our use of intra- and inter-class similarity metrics enables an efficient
| Application | \(N_{Cl}\) | \(N_{Co}\) | \(R_{E}\) | \(S_{C}\) |
| --- | --- | --- | --- | --- |
| crowd [25] | 2 | no | \(320\times 240\) | \(1/8\), \(1/2\) |
| ship, SAR [26] | 2 | no | \(416\times 416\) | \(1/20\), \(1/6\) |
| cattle [27] | 2 | yes | \(224\times 224\) | \(1/16\), \(1/2\) |
| hardhat [28] | 2 | yes | \(300\times 300\) | \(1/25\), \(1/2\) |
| wildlife [2] | 6 | yes | \(224\times 224\) | \(1/6\), \(1/2\) |
| PCB defect [29] | 6 | yes | \(640\times 640\) | \(1/30\), \(1/15\) |
| ship [30] | 6 | yes | \(416\times 416\) | \(1/20\), \(1/2\) |
| mammals [31] | 6 | yes | \(2\mathrm{k}\times 2\mathrm{k}\) | \(1/30\), \(1/20\) |
| driving [3] | 7 | yes | \(608\times 608\) | \(1/32\), \(1/2\) |

Table 1: Examples of data attributes for object detection applications with attributes: number of classes (\(N_{Cl}\)), color (\(N_{Co}\)), input resolution (\(R_{E}\)), and scale range (\(S_{C}\)) as a fraction of image size.
methodology toward this goal. Practical, low-complexity applications as in Table 1 can benefit from our investigation and method.
## 3 Data Attributes and Metrics
This work is centered on the hypothesis that the easier the input data is to classify, the more computationally efficient the model can be at a fixed or chosen lower accuracy threshold. In this section, we describe each of the data attributes and their relationships to the hypothesis. We define metrics to obtain the dependent variables. And we describe procedures to adjust the independent variables to obtain metrics that guide the design of an efficient model.
We first introduce the term, Ease of Classification \(EoC\) and hypothesize the following relationship exists with the data-side attributes,
\[\mathrm{EoC}\leftarrow(S_{1},\frac{1}{S_{2}})\leftarrow(\frac{1}{N_{Cl}}, \frac{1}{N_{Co}},\frac{1}{R_{E}},\frac{1}{S_{C}}). \tag{1}\]
The symbol (\(\leftarrow\)) is used to describe the direct relationships in which the left expressions are related to the right. \(EoC\) increases with intra-class similarity \(S_{1}\) and decreases with inter-class similarity \(S_{2}\). The dependent variable \(S_{1}\) is related to the reciprocals of the independent variables, number of classes \(N_{Cl}\), number of color channels \(N_{Co}\), image resolution \(R_{E}\), and object scale \(S_{C}\). The dependent variable \(S_{2}\) is directly related to these independent variables. The model designer follows an approach to adjust these independent variables to obtain similarity measurements that achieve high \(EoC\). Note that we will sometimes simplify \(N_{Cl}\) to \(CL\) for readability in figures.
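As an illustration of how \(S_{1}\) and \(S_{2}\) can be computed in practice, the sketch below measures mean pairwise cosine similarity of learned embeddings within and across classes. It is an illustrative stand-in only; our actual metric definitions appear in Sections 3.6 and 3.7.

```python
import numpy as np

def class_similarities(emb: np.ndarray, labels: np.ndarray):
    """Mean pairwise cosine similarity within classes (S1) and across
    classes (S2), from an (n, d) array of embeddings and n labels."""
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = z @ z.T                              # all pairwise cosines
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    s1 = sim[same & off_diag].mean()           # intra-class similarity
    s2 = sim[~same].mean()                     # inter-class similarity
    return s1, s2
```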
In Section 5 we perform experiments on a range of values for each attribute to understand how they affect model size and accuracy. However, these experiments cannot be done independently for each attribute because some have dependencies on others. We call these _interdependencies_ because they are 2-way. We discuss interdependencies below for two groups, {\(S_{C}\), \(R_{E}\)} and {\(S_{1}\), \(S_{2}\), \(N_{Cl}\)}.
### Number of Classes, \(N_{Cl}\)
The \(N_{Cl}\) attribute is the number of classes being classified in the dataset. Experimental results with different numbers of classes are shown in Section 5.2. In Section 5.6 we present results of changing the number of classes for a robot path planning application.
### Object Colors, \(N_{Co}\)
The \(N_{Co}\) attribute is the number of color axes, either 1 for grayscale or 3 for tristimulus color models such as RGB (red, green, blue). When the data has a color format, the first layer of the neural model has 3 channels. For grayscale data input, the model has 1 input channel. In Section 5.3, we show the efficiency gain for 1 versus 3 input channels.
### Image Resolution, \(R_{E}\)
Image resolution, measured in pixels, has the most direct relationship to model size and computation. Increasing the image size by a multiple \(m\) (rows \(I_{r}\) to \(m\times I_{r}\) and columns \(I_{c}\) to \(m\times I_{c}\)) increases the computation at least one-to-one with the number of pixels. In Figure 1, MobileNet computation increases proportionally to image size. The lower sized EfficientNet-B0 and -B1 models also increase proportionally, but rise faster with larger models B2, B3, and B4.
It is not the case that higher resolution always yields better accuracy. Accuracy usually plateaus, and that plateau differs across the classes involved. The objective is to choose the minimum image resolution below which accuracy drops unacceptably. Note that resolution and scale are dependent attributes, and neither can be adjusted without considering the effect on the other. Experimental results with different resolutions are shown in Section 5.5.
### Object Scale, \(S_{C}\)
For a CNN, the image area, or receptive field, of a \(3\times 3\) filter kernel expands as the following sequence for layers \(l=1,2,3,\ldots,L\),
\[3\times 3,7\times 7,15\times 15,\ldots,(2^{L+1}-1)\times(2^{L+1}-1). \tag{2}\]
For object detection, this means that, if the maximum object size in an image is, for example, \(15\times 15\), the network needs at least 3 layers to yield features that convolve with (or overlap) the full size of these objects. In practice, from the
sequence in equation 2 the minimum number of layers can be found from the maximum sidelength \(b_{max}\) of bounding boxes of objects over all image instances,
\[L\geq\lceil\log_{2}(b_{max}+1)\rceil-1. \tag{3}\]
where \(\lceil x\rceil\) is the ceiling operator, which rounds \(x\) up to the nearest integer. For example, if \(b_{max}\) is 250, the minimum number of layers needed is \(\lceil 7.966\rceil-1=7\).
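As an illustration, here is a minimal sketch of this computation (the helper name is illustrative):

```python
import math

def min_layers(b_max: int) -> int:
    """Equation 3: smallest L such that the receptive field
    (2^(L+1) - 1) of stacked 3x3 kernels covers side length b_max."""
    return math.ceil(math.log2(b_max + 1)) - 1

print(min_layers(250))  # 7, matching the example above
```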
In practice, the maximum object size is approximated as the longer sidelength of the bounding box of the object. In terms of model size and minimizing computation, by measuring the bounding box size of objects in a dataset, one can discover the minimum number of layers needed for full-object feature extraction. Furthermore, using filter kernels that are larger than the maximum object size increases extraction of inter-object features. When generalized to a sequence of rectangular filters, these tend toward Gaussian filters with increasing convolution layers, so the filter edge versus middle magnitude lowers as well.
To decouple scale from resolution, we define the scale of an object as a fraction of the longer image dimension. With this definition, one can measure the scale of an object in an image without knowing the resolution. The _size_ of the object is measured in pixels; it is the product of scale times the longer dimension of the resolution. Figure 2 shows the range of scales for objects in the COCO dataset. Although 50% of instances account for \(\leq 10\%\) of image size, still the sizes range from very small to full image size. Conversely, in many applications, scale range is known and much more contained as for some applications in Table 1. Experimental results with different scales are shown in Section 5.5.
### Resolution and Scale Interdependency
The interdependence between the attributes {\(S_{C}\), \(R_{E}\) } is common to any image processing application, and can be described by two cases. For case 1, if the scale of objects within a class in an image is large enough to reduce the image size and still recognize the objects with high accuracy, then there is a resulting benefit of lower computation. However, reducing the image size also reduces resolution, and if there is another class that depends upon a high spatial frequency
Figure 1: As the input image size is increased, the computation for the smaller MobileNet models increase at a similar rate. For the larger EfficientNet models (only up to EfficientNet-B4 are shown for clarity of display), the computation increases at a higher rate.
Figure 2: Histogram of bounding box sizes in COCO training dataset highly skewed at small sizes \(<0.2\).
feature such as texture, then reducing the image would reduce overall accuracy. For case 2, if resolution of the highest frequency features of a class is more than adequate to support image reduction, then computational efficiency can be gained by reducing the image size. However, because that reduction also reduces scale, classes that are differentiated by scale will become more similar, and this situation may cause lower accuracy. We can write these relationships as follows,
\[(S_{C}\;\propto\;R_{E})\;\rightarrow\;\frac{1}{\mathrm{EoC}}, \tag{4}\]
where the expression within parentheses is commutative (a change in \(S_{C}\) will cause a proportional change in \(R_{E}\), and vice versa), and these are inversely related to \(EoC\).
### Intra-Class Similarity, \(S_{1}\)
Intra-class similarity is a measure of visual similarity between members of the same class as measured with vectors in the embedding space of a deep neural network. It is described by the average and variance of pairwise similarities of instances in the class,
\[S_{1}(C_{1})=\frac{1}{N}\sum_{i,j\in C_{1}}\cos(\mathbf{Z}_{i},\mathbf{Z}_{j}), \tag{5}\]
\[\sigma_{S1}^{2}(C_{1})=\frac{1}{N}\sum_{i,j\in C_{1}}(S_{ij}-S_{1})^{2}, \tag{6}\]
where \(C_{1}\) is a class containing a set of instances, \(i\) and \(j\) are indices in set \(C_{1}\), \(i\neq j\), and \(N\) is the total number of similarity pairs \(S_{ij}\) of two different instances in the class. \(\mathbf{Z}\) is the latent vector in the embedding space from a neural network trained by metric learning on instances of the class as well as other classes. (That is, it is the same model trained on the same instances as for inter-class similarity.) This metric is adapted from [33]. We show the use of \(S_{1}\) and \(\sigma_{S1}^{2}\) in Section 4.1.
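A minimal sketch of equations 5 and 6 follows, assuming the per-class embeddings have already been produced by the metric-learning model; the helper name is illustrative:

```python
import numpy as np

def intra_class_similarity(Z: np.ndarray):
    """S_1 and sigma^2_S1 over all instance pairs of one class.
    Z: (m, d) array of latent vectors for the m instances."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize rows
    sims = Zn @ Zn.T                                   # pairwise cosine similarities
    iu = np.triu_indices(len(Z), k=1)                  # each pair i != j once
    pair_sims = sims[iu]
    return pair_sims.mean(), pair_sims.var()
```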
### Inter-Class Similarity, \(S_{2}\)
Inter-class similarity is a measure of visual similarity between classes as determined by the closeness of instances of two classes in the embedding space of a deep neural network. For 2 classes, it is defined as the average of pairwise similarities of instances between classes,
\[S_{2}(C_{1},C_{2})=\frac{1}{N}\sum_{i\in C_{1},j\in C_{2}}\cos(\mathbf{Z}_{i},\mathbf{Z}_{j}), \tag{7}\]
where \(C_{1}\) and \(C_{2}\) are instance sets of two different classes, \(i\) and \(j\) are indices in sets \(C_{1}\) and \(C_{2}\) respectively, \(N\) is the total number of pairs of two instances in two different classes, and \(\mathbf{Z}\) is the latent vector in the embedding space from a neural network trained by metric learning on instances that include both classes as well as other classes if there are more than 2.
For an application involving more than two classes, we choose the inter-class similarity measure to be the maximum of inter-class similarity measures over all class pairs,
\[\hat{S_{2}}(\{C_{K}\})=\max\{S_{2}(C_{m},C_{n})\}, \tag{8}\]
where \(\{C_{K}\}\) is the set of all classes for \(0\leq k<K\), and \(\{(C_{m},C_{n})\}\) is all class pairs for \(0\leq m,n<K\), \(m\neq n\). We choose the maximum of similarity results of class pairs because maximum similarity represents the worst case -- most difficult to distinguish -- between two classes. Alternatively, one could calculate the average of inter-class similarities for each pair, however the operation of averaging will hide the effect of one pair of very similar classes among other dissimilar classes, and this effect is worse for larger \(N_{Cl}\). We show the use of \(\hat{S_{2}}\) in Section 4.2.
We also use a measure that is the normalized difference between the maximum \(\hat{S_{2}}\) and the average \(\bar{S_{2}}\) of the inter-class similarities over all class pairs,
\[\Delta S_{2}(\{C_{K}\})=\frac{\hat{S_{2}}-\bar{S_{2}}}{\bar{S_{2}}}. \tag{9}\]
A larger \(\Delta S_{2}\) indicates a higher value of worst case \(\hat{S_{2}}\), so we seek low \(\Delta S_{2}\) in the methods described in Sections 4.1 and 4.3.
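The following sketch computes equations 7-9 from per-class embedding arrays; all names are illustrative:

```python
import numpy as np

def s2(Z1: np.ndarray, Z2: np.ndarray) -> float:
    """Equation 7: mean cosine similarity between two classes' instances."""
    Z1n = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2n = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return float((Z1n @ Z2n.T).mean())

def s2_stats(class_embeddings):
    """Equations 8 and 9 over a list of per-class (m_k, d) arrays."""
    vals = [s2(A, B) for i, A in enumerate(class_embeddings)
            for j, B in enumerate(class_embeddings) if i < j]
    s2_max = max(vals)                      # worst case over class pairs
    s2_avg = sum(vals) / len(vals)
    return s2_max, (s2_max - s2_avg) / s2_avg
```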
### Intra- and Inter-Class Interdependency
There is a strong interdependence between \(S_{1}\) and \(S_{2}\) with a secondary dependence upon \(N_{Cl}\), as we will describe. With other factors fixed, smaller intra-class similarity increases inter-class similarity because more heterogeneous classes with wider attribute ranges are being compared, thus reducing \(EoC\). As \(N_{Cl}\) is increased, this effect is exacerbated because there is higher probability of low \(S_{1}\) and high \(S_{2}\) pairs, and (similarly) because we use worst-case maximum in equation 8. We can write these dependent relationships as follows,
\[(S_{1}\propto 1/S_{2})\,\rightarrow\,\mathrm{EoC},\quad\mathrm{for}\ N_{Cl}\geq 2, \tag{10}\]
where the expression within parentheses is commutative (either \(S_{1}\) or \(S_{2}\) can cause an inverse change in the other), and the relationship becomes stronger as \(N_{Cl}\) increases.
## 4 Method
The general approach toward finding the most efficient model is to select ranges of the independent attribute values [\(N_{Cl}\), \(N_{Co}\), \(R_{E}\), \(S_{C}\)], calculate the similarity measures {\(S_{1}\), \(S_{2}\)} for selected values in the range, and choose those independent attribute values that minimize \(S_{2}\) and maximize \(S_{1}\). Step-by-step procedures for selecting each of the independent attributes are described below. These procedures should be followed in the order of the sections below.
### Selection of \(N_{Cl}\)
If the application permits adjustment of the number of classes, then the following procedure can help find class groupings and an associated \(N_{Cl}\) that supports a more efficient model. Initialize \(\Delta S_{2}=\infty\), and follow these steps,
1. Choose class groupings.
2. Calculate \(\Delta S_{2}\) from equation 9. If this is less than the current smallest \(\Delta S_{2}\) then this grouping is the most efficient so far.
3. If \(\Delta S_{2}\) is low, one can choose to exit, or continue.
4. If one decides to continue, calculate \(S_{1}\) from equation 5 and \(\sigma_{S1}^{2}\) from equation 6 for each class. The value of \(S_{1}\) helps explain why the current \(S_{2}\) is good or bad (low or high) and guides the choice of the next groupings. In general, class groupings with high \(S_{1}\) and low \(\sigma_{S1}^{2}\) will yield higher accuracy. Repeat these steps.
We want to stop when inter-class similarity is low, indicated by a low value of \(\Delta S_{2}\). However there is subjectivity in this step due to the manual choice of groupings and because different applications will have different levels of intra-class homogeneity and inter-class distinguishability. In practice, the procedure is usually run for a few iterations to understand relative \(\Delta S_{2}\) values for different groupings, and the lowest is chosen.
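A hypothetical driver for this loop is sketched below; `groupings` is a list of candidate class groupings and `delta_s2` is a callable returning equation 9 for a grouping (for example, built on the `s2_stats` sketch of Section 3.7):

```python
def select_grouping(groupings, delta_s2):
    """Return the grouping with the lowest Delta S_2 among the candidates."""
    best, best_delta = None, float("inf")   # initialize Delta S_2 = infinity
    for g in groupings:
        d = delta_s2(g)
        if d < best_delta:                  # most efficient grouping so far
            best, best_delta = g, d
    return best, best_delta
```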
### Selection of \(N_{Co}\)
The application can gain computational advantage if _all_ classes can be reduced to grayscale; if not all classes can be reduced, then the model must handle color and there is no computational advantage. Following are the steps to choose grayscale or color,
1. For all classes in color and grayscale, calculate \(\hat{S_{2}}\).
2. If \(\hat{S_{2}}\) for grayscale is less than or equal to \(\hat{S_{2}}\) for color, choose a grayscale model and grayscale for all instances of all classes.
If the procedure above does not result in the choice of grayscale, there is a second option. This requires more testing and does not directly yield computation reduction, but may improve accuracy.
1. For each class, calculate \(\hat{S_{2}}\) against every other class for these four combinations of the \((C_{1},C_{2})\) pairs: (grayscale, grayscale), (grayscale, color), (color, grayscale), and (color, color).
2. For each class \(C_{1}\) whose \(\hat{S_{2}}\) for (grayscale, grayscale) and (grayscale, color) is smaller than for (color, grayscale) and (color, color), choose to use grayscale for the \(C_{1}\) class.
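One hedged reading of this per-class rule as code, comparing worst cases over the four mode pairs; `emb[c][mode]` holds embeddings of class `c` under a given input mode and `s2` implements equation 7 (both names are illustrative):

```python
def per_class_grayscale(classes, emb, s2):
    """Return the classes for which grayscale lowers the worst-case S_2."""
    use_gray = []
    for c1 in classes:
        others = [c2 for c2 in classes if c2 != c1]
        gray_worst = max(s2(emb[c1]["gray"], emb[c2][m])
                         for c2 in others for m in ("gray", "color"))
        color_worst = max(s2(emb[c1]["color"], emb[c2][m])
                          for c2 in others for m in ("gray", "color"))
        if gray_worst < color_worst:        # grayscale is harder to confuse
            use_gray.append(c1)
    return use_gray
```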
### Selection of \(R_{E}\) and \(S_{C}\)
We first adjust the attribute \(R_{E}\) to find the lower bound of resolution with respect to acceptable accuracy. Initialize \(\Delta S_{2}=\infty\), and follow these steps,
* Calculate \(\Delta S_{2}\) from equation 9. If this is less than the current smallest \(\Delta S_{2}\) then this resolution is the most efficient so far.
* If \(\Delta S_{2}\) is low, one can choose to exit, or continue.
* Reduce the resolution by half and repeat these steps.
In practice, a few iterations are run to see when \(\Delta S_{2}\) rises, and the resolution where it is lowest is chosen.
After the resolution has been chosen, the maximum object scale is multiplied by the resolution to find the maximum object size in pixels. Equation 3 is then used to estimate the lower bound on the number of layers needed in the model.
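A minimal sketch composing these two steps, reusing the `min_layers` sketch from Section 3.4 (assumed in scope):

```python
def layers_lower_bound(scale_max: float, resolution: int) -> int:
    """Maximum object scale times resolution gives b_max in pixels;
    equation 3 then bounds the number of layers from below."""
    b_max = round(scale_max * resolution)
    return min_layers(b_max)

print(layers_lower_bound(0.5, 640))  # ceil(log2(321)) - 1 = 8
```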
## 5 Experiments
In this section, we perform experiments on the data attributes to show how their values relate to model efficiency and accuracy. Note that our level of experimentation is not even across all attributes. We believe that the experiments upon number of classes, intra- and inter-class similarities, and resolution are sufficient to support the Ease of Classification relationship of equation 1. For color, we give a quantitative comparison of color versus grayscale computation, and then, because the difference is small, leave further investigation of accuracy and efficiency to cited literature. For scale, one aspect of this attribute is covered by its interdependency in the resolution experiments. However, another aspect, the relationship of scale to model layers (equation 2), we leave to future work. This is because scale is less easily separated from particular applications -- specifically the size range of objects in an application -- than the other attributes. Our plan is to investigate this in a more application-oriented paper in the future.
### Similarity Metric Efficiency
We could discover the effects of data attributes by training and inference testing all combinations of attribute values. For \(N_{CL}\) classes, binary classification of pairs would require \(\binom{N_{CL}}{2}\) training and inference operations. In comparison, for similarity metrics, we need to train once for all combinations of binary classifications. During testing, the similarity model (SM) caches the latent space, thus only indices of each instance need to be paired with other instances to obtain their cosine similarities.
For the CIFAR-10 dataset, for example, there is a one-time task of feeding all test images into SM, which takes 0.72 seconds, and then caching, which takes 0.63 seconds. In contrast, in the conventional pipeline we feed each image into the CNN to obtain its prediction in order to calculate accuracy. We show the runtime in Table 2.
### Number of Classes, \(N_{Cl}\)
It is well known that accuracy is reduced when more classes are involved, since there are more visual features for a CNN to learn in the feature extractor backbone, as well as a more complex decision boundary to learn in the fully connected layers for classification. However, we perform experiments here, first to confirm this relationship with test data, but secondly to gain any extra insight into the relationship between number of classes and accuracy.
We performed three sets of experiments. The first was for object detection using the YOLOv5-nano [22] backbone upon increasing-size class groupings of the COCO dataset [19]. Ten groups with \(N_{Cl}\) of \(\{1,2,3,4,5,10,20,40,60,80\}\) were prepared. For each group, we trained a separate YOLOv5-nano model from scratch. As seen in Figure 3 (left), accuracy decreases with number of classes. An added insight is that the accuracy decrease is steep for very few classes, say 5-10 or fewer, and flattens beyond 10.
The second set of experiments was for image classification on the CIFAR-10 dataset. With many fewer classes in CIFAR-10 [41] than COCO (10 versus 80), we expect to see how the number of classes and accuracy relate for this smaller range. We extracted subsets of classes -- which we call groups -- from CIFAR-10 with \(N_{Cl}\) ranging from 2 to 9.
\begin{table}
\begin{tabular}{l l l} \hline \hline Model & \(t_{train}\) & \(t_{test}\) \\ & (s/epoch) & (s/pair) \\ \hline VGG19 & 0.69 & 4.49 \\ EfficientNet-B0 & 3.13 & 21.82 \\ MobileNetV2 & 2.19 & 15.05 \\
**Similarity Metrics** & 3.31 & **0.76** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of runtime comparison of Similarity Metrics and existing models. s: second, pair: all pairs of instances in two classes.
For example, a group with \(N_{Cl}=4\) might contain airplane, cat, automobile, and ship classes. We trained classifiers from scratch for each group.
Results of the image classification experiments are shown in Figure 3 (middle). The three classifiers used for testing, EfficientNet-B0, VGG19 [6], and MobileNetV2 [42], showed the expected trend of accuracy reduction as \(N_{Cl}\) per group increased. However, the trend was not as monotonic as might be expected. We hypothesized that this might be due to the composition of each group, specifically if the group classes were largely similar or dissimilar. This insight led to the experiments on class similarity in the next section.
The third set of experiments involved reducing model size for different numbers of classes. We prepared 90 class groupings extracted from the COCO minitrain dataset [43]. There are 80 datasets for \(N_{Cl}=1\), each containing a single class from 80 classes. There are 8 datasets for \(N_{Cl}=10\), each combining 10 of the single-class datasets. The final dataset is the original COCO minitrain with \(N_{Cl}=80\).
We scale YOLOv5 layers and channels with the depth and width multiples already used for scaling the family between nano and x-large. Starting with depth and width multiples of 0.33 and 0.25 for YOLOv5-nano, we reduce these in step sizes of 0.04 for depth and 0.03 for width. In this way, we design a monotonically decreasing sequence of sub-YOLO models denoted as SY1 to SY8. We train each model separately for each of the six datasets.
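The stated schedule can be written out as follows (a sketch; the exact multiples used in the experiments may differ by rounding conventions):

```python
depth, width = 0.33, 0.25   # YOLOv5-nano depth and width multiples
sub_yolo = [(round(depth - 0.04 * i, 2), round(width - 0.03 * i, 2))
            for i in range(1, 9)]    # SY1 .. SY8
print(sub_yolo[0], sub_yolo[-1])     # (0.29, 0.22) and (0.01, 0.01)
```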
Results of sub-YOLO detection are shown in Figure 3 (right). There are three lines where each point of \([email protected]\) is averaged across all models in all datasets for a specific \(N_{Cl}\). An overall trend is observed that fewer-class models (upper-left blue star) achieve higher efficiency than many-class models. Another finding of interest here is that, whereas the accuracies for 80 classes drop steadily from the YOLOv5-nano size, accuracy for 10 classes is fairly flat down to SY2, which corresponds to a 36% computation reduction, and for 1 class down to SY4, which corresponds to a 72% computation reduction.
### Color, \(N_{Co}\)
Because reduction from color to grayscale only affects the number of multiplies in the first layer, the efficiency gain depends upon the model size. For a large model such as VGG-19, percentage efficiency gain will be much smaller than for a small model such as EfficientNet-B0, as shown in Table 3. However, even for small networks, the effect of reducing from color to grayscale processing is small relative to effects of other attributes, so we perform experiments on these more impactful attributes. For further investigation of color computation, refer to previous work on this topic [44].
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline Model & \multicolumn{2}{c|}{Layer-1} & \multicolumn{2}{c|}{All Layers} & Ratio \\ \cline{2-5} & color & gray & color & gray & [\%] \\ \hline VGG-19 & 1835.01 & 655.36 & 399474 & 398295 & 99.7 \\ EN-B0 & 884.7 & 294.9 & 31431 & 30841 & 98.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Difference between color and grayscale computation [kFLOPS] for the large VGG-19 classifier and much smaller EfficientNet-B0 classifier. Ratio is grayscale-to-color computation for all layers.
Figure 3: Overall relationship of performance and \(N_{CL}\). (Left) Object detection accuracy and recall (R) decrease for the YOLOv5-nano when the number of classes is increased from 1 to 80. (Middle) CIFAR-10 Image classification accuracy decreases for the classifiers tested when the number of classes per group is increased from 2 to 10. (Right) Accuracy plot for increasingly smaller models from YOLOv5-nano through 8 sub-YOLO models (SY1-8) and class groupings of 1 (\(N_{CL}\):1), 10 (\(N_{CL}\):10), and 80 (\(N_{CL}\):80).
### Intra- and Inter-Class Similarity, \(S_{1}\), \(S_{2}\)
In equation 1, we hypothesized that accuracy is lower if inter-class similarity is higher. Table 4 shows accuracy and inter-class similarity results for groups of 2 and 4 classes from the CIFAR-10 dataset that we have subjectively designated similar (S) or dissimilar (D). "S4" indicates a similar group of 4 classes, and "D2" indicates a dissimilar group of 2 classes. The results indicate that our subjective labels correspond to our hypothesis, and that our objective measure of inter-class similarity in equation 7 both corresponds to the subjective labels and is consistent with the hypothesis.
### Resolution and Scale, \(R_{E}\) and \(S_{C}\)
In Table 1, the applications often downsize images to much smaller than the originals. Designers of these applications likely determined that accuracy didn't suffer when images were reduced to the degree that they chose. This is also the case for our experiment of YOLOv5-nano on the COCO dataset in Figure 5 (left). One can see that accuracies drop slowly for image reduction from \(640^{2}\) to \(448^{2}\), and further depending upon the degree of accuracy reduction that can be tolerated. Because scale drops with resolution, this plot includes the effects on accuracy reduction for both. In Figure 5 (right) we see the object detection performance almost flattens when reducing \(R_{E}\) from 4k down to 1080p, and the SY1-3 models' accuracies are very close to nano but with about half the required computation (4.2 GFLOPs for nano versus 2.2 GFLOPs for SY3).
We also verify by experiment that \(R_{E}\) has a direct impact on inference runtime per image [45, 46]. Experiments with YOLOv5-nano on the COCO validation set are conducted, and results are shown in Table 6. One observation is that runtime almost doubles from 7.4 ms for \(1280^{2}\) to 13.4 ms for \(2560^{2}\) on a GPU, while it rises dramatically from 64.1 ms for \(640^{2}\) to 4396.7 ms for \(1280^{2}\) on a CPU.
### Robot Path Planning Application
We briefly mention results of methods from this paper used for efficient model design for our own application. The application is to detect times, locations, and densities of human activity on a factory floor for robot path planning. Initial labelling comprised 5 object classes: 2 of humans engaged in different activities, and 3 of different types of robots. Similarity metrics guided a reduction to 2 classes enabling a smaller model with computation reduction of 66% and accuracy gain of 3.5%. See [48] for a more complete description of this application.
## 6 Conclusion
We conclude that for applications requiring lightweight CNNs, data attributes can be examined and adjusted to obtain more efficient models. We examined four independent data-side variables, and results from our experiments indicate the following ranking upon computation reduction. Resolution has the greatest effect on computation. Most practitioners already perform resolution reduction, but many simply to fit the model of choice. We show that, for small (few-class) applications the model size can be reduced (to sub-YOLO models) to achieve more efficiency with similar accuracy, and this can be done efficiently using similarity metrics. Number of classes is second in rank. This is dependent on the application, but our methods using similarity metrics enable the application designer to compare different class groupings efficiently. We showed that the choice of color or grayscale had a relatively small (1-2%) effect on computation for small models. We don't rank scale because scale was only tested with interdependency to resolution.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \(R_{E}\) & \(80^{2}\) & \(160^{2}\) & \(320^{2}\) & \(640^{2}\) & \(1280^{2}\) & \(2560^{2}\) & \(5120^{2}\) \\ \hline GPU & 6.4 & 6.2 & 6.3 & 6.4 & 7.4 & 13.4 & 48.7 \\ CPU & 14.0 & 24.7 & 52.8 & 64.1 & 4396.7 & 4728.3 & 5456.9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: YOLOv5 runtimes [ms/img] for resolutions of the COCO validation set. Each number is averaged for 5 runs.
Figure 5: (Left) Effect of resolution on accuracy for YOLOv5-nano object detection on the COCO dataset, and (Right) effect on accuracy of YOLOv5 and sub-YOLO models for 80 classes of the PANDA 4k dataset [47]. |
2303.07350 | Hypergeometric identities related to Ruijsenaars systems | We present and prove hypergeometric identities which play a crucial role in
the theory of Baxter operators in the Ruijsenaars model. | N. Belousov, S. Derkachov, S. Kharchev, S. Khoroshkin | 2023-03-11T11:30:07Z | http://arxiv.org/abs/2303.07350v3 | ###### Abstract
We present a proof of hypergeometric identities which play a crucial role in the theory of Baxter operators in the Ruijsenaars model.
**Hypergeometric identities related to Ruijsenaars system**
**N. Belousov\({}^{{\dagger}\times}\), S. Derkachov\({}^{{\dagger}\times}\), S. Kharchev\({}^{\star\star}\), S. Khoroshkin\({}^{\circ\ast}\)**
\({}^{{\dagger}}\)_Steklov Mathematical Institute, Fontanka 27, St. Petersburg, 191023, Russia;_
\({}^{\times}\)_National Research University Higher School of Economics, Soyuza Pechatnikov 16, St. Petersburg, 190121, Russia;_
\({}^{\bullet}\)_Institute for Theoretical and Experimental Physics, B. Cheremushkinskaya 25, Moscow, 117259, Russia;_
\({}^{\circ}\)_National Research University Higher School of Economics, Myasnitskaya 20, Moscow, 101000, Russia;_
\({}^{\ast}\)_Institute for Information Transmission Problems RAS (Kharkevich Institute), Bolshoy Karetny per. 19, Moscow, 127994, Russia_
## 1 Introduction
### Statement
In the paper [R0] S. Ruijsenaars showed that the crucial properties of his _kernel function_, which serves for the solution of the Ruijsenaars-Sutherland model [HR1, HR2], are given by a certain functional identity (now known as the "kernel function identity") found in [KN] by Yu. Kajihara and M. Noumi. It states that for any odd function \(s(z)\) of a complex variable \(z\), satisfying the Riemann relation
\[\begin{split} s(x+y)s(x-y)s(u+v)s(u-v)=& s(x+u)s(x-u )s(y+v)s(y-v)-\\ & s(x+v)s(x-v)s(y+u)s(y-u)\end{split} \tag{1.1}\]
and any complex parameter \(\alpha\) the following identity holds
\[\begin{split}&\sum_{\begin{subarray}{c}I_{r}\subset[n]\\ |I_{r}|=r\end{subarray}}\prod_{i\in I_{r}}\left(\prod_{j\in[n]\setminus I_{r}} \frac{s(z_{i}-z_{j}-\alpha)}{s(z_{i}-z_{j})}\prod_{a=1}^{n}\frac{s(z_{i}-y_{a} +\alpha)}{s(z_{i}-y_{a})}\right)=\\ &\sum_{\begin{subarray}{c}A_{r}\subset[n]\\ |A_{r}|=r\end{subarray}}\prod_{a\in A_{r}}\left(\prod_{b\in[n]\setminus A_{r} }\frac{s(y_{a}-y_{b}+\alpha)}{s(y_{a}-y_{b})}\prod_{i=1}^{n}\frac{s(z_{i}-y_{a }+\alpha)}{s(z_{i}-y_{a})}\right)\end{split} \tag{1.2}\]
Here \([n]\) denotes the set
\[[n]=\left\{1,\ldots,n\right\}.\]
In [BDKK], studying the Baxter operators in the hyperbolic Ruijsenaars system, we found that the fundamental properties of these Baxter operators are governed by other functional identities of hypergeometric type, generalizing (1.2) in the rational (\(s(z)=z\)) and
trigonometric cases (\(s(z)=\sin\beta z\)). Following the terminology of [KN], one can regard them as certain 'duality transformations for multiple hypergeometric series'. In the rational case they read as
\[\begin{split}&\sum_{|\boldsymbol{k}|=K}\prod_{i=1}^{n}\frac{(1+ \alpha)_{k_{i}}}{k_{i}!}\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{(x_{i}-x_{j}-k_{j}-\alpha)_{k_{i}}}{(x_{i}-x_ {j}-k_{j})_{k_{i}}}\prod_{a,j=1}^{n}\frac{(x_{j}-y_{a}+\alpha)_{k_{j}}}{(x_{j }-y_{a})_{k_{j}}}=\\ &\sum_{|\boldsymbol{k}|=K}\prod_{a=1}^{n}\frac{(1+\alpha)_{k_{a}} }{k_{a}!}\prod_{\begin{subarray}{c}a,b=1\\ a\neq b\end{subarray}}^{n}\frac{(y_{a}-y_{b}-k_{a}-\alpha)_{k_{b}}}{(y_{a}-y_ {b}-k_{a})_{k_{b}}}\prod_{j,a=1}^{n}\frac{(x_{j}-y_{a}+\alpha)_{k_{a}}}{(x_{j} -y_{a})_{k_{a}}},\end{split} \tag{1.3}\]
Here the sum is taken over all \(n\)-tuples
\[\boldsymbol{k}=(k_{1},\ldots k_{n}),\qquad k_{i}\geq 0,\qquad k_{1}+\ldots+k_{n}=K \tag{1.4}\]
of non-negative integers such that their sum equals \(K\), and
\[(x)_{n}=x(x+1)\cdot\ldots\cdot(x+n-1)\]
is the Pochhammer symbol.
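As a quick sanity check (not part of the proof), the identity (1.3) can be tested numerically for small \(n\) and \(K\); in the following Python sketch all helper names are ours and the parameter values are generic choices away from the poles.

```python
from itertools import product
from math import prod

def poch(x, n):
    """Pochhammer symbol (x)_n = x(x+1)...(x+n-1)."""
    return prod(x + i for i in range(n))

def tuples(K, n):
    """All n-tuples of non-negative integers with sum K."""
    return (k for k in product(range(K + 1), repeat=n) if sum(k) == K)

def lhs(x, y, a, K):
    """Left-hand side of (1.3)."""
    n = len(x)
    return sum(prod(poch(1 + a, k[i]) / poch(1, k[i]) for i in range(n))
               * prod(poch(x[i] - x[j] - k[j] - a, k[i]) / poch(x[i] - x[j] - k[j], k[i])
                      for i in range(n) for j in range(n) if i != j)
               * prod(poch(x[j] - y[b] + a, k[j]) / poch(x[j] - y[b], k[j])
                      for b in range(n) for j in range(n))
               for k in tuples(K, n))

def rhs(x, y, a, K):
    """Right-hand side of (1.3)."""
    n = len(x)
    return sum(prod(poch(1 + a, k[b]) / poch(1, k[b]) for b in range(n))
               * prod(poch(y[b] - y[c] - k[b] - a, k[c]) / poch(y[b] - y[c] - k[b], k[c])
                      for b in range(n) for c in range(n) if b != c)
               * prod(poch(x[j] - y[b] + a, k[b]) / poch(x[j] - y[b], k[b])
                      for b in range(n) for j in range(n))
               for k in tuples(K, n))

x, y, a = [0.3, 1.7], [0.9, 2.4], 0.51   # generic values, away from poles
print(abs(lhs(x, y, a, 2) - rhs(x, y, a, 2)))   # ~ 0 up to rounding
```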
The trigonometric version we write down with a help of \(q\)-analogs \((z;q)_{n}\) of the Pochhammer symbol,
\[(z;q)_{n}=(1-z)(1-qz)\cdots(1-q^{n-1}z). \tag{1.5}\]
Then, using the same notation (1.4) for the summation, we have
\[\begin{split}&\sum_{|\boldsymbol{k}|=K}\prod_{i=1}^{n}\frac{(qt;q)_ {k_{i}}}{(q;q)_{k_{i}}}\times\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{(t^{-1}q^{-k_{j}}u_{i}/u_{j};q)_{k_{i}}}{(q^{ -k_{j}}u_{i}/u_{j};q)_{k_{i}}}\times\prod_{a,j=1}^{n}\frac{(tu_{j}/v_{a};q)_{k _{j}}}{(u_{j}/v_{a};q)_{k_{j}}}=\\ &\sum_{|\boldsymbol{k}|=K}\prod_{a=1}^{n}\frac{(qt;q)_{k_{a}}}{(q ;q)_{k_{a}}}\times\prod_{\begin{subarray}{c}a,b=1\\ a\neq b\end{subarray}}^{n}\frac{(t^{-1}q^{-k_{a}}v_{a}/v_{b};q)_{k_{b}}}{(q^{ -k_{a}}v_{a}/v_{b};q)_{k_{b}}}\times\prod_{a,j=1}^{n}\frac{(tu_{j}/v_{a};q)_{k_ {a}}}{(u_{j}/v_{a};q)_{k_{a}}}\end{split} \tag{1.6}\]
A sketch of the proof of (1.6) is given in [BDKK]. In this note we present the complete proof with all necessary technical details.
### Other sources and proofs
After the first version of this note came out, O. Warnaar and H. Rosengren informed us that the identity (1.6) in its elliptic version (3.6) has already appeared in the papers [LSW, Corollary 4.3], [HLNR, eq. (6.7)]. The proofs in these papers are different from ours. They are derived from the original Ruijsenaars identity [R1] (or from a related one) on a single tuple of variables, responsible for the commutativity of Ruijsenaars-Macdonald
operators,
\[\begin{split}&\sum_{\begin{subarray}{c}I_{r}\subset[n]\\ |I_{r}|=r\end{subarray}}\prod_{i\in I_{r},\,j\not\in I_{r}}\frac{s(x_{i}-x_{j}- \alpha)s(x_{i}-x_{j}+\alpha-\beta)}{s(x_{i}-x_{j})s(x_{i}-x_{j}-\beta)}=\\ &\sum_{\begin{subarray}{c}I_{r}\subset[n]\\ |I_{r}|=r\end{subarray}}\prod_{i\in I_{r},\,j\not\in I_{r}}\frac{s(-x_{i}+x_{j} -\alpha)s(-x_{i}+x_{j}+\alpha-\beta)}{s(-x_{i}+x_{j})s(-x_{i}+x_{j}-\beta)} \end{split} \tag{1.7}\]
using the multiple principal specialization technique, see [KN, K].
Since the proofs of the identity are quite different we leave our note in its original form.
## 2 Proof
The proofs of (1.3) and (1.6) are similar. We prove (1.6). For the proof it is more convenient to rewrite the identity (1.6) in terms of symmetric \(q\)-analogs of the Pochhammer symbols,
\[[z;q]_{n}=(z^{1/2}-z^{-1/2})(q^{1/2}z^{1/2}-q^{-1/2}z^{-1/2})\cdots(q^{(n-1)/2 }z^{1/2}-q^{(-n+1)/2}z^{-1/2}) \tag{2.1}\]
Then (1.6) becomes
\[\begin{split}&\sum_{|\mathbf{k}|=K}\prod_{i=1}^{n}\frac{[qt;q]_{k_{i}}}{[q; q]_{k_{i}}}\times\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{[t^{-1}q^{-k_{j}}u_{i}/u_{j};q]_{k_{i}}}{[q^{- k_{j}}u_{i}/u_{j};q]_{k_{i}}}\times\prod_{a,j=1}^{n}\frac{[tu_{j}/v_{a};q]_{k_{j}}}{[u_ {j}/v_{a};q]_{k_{j}}}=\\ &\sum_{|\mathbf{k}|=K}\prod_{a=1}^{n}\frac{[qt;q]_{k_{a}}}{[q;q]_{k_{ a}}}\times\prod_{\begin{subarray}{c}a,b=1\\ a\neq b\end{subarray}}^{n}\frac{[t^{-1}q^{-k_{a}}v_{a}/v_{b};q]_{k_{b}}}{[q^{- k_{a}}v_{a}/v_{b};q]_{k_{b}}}\times\prod_{a,j=1}^{n}\frac{[tu_{j}/v_{a};q]_{k_{a}}}{[u_ {j}/v_{a};q]_{k_{a}}}\end{split} \tag{2.2}\]
The proof uses standard arguments of the theory of functions of a complex variable: we check in a rather tricky way that the difference of the LHS and the RHS has zero residues at all possible simple poles, and hence no poles at all; it is symmetric in the variables \(u_{i}\) and in the variables \(v_{j}\). Then an asymptotic analysis shows that this difference is actually equal to zero.
The crucial step -- the calculation of the residues of both sides of the equality -- divides into two parts. First we show that each side is regular at the diagonals \(u_{i}=q^{p}u_{j}\) and \(v_{a}=q^{s}v_{b}\) between the variables of the same group, see Lemma 1. In this calculation we actually observe the cancellation of terms grouped in corresponding pairs. Then we show that the residues at the mixed diagonals \(u_{i}=q^{p}v_{a}\) vanish. This is done by induction, using the nontrivial relation between such residues stated in Lemma 2.
During the calculations we use the following properties of symmetric \(q\)-Pochhammer symbols:
\[[q^{p}u;q]_{m}\times[u]_{n} =[q^{p}u]_{n-p}\times[u]_{m+p}, \tag{2.3}\] \[[qu;q]_{m}\times[q^{-(m+p)}u^{-1};q]_{n} =(-1)^{p}[qu;q]_{m+p}\times[q^{-m}u^{-1};q]_{n-p} \tag{2.4}\]
which are valid for any \(u\) and integer \(m,n,p\). Here we assume that
\[[z;q]_{-n}=(q^{-1/2}z^{1/2}-q^{1/2}z^{-1/2})^{-1}\cdots(q^{-n/2}z^{1/2}-q^{n/2}z^{-1/2})^{-1},\qquad n>0. \tag{2.5}\]
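As a numerical sanity check, the properties (2.3) and (2.4) above can be tested directly with these conventions; in the sketch the values of \(u\) and \(q\) are arbitrary generic choices.

```python
def sqp(z, q, n):
    """Symmetric q-Pochhammer [z;q]_n of (2.1), with (2.5) for n < 0."""
    f = lambda j: q ** (j / 2) * z ** 0.5 - q ** (-j / 2) * z ** (-0.5)
    out = 1.0
    if n >= 0:
        for j in range(n):
            out *= f(j)
    else:
        for j in range(1, -n + 1):
            out /= f(-j)
    return out

u, q = 1.7, 0.6
for m, n, p in [(1, 0, 1), (2, 3, -1), (3, 2, 2)]:
    # property (2.3)
    print(abs(sqp(q**p * u, q, m) * sqp(u, q, n)
              - sqp(q**p * u, q, n - p) * sqp(u, q, m + p)))
    # property (2.4)
    print(abs(sqp(q * u, q, m) * sqp(q**(-(m + p)) / u, q, n)
              - (-1)**p * sqp(q * u, q, m + p) * sqp(q**(-m) / u, q, n - p)))
# all printed differences are ~ 0 up to rounding
```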
It is not difficult to verify that all the poles in (2.2) are simple. Consider the LHS of (2.2) as a function of \(u_{1}\) and calculate the residue of this function at the point
\[u_{1}=u_{2}q^{p},\qquad p\in\mathbb{Z} \tag{2.6}\]
For each \(\boldsymbol{k}\) with \(\sum_{j=1}^{n}k_{j}=K\) denote by \(\mathrm{U}_{\boldsymbol{k}}=\mathrm{U}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v})\) the corresponding summand of the LHS of (2.2), and by \(\mathrm{V}_{\boldsymbol{k}}=\mathrm{V}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v})\) the corresponding summand of the RHS of (2.2),
\[\mathrm{U}_{\boldsymbol{k}} =\prod_{i=1}^{n}\frac{[qt;q]_{k_{i}}}{[q;q]_{k_{i}}}\times\prod_ {\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{[t^{-1}q^{-k_{j}}u_{i}/u_{j};q]_{k_{i}}}{[q^{- k_{j}}u_{i}/u_{j};q]_{k_{i}}}\times\prod_{a,j=1}^{n}\frac{[tu_{j}/v_{a};q]_{k_{j}}}{[u_{j}/v_{ a};q]_{k_{j}}},\] \[\mathrm{V}_{\boldsymbol{k}} =\prod_{a=1}^{n}\frac{[qt;q]_{k_{a}}}{[q;q]_{k_{a}}}\times\prod_ {\begin{subarray}{c}a,b=1\\ a\neq b\end{subarray}}^{n}\frac{[t^{-1}q^{-k_{a}}v_{a}/v_{b};q]_{k_{b}}}{[q^{- k_{a}}v_{a}/v_{b};q]_{k_{b}}}\times\prod_{a,j=1}^{n}\frac{[tu_{j}/v_{a};q]_{k_{a}}}{[u_{j}/v_{ a};q]_{k_{a}}}\]
The summands \(\mathrm{U}_{\boldsymbol{k}}\) which contribute to the residue at the point (2.6) are divided into two groups. The denominators of the terms \(\mathrm{U}_{\boldsymbol{k}}\) from the group \(\boldsymbol{k}\in I_{p}\) contain the Pochhammer symbol
\[[q^{-k_{2}}u_{1}/u_{2};q]_{k_{1}}\]
which vanishes at the point (2.6). It happens when
\[k_{2}-k_{1}+1\leq p\leq k_{2},\]
so that
\[I_{p}=\{\boldsymbol{k},\,|\boldsymbol{k}|=K:k_{1}\geq k_{2}+1-p,\ k_{2}\geq p\}.\]
The denominators of the terms \(\mathrm{U}_{\boldsymbol{l}}\) in the group \(II_{p}\) contain the Pochhammer symbol
\[[q^{-l_{1}}u_{2}/u_{1};q]_{l_{2}}\]
which vanishes at the point (2.6). It happens when
\[-l_{1}\leq p\leq l_{2}-l_{1}-1,\]
so that
\[II_{p}=\{\boldsymbol{l},\,|\boldsymbol{l}|=K:l_{1}\geq-p,l_{2}\geq l_{1}+1+p\}.\]
Define the maps of sets \(\phi_{p}:I_{p}\to II_{p}\) and \(\psi_{p}:II_{p}\to I_{p}\) by the same formulas
\[\phi_{p}:I_{p}\to II_{p}:\ \phi_{p}(k_{1},k_{2}, \boldsymbol{k}^{\prime})=(k_{2}-p,k_{1}+p,\boldsymbol{k}^{\prime}),\] \[\psi_{p}:II_{p}\to I_{p}:\ \psi_{p}(k_{1},k_{2},\boldsymbol{k}^{ \prime})=(k_{2}-p,k_{1}+p,\boldsymbol{k}^{\prime})\]
**Lemma 1**.:
1. _Maps_ \(\phi_{p}\) _and_ \(\psi_{p}\) _establish bijections between the sets_ \(I_{p}\) _and_ \(II_{p}\)
2. _For any_ \(\boldsymbol{k}\in I_{p}\)__ \[\operatorname{Res}_{u_{1}=u_{2}q^{p}}\operatorname{U}_{\boldsymbol{k}}( \boldsymbol{u};\boldsymbol{v})+\operatorname{Res}_{u_{1}=u_{2}q^{p}} \operatorname{U}_{\phi_{p}(\boldsymbol{k})}(\boldsymbol{u};\boldsymbol{v})=0\] (2.7) \[\operatorname{Res}_{v_{2}=v_{1}q^{p}}\operatorname{V}_{\boldsymbol{k} }(\boldsymbol{u};\boldsymbol{v})+\operatorname{Res}_{v_{2}=v_{1}q^{p}} \operatorname{V}_{\phi_{p}(\boldsymbol{k})}(\boldsymbol{u};\boldsymbol{v})=0\] (2.8)
**Proof of Lemma 1**. The first part is purely combinatorial and can be checked directly. Let us prove the second part.
Note first that each summand \(\operatorname{U}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v})\) of the LHS of (2.2) has the following structure
\[\operatorname{U}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v})=\frac{ \mathcal{U}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v};t)}{\mathcal{U}_{ \boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v};1)} \tag{2.9}\]
where
\[\mathcal{U}_{\boldsymbol{k}}(\boldsymbol{u};\boldsymbol{v};t)=\prod_{i=1}^{n} \left[qt;q\right]_{k_{i}}\times\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\left[t^{-1}q^{-k_{j}}u_{i}/u_{j};q\right]_{k_{i}} \times\prod_{a,j=1}^{n}\left[tu_{j}/v_{a};q\right]_{k_{j}} \tag{2.10}\]
We now establish the identity
\[\mathcal{U}_{k_{1},k_{2},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{ v};t)|_{u_{1}=q^{p}u_{2}}=\mathcal{U}_{k_{2}-p,k_{1}+p,\boldsymbol{k}^{\prime}}( \boldsymbol{u};\boldsymbol{v};t)|_{u_{1}=q^{p}u_{2}},\qquad\boldsymbol{k}=(k_ {1},k_{2},\boldsymbol{k}^{\prime})\in I_{p} \tag{2.11}\]
valid for any \(\boldsymbol{k}=(k_{1},k_{2},\boldsymbol{k}^{\prime})\in I_{p}\) with the help of an explicit bijection between the linear factors of the products on both sides of the equality (2.11). All the factors on both sides of (2.11) which do not depend on the variables \(u_{1}\) and \(u_{2}\) and do not contain the indices \(k_{1}\) and \(k_{2}\) are equal tautologically, so that the relation (2.11) is reduced to the equality
\[A\prod_{j=3}^{n}B_{j}\prod_{a=1}^{n}C_{a}=A^{\prime}\prod_{j=3}^{n}B^{\prime} _{j}\prod_{a=1}^{n}C^{\prime}_{a} \tag{2.12}\]
where
\[A= [tq;q]_{k_{1}}\cdot[tq;q]_{k_{2}}\cdot[t^{-1}q^{-k_{2}}u_{1}/u_{ 2}]_{k_{1}}\cdot[t^{-1}q^{-k_{1}}u_{2}/u_{1}]_{k_{2}}\] \[= [tq;q]_{k_{1}}\cdot[tq;q]_{k_{2}}\cdot[t^{-1}q^{-k_{2}+p}]_{k_{1} }\cdot[t^{-1}q^{-k_{1}-p}]_{k_{2}};\] \[A^{\prime}= [tq;q]_{k_{2}-p}\cdot[tq;q]_{k_{1}+p}\cdot[t^{-1}q^{-k_{1}-p}u_{ 1}/u_{2}]_{k_{2}-p}\cdot[t^{-1}q^{-k_{2}+p}u_{2}/u_{1}]_{k_{1}+p}\] \[= [tq;q]_{k_{2}-p}\cdot[tq;q]_{k_{1}+p}\cdot[t^{-1}q^{-k_{1}}]_{k_{2 }-p}\cdot[t^{-1}q^{-k_{2}}]_{k_{1}+p};\] \[B_{j}= [t^{-1}q^{-k_{j}}u_{1}/u_{j};q]_{k_{1}}\cdot[t^{-1}q^{-k_{j}}u_{ 2}/u_{j};q]_{k_{2}}\cdot[t^{-1}q^{-k_{1}}u_{j}/u_{1};q]_{k_{j}}\cdot[t^{-1}q^ {-k_{2}}u_{j}/u_{2};q]_{k_{j}}\] \[= [t^{-1}q^{-k_{j}+p}u_{2}/u_{j};q]_{k_{1}}\cdot[t^{-1}q^{-k_{j}}u_ {2}/u_{j};q]_{k_{2}}\cdot[t^{-1}q^{-k_{1}-p}u_{j}/u_{2};q]_{k_{j}}\cdot[t^{-1}q ^{-k_{2}}u_{j}/u_{2};q]_{k_{j}};\] \[B^{\prime}_{j}= [t^{-1}q^{-k_{j}}u_{1}/u_{j};q]_{k_{2}-p}\cdot[t^{-1}q^{-k_{j}}u_ {2}/u_{j};q]_{k_{1}+p}\cdot[t^{-1}q^{-k_{2}+p}u_{j}/u_{1};q]_{k_{j}}\cdot[t^{-1 }q^{-k_{1}-p}u_{j}/u_{2};q]_{k_{j}}\] \[= [t^{-1}q^{-k_{j}+p}u_{2}/u_{j};q]_{k_{2}-p}\cdot[t^{-1}q^{-k_{j} }u_{2}/u_{j};q]_{k_{1}+p}\cdot[t^{-1}q^{-k_{2}}u_{j}/u_{2};q]_{k_{j}}\cdot[t^{-1 }q^{-k_{1}-p}u_{j}/u_{2};q]_{k_{j}};\] \[C_{a}= [tu_{1}/v_{a};q]_{k_{1}}\cdot[tu_{2}/v_{a};q]_{k_{2}}=[tq^{p}u_{ 1}/v_{a};q]_{k_{1}}\cdot[tu_{2}/v_{a};q]_{k_{2}};\] \[C_{a}^{\prime}= [tu_{1}/v_{a};q]_{k_{2}-p}\cdot[tu_{2}/v_{a};q]_{k_{1}+p}=[tq^{p }u_{2}/v_{a};q]_{k_{2}-p}\cdot[tu_{2}/v_{a};q]_{k_{1}+p};\]
Applications of (2.3) imply the equalities
\[B_{j}=B^{\prime}_{j}\qquad\text{and}\qquad C_{a}=C^{\prime}_{a}\]
Applying (2.4) twice we get \(A=A^{\prime}\). This proves (2.12) and (2.11).
The identity (2.11) implies the statement (2.7) about the zero sum of the residues. Indeed, the relation (2.11) establishes a bijection between all nonzero factors of the denominators \(\mathcal{U}_{k_{1},k_{2},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v};1)|_{u_{1}=q^{p}u_{2}}\) and \(\mathcal{U}_{k_{2}-p,k_{1}+p,\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v};1)\) and the equality of their products. The factors in the denominators of \(\mathrm{U}_{k_{1},k_{2},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})\) and \(\mathrm{U}_{k_{2}-p,k_{1}+p,\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})\) which tend to zero when \(u_{1}\) tends to \(q^{p}u_{2}\) are
\[q^{-p/2}u_{1}/u_{2}-q^{p/2}u_{2}/u_{1}\qquad\text{and}\qquad q^{p/2}u_{2}/u_{1 }-q^{-p/2}u_{1}/u_{2} \tag{2.13}\]
They contribute to the residues, which differ just by sign. Thus we arrive at (2.7). For the proof of (2.8) we note that the involution
\[\tau:u_{i}\mapsto v_{i}^{-1},\qquad v_{i}\mapsto u_{i}^{-1} \tag{2.14}\]
exchanges each \(\mathrm{U}_{\boldsymbol{k}}\) with \(\mathrm{V}_{\boldsymbol{k}}\), as well as the LHS of (2.2) with the RHS of (2.2).
**Corollary 1**.: _Both sides of (2.2) have no poles of the form \(u_{i}=q^{p}u_{j}\) and \(v_{a}=q^{p}v_{b}\)._
For any nonnegative integer \(p\) denote by \(\varphi_{p}(\boldsymbol{u};\boldsymbol{v})\) the following rational function of \(\boldsymbol{u}=(u_{1},\ldots,u_{n})\) and \(\boldsymbol{v}=(v_{1},\ldots,v_{n})\):
\[\varphi_{p}(\boldsymbol{u};\boldsymbol{v})=(-1)^{p}\frac{[tq;q]_{2p}}{[q;q]_{ p}[q;q]_{p-1}}\prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{p}}{[u_{1}/u_{j};q]_{p}} \prod_{b=2}^{n}\frac{[tu_{1}/v_{b};q]_{p}}{[v_{b}/v_{1};q]_{p}} \tag{2.15}\]
**Lemma 2**.: _For any \(1\leq p\leq k_{1}\) and \(\boldsymbol{k}^{\prime}\in\mathbb{Z}_{\geq 0}^{n-1}\)_
\[\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1}}\,\mathrm{ V}_{k_{1},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})= \varphi_{p}(\boldsymbol{u};\boldsymbol{v})\times\mathrm{V}_{k_{1}-p,\boldsymbol{k}^{\prime}}(qv_{1},\boldsymbol{u}^{\prime};q^{-1}u_{1}, \boldsymbol{v}^{\prime}), \tag{2.16}\] \[\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1}}\,\mathrm{ U}_{k_{1},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})= \varphi_{p}(\boldsymbol{u};\boldsymbol{v})\times\mathrm{U}_{k_{1}-p,\boldsymbol{k}^{\prime}}(qv_{1},\boldsymbol{u}^{\prime};q^{-1}u_{1}, \boldsymbol{v}^{\prime}), \tag{2.17}\]
**Proof of Lemma 2**. We prove (2.16). Present \(\mathrm{V}_{k_{1},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})\) in the form
\[\begin{split}\mathrm{V}_{k_{1},\boldsymbol{k}^{\prime}}( \boldsymbol{u};\boldsymbol{v})=\frac{[tq;q]_{k_{1}}}{[q;q]_{k_{1}}}\times\\ \prod_{b\neq 1}\frac{[t^{-1}q^{-k_{b}}v_{b}/v_{1};q]_{k_{1}}}{[q^{-k _{b}}v_{b}/v_{1};q]_{k_{1}}}\cdot\frac{[t^{-1}q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}} {[q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}}\cdot\frac{[tu_{1}/v_{b};q]_{k_{b}}}{[u_{1 }/v_{b};q]_{k_{b}}}\times\\ \prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{k_{1}}}{[u_{j}/v_{1}]_{k_ {1}}}\times\frac{[tu_{1}/v_{1};q]_{k_{1}}}{[u_{1}/v_{1};q]_{k_{1}}}\times \mathrm{V}^{\prime}\end{split} \tag{2.18}\]
where \(\mathrm{V}^{\prime}\) depends on \(\boldsymbol{u}^{\prime}\), \(\boldsymbol{v}^{\prime}\), \(\boldsymbol{k}^{\prime}\) only. Then
\[\begin{split}&\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1}}\,\mathrm{V}_{k_{1},\boldsymbol{k}^{\prime}}(\boldsymbol{u};\boldsymbol{v})=C\cdot\mathrm{V}^{\prime}\times\\ \prod_{b\neq 1}\frac{[t^{-1}q^{-k_{b}}v_{b}/v_{1};q]_{k_{1}}}{[q^{-k_{b}}v_{b}/v_{1};q]_{k_{1}}}\cdot\frac{[tu_{1}/v_{b};q]_{p}}{[u_{1}/v_{b};q]_{p}}\cdot\frac{[tq^{p}u_{1}/v_{b};q]_{k_{b}-p}}{[q^{p}u_{1}/v_{b};q]_{k_{b}-p}}\\ \prod_{b\neq 1}\frac{[t^{-1}q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}}{[q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}}\times\prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{p}}{[u_{j}/v_{1};q]_{p}}\prod_{j=2}^{n}\frac{[tq^{p}u_{j}/v_{1};q]_{k_{1}-p}}{[q^{p}u_{j}/v_{1};q]_{k_{1}-p}}\end{split} \tag{2.19}\]
where
\[C=-\frac{[tq;q]_{k_{1}}}{[q;q]_{k_{1}}}\cdot\frac{[tq^{1-p};q]_{k_{1}}}{[q^{1-p}; q]_{p-1}[q;q]_{k_{1}-p}} \tag{2.20}\]
Here we decomposed two fractions of Pochhammer symbols into products of four fractions. In this presentation there are two products which do not depend on the \(k\) indices. Put them in front and use the equality \(v_{1}=q^{p-1}u_{1}\). Then the residue (2.19) takes the form
\[\begin{split}&\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1 }}\operatorname{V}_{k_{1},\mathbf{k}^{\prime}}(\mathbf{u};\mathbf{v})=C\cdot\operatorname {V}^{\prime}\cdot\prod_{b\neq 1}\frac{[tu_{1}/v_{b};q]_{p}}{[v_{1}/v_{b}]_{p}}\cdot \prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{p}}{[u_{j}/u_{1};q]_{p}}\times\\ &\prod_{b\neq 1}\frac{[t^{-1}q^{-k_{b}}v_{b}/v_{1};q]_{k_{1}}}{[q^{-k_ {b}}v_{b}/v_{1};q]_{k_{1}}}\cdot\frac{[tqv_{1}/v_{b};q]_{k_{b}-p}}{[qv_{1}/v_{ b};q]_{k_{b}-p}}\times\\ &\prod_{b\neq 1}\frac{[t^{-1}q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}}{[q^{-k_ {1}}v_{1}/v_{b};q]_{k_{b}}}\times\prod_{j=2}^{n}\frac{[tqu_{j}/u_{1};q]_{k_{1} -p}}{[qu_{j}/u_{1}]_{k_{1}-p}}\end{split} \tag{2.21}\]
Now we use (2.4) in the second line of (2.21) together with the relation \(v_{1}=q^{p-1}u_{1}\). We get
\[\begin{split}&\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1 }}\operatorname{V}_{k_{1},\mathbf{k}^{\prime}}(\mathbf{u};\mathbf{v})=C\cdot\operatorname {V}^{\prime}\cdot\prod_{b\neq 1}\frac{[tu_{1}/v_{b};q]_{p}}{[v_{b}/v_{1}]_{p}} \cdot\prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{p}}{[u_{1}/u_{j};q]_{p}}\times\\ &\prod_{b\neq 1}\frac{[tqv_{1}/v_{b};q]_{k_{b}}}{[qv_{1}/v_{b};q] _{k_{b}}}\cdot\frac{[t^{-1}q^{-k_{b}+p}v_{b}/v_{1};q]_{k_{1}-p}}{[q^{-k_{b}+p} v_{b}/v_{1};q]_{k_{1}-p}}\times\\ &\prod_{b\neq 1}\frac{[t^{-1}q^{-k_{1}}v_{1}/v_{b};q]_{k_{b}}}{[q^{-k_ {1}}v_{1}/v_{b};q]_{k_{b}}}\times\prod_{j=2}^{n}\frac{[tqu_{j}/u_{1};q]_{k_{1} -p}}{[qu_{j}/u_{1}]_{k_{1}-p}}\end{split} \tag{2.22}\]
Set
\[v_{1}^{*}=q^{-1}u_{1},\qquad u_{1}^{*}=qv_{1}. \tag{2.23}\]
Then we can read two last lines in (2.22) as
\[\prod_{b\neq 1}\frac{[tu_{1}^{*}/v_{b};q]_{k_{b}}}{[u_{1}^{*}/v_{b};q]_{k_{b}}}\cdot\frac{[t^{-1}q^{-k_{1}+p}v_{1}^{*}/v_{b};q]_{k_{b}}}{[q^{-k_{1}+p}v_{1}^{*}/v_{b};q]_{k_{b}}}\cdot\frac{[t^{-1}q^{-k_{b}}v_{b}/v_{1}^{*};q]_{k_{1}-p}}{[q^{-k_{b}}v_{b}/v_{1}^{*};q]_{k_{1}-p}}\cdot\prod_{j\neq 1}\frac{[tu_{j}/v_{1}^{*};q]_{k_{1}-p}}{[u_{j}/v_{1}^{*};q]_{k_{1}-p}} \tag{2.24}\]
One can recognize in (2.24) the factor of the product \(\operatorname{V}_{k_{1}-p,\mathbf{k}^{\prime}}(u_{1}^{*},\mathbf{u}^{\prime};v_{1}^{*},\mathbf{v}^{\prime})\) with the missing constant factor
\[C^{\prime}=\frac{[tq;q]_{k_{1}-p}}{[q;q]_{k_{1}-p}}\times\frac{[tu_{1}^{*}/v_{ 1}^{*};q]_{k_{1}-p}}{[u_{1}^{*}/v_{1}^{*};q]_{k_{1}-p}}=\frac{[tq;q]_{k_{1}-p}} {[q;q]_{k_{1}-p}}\times\frac{[tq^{p+1};q]_{k_{1}-p}}{[q^{p+1};q]_{k_{1}-p}} \tag{2.25}\]
We conclude that
\[\operatorname{Res}_{v_{1}=q^{p-1}u_{1}}\frac{1}{v_{1}} \operatorname{V}_{k_{1},\mathbf{k}^{\prime}}(\mathbf{u};\mathbf{v})=\] \[\frac{C}{C^{\prime}}\cdot\prod_{b\neq 1}\frac{[tu_{1}/v_{b};q]_{p}}{[v_ {b}/v_{1}]_{p}}\cdot\prod_{j=2}^{n}\frac{[tu_{j}/v_{1};q]_{p}}{[u_{1}/u_{j};q] _{p}}\times\operatorname{V}_{k_{1}-p,\mathbf{k}^{\prime}}(u_{1}^{*},\mathbf{u}^{\prime };v_{1}^{*},\mathbf{v}^{\prime})=\] \[(-1)^{p}\frac{[tq;q]_{2p}}{[q;q]_{p}[q;q]_{p-1}}\operatorname{V}_{ k_{1}-p,\mathbf{k}^{\prime}}(qv_{1},\mathbf{u}^{\prime};q^{-1}u_{1},\mathbf{v}^{\prime}) \tag{2.26}\]
The proof of (2.17) is analogous. One can get it by combining the involution (2.14) with the previous arguments. \(\Box\)
**Proof of the identity (2.2)**. We are now ready to prove (2.2) by induction over \(K\). Denote the difference of the LHS and RHS of (2.2) by \(W_{K}(\mathbf{u};\mathbf{v})\). Assume that \(W_{K}(\mathbf{u};\mathbf{v})=0\) for all \(K<N\) and any \(m\)-tuples of variables \(\mathbf{u}=(u_{1},\ldots,u_{m});\mathbf{v}=(v_{1},\ldots,v_{m})\) for arbitrary \(m\). Summing up the difference of (2.17) and (2.16) over all \(\boldsymbol{k}\) with \(|\boldsymbol{k}|=K\) we get the relation
\[{\rm Res}_{v_{1}=q^{p-1}u_{1}}\,\frac{1}{v_{1}}W_{K}(\mathbf{u};\mathbf{v})=\varphi_{p}(\mathbf{u};\mathbf{v})\times W_ {K-p}(\mathbf{u}^{*},\mathbf{v}^{*}), \tag{2.27}\]
where
\[\mathbf{u}^{*}=(qv_{1},\mathbf{u}^{\prime}),\qquad\mathbf{v}^{*}=(q^{-1}u_{1},\mathbf{v}^{\prime}) \tag{2.28}\]
By the induction assumption the RHS of (2.27) equals zero. Bearing in mind the symmetry of \(W_{K}(\mathbf{u};\mathbf{v})\) with respect to permutations of the \(u_{i}\) and of the \(v_{j}\), we conclude that it has no poles at all. Since \(W_{K}(\mathbf{u};\mathbf{v})\) is a homogeneous rational function of the variables \(u_{i}\) and \(v_{j}\) of total degree zero, it is equal to a constant, which could depend on \(q\) and \(t\). To compute this constant, we consider the behaviour of this function in the asymptotic zone
\[u_{1}\ll u_{2}\ll\ldots\ll u_{n}\ll v_{n}\ll v_{n-1}\ll\ldots\ll v_{1} \tag{2.29}\]
Here both sides of (2.2) tend to
\[\sum_{|\mathbf{k}|=K}\prod_{i=1}^{n}\frac{[qt;q]_{k_{i}}}{[q;q]_{k_{i}}}\times t^{\frac{1}{2}\left((n-1)k_{1}+(n-3)k_{2}+\ldots+(3-n)k_{n-1}+(1-n)k_{n}\right)}\times t^{-\frac{nK}{2}} \tag{2.30}\]
Thus \(W_{K}(\mathbf{u};\mathbf{v})\) tends to zero in this asymptotic zone and so equals zero identically. Another way to verify the vanishing of the constant value of \(W_{K}(\mathbf{u};\mathbf{v})\) is to consider \(W_{K}(\mathbf{u};\mathbf{v})\) on the plane
\[u_{i}=t^{-1}v_{i},\qquad i=1,\ldots,n, \tag{2.31}\]
where it is identically zero due to the last products in each summand. This completes the induction step and the proof of the identity (2.2), and thus of (1.6). \(\Box\)
## 3 Comments
**1**. Note first that the trigonometric kernel function identity (1.2) is a particular limit of the trigonometric hypergeometric identity (1.6), just as the rational kernel function identity is a particular limit of the rational hypergeometric identity (1.3).
Rescale simultaneously all the variables \(x_{j}\), \(y_{a}\) and \(\alpha\) in (1.3)
\[x_{j}\to\varepsilon x_{j},\qquad y_{a}\to\varepsilon y_{a},\qquad\alpha\to \varepsilon\alpha,\]
and let the rescaling constant \(\varepsilon\) tend to zero. In this limit the relation (1.3) becomes
\[\begin{split}&\sum_{|\boldsymbol{k}|=K}\prod_{\begin{subarray}{c}i,j=1\\ k_{i}\neq 0,\,k_{j}=0\end{subarray}}^{n}\frac{(x_{i}-x_{j}-\alpha)}{(x_{i}-x_{j})}\prod_{\begin{subarray}{c}a,j=1\\ k_{j}\neq 0\end{subarray}}^{n}\frac{(x_{j}-y_{a}+\alpha)}{(x_{j}-y_{a})}-\\ &\sum_{|\boldsymbol{k}|=K}\prod_{\begin{subarray}{c}a,b=1\\ k_{a}=0,\,k_{b}\neq 0\end{subarray}}^{n}\frac{(y_{a}-y_{b}-\alpha)}{(y_{a}-y_{b})}\prod_{\begin{subarray}{c}j,a=1\\ k_{a}\neq 0\end{subarray}}^{n}\frac{(x_{j}-y_{a}+\alpha)}{(x_{j}-y_{a})}=0.\end{split} \tag{3.1}\]
Denote by \(\mathcal{H}_{K}\) the LHS of (3.1) and by \(\mathcal{K}_{r}\) the LHS of the \(r\)-th rational kernel function identity:
\[\begin{split}&\sum_{\begin{subarray}{c}I_{r}\subset[n]\\ |I_{r}|=r\end{subarray}}\prod_{i\in I_{r}}\left(\prod_{j\in[n]\setminus I_{r}} \frac{x_{i}-x_{j}-\alpha}{x_{i}-x_{j}}\prod_{a=1}^{n}\frac{x_{i}-y_{a}+\alpha} {x_{i}-y_{a}}\right)-\\ &\sum_{\begin{subarray}{c}A_{r}\subset[n]\\ |A_{r}|=r\end{subarray}}\prod_{a\in A_{r}}\left(\prod_{b\in[n]\setminus A_{r}} \frac{y_{a}-y_{b}+\alpha}{y_{a}-y_{b}}\prod_{i=1}^{n}\frac{x_{i}-y_{a}+\alpha }{x_{i}-y_{a}}\right)=0.\end{split} \tag{3.2}\]
We see that \(\mathcal{H}_{1}=\mathcal{K}_{1}\), that is the relation (3.1) for \(K=1\) coincides with the relation (3.2) for \(r=1\). Next
\[\mathcal{H}_{2}=\mathcal{K}_{2}+\mathcal{K}_{1} \tag{3.3}\]
where the first term in the RHS of (3.3) corresponds to the partitions \(\boldsymbol{k}=(1,1,0,\ldots,0)\) and their permutations, while the second to the partitions \(\boldsymbol{k}=(2,0,\ldots,0)\). Thus we get (3.2) for \(r=2\). Going further, we represent each \(\mathcal{H}_{K}\) as a sum of \(\mathcal{K}_{K}\) and of \(\mathcal{K}_{r}\) with \(r<K\) taken with some combinatorial coefficients. By induction we get all the relations (3.2) from (3.1).
In trigonometric case we put
\[u_{i}=e^{2\imath\beta x_{i}},\qquad v_{a}=e^{2\imath\beta y_{a}},\qquad t=e^{2 \imath\beta\alpha},\qquad q=e^{L}\]
and let the positive constant \(L\) in (2.2) tend to infinity. By the same arguments we get (1.2) for \(s(z)=\sin\beta z\). Note that the original Ruijsenaars identity (1.7) on a single tuple of variables cannot be derived from the identity (1.2) on two tuples of variables. Probably, the same negative statement holds for the identity (1.6).
**2**. The hypergeometric identity (1.6) remains valid if we replace the \(q\)-Pochhammer symbol (1.5) by its elliptic analog
\[(z;p,q)_{k}=\theta(z;p)\theta(qz;p)\cdot\ldots\cdot\theta(q^{k-1}z;p) \tag{3.4}\]
where \(|p|<1\) and
\[\theta(z;p)=\prod_{n\geq 0}(1-p^{n}z)\prod_{m>0}(1-p^{m}z^{-1}) \tag{3.5}\]
is the modified theta function so that the identity (1.6) takes the form
\[\begin{split}&\sum_{|\boldsymbol{k}|=K}\prod_{i=1}^{n}\frac{(qt;p,q )_{k_{i}}}{(q;p,q)_{k_{i}}}\times\prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{(t^{-1}q^{-k_{j}}u_{i}/u_{j};p,q)_{k_{i}}}{(q^ {-k_{j}}u_{i}/u_{j};p,q)_{k_{i}}}\times\prod_{a,j=1}^{n}\frac{(tu_{j}/v_{a};p,q )_{k_{j}}}{(u_{j}/v_{a};p,q)_{k_{j}}}=\\ &\sum_{|\boldsymbol{k}|=K}\prod_{a=1}^{n}\frac{(qt;p,q)_{k_{a}}}{( q;p,q)_{k_{a}}}\times\prod_{\begin{subarray}{c}a,b=1\\ a\neq b\end{subarray}}^{n}\frac{(t^{-1}q^{-k_{a}}v_{a}/v_{b};p,q)_{k_{b}}}{(q^ {-k_{a}}v_{a}/v_{b};p,q)_{k_{b}}}\times\prod_{a,j=1}^{n}\frac{(tu_{j}/v_{a};p,q )_{k_{a}}}{(u_{j}/v_{a};p,q)_{k_{a}}}.\end{split} \tag{3.6}\]
The difference \(W_{K}(\boldsymbol{u};\boldsymbol{v})\) of the LHS and RHS of (3.6) satisfies quasiperiodicity conditions
\[\begin{split}& W(u_{1},\ldots,pu_{i},\ldots,u_{n};\boldsymbol{v}) =t^{-K}W(u_{1},\ldots,u_{i},\ldots,u_{n};\boldsymbol{v}),\\ & W(\boldsymbol{u};v_{1},\ldots,pv_{i},\ldots,v_{n})=t^{K}\ W( \boldsymbol{u};v_{1},\ldots,v_{i},\ldots,v_{n}).\end{split} \tag{3.7}\]
By using (3.7) the absence of singularities in \(W_{K}(\boldsymbol{u};\boldsymbol{v})\) is checked in the same way as in the trigonometric case. Then, using (3.5) and the substitution (2.31) one can show that \(W_{K}(\boldsymbol{u};\boldsymbol{v})\) vanishes identically.
Note finally that the identities (1.2), (1.6) and (3.6) could have a matrix generalization. The paper [MZ] suggests such a possibility.
## Acknowledgements
We are grateful to Ole Warnaar and Hjalmar Rosengren for communicating to us their results.
The work of N. Belousov and S. Derkachov was supported by the Theoretical Physics and Mathematics Advancement Foundation «BASIS». The work of S. Khoroshkin was supported by Russian Science Foundation project No 23-41-00150. He also thanks Prof. Maria Gorelik and the Weizmann Institute of Science for the kind hospitality during the summer of 2022, where the main part of this work was done.
|
2308.05635 | Asymptotic stability conditions for linear coupled impulsive systems
with time-invariant subsystems | This article proposes an approach to construct a Lyapunov function for a
linear coupled impulsive system consisting of two time-invariant subsystems. In
contrast to various variants of small-gain stability conditions for coupled
systems, the asymptotic stability property of independent subsystems is not
assumed. To analyze the asymptotic stability of a coupled system, the direct
Lyapunov method is used in combination with the discretization method. The
periodic case and the case when the Floquet theory is not applicable are
considered separately. The main results are illustrated with examples. | Vitalii Slynko, Sergey Dashkovskiy, Ivan Atamas | 2023-08-10T15:25:51Z | http://arxiv.org/abs/2308.05635v1 | # Asymptotic stability conditions for linear coupled impulsive systems with time-invariant subsystems
###### Abstract
This article proposes an approach to construct a Lyapunov function for a linear coupled impulsive system consisting of two time-invariant subsystems. In contrast to various variants of small-gain stability conditions for coupled systems, the asymptotic stability property of independent subsystems is not assumed. To analyze the asymptotic stability of a coupled system, the direct Lyapunov method is used in combination with the discretization method. The periodic case and the case when the Floquet theory is not applicable are considered separately. The main results are illustrated with examples.
keywords: Linear impulsive systems, Lyapunov functions, Lyapunov stability, coupled systems, time-variant systems.
## 1 Introduction
Impulsive differential equations can model mechanical systems subjected to shocks. Instantaneous changes in the holonomic or nonholonomic constraints imposed on the system, as well as changes of the system parameters in time, lead to the study of impulsive systems with variable coefficients. In this case, it is important
to obtain stability conditions that are robust with respect to variations in the sequence of moments of impulse action.
The theory of impulsive differential equations emerged as an independent direction in the modern theory of differential equations and systems, starting with the classic book [1]. The main methods for studying the stability of systems of differential equations with impulsive action are laid down in [1; 2]. In modern control theory, impulsive systems are often considered as an important subclass of hybrid systems [3; 4]. The stability of linear impulsive systems of differential equations with constant parameters has been the subject of research in many works. In contrast to linear time-invariant systems of ordinary differential equations, where the stability problem is exhaustively solved by the classical Routh-Hurwitz theorem, the stability question is open in the general case. This is due to the fact that the dynamics of an impulsive system is determined not only by the parameters of the system, but also by the sequence of moments of impulse action. It should also be taken into account that an impulsive system can be asymptotically stable even in the case when the continuous and discrete dynamics are both unstable. Therefore, stability conditions for linear impulsive systems should cover this case as well, see [5; 6; 7; 8; 9; 10; 11]. In [10; 11], linear impulsive differential equations in Banach spaces were considered under the assumption that the moments of impulse action satisfy an average dwell-time (ADT) condition, while the continuous and discrete dynamics of the system may both be unstable; using identities of the commutator calculus, conditions for asymptotic stability were obtained. In [7; 8; 12], dwell-time conditions which guarantee the asymptotic stability of linear impulsive systems with constant parameters are obtained. In this case, the construction of a Lyapunov function reduces to the approximate solution of a two-point boundary value problem with boundary conditions in the form of matrix inequalities. A generalization of these results to some classes of linear impulsive systems with variable coefficients is given in [8]. In [6], dwell-time estimates that guarantee the asymptotic stability of linear impulsive systems are obtained on the basis of the second Lyapunov method, using derivatives of the second [9] and higher [6] orders of a Lyapunov function.
The papers [5, 9, 13, 14, 15] are devoted to the study of stability of large-scale impulsive systems. In [5, 9, 14], Lyapunov vector functions are used to study the asymptotic stability of the equilibrium of nonlinear impulsive systems. In [13], the problem of stability of critical equilibria of nonlinear large-scale impulsive systems is considered. In [15], the input-to-state stability (ISS) theory is developed for nonlinear impulsive systems, and sufficient conditions for the global asymptotic stability of nonlinear coupled impulsive systems are established (small-gain theorems). It should be noted that all of these results use the assumption of asymptotic stability of the independent subsystems. Hence, it is of interest to find stability conditions for large-scale (coupled) systems which do not assume a priori the asymptotic stability of the independent subsystems. One possible approach to this problem is the construction of a matrix-valued Lyapunov function [16]. The method of construction is decisive for the practical application of matrix-valued Lyapunov functions in stability problems. In [17], for linear time-variant coupled systems with time-invariant subsystems, a method of construction of a matrix-valued Lyapunov function based on some heuristic simplifications of a matrix differential equation written in block form is proposed. Although this method in some cases makes it possible to conclude stability of linear coupled systems with unstable subsystems, the degree of conservatism of the resulting sufficient stability conditions remains an open question. For linear large-scale impulsive systems with variable coefficients, the problem of choosing the elements of a matrix-valued Lyapunov function has not been studied.
Note that for time-discrete coupled systems, some approaches to the study of Lyapunov stability and ISS in the case when some of the independent subsystems do not have the exponential stability property or, respectively, the ISS property are presented in [18, 19, 20]. For continuous-time and impulsive coupled systems with possibly unstable subsystems, the problem of construction of a Lyapunov function remains open even in the linear case of time-variant systems.
The development of computational tools has opened up new possibilities for the construction of Lyapunov functions for various classes of dynamical systems. In recent years, the use of the discretization method to construct approximate solutions of Lyapunov matrix differential equations (or their modifications) has led to significant advances in the theory of stability of linear hybrid systems with constant parameters [21], systems with delay [22] and others. The application of the discretization method for the construction of Lyapunov functions makes it possible to obtain, with high accuracy, estimates of the dwell-times that guarantee the stability of linear hybrid systems, as well as robust stability conditions. In [23], the discretization method is used to synthesize robust control of a nonlinear affine system. The asymptotic stability of a large-scale system was studied in [24] using the direct Lyapunov method in combination with the discretization method and identities of the commutator calculus; there the independent subsystems may fail to be asymptotically stable. In contrast to [24], we consider a coupled impulsive system, substantially modifying the choice of a candidate Lyapunov function by using the time-invariance of the (disconnected) independent subsystems.
In our paper, for the first time, the discretization method is applied to construct matrix-valued Lyapunov functions for linear impulsive systems with periodic coefficients. It is assumed that the independent subsystems are time-invariant, and two cases are considered for the dwell-times: they are either constant or subject to two-sided estimates. The elements of the matrix-valued Lyapunov function are constructed as bilinear forms with time-varying matrices. The proposed algorithm of construction of Lyapunov functions admits a simple numerical implementation.
Contributions of this manuscript are as follows. First of all, a new method for the construction of a Lyapunov function for a linear time-variant system consisting of two coupled time-invariant subsystems is proposed. Conditions for the asymptotic stability of a linear time-variant system are obtained under various assumptions about the dwell-times and the dynamic properties of the independent subsystems. We show that the proposed approach is applicable in the case when one of the subsystems is not stable and the classical methods for the study of coupled systems, which are based on the ideas of Lyapunov vector functions or on small-gain theorems obtained within the ISS framework, are not applicable. In the case when the subsystems are asymptotically stable but the small-gain conditions are not satisfied, the proposed approach can still work and leads to less conservative stability conditions. The proposed results are new not only in the context of the theory of impulsive systems, but also for coupled periodic ODEs.
We introduce examples where the obtained sufficient stability conditions are applicable under different assumptions regarding the continuous and discrete dynamics of the system, in particular, when both dynamics are simultaneously unstable and the application of known results of the theory of stability of impulsive systems is impossible or very difficult.
The next section describes the problem statement. Section 3 is devoted to an informal discussion of the proposed method of construction of a matrix-valued Lyapunov function. In the fourth section, this method is rigorously justified for the case of linear impulsive periodic systems, and sufficient conditions for asymptotic stability are obtained. Section 5 is devoted to the substantiation of the proposed algorithm of construction of a matrix-valued Lyapunov function in the case when the dwell-times are not constant (in this case, the Floquet theory is not applicable). In Section 6 we consider particular cases of rapidly changing interaction between subsystems and compare our results with known small-gain theorems. Section 7 provides numerical examples that are discussed in Section 8.
_Notation._ Let \(\mathbb{R}^{n}\) be the Euclidean space with the standard dot product and \(\mathbb{R}^{n\times m}\) be the linear space of \(n\times m\) matrices. For \(A\in\mathbb{R}^{n\times n}\), \(\sigma(A)\) denotes its spectrum, \(r_{\sigma}(A)\) its spectral radius, and \(\|A\|=\lambda_{\max}^{1/2}(A^{\mathrm{T}}\,A)\) its spectral norm. If \(\sigma(A)\subset\mathbb{R}\), then \(\lambda_{\min}(A)\) and \(\lambda_{\max}(A)\) are its smallest and largest eigenvalues respectively, and \(\lambda_{\max}^{+}(A)=\max(\lambda_{\max}(A),0)\). For any symmetric matrices \(P\) and \(Q\), the notation \(P\geq Q\) means that \(P-Q\) is a positive semidefinite matrix and \(P\succ Q\) means that \(P-Q\) is a positive definite matrix. We will also use the Cauchy-Bunyakovsky inequality \(|x^{\mathrm{T}}\,y|\leq\|x\|\|y\|\) for \(x,y\in\mathbb{R}^{n}\).
## 2 Statement of the problem
Consider a coupled linear system of impulsive differential equations
\[\begin{split}\dot{x}_{1}(t)&=A_{11}x_{1}(t)+A_{12}(t)x _{2}(t),\quad t\neq\tau_{k}\\ \dot{x}_{2}(t)&=A_{21}(t)x_{1}(t)+A_{22}x_{2}(t), \quad t\neq\tau_{k},\\ x_{1}(t^{+})&=B_{11}x_{1}(t)+B_{12}x_{2}(t),\quad t =\tau_{k},\\ x_{2}(t^{+})&=B_{21}x_{1}(t)+B_{22}x_{2}(t),\quad t =\tau_{k}\end{split} \tag{1}\]
where \(x_{i}\in\mathbb{R}^{n_{i}}\), \(i=1,2\), \(A_{ij}\,:\mathbb{R}\rightarrow\mathbb{R}^{n_{i}\times n_{j}}\) are piece-wise continuous functions, \(i,j=1,2\). Suppose that \(A_{ii}\) are constant, \(A_{ij}(t)\), \(i\neq j\) are \(\theta\)-periodic, i.e., \(A_{ij}(t+\theta)=A_{ij}(t)\) for all \(t\in\mathbb{R}\), \(\{\tau_{k}\}_{k=0}^{\infty}\) is a sequence of moments of impulse action, and \(B_{ij}\in\mathbb{R}^{n_{i}\times n_{j}}\) are constant. We denote \(x=(x_{1}^{\mathrm{T}}\,,x_{2}^{\mathrm{T}}\,)^{\mathrm{T}}\) and \(n=n_{1}+n_{2}\). For \(\{\tau_{k}\}_{k=0}^{\infty}\), we assume that there are positive constants \(\theta_{1}\) and \(\theta_{2}\) such that the dwell-times \(T_{k}=\tau_{k}-\tau_{k-1}\), \(k\geq 1\), satisfy \(\theta_{1}\leq T_{k}\leq\theta_{2}\).
Coupled systems of the form (1) naturally arise in the process of mathematical modeling of the dynamics of coupled impulsive time-independent systems that exchange information with each other. We consider the case when the coupling functions between subsystems change in time, and independent subsystems are not subject to parametric disturbances and are described by linear systems with constant parameters. Classical approaches to the study of the problem of stability of coupled systems based on a Lyapunov vector function or the concept of ISS a priori assume the property of asymptotic stability of independent subsystems. Therefore, it is of interest to get rid of this a priori assumption, which is caused not by the essence of the problem, but by the restrictions of existing methods for studying stability.
The results presented below can be extended easily to the case of an arbitrary number of independent subsystems. Here we restrict ourselves to the case of two subsystems in order to make the presentation more accessible without overshadowing it with technical details. We study a class of linear non-autonomous systems that allow decomposition into subsystems with time-invariant independent subsystems. The construction of a Lyapunov function for this class can
be significantly simplified due to decomposition, in comparison with the direct study of the system (1) without resorting to decomposition.
Note that there exist real constants \(\mu_{i}\), \(\delta_{i}\), \(M_{i}\), \(N_{i}\), \(i=1,2\), such that
\[\|e^{sA_{ii}}\|\leq M_{i}e^{s\mu_{i}},\quad\|e^{-sA_{ii}}\|\leq N_{i}e^{s\delta _{i}},\quad s\geq 0.\]
_Remark 4.1._ There are several ways to obtain the estimate \(\|e^{tA}\|\leq Me^{\mu t}\), \(t\geq 0\), \(A\in\mathbb{R}^{n\times n}\). Here we use a result from [25], where one can find the estimate
\[\|e^{tA}\|\leq e^{\beta_{A}t}\sum_{k=0}^{n-1}\frac{g_{A}^{k}t^{k}}{(k!)^{3/2}}, \tag{2}\]
where \(\beta_{A}=\max\{\operatorname{Re}\,\lambda\,|\lambda\in\sigma(A)\}\), \(g_{A}=\sqrt{\operatorname{tr}\,(AA^{\mathrm{T}}\,)-|\operatorname{tr}\,A^{2}|}\). From (2), we can derive the estimates we need as follows.
If \(g_{A}=0\), then (2) immediately implies the required estimate with \(M=1\) and \(\mu=\beta_{A}\). Let \(g_{A}\neq 0\); for a given \(\epsilon>0\), we denote
\[M_{\epsilon,A}:=\sup_{t\geq 0}e^{-\epsilon t}\sum_{k=0}^{n-1}\frac{g_{A}^{k}t^ {k}}{(k!)^{3/2}}<\infty,\]
then
\[\|e^{tA}\|\leq M_{\epsilon,A}e^{(\beta_{A}+\epsilon)t},\quad t\geq 0.\]
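In computations, the constants of Remark 4.1 can be produced directly from the bound (2). The following is a minimal numerical sketch (assuming NumPy; the function name `exp_bound_constants` and the grid search for the supremum \(M_{\epsilon,A}\) are illustrative choices of ours, not part of the cited result):

```python
import math
import numpy as np

def exp_bound_constants(A, eps=0.1, n_grid=20000):
    """Constants (M, mu) with ||exp(t A)|| <= M exp(mu t), t >= 0,
    obtained from the bound (2) as described in Remark 4.1."""
    n = A.shape[0]
    beta = float(np.max(np.linalg.eigvals(A).real))        # beta_A, spectral abscissa
    g = math.sqrt(max(float(np.trace(A @ A.T) - abs(np.trace(A @ A))), 0.0))  # g_A
    if g == 0.0:
        return 1.0, beta                                   # then (2) gives M = 1, mu = beta_A
    # M_{eps,A} = sup_{t>=0} e^{-eps t} sum_{k=0}^{n-1} g^k t^k / (k!)^{3/2};
    # the supremum is attained at finite t, so a grid search suffices
    t_max = max(1.0, 2.0 * (n - 1) / eps)                  # heuristic search window
    t = np.linspace(0.0, t_max, n_grid)
    poly = sum(g**k * t**k / math.factorial(k)**1.5 for k in range(n))
    M = float(np.max(np.exp(-eps * t) * poly))
    return M, beta + eps
```

The pairs \((N_{i},\delta_{i})\) are obtained by applying the same routine to \(-A_{ii}\).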
Since the functions \(A_{ij}(t)\) are assumed to be bounded and periodic, there exist positive constants \(\gamma_{12}^{(m)}\), \(\gamma_{21}^{(m)}\), \(m=0,1,\ldots,N-1\), such that (here and below \(h=\theta/N\) is the discretization step introduced in Section 4)
\[\sup_{s\in(mh,(m+1)h]}\|A_{12}(s)\|\leq\gamma_{12}^{(m)},\quad\sup_{s\in(mh, (m+1)h]}\|A_{21}(s)\|\leq\gamma_{21}^{(m)}.\]
For any \(m\in\mathbb{Z}\), let \(\gamma_{ij}^{(m)}:=\gamma_{ij}^{(\varrho)}\), where \(\varrho\) is the remainder of \(m\) divided by \(N\). By this we extend the definition of the constants \(\gamma_{ij}^{(m)}\) to any \(m\in\mathbb{Z}\). Here we study the asymptotic stability of (1) in the sense of:
**Definition 2.1.** The system of differential equations (1) is called
1) stable if for any \(\varepsilon>0\), \(t_{0}\in\mathbb{R}\) there exists \(\delta=\delta(\varepsilon,t_{0})>0\) such that the inequality \(\|x_{0}\|<\delta\) implies that \(\|x(t,t_{0},x_{0})\|<\varepsilon\) for all \(t\geq t_{0}\);
2) uniformly stable if for any \(\varepsilon>0\) there exists \(\delta=\delta(\varepsilon)>0\) such that for all \(t_{0}\in\mathbb{R}\) the inequality \(\|x_{0}\|<\delta\) implies \(\|x(t,t_{0},x_{0})\|<\varepsilon\) for \(t\geq t_{0}\);
3) asymptotically stable if it is stable and \(\lim\limits_{t\to+\infty}\|x(t,t_{0},x_{0})\|=0\).
Here, \(x(t,t_{0},x_{0})\) is the solution of the Cauchy problem (1) with the initial condition \(x(t_{0},t_{0},x_{0})=x_{0}\), \(x_{0}=(x_{10}^{\rm T},x_{20}^{\rm T})^{\rm T}\,\in\mathbb{R}^{n}\).
The aim of this work is to construct a Lyapunov function for (1) and to prove sufficient conditions for its stability. The case of the periodic system (1), when \(\theta_{1}=\theta_{2}=\theta\), and the general case, when \(\theta_{1}<\theta_{2}\), are considered separately. If the linear impulsive system (1) is not periodic, the Floquet theory is not applicable.
## 3 General idea of construction of a matrix-valued Lyapunov function
Here we consider the case \(\theta_{1}=\theta_{2}\) and present the general idea of how we will derive a Lyapunov function for (1). Without loss of generality we assume that \(t_{0}=0\). Let \(\mathfrak{V}(t,x)=(v_{ij}(t,\cdot,\cdot))_{i,j=1,2}\) be a matrix-valued Lyapunov function (MLF) [16; 26]. We choose \(v_{ij}(t,x_{i},x_{j})=x_{i}^{\rm T}\,P_{ij}(t)x_{j}\), where \(P_{ij}\,:\mathbb{R}\to\mathbb{R}^{n_{i}\times n_{j}}\) are left-continuous \(\theta\)-periodic maps with \(P_{ij}(t)=P_{ji}^{\rm T}\,(t)\). It is enough to define \(P_{ij}(t)\), \(i,j=1,2\), on the period \((0,\theta]\). For the practical application of this rather general design, we provide a method of construction of \(P_{ij}(t)\).
From \(\mathfrak{V}\) we proceed to the following scalar Lyapunov function
\[v(t,x_{1},x_{2})=(1,1)\mathfrak{V}(t,x_{1},x_{2})(1,1)^{\rm T}\,=v_{11}(t,x_{ 1})+2v_{12}(t,x_{1},x_{2})+v_{22}(t,x_{2})\]
\[=(x_{1}^{\rm T}\,,x_{2}^{\rm T}\,)P(t)(x_{1}^{\rm T}\,,x_{2}^{\rm T}\,)^{\rm T}\,,\]
where \(P(t)=(P_{ij}(t))_{i,j=1,2}\) is a block matrix with \(P_{ij}(t)=P_{ji}^{\rm T}\,(t)\) to be designed.
Let matrix \(P(t)\) satisfy the condition
\[\dot{P}(t)+A^{\rm T}\,(t)P(t)+P(t)A(t)=0,\quad t\in(k\theta,(k+1)\theta),\quad k \in\mathbb{Z}_{+}, \tag{3}\]
with the block matrix \(A(t)=(A_{ij}(t))_{i,j=1,2}\). It is easy to show that for \(t=k\theta\) the following estimate holds
\[v(k\theta+0,x_{1}(k\theta+0),x_{2}(k\theta+0))=x^{\rm T}\,(k\theta +0)P(k\theta+0)x(k\theta+0)\] \[=(Bx(k\theta))^{\rm T}\,P_{0}Bx(k\theta)=x^{\rm T}\,(k\theta)B^{ \rm T}\,P_{0}Bx(k\theta)\leq\lambda v(k\theta,x_{1}(k\theta),x_{2}(k\theta)),\]
where \(\lambda=\lambda_{\rm max}(B^{\rm T}\,P_{0}B(P(\theta))^{-1})\), \(B=(B_{ij})_{i,j=1,2}\) and \(P_{0}=P(0+0)\) is a symmetric positive definite matrix. From (3) it follows that \(v(t,x_{1},x_{2})\) satisfies
the following system of impulsive differential inequalities
\[\dot{v}(t,x_{1}(t),x_{2}(t))=0,\quad t\neq k\theta,\] \[v(t+0,x_{1}(t+0),x_{2}(t+0))\leq\lambda v(t,x_{1}(t),x_{2}(t)), \quad t=k\theta.\]
By means of the comparison principle [2] the stability investigation of (1) reduces to the stability question of
\[\dot{u}(t) =0,\quad t\neq k\theta, \tag{4}\] \[u(t+0) =\lambda u(t),\quad t=k\theta.\]
The asymptotic stability of (4) implies the same property for (1). The condition \(\lambda<1\) (which constrains the choice of \(P_{0}\succ 0\)) is necessary and sufficient for the asymptotic stability of (4) and is equivalent to the necessary and sufficient condition for the asymptotic stability of (1), as can be seen from the Floquet-Lyapunov theorem. Solving (3) exactly on the interval \((k\theta,(k+1)\theta)\) is equivalent to the calculation of the monodromy matrix \(\Phi\) for (1) (a numerical sketch of this exact test is given at the end of this section), i.e., the problem is comparable in complexity to the integration of (1). The main idea of the construction of the matrix-valued Lyapunov function is to construct an approximate solution to (3) written in the block form:
\[\dot{P}_{11}(t)+A_{11}^{\mathrm{T}}P_{11}(t)+P_{11}(t)A_{11} =-(P_{12}(t)A_{21}(t)+A_{21}^{\mathrm{T}}(t)P_{21}(t)), \tag{5}\] \[\dot{P}_{22}(t)+A_{22}^{\mathrm{T}}P_{22}(t)+P_{22}(t)A_{22} =-(A_{12}^{\mathrm{T}}(t)P_{12}(t)+P_{21}(t)A_{12}(t)),\] \[\dot{P}_{12}(t)+A_{11}^{\mathrm{T}}P_{12}(t)+P_{12}(t)A_{22} =-(P_{11}(t)A_{12}(t)+A_{21}^{\mathrm{T}}(t)P_{22}(t)).\]
Since \(P(t)\) is \(\theta\)-periodic, we need to construct a solution to (5) on the interval \((0,\theta]\). Let \(P_{ij}(0+0)=P_{ij}^{(0)}\), \(i,j=1,2\), \(P_{ij}^{(0)}=(P_{ji}^{(0)})^{\mathrm{T}}\), where \((P_{ij}^{(0)})_{i,j=1,2}\) is a positive definite block matrix. We use the discretization method to resolve (5). For \(N\in\mathbb{N}\) let \(h=\frac{\theta}{N}\) be the discretization step. We derive an approximate solution to (5) step by step on the intervals \((mh,(m+1)h]\), \(m=0,\ldots,N-1\). By the Cauchy formula applied to the interval \((mh,(m+1)h]\) we obtain integral
representations for the solutions of (5)
\[P_{11}(t)=e^{-A_{11}^{\mathrm{T}}(t-mh)}P_{11}^{(m)}e^{-A_{11}(t-mh)}\]
\[-\int\limits_{mh}^{t}e^{-A_{11}^{\mathrm{T}}(t-s)}(P_{12}(s)A_{21}(s)+A_{21}^{ \mathrm{T}}(s)P_{21}(s))e^{-A_{11}(t-s)}\,ds,\]
\[P_{22}(t)=e^{-A_{22}^{\mathrm{T}}(t-mh)}P_{22}^{(m)}e^{-A_{22}(t-mh)}\]
\[-\int\limits_{mh}^{t}e^{-A_{22}^{\mathrm{T}}(t-s)}(A_{12}^{\mathrm{T}}(s)P_{12 }(s)+P_{21}(s)A_{12}(s))e^{-A_{22}(t-s)}\,ds,\]
\[P_{12}(t)=e^{-A_{11}^{\mathrm{T}}(t-mh)}P_{12}^{(m)}e^{-A_{22}(t-mh)}\]
\[-\int\limits_{mh}^{t}e^{-A_{11}^{\mathrm{T}}(t-s)}(P_{11}(s)A_{12}(s)+A_{21}^ {\mathrm{T}}(s)P_{22}(s))e^{-A_{22}(t-s)}\,ds.\]
Here, \(P_{ij}^{(m)}=P_{ij}(mh-0)\), \(m=1,\ldots,N-1\), \(i,j=1,2\). For \(h\) small enough we use the approximation \(P_{ij}(s)\approx P_{ij}^{(m)}\) in the integrals which leads to the approximate solutions to (5) for \(t\in(mh,(m+1)h]\) as follows
\[P_{11}(t)\approx e^{-A_{11}^{\mathrm{T}}(t-mh)}(P_{11}^{(m)}-\int\limits_{mh }^{t}(P_{12}^{(m)}A_{21}(s)+A_{21}^{\mathrm{T}}(s)P_{21}^{(m)})\,ds)e^{-A_{11 }(t-mh)},\]
\[P_{22}(t)\approx e^{-A_{22}^{\mathrm{T}}(t-mh)}(P_{22}^{(m)}-\int\limits_{mh }^{t}(A_{12}^{\mathrm{T}}(s)P_{12}^{(m)}+P_{21}^{(m)}A_{12}(s))\,ds)e^{-A_{22} (t-mh)},\]
\[P_{12}(t)\approx e^{-A_{11}^{\mathrm{T}}(t-mh)}(P_{12}^{(m)}-\int\limits_{mh }^{t}(P_{11}^{(m)}A_{12}(s)+A_{21}^{\mathrm{T}}(s)P_{22}^{(m)})\,ds)e^{-A_{22} (t-mh)}.\]
Note that for \(h\to 0+\) these approximations converge to the true solutions of (5). Since the asymptotic stability conditions derived with the help of \(\mathfrak{V}(t,x_{1},x_{2})\) are necessary and sufficient, the matrix-valued Lyapunov function whose elements are given by these approximations leads to sufficient asymptotic stability conditions for (1) that are arbitrarily close to the necessary ones.
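For comparison with the exact Floquet test mentioned above, the monodromy matrix of (1) with constant dwell-time \(\theta\) can be computed by direct integration. A sketch, assuming SciPy and a user-supplied callable `A_of_t` returning the full block matrix \(A(t)\) (both names are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_radius(A_of_t, B, theta):
    """Spectral radius of the monodromy matrix Phi = B X(theta), where
    X'(t) = A(t) X(t), X(0) = I; system (1) with T_k = theta is
    asymptotically stable iff the returned value is < 1."""
    n = B.shape[0]
    rhs = lambda t, x: (A_of_t(t) @ x.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (0.0, theta), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    Phi = B @ sol.y[:, -1].reshape(n, n)
    return float(np.max(np.abs(np.linalg.eigvals(Phi))))   # r_sigma(Phi)
```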
## 4 The case when the system is periodic
### Construction of the matrix-valued Lyapunov function
We proceed to a rigorous justification of the proposed method of construction in the case when the dwell-times are constant and equal to the period \(\theta\) of \(A_{ij}(t)\). We introduce discretization parameters: the number of nodes \(N\in\mathbb{N}\) and the discretization step length \(h=\frac{\theta}{N}\). Let \(P_{0}=(P_{ij}^{(0)})_{i,j=1,2}\) be a positive definite symmetric block matrix, \(P_{ij}^{(0)}\in\mathbb{R}^{n_{i}\times n_{j}}\), \(P_{ij}^{(0)}=(P_{ji}^{(0)})^{\mathrm{T}}\,\). We define recursively the matrices \(P_{ij}^{(m)},\quad P_{ji}^{(m)}=(P_{ij}^{(m)})^{\mathrm{T}}\,,\quad i,j=1,2\) as follows
\[P_{11}^{(m+1)}=e^{-A_{11}^{\mathrm{T}}h}(P_{11}^{(m)}-\int\limits_{mh}^{(m+1) h}(P_{12}^{(m)}A_{21}(s)+A_{21}^{\mathrm{T}}(s)P_{21}^{(m)})\,ds)e^{-A_{11}h}, \tag{6}\]
\[P_{22}^{(m+1)}=e^{-A_{22}^{\mathrm{T}}h}(P_{22}^{(m)}-\int\limits_{mh}^{(m+1) h}(A_{12}^{\mathrm{T}}(s)P_{12}^{(m)}+P_{21}^{(m)}A_{12}(s))\,ds)e^{-A_{22}h} \tag{7}\]
\[P_{12}^{(m+1)}=e^{-A_{11}^{\mathrm{T}}h}(P_{12}^{(m)}-\int\limits_{mh}^{(m+1) h}(P_{11}^{(m)}A_{12}(s)+A_{21}^{\mathrm{T}}(s)P_{22}^{(m)})\,ds)e^{-A_{22}h}. \tag{8}\]
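A minimal sketch of one step of the recursion (6)-(8), assuming NumPy/SciPy; the integrals \(\int A_{ij}(s)\,ds\) are approximated here by Gauss-Legendre quadrature, which is our own choice (any sufficiently accurate quadrature will do, and for simple couplings the integrals can be taken in closed form):

```python
import numpy as np
from scipy.linalg import expm

def step_P(P11, P22, P12, A11, A22, A12_t, A21_t, a, b, quad_n=16):
    """Advance the blocks P_ij^{(m)} -> P_ij^{(m+1)} via (6)-(8) over
    the interval (a, b], b - a = h; A12_t, A21_t return A_12(t), A_21(t)."""
    x, w = np.polynomial.legendre.leggauss(quad_n)         # nodes/weights on [-1, 1]
    s = 0.5 * (b - a) * x + 0.5 * (a + b)
    ws = 0.5 * (b - a) * w
    I12 = sum(wi * A12_t(si) for wi, si in zip(ws, s))     # int_a^b A_12(s) ds
    I21 = sum(wi * A21_t(si) for wi, si in zip(ws, s))     # int_a^b A_21(s) ds
    E1, E2 = expm(-A11 * (b - a)), expm(-A22 * (b - a))    # e^{-A_11 h}, e^{-A_22 h}
    P21 = P12.T
    P11n = E1.T @ (P11 - (P12 @ I21 + I21.T @ P21)) @ E1   # formula (6)
    P22n = E2.T @ (P22 - (I12.T @ P12 + P21 @ I12)) @ E2   # formula (7)
    P12n = E1.T @ (P12 - (P11 @ I12 + I21.T @ P22)) @ E2   # formula (8)
    return P11n, P22n, P12n
```

Since the integrands in (6)-(8) have the constant matrices \(P_{ij}^{(m)}\) as factors, only the two integrals \(\int_{a}^{b}A_{12}(s)\,ds\) and \(\int_{a}^{b}A_{21}(s)\,ds\) need to be evaluated per step.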
Next, define the matrices \(P_{ij}(t)\), \(i,j=1,2\), \(P_{ij}(t)=P_{ji}^{\mathrm{T}}(t)\) on the intervals \((mh,(m+1)h]\) by setting
\[P_{11}(t)=e^{-A_{11}^{\mathrm{T}}(t-mh)}(P_{11}^{(m)}-\int\limits_{mh}^{t}(P_ {12}^{(m)}A_{21}(s)+A_{21}^{\mathrm{T}}(s)P_{21}^{(m)})\,ds)e^{-A_{11}(t-mh)}, \tag{9}\]
\[P_{22}(t)=e^{-A_{22}^{\mathrm{T}}(t-mh)}(P_{22}^{(m)}-\int\limits_{mh}^{t}(A_ {12}^{\mathrm{T}}(s)P_{12}^{(m)}+P_{21}^{(m)}A_{12}(s))\,ds)e^{-A_{22}(t-mh)} \tag{10}\]
\[P_{12}(t)=e^{-A_{11}^{\mathrm{T}}(t-mh)}(P_{12}^{(m)}-\int\limits_{mh}^{t}(P_ {11}^{(m)}A_{12}(s)+A_{21}^{\mathrm{T}}(s)P_{22}^{(m)})\,ds)e^{-A_{22}(t-mh)}. \tag{11}\]
We define \(\mathfrak{V}(t,x_{1},x_{2})=(v_{ij}(t,.,.))_{i,j=1,2}\), where \(\,v_{ij}(t,x_{i},x_{j})=x_{i}^{\mathrm{T}}\,P_{ij}(t)x_{j}\), \(P_{ij}(t)=P_{ji}^{\mathrm{T}}(t)\), \(i,j=1,2\). Using the function \(\mathfrak{V}\), we construct the scalar Lyapunov function [16]
\[v(t,x_{1},x_{2})=v_{11}(t,x_{1})+2v_{12}(t,x_{1},x_{2})+v_{22}(t,x_{2}). \tag{12}\]
We establish some auxiliary estimates for the derivatives of the components of \(\mathfrak{V}\) along the solutions of (1), as well as estimates for the Lyapunov function \(v(t,x_{1},x_{2})\), which are needed for the construction of a linear impulsive comparison equation and for obtaining sufficient conditions for the asymptotic stability of (1). We will also use the following proposition.
**Proposition 4.1.** Let \(M,\mu>0\) be such that \(\|e^{tA}\|\leq Me^{t\mu}\), \(t\geq 0\). Then
\[\|e^{tA}-I\|\leq\frac{\|A\|M}{\mu}(e^{t\mu}-1),\quad t\geq 0. \tag{13}\]
_Proof._ Let \(X(t)=e^{tA}-I\); then \(X(0)=0\) and \(\dot{X}(t)=A(X(t)+I)\). Applying the Cauchy formula, we obtain \(X(t)=\int\limits_{0}^{t}e^{A(t-s)}A\,ds\), which implies (13).
The following assertion is necessary to verify the positive-definiteness conditions for the proposed Lyapunov function \(v(t,x_{1},x_{2})\). Since this function depends explicitly on time \(t\), it is impossible to check this condition pointwise for all \(t\in[0,\theta]\). The following Lemma 4.1 reduces this checking to a finite number of conditions.
**Lemma 4.1.** Let \(z_{1m}=e^{-A_{11}(t-mh)}x_{1}\), \(z_{2m}=e^{-A_{22}(t-mh)}x_{2}\), \(t\in(mh,(m+1)h]\). Then,
\[\begin{array}{l}\lambda_{\min}(\Pi_{m})\|z_{m}\|^{2}\leq v(t,x_{1},x_{2}) \leq\lambda_{\max}(\Xi_{m})\|z_{m}\|^{2},\\ \mbox{for all}\quad t\in(mh,(m+1)h],\quad m=0,\ldots,N-1,\end{array} \tag{14}\]
where \(z_{m}=(z_{1m}^{\rm T},z_{2m}^{\rm T})^{\rm T}\), \(\|z_{m}\|^{2}=\|z_{1m}\|^{2}+\|z_{2m}\|^{2}\), \(\Pi_{m}=(\pi_{ij}^{(m)})_{i,j=1,2}\), \(\Xi_{m}=(\xi_{ij}^{(m)})_{i,j=1,2}\) are block matrices with the elements
\[\begin{array}{l}\pi_{11}^{(m)}=P_{11}^{(m)}-h(2\gamma_{21}^{(m)}\|P_{12}^{( m)}\|+(\gamma_{12}^{(m)}\|P_{11}^{(m)}\|+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|))I _{n_{1}},\\ \pi_{22}^{(m)}=P_{22}^{(m)}-h(2\gamma_{12}^{(m)}\|P_{12}^{(m)}\|+(\gamma_{12}^{ (m)}\|P_{11}^{(m)}\|+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|))I_{n_{2}},\\ \pi_{12}^{(m)}=P_{12}^{(m)},\quad\pi_{21}^{(m)}=P_{21}^{(m)}\end{array} \tag{15}\]
\[\begin{array}{l}\xi_{11}^{(m)}=P_{11}^{(m)}+h(2\gamma_{21}^{(m)}\|P_{12}^{( m)}\|+(\gamma_{12}^{(m)}\|P_{11}^{(m)}\|+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|))I _{n_{1}},\\ \xi_{22}^{(m)}=P_{22}^{(m)}+h(2\gamma_{12}^{(m)}\|P_{12}^{(m)}\|+(\gamma_{12}^ {(m)}\|P_{11}^{(m)}\|+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|))I_{n_{2}},\\ \xi_{12}^{(m)}=P_{12}^{(m)},\quad\xi_{21}^{(m)}=P_{21}^{(m)}.\end{array}\]
The proof of this statement is given in the Appendix.
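The matrices \(\Pi_{m}\), \(\Xi_{m}\) of (15) are assembled directly from the blocks \(P_{ij}^{(m)}\) and the constants \(\gamma_{ij}^{(m)}\); a sketch assuming NumPy (`np.linalg.norm(., 2)` is the spectral norm used throughout the paper, and the function name is ours):

```python
import numpy as np

def pi_xi(P11, P22, P12, g12, g21, h):
    """Block matrices Pi_m and Xi_m of Lemma 4.1, formulas (15);
    g12, g21 stand for gamma_12^{(m)}, gamma_21^{(m)}."""
    n1, n2 = P11.shape[0], P22.shape[0]
    nrm = lambda M: np.linalg.norm(M, 2)
    c = g12 * nrm(P11) + g21 * nrm(P22)
    d1 = h * (2.0 * g21 * nrm(P12) + c)        # shift in the (1,1) block
    d2 = h * (2.0 * g12 * nrm(P12) + c)        # shift in the (2,2) block
    P = np.block([[P11, P12], [P12.T, P22]])
    D = np.block([[d1 * np.eye(n1), np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), d2 * np.eye(n2)]])
    return P - D, P + D                        # (Pi_m, Xi_m)
```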
Let
\[\begin{array}{l}\eta_{11}^{(m)}:=\sqrt{\|P_{11}^{(m)}\|^{2}+\|P_{12}^{(m)}\|^{2} }+\|P_{12}^{(m)}\|,\\ \eta_{22}^{(m)}:=\sqrt{\|P_{22}^{(m)}\|^{2}+\|P_{12}^{(m)}\|^{2}}+\|P_{12}^{(m)}\|,\\ \eta_{12}^{(m)}:=\frac{1}{2}\Big(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{22}^{(m)}\| \gamma_{21}^{(m)}\\ +\sqrt{(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{22}^{(m)}\|\gamma_{21}^{(m)})^{ 2}+16(\gamma_{21}^{(m)})^{2}\|P_{12}^{(m)}\|^{2}}\Big),\\ \eta_{21}^{(m)}:=\frac{1}{2}\Big(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{22}^{(m)}\| \gamma_{21}^{(m)}\\ +\sqrt{(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{22}^{(m)}\|\gamma_{21}^{(m)})^{ 2}+16(\gamma_{12}^{(m)})^{2}\|P_{12}^{(m)}\|^{2}}\Big).\end{array}\]
We denote
\[\begin{array}{l}\alpha_{11}^{(m)}:=\gamma_{12}^{(m)}\|A_{22}\|N_{1}M_{2}\eta_ {11}^{(m)},\quad\alpha_{12}^{(m)}:=\gamma_{12}^{(m)}\|A_{11}\|N_{1}\eta_{11}^{ (m)},\\ \alpha_{21}^{(m)}:=\gamma_{21}^{(m)}\|A_{11}\|N_{2}M_{1}\eta_{22}^{(m)},\quad \alpha_{22}^{(m)}:=\gamma_{21}^{(m)}\|A_{22}\|N_{2}\eta_{22}^{(m)},\end{array}\]
\[\begin{array}{l}\Theta_{m}(h):=\frac{\alpha_{11}^{(m)}}{\mu_{2}}\Big{(}\frac {e^{(\mu_{2}+\delta_{1})h}-1}{\mu_{2}+\delta_{1}}-\frac{e^{\delta_{1}h}-1}{ \delta_{1}}\Big{)}+\frac{\alpha_{12}^{(m)}}{\delta_{1}}\Big{(}\frac{e^{h\delta _{1}}-1}{\delta_{1}}-h\Big{)}\\ +\frac{\alpha_{21}^{(m)}}{\mu_{1}}\Big{(}\frac{e^{(\mu_{1}+\delta_{2})h}-1}{ \mu_{1}+\delta_{2}}-\frac{e^{\delta_{2}h}-1}{\delta_{2}}\Big{)}+\frac{\alpha_ {22}^{(m)}}{\delta_{2}}\Big{(}\frac{e^{h\delta_{2}}-1}{\delta_{2}}-h\Big{)}+\\ +2\Big{(}\gamma_{12}^{(m)}\eta_{12}^{(m)}N_{1}M_{2}\Big{(}\frac{he^{(\mu_{2}+ \delta_{1})h}}{\mu_{2}+\delta_{1}}-\frac{e^{(\mu_{2}+\delta_{1})h}-1}{(\mu_{2 }+\delta_{1})^{2}}\Big{)}\\ +\gamma_{21}^{(m)}\eta_{21}^{(m)}N_{2}M_{1}\Big{(}\frac{he^{(\mu_{1}+\delta_{2 })h}}{\mu_{1}+\delta_{2}}-\frac{e^{(\mu_{1}+\delta_{2})h}-1}{(\mu_{1}+\delta_{ 2})^{2}}\Big{)}\Big{)}.\end{array} \tag{16}\]
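The scalar \(\Theta_{m}(h)\) can be evaluated verbatim from (16); a sketch assuming NumPy (the helper `phi` guards the removable singularity at zero; this is a purely numerical safeguard, since in (16) it is implicitly assumed that \(\mu_{i},\delta_{i}\neq 0\)):

```python
import numpy as np

def phi(a, h):
    """(e^{a h} - 1) / a, with the limit h as a -> 0."""
    return np.expm1(a * h) / a if abs(a) > 1e-12 else h

def theta_m(h, a11, a12, a21, a22, g12, g21, e12, e21,
            M1, M2, N1, N2, mu1, mu2, d1, d2):
    """Theta_m(h) of (16); a_ij = alpha_ij^{(m)}, e_ij = eta_ij^{(m)},
    g_ij = gamma_ij^{(m)}, d_i = delta_i."""
    r1, r2 = mu2 + d1, mu1 + d2
    t = a11 / mu2 * (phi(r1, h) - phi(d1, h)) \
        + a12 / d1 * (phi(d1, h) - h) \
        + a21 / mu1 * (phi(r2, h) - phi(d2, h)) \
        + a22 / d2 * (phi(d2, h) - h)
    t += 2.0 * (g12 * e12 * N1 * M2 * (h * np.exp(r1 * h) / r1 - np.expm1(r1 * h) / r1**2)
                + g21 * e21 * N2 * M1 * (h * np.exp(r2 * h) / r2 - np.expm1(r2 * h) / r2**2))
    return t
```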
The following lemma establishes estimates for the change of the Lyapunov function on each of the discretization intervals \((mh,(m+1)h]\).
**Lemma 4.2.** If \(\Pi_{m}\) are positive definite for all \(m=0,\ldots,N-1\), then
\[\begin{array}{l}v((m+1)h,x_{1}((m+1)h),x_{2}((m+1)h))\\ \leq e^{\frac{\Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}}v(mh+0,x_{1}(mh+0),x_{2} (mh+0)).\end{array} \tag{17}\]
Proof.: Let \(t\in(mh,(m+1)h)\). It can be established by direct calculations that
\[\begin{array}{c}\dot{v}(t,x_{1}(t),x_{2}(t))=x_{1}^{\rm T}(t)(P_{12}(t)A_{21}(t )+A_{21}^{\rm T}(t)P_{21}(t)\\ -e^{-A_{11}^{\rm T}(t-mh)}(P_{12}^{(m)}A_{21}(t)+A_{21}^{\rm T}(t)P_{21}^{(m)} )e^{-A_{11}(t-mh)})x_{1}(t)\\ \hskip 28.452756pt+2x_{1}^{\rm T}(t)(P_{11}(t)A_{12}(t)+A_{21}^{\rm T}(t)P_{22} (t)\\ -e^{-A_{11}^{\rm T}(t-mh)}(P_{11}^{(m)}A_{12}(t)+A_{21}^{\rm T}(t)P_{22}^{(m)} )e^{-A_{22}(t-mh)})x_{2}(t)\\ \hskip 28.452756pt+x_{2}^{\rm T}(t)((A_{12}^{\rm T}(t)P_{12}(t)+P_{21}(t)A_{12 }(t))\\ -e^{-A_{22}^{\rm T}(t-mh)}(A_{12}^{\rm T}(t)P_{12}^{(m)}+P_{21}^{(m)}A_{12}(t) )e^{-A_{22}(t-mh)})x_{2}(t)\\ =z_{1m}^{\rm T}(t)(e^{A_{11}^{\rm T}(t-mh)}(P_{12}(t)A_{21}(t)+A_{21}^{\rm T}( t)P_{21}(t))e^{A_{11}(t-mh)}\\ \hskip 28.452756pt-(P_{12}^{(m)}A_{21}(t)+A_{21}^{\rm T}(t)P_{21}^{(m)}))z_{1m }(t)\\ +2z_{1m}^{\rm T}(t)(e^{A_{11}^{\rm T}(t-mh)}(P_{11}(t)A_{12}(t)+A_{21}^{\rm T}( t)P_{22}(t))e^{A_{22}(t-mh)}\\ -(P_{11}^{(m)}A_{12}(t)+A_{21}^{\rm T}(t)P_{22}^{(m)}))z_{2m}(t)\\ +z_{2m}^{\rm T}(t)(e^{A_{22}^{\rm T}(t-mh)}(A_{12}^{\rm T}(t)P_{12}(t)+P_{21}( t)A_{12}(t))e^{A_{22}(t-mh)}\\ -(A_{12}^{\rm T}(t)P_{12}^{(m)}+P_{21}^{(m)}A_{12}(t)))z_{2m}(t)\end{array} \tag{18}\]
Let us consider separately
\[\begin{array}{c}e^{A_{11}^{\rm T}(t-mh)}(P_{12}(t)A_{21}(t)+A_{21}^{\rm T}( t)P_{21}(t))e^{A_{11}(t-mh)}-(P_{12}^{(m)}A_{21}(t)+A_{21}^{\rm T}(t)P_{21}^{(m)} )\\ \hskip 28.452756pt=e^{A_{11}^{\rm T}(t-mh)}P_{12}(t)A_{21}(t)e^{A_{11}(t-mh)}-P _{12}^{(m)}A_{21}(t)\\ \hskip 28.452756pt+e^{A_{11}^{\rm T}(t-mh)}A_{21}^{\rm T}(t)P_{21}(t)e^{A_{11}( t-mh)}-A_{21}^{\rm T}(t)P_{21}^{(m)}\end{array}\]
Taking into account the explicit expressions for \(P_{12}(t)\) from (11) we get
\[\begin{array}{c}e^{A_{11}^{\rm T}(t-mh)}P_{12}(t)A_{21}(t)e^{A_{11}(t-mh)}- P_{12}^{(m)}A_{21}(t)\\ =(P_{12}^{(m)}-\int\limits_{mh}^{t}(P_{11}^{(m)}A_{12}(s)+A_{21}^{\rm T}(s)P_{2 2}^{(m)})\,ds)e^{-A_{22}(t-mh)}A_{21}(t)e^{A_{11}(t-mh)}\\ \hskip 28.452756pt-P_{12}^{(m)}A_{21}(t))=P_{12}^{(m)}(e^{-A_{22}(t-mh)}A_{21}( t)e^{A_{11}(t-mh)}-A_{21}(t))\\ \hskip 28.452756pt-\int\limits_{mh}^{t}(P_{11}^{(m)}A_{12}(s)+A_{21}^{\rm T}( s)P_{22}^{(m)})\,ds)e^{-A_{22}(t-mh)}A_{21}(t)e^{A_{11}(t-mh)}\end{array}\]
Consequently, using (13) we obtain
\[\|e^{A_{11}^{\rm T}(t-mh)}P_{12}(t)A_{21}(t)e^{A_{11}(t-mh)}-P_{12}^{(m)}A_{21}(t)\|\]
\[\leq\gamma_{21}^{(m)}\Big{(}\|P_{12}^{(m)}\|(\frac{\|A_{11}\|M_{1}N_{2}}{\mu_{1} }e^{\delta_{2}(t-mh)}(e^{\mu_{1}(t-mh)}-1)+\frac{N_{2}\|A_{22}\|}{\delta_{2}}( e^{\delta_{2}(t-mh)}-1))\]
\[+(t-mh)(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|)M_ {1}N_{2}e^{(t-mh)(\mu_{1}+\delta_{2})}\Big{)}\]
and
\[\|e^{A_{11}^{\rm T}(t-mh)}A_{21}^{\rm T}(t)P_{21}(t)e^{A_{11}(t-mh)}-A_{21}^{ \rm T}(t)P_{21}^{(m)}\|\]
\[\leq\gamma_{21}^{(m)}\Big{(}\|P_{12}^{(m)}\|(\frac{\|A_{11}\|M_{1}N_{2}}{\mu_{ 1}}e^{\delta_{2}(t-mh)}(e^{\mu_{1}(t-mh)}-1)+\frac{N_{2}\|A_{22}\|}{\delta_{2} }(e^{\delta_{2}(t-mh)}-1))\]
\[+(t-mh)(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|)M_ {1}N_{2}e^{(t-mh)(\mu_{1}+\delta_{2})}\Big{)}\]
Hence, applying the triangle inequality we obtain
\[\|e^{A_{11}^{\rm T}(t-mh)}(P_{12}(t)A_{21}(t)+A_{21}^{\rm T}(t)P_{21}(t))e^{A_ {11}(t-mh)}-(P_{12}^{(m)}A_{21}(t)+A_{21}^{\rm T}(t)P_{21}^{(m)})\|\]
\[\leq 2\gamma_{21}^{(m)}\Big{(}\|P_{12}^{(m)}\|(\frac{\|A_{11}\|M_{1}N_{2}}{\mu_ {1}}e^{\delta_{2}(t-mh)}(e^{\mu_{1}(t-mh)}-1)+\frac{N_{2}\|A_{22}\|}{\delta_{2 }}(e^{\delta_{2}(t-mh)}-1))\]
\[+(t-mh)(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\gamma_{21}^{(m)}\|P_{22}^{(m)}\|)M_ {1}N_{2}e^{(t-mh)(\mu_{1}+\delta_{2})}\Big{)}:=\psi_{11}^{(m)}(t).\]
Let us transform the following expression
\[e^{A_{11}^{\rm T}(t-mh)}(P_{11}(t)A_{12}(t)+A_{21}^{\rm T}(t)P_{22}(t))e^{A_{2 2}(t-mh)}-(P_{11}^{(m)}A_{12}(t)+A_{21}^{\rm T}(t)P_{22}^{(m)})\]
\[=e^{A_{11}^{\rm T}(t-mh)}P_{11}(t)A_{12}(t)e^{A_{22}(t-mh)}-P_{11}^{(m)}A_{12}(t)\]
\[+e^{A_{11}^{\rm T}(t-mh)}A_{21}^{\rm T}(t)P_{22}(t)e^{A_{22}(t-mh)}-A_{21}^{ \rm T}(t)P_{22}^{(m)}\]
Taking into account (9), consider separately
\[e^{A_{11}^{\rm T}(t-mh)}P_{11}(t)A_{12}(t)e^{A_{22}(t-mh)}-P_{11}^{(m)}A_{12}(t)\]
\[=P_{11}^{(m)}(e^{-A_{11}(t-mh)}A_{12}(t)e^{A_{22}(t-mh)}-A_{12}(t))\]
\[-\int\limits_{mh}^{t}(P_{12}^{(m)}A_{21}(s)+A_{21}^{\rm T}(s)P_{21}^{(m)})\, dse^{-A_{11}(t-mh)}A_{12}(t)e^{A_{22}(t-mh)}\]
Consequently, using (13) we get
\[\|e^{A_{11}^{\rm T}(t-mh)}P_{11}(t)A_{12}(t)e^{A_{22}(t-mh)}-P_{11}^{(m)}A_{12 }(t)\|\]
\[\leq\gamma_{12}^{(m)}\Big{(}\|P_{11}^{(m)}\|(\frac{\|A_{22}\|M_{2}N_{1}}{\mu_ {2}}e^{\delta_{1}(t-mh)}(e^{\mu_{2}(t-mh)}-1)+\frac{N_{1}\|A_{11}\|}{\delta_{1} }(e^{\delta_{1}(t-mh)}-1))\]
\[+2(t-mh)\|P_{12}^{(m)}\|\gamma_{21}^{(m)}N_{1}M_{2}e^{(t-mh)(\delta_{1}+\mu_{ 2})}\Big{)}.\]
Taking into account (10) we obtain
\[e^{A^{\rm T}_{11}(t-mh)}A^{\rm T}_{21}(t)P_{22}(t)e^{A_{22}(t-mh)}-A ^{\rm T}_{21}(t)P^{(m)}_{22}\] \[=(e^{A^{\rm T}_{11}(t-mh)}A^{\rm T}_{21}(t)e^{-A^{\rm T}_{22}(t-mh) }-A^{\rm T}_{21}(t))P^{(m)}_{22}\] \[-e^{A^{\rm T}_{11}(t-mh)}A^{\rm T}_{21}(t)e^{-A^{\rm T}_{22}(t-mh) }\int\limits_{mh}^{t}(A^{\rm T}_{12}(s)P^{(m)}_{12}+P^{(m)}_{21}A_{12}(s))\,ds\]
Thus, taking into account (13) we have
\[\|e^{A^{\rm T}_{11}(t-mh)}A^{\rm T}_{21}(t)P_{22}(t)e^{A_{22}(t-mh)}-A^{\rm T} _{21}(t)P^{(m)}_{22}\|\]
\[\leq\gamma^{(m)}_{21}\Big{(}\|P^{(m)}_{22}\|(\frac{\|A_{11}\|M_{1}N_{2}}{\mu_ {1}}e^{\delta_{2}(t-mh)}(e^{\mu_{1}(t-mh)}-1)+\frac{N_{2}\|A_{22}\|}{\delta_{2 }}(e^{\delta_{2}(t-mh)}-1))\]
\[+2(t-mh)\|P^{(m)}_{12}\|\gamma^{(m)}_{12}N_{2}M_{1}e^{(t-mh)(\delta_{2}+\mu_{ 1})}\Big{)}\]
Thereby, finally we find the following estimate
\[\|e^{A^{\rm T}_{11}(t-mh)}(P_{11}(t)A_{12}(t)+A^{\rm T}_{21}(t)P_{ 22}(t))e^{A_{22}(t-mh)}-(P^{(m)}_{11}A_{12}(t)+A^{\rm T}_{21}(t)P^{(m)}_{22})\|\] \[\leq\gamma^{(m)}_{12}\Big{(}\|P^{(m)}_{11}\|(\frac{\|A_{22}\|M_{2 }N_{1}}{\mu_{2}}e^{\delta_{1}(t-mh)}(e^{\mu_{2}(t-mh)}-1)+\frac{N_{1}\|A_{11} \|}{\delta_{1}}(e^{\delta_{1}(t-mh)}-1))\]
\[+2(t-mh)\|P^{(m)}_{12}\|\gamma^{(m)}_{21}N_{1}M_{2}e^{(t-mh)(\delta_{1}+\mu_{ 2})}\Big{)}\]
\[+\gamma^{(m)}_{21}\Big{(}\|P^{(m)}_{22}\|(\frac{\|A_{11}\|M_{1}N_{2}}{\mu_{1} }e^{\delta_{2}(t-mh)}(e^{\mu_{1}(t-mh)}-1)+\frac{N_{2}\|A_{22}\|}{\delta_{2}}( e^{\delta_{2}(t-mh)}-1))\]
\[+2(t-mh)\|P^{(m)}_{12}\|\gamma^{(m)}_{12}N_{2}M_{1}e^{(t-mh)(\delta_{2}+\mu_{ 1})}\Big{)}:=\psi^{(m)}_{12}(t).\]
Further, we consider
\[e^{A^{\rm T}_{22}(t-mh)}(A^{\rm T}_{12}(t)P_{12}(t)+P_{21}(t)A_{12}(t))e^{A_{2 2}(t-mh)}-(A^{\rm T}_{12}(t)P^{(m)}_{12}+P^{(m)}_{21}A_{12}(t))\]
\[=e^{A^{\rm T}_{22}(t-mh)}A^{\rm T}_{12}(t)P_{12}(t)e^{A_{22}(t-mh)}-A^{\rm T}_{1 2}(t)P^{(m)}_{12}\]
\[+e^{A^{\rm T}_{22}(t-mh)}P_{21}(t)A_{12}(t)e^{A_{22}(t-mh)}-P^{(m)}_{21}A_{12}(t)\]
Taking into account (11) we obtain
\[e^{A^{\rm T}_{22}(t-mh)}A^{\rm T}_{12}(t)P_{12}(t)e^{A_{22}(t-mh)}-A^{\rm T}_{1 2}(t)P^{(m)}_{12}\]
\[=(e^{A^{\rm T}_{22}(t-mh)}A^{\rm T}_{12}(t)e^{-A^{\rm T}_{11}(t-mh)}-A^{\rm T}_ {12}(t))P^{(m)}_{12}\]
\[-e^{A^{\rm T}_{22}(t-mh)}A^{\rm T}_{12}(t)e^{-A^{\rm T}_{11}(t-mh)}\int \limits_{mh}^{t}(P^{(m)}_{11}A_{12}(s)+A^{\rm T}_{21}(s)P^{(m)}_{22})\,ds\]
Consequently, applying (13) we obtain
\[\|e^{A^{\rm T}_{22}(t-mh)}A^{\rm T}_{12}(t)P_{12}(t)e^{A_{22}(t-mh)}-A^{\rm T}_{12 }(t)P^{(m)}_{12}\|\]
\[\leq\gamma^{(m)}_{12}\Big{(}\|P^{(m)}_{12}\|(\frac{\|A_{22}\|M_{2}N_{1}}{\mu_{2} }e^{\delta_{1}(t-mh)}(e^{\mu_{2}(t-mh)}-1)+\frac{N_{1}\|A_{11}\|}{\delta_{1}}( e^{\delta_{1}(t-mh)}-1))\]
\[+(t-mh)(\|P^{(m)}_{22}\|\gamma^{(m)}_{21}+\gamma^{(m)}_{12}\|P^{(m)}_{11}\|)M_ {2}N_{1}e^{(t-mh)(\mu_{2}+\delta_{1})}\Big{)}\]
Thus,
\[\|e^{A^{\rm T}_{22}(t-mh)}(A^{\rm T}_{12}(t)P_{12}(t)+P_{21}(t)A_{12}(t))e^{A_{ 22}(t-mh)}\]
\[-(A^{\rm T}_{12}(t)P^{(m)}_{12}+P^{(m)}_{21}A_{12}(t))\|\]
\[\leq 2\gamma^{(m)}_{12}\Big{(}\|P^{(m)}_{12}\|(\frac{\|A_{22}\|M_{2}N_{1}}{\mu_ {2}}e^{\delta_{1}(t-mh)}(e^{\mu_{2}(t-mh)}-1)+\frac{N_{1}\|A_{11}\|}{\delta_{1} }(e^{\delta_{1}(t-mh)}-1))\]
\[+(t-mh)(\|P^{(m)}_{22}\|\gamma^{(m)}_{21}+\gamma^{(m)}_{12}\|P^{(m)}_{11}\|)M_ {2}N_{1}e^{(t-mh)(\mu_{2}+\delta_{1})}\Big{)}:=\psi^{(m)}_{22}(t)\]
We define matrices \(\Psi_{m}(t)=(\psi^{(m)}_{ij}(t))_{i,j=1,2}\), then
\[\dot{v}(t,x_{1}(t),x_{2}(t))\leq\zeta^{\rm T}_{m}(t)\Psi_{m}(t)\zeta_{m}(t), \tag{19}\]
where \(\zeta_{m}(t)=(\|z_{1m}(t)\|,\|z_{2m}(t)\|)^{\rm T}\). We represent \(\Psi_{m}(t)\) in the following form
\[\Psi_{m}(t)=\gamma^{(m)}_{12}\Big{(}\|A_{22}\|M_{2}N_{1}e^{\delta_{1}(t-mh)} \frac{e^{\mu_{2}(t-mh)}-1}{\mu_{2}}+N_{1}\|A_{11}\|\frac{e^{\delta_{1}(t-mh)}- 1}{\delta_{1}}\Big{)}\Upsilon_{1}\]
\[+\gamma^{(m)}_{21}\Big{(}\|A_{11}\|M_{1}N_{2}e^{\delta_{2}(t-mh)}\frac{e^{\mu _{1}(t-mh)}-1}{\mu_{1}}+N_{2}\|A_{22}\|\frac{e^{\delta_{2}(t-mh)}-1}{\delta_{ 2}}\Big{)}\Upsilon_{2}\]
\[+2\gamma^{(m)}_{12}N_{1}M_{2}(t-mh)e^{(t-mh)(\mu_{2}+\delta_{1})}\widetilde{ \Upsilon}_{1}+2\gamma^{(m)}_{21}N_{2}M_{1}(t-mh)e^{(t-mh)(\mu_{1}+\delta_{2})} \widetilde{\Upsilon}_{2},\]
where
\[\Upsilon_{1}=\begin{pmatrix}0&\|P^{(m)}_{11}\|\\ \|P^{(m)}_{11}\|&2\|P^{(m)}_{12}\|\end{pmatrix},\quad\Upsilon_{2}=\begin{pmatrix} 2\|P^{(m)}_{12}\|&\|P^{(m)}_{22}\|\\ \|P^{(m)}_{22}\|&0\end{pmatrix}\]
\[\widetilde{\Upsilon}_{1}=\begin{pmatrix}0&2\|P^{(m)}_{12}\|\gamma^{(m)}_{21}\\ 2\|P^{(m)}_{12}\|\gamma^{(m)}_{21}&(\|P^{(m)}_{11}\|\gamma^{(m)}_{12}+\|P^{(m)} _{22}\|\gamma^{(m)}_{21})\end{pmatrix},\]
\[\widetilde{\Upsilon}_{2}=\begin{pmatrix}(\|P^{(m)}_{11}\|\gamma^{(m)}_{12}+\| P^{(m)}_{22}\|\gamma^{(m)}_{21})&2\|P^{(m)}_{12}\|\gamma^{(m)}_{12}\\ 2\|P^{(m)}_{12}\|\gamma^{(m)}_{12}&0\end{pmatrix},\]
Direct calculations show \(\|\Upsilon_{1}\|=\eta^{(m)}_{11}\), \(\|\Upsilon_{2}\|=\eta^{(m)}_{22}\), \(\|\widetilde{\Upsilon}_{1}\|=\eta^{(m)}_{12}\), \(\|\widetilde{\Upsilon}_{2}\|=\eta^{(m)}_{21}\).
For \(\Psi_{m}(t)\) the following estimates hold
\[\|\Psi_{m}(t)\|\leq\alpha_{11}^{(m)}e^{\delta_{1}(t-mh)}\frac{e^{\mu_ {2}(t-mh)}-1}{\mu_{2}}+\alpha_{12}^{(m)}\frac{e^{\delta_{1}(t-mh)}-1}{\delta_{1}}\] \[+\alpha_{21}^{(m)}e^{\delta_{2}(t-mh)}\frac{e^{\mu_{1}(t-mh)}-1}{ \mu_{1}}+\alpha_{22}^{(m)}\frac{e^{\delta_{2}(t-mh)}-1}{\delta_{2}}\] \[+2(\gamma_{12}^{(m)}\eta_{12}^{(m)}N_{1}M_{2}(t-mh)e^{(t-mh)(\mu_ {2}+\delta_{1})}+\gamma_{21}^{(m)}\eta_{21}^{(m)}N_{2}M_{1}(t-mh)e^{(t-mh)(\mu _{1}+\delta_{2})}).\]
Therefore, for \(\int_{mh}^{(m+1)h}\|\Psi_{m}(s)\|\,ds\) the following estimate holds
\[\int\limits_{mh}^{(m+1)h}\|\Psi_{m}(s)\|\,ds\leq\frac{\alpha_{11}^ {(m)}}{\mu_{2}}\Big{(}\frac{e^{(\mu_{2}+\delta_{1})h}-1}{\mu_{2}+\delta_{1}}- \frac{e^{\delta_{1}h}-1}{\delta_{1}}\Big{)}+\frac{\alpha_{12}^{(m)}}{\delta_{ 1}}\Big{(}\frac{e^{h\delta_{1}}-1}{\delta_{1}}-h\Big{)}\] \[+\frac{\alpha_{21}^{(m)}}{\mu_{1}}\Big{(}\frac{e^{(\mu_{1}+ \delta_{2})h}-1}{\mu_{1}+\delta_{2}}-\frac{e^{\delta_{2}h}-1}{\delta_{2}} \Big{)}+\frac{\alpha_{22}^{(m)}}{\delta_{2}}\Big{(}\frac{e^{h\delta_{2}}-1}{ \delta_{2}}-h\Big{)}+\] \[+2\Big{(}\gamma_{12}^{(m)}\eta_{12}^{(m)}N_{1}M_{2}\Big{(}\frac{ he^{(\mu_{2}+\delta_{1})h}}{\mu_{2}+\delta_{1}}-\frac{e^{(\mu_{2}+\delta_{1})h}-1}{( \mu_{2}+\delta_{1})^{2}}\Big{)}\] \[+\gamma_{21}^{(m)}\eta_{21}^{(m)}N_{2}M_{1}\Big{(}\frac{he^{(\mu_ {1}+\delta_{2})h}}{\mu_{1}+\delta_{2}}-\frac{e^{(\mu_{1}+\delta_{2})h}-1}{( \mu_{1}+\delta_{2})^{2}}\Big{)}\Big{)}=\Theta_{m}(h).\]
From (19), taking into account the Lemma 4.1, for \(t\in(mh,(m+1)h)\) we obtain
\[\dot{v}(t,x_{1}(t),x_{2}(t))\leq\zeta_{m}^{\rm T}\Psi_{m}(t)\zeta _{m}(t)\leq\|\Psi_{m}(t)\|(\|z_{1m}(t)\|^{2}+\|z_{2m}(t)\|^{2})\] \[\leq\frac{\|\Psi_{m}(t)\|}{\lambda_{\min}(\Pi_{m})}v(t,x_{1}(t),x _{2}(t)),\]
Integrating this differential inequality we get
\[v((m+1)h,x_{1}((m+1)h),x_{2}((m+1)h))\] \[\leq e^{\int\limits_{mh}^{(m+1)h}\frac{\|\Psi_{m}(s)\|}{\lambda_ {\min}(\Pi_{m})}\,ds}v(mh+0,x_{1}(mh+0),x_{2}(mh+0))\] \[\leq e^{\frac{\Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}}v(mh+0,x_{1 }(mh+0),x_{2}(mh+0)),\]
which completes the proof of the Lemma.
### Conditions for the asymptotic stability
We establish sufficient conditions for the asymptotic stability of (1) using \(\mathfrak{V}(t,x_{1},x_{2})\) constructed in the previous section and Lemma 4.2. Recall that \(P_{0}=(P_{ij}^{(0)})_{i,j=1,2}\) and \(P_{N}=(P_{ij}^{(N)})_{i,j=1,2}\) are the block matrices whose blocks \(P_{ij}^{(m)}\) are defined by (6)-(8).
**Theorem 4.1.** Let \(N\in\mathbb{N}\) and \(P_{0}\succ 0\) be such that for \(m=0,\ldots,N-1\) the matrices \(\Pi_{m}\) defined by (15) are positive definite. If
\[Q:=\sum_{m=0}^{N-1}\frac{\Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}+\ln\lambda_{ \max}(P_{N}^{-1}B^{\rm T}\,P_{0}B)<0, \tag{20}\]
where \(h=\frac{\theta}{N}\) and \(\Theta_{m}(h)\) are defined in (16), then (1) is asymptotically stable.
_Proof._ From (17) it follows that for all \(m=0,\ldots,N-1\)
\[v((m+1)h,x_{1}((m+1)h),x_{2}((m+1)h))\]
\[\leq\exp\Big{(}\frac{\Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}\Big{)}v(mh+0,x_{1 }(mh+0),x_{2}(mh+0)).\]
Therefore,
\[v(\theta,x_{1}(\theta),x_{2}(\theta))\leq\exp\Big{(}\sum_{m=0}^{N-1}\frac{ \Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}\Big{)}v(0+0,x_{1}(0+0),x_{2}(0+0)).\]
For \(t=\theta\),
\[v(\theta+0,x_{1}(\theta+0),x_{2}(\theta+0))=x^{\rm T}\,(\theta+0)P_{0}x( \theta+0)\]
\[=x^{\rm T}\,(\theta)B^{\rm T}\,P_{0}Bx(\theta)=(P_{N}^{1/2}x(\theta))^{\rm T} \,P_{N}^{-1/2}B^{\rm T}\,P_{0}BP_{N}^{-1/2}P_{N}^{1/2}x(\theta)\]
\[\leq\lambda_{\max}(P_{N}^{-1/2}B^{\rm T}\,P_{0}BP_{N}^{-1/2})\|P_{N}^{1/2}x( \theta)\|^{2}=\lambda_{\max}(P_{N}^{-1}B^{\rm T}\,P_{0}B)v(\theta,x_{1}(\theta ),x_{2}(\theta)).\]
We recall that here \(x(\theta)=(x_{1}^{\rm T}\,(\theta),x_{2}^{\rm T}\,(\theta))^{\rm T}\,\). Consequently,
\[v(\theta+0,x_{1}(\theta+0),x_{2}(\theta+0))\]
\[\leq\lambda_{\max}(P_{N}^{-1}B^{\rm T}\,P_{0}B)\exp\Big{(}\sum_{m=0}^{N-1} \frac{\Theta_{m}(h)}{\lambda_{\min}(\Pi_{m})}\Big{)}v(0+0,x_{1}(0+0),x_{2}(0+0))\]
\[=e^{Q}v(0+0,x_{1}(0+0),x_{2}(0+0)).\]
Due to the periodicity in \(t\) of (1) and \(\mathfrak{V}(t,.,.)\) for any \(k\in\mathbb{Z}_{+}\) the following inequality holds
\[v((k+1)\theta+0,x_{1}((k+1)\theta+0),x_{2}((k+1)\theta+0))\]
\[\leq e^{Q}v(k\theta+0,x_{1}(k\theta+0),x_{2}(k\theta+0)).\]
Therefore,
\[v(k\theta+0,x_{1}(k\theta+0),x_{2}(k\theta+0))\leq e^{Qk}v(0+0,x_{1}(0+0),x_{2 }(0+0)).\]
From Lemma 4.1 it follows that
\[\lambda_{\min}(\Pi_{0})\|x(k\theta+0)\|^{2}\leq v(k\theta+0,x_{1}(k \theta+0),x_{2}(k\theta+0))\] \[\leq e^{Qk}v(0+0,x_{1}(0+0),x_{2}(0+0))\leq e^{Qk}\lambda_{\max}( \Xi_{0})\|x(0+0)\|^{2},\]
which is equivalent to
\[\|x(k\theta+0)\|\leq\sqrt{\frac{\lambda_{\max}(\Xi_{0})}{\lambda_{\min}(\Pi_{0 })}}e^{Qk/2}\|x(0+0)\|.\]
By the conditions of Theorem 4.1, \(Q<0\); therefore, \(\|x(k\theta+0)\|\to 0\) as \(k\rightarrow\infty\), which proves the asymptotic stability of (1). The theorem is proved.
To check the condition (20), it is necessary to calculate \(\Theta_{m}(h)\). In addition to the constants \(\gamma_{ij}^{(m)}\), \(m=0,\ldots,N-1\), entering \(\Theta_{m}(h)\), it is necessary to calculate the matrices \(P_{ij}^{(m)}\), \(i,j=1,2\), \(m=0,\ldots,N\), using the recurrent formulas (6)-(8). These formulas contain \(e^{-A_{ii}h}\) and \(\int_{mh}^{(m+1)h}A_{ij}(s)\,ds\). Such calculations are accessible to modern computing tools. It is intuitively clear that \(h\) should be chosen such that \(h\sup_{t\in[0,\theta],i,j=1,2}\|P_{ij}(t)\|\ll 1\), i.e. so that the change of \(P_{ij}(t)\) is rather small over each discretization interval \((mh,(m+1)h]\).
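These computations are easy to chain together. Below is an illustrative driver for the test (20), assuming NumPy and the helpers `exp_bound_constants`, `step_P`, `pi_xi`, `theta_m` sketched earlier; estimating \(\gamma_{ij}^{(m)}\) by sampling is an approximation of ours, and exact suprema should be used whenever available:

```python
import numpy as np

def check_theorem_4_1(A11, A22, A12_t, A21_t, B, theta, N, P0_blocks, n_samp=200):
    """Evaluate Q of (20); Q < 0 certifies asymptotic stability of (1).
    P0_blocks = (P11^(0), P22^(0), P12^(0))."""
    h = theta / N
    M1, mu1 = exp_bound_constants(A11)
    N1, d1 = exp_bound_constants(-A11)
    M2, mu2 = exp_bound_constants(A22)
    N2, d2 = exp_bound_constants(-A22)
    P11, P22, P12 = P0_blocks
    P0 = np.block([[P11, P12], [P12.T, P22]])
    nrm = lambda M: np.linalg.norm(M, 2)
    Q = 0.0
    for m in range(N):
        s = np.linspace(m * h, (m + 1) * h, n_samp)[1:]       # sample (mh, (m+1)h]
        g12 = max(nrm(A12_t(si)) for si in s)                 # estimate of gamma_12^{(m)}
        g21 = max(nrm(A21_t(si)) for si in s)                 # estimate of gamma_21^{(m)}
        Pi, _ = pi_xi(P11, P22, P12, g12, g21, h)
        lam = float(np.min(np.linalg.eigvalsh(Pi)))
        assert lam > 0.0, "Pi_m must be positive definite (hypothesis of Theorem 4.1)"
        e11 = float(np.hypot(nrm(P11), nrm(P12)) + nrm(P12))  # eta_11^{(m)}
        e22 = float(np.hypot(nrm(P22), nrm(P12)) + nrm(P12))  # eta_22^{(m)}
        c = nrm(P11) * g12 + nrm(P22) * g21
        e12 = 0.5 * (c + np.sqrt(c**2 + 16.0 * g21**2 * nrm(P12)**2))
        e21 = 0.5 * (c + np.sqrt(c**2 + 16.0 * g12**2 * nrm(P12)**2))
        a11 = g12 * nrm(A22) * N1 * M2 * e11
        a12 = g12 * nrm(A11) * N1 * e11
        a21 = g21 * nrm(A11) * N2 * M1 * e22
        a22 = g21 * nrm(A22) * N2 * e22
        Q += theta_m(h, a11, a12, a21, a22, g12, g21, e12, e21,
                     M1, M2, N1, N2, mu1, mu2, d1, d2) / lam
        P11, P22, P12 = step_P(P11, P22, P12, A11, A22, A12_t, A21_t,
                               m * h, (m + 1) * h)
    PN = np.block([[P11, P12], [P12.T, P22]])
    Q += float(np.log(np.max(np.linalg.eigvals(np.linalg.solve(PN, B.T @ P0 @ B)).real)))
    return Q
```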
## 5 The case of a non-periodic system
Consider the case of a linear impulsive system (1) when the dwell-times are subject to the conditions \(\theta_{1}\leq T_{k}\leq\theta_{2}\), \(\theta_{1}\neq\theta_{2}\). In this case, the system (1) is not periodic, since its solutions do not have the property of invariance with respect to the semigroup \(\theta\mathbb{Z}_{+}\) and the Floquet theory is not applicable.
The Lyapunov function \(v(t,x_{1},x_{2})\) has the quadratic form (12); we construct \(P_{ij}(t)\), \(i,j=1,2\), step by step on every interval \([\tau_{k},\tau_{k+1}]\). If \(\tau_{k}\notin h\mathbb{Z}\), then on the interval \((\tau_{k},d_{k}h]\), where \(d_{k}h\) is the grid node nearest to \(\tau_{k}\) on the right, we choose \(P_{ij}(t)\) to be constant. Between the nodes we construct \(P_{ij}(t)\) similarly to the periodic case. Finally, if \(\tau_{k+1}\notin h\mathbb{Z}\), then on the interval \((\varkappa_{k}h,\tau_{k+1}]\), where \(\varkappa_{k}h\) is the grid node nearest to \(\tau_{k+1}\) on the left, we again choose \(P_{ij}(t)\) to be constant, i.e. \(P_{ij}(t)=P_{ij}(\varkappa_{k}h-0)\) for \(t\in(\varkappa_{k}h,\tau_{k+1}]\). Thus, it becomes necessary to estimate the derivative of the Lyapunov function
\(v(t,x_{1},x_{2})\) on the time intervals \((\tau_{k},d_{k}h]\), \((\varkappa_{k}h,\tau_{k+1}]\). For this we will introduce Assumption 5.1 and Lemma 5.1.
### Construction of the Lyapunov function
To construct a Lyapunov function, the discretization method is also used here. The discretization parameters are: the number of nodes \(N\in\mathbb{N}\), \(N\geq 2\) and \(h=\frac{\theta}{N}\). We denote \(N_{3}=-\left[\frac{2h-\theta_{1}}{h}\right]\), \(N_{4}=\left[\frac{\theta_{2}}{h}\right]\). In this case, additional assumptions regarding the connections between the subsystems are required.
**Assumption 5.1.** There are positive constants \(l_{ij}^{(m)}\), \(i,j=1,2\), \(i\neq j\), \(m=0,\dots,N-1\) such that
\[\sup_{t\in(mh,(m+1)h]}\|A_{ij}(t)-A_{ij}(mh)\|\leq l_{ij}^{(m)}h.\]
For any \(m\in\mathbb{Z}\), let \(l_{ij}^{(m)}:=l_{ij}^{(\varrho)}\), where \(\varrho\) is the remainder of dividing \(m\) by \(N\). Let \(\mathfrak{l}_{m}:=\sqrt{(l_{12}^{(m)})^{2}+(l_{21}^{(m)})^{2}}\).
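Numerically, the constants of Assumption 5.1 can be estimated by sampling; a small sketch assuming NumPy (sampling can only underestimate the supremum, so a safety margin, or an analytic bound on \(\dot{A}_{ij}\) when it is known, should be used in practice):

```python
import numpy as np

def lipschitz_const(A_ij, m, h, n_samp=500):
    """Estimate of l_ij^{(m)} in Assumption 5.1:
    sup_{t in (mh, (m+1)h]} ||A_ij(t) - A_ij(mh)|| <= l_ij^{(m)} h."""
    t = np.linspace(m * h, (m + 1) * h, n_samp)[1:]
    sup = max(np.linalg.norm(A_ij(ti) - A_ij(m * h), 2) for ti in t)
    return sup / h
```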
We denote the constant matrices
\[A_{m}=\begin{pmatrix}A_{11}&A_{12}(mh)\\ A_{21}(mh)&A_{22}\end{pmatrix},\quad m\in\mathbb{Z}.\]
Let \(P_{0}=(P_{ij}^{(0)})_{i,j=1,2}\), \(P_{ij}^{(0)}=(P_{ji}^{(0)})^{\mathrm{T}}\), be a positive definite symmetric block matrix. We define sequences of block matrices \(P_{m}^{(l)}=(P_{ij}^{(m,l)})_{i,j=1,2}\), \(l=0,\ldots,N-1\), \(m=0,\ldots,N_{4}-1\), as follows: \(P_{ij}^{(0,l)}\equiv P_{ij}^{(0)}\) and
\[\begin{split} P_{11}^{(m+1,l)}=e^{-A_{11}^{\mathrm{T}}h}(P_{11}^ {(m,l)}\\ -\int\limits_{(m+l)h}^{(m+l+1)h}(P_{12}^{(m,l)}A_{21}(s)+A_{21}^{ \mathrm{T}}(s)P_{21}^{(m,l)})\,ds)e^{-A_{11}h},\\ \end{split} \tag{21}\]
\[\begin{split} P_{22}^{(m+1,l)}=e^{-A_{22}^{\mathrm{T}}h}(P_{22}^ {(m,l)}\\ -\int\limits_{(m+l)h}^{(m+l+1)h}(A_{12}^{\mathrm{T}}(s)P_{12}^{(m,l)}+P_{21}^{(m,l)}A_{12}(s))\,ds)e^{-A_{22}h},\\ \end{split} \tag{22}\]
\[\begin{array}{c}P_{12}^{(m+1,l)}=e^{-A_{11}^{\rm T}h}(P_{12}^{(m,l)}\\ -\int\limits_{(m+l)h}^{(m+l+1)h}(P_{11}^{(m,l)}A_{12}(s)+A_{21}^{\rm T}(s)P_{22} ^{(m,l)})\,ds)e^{-A_{22}h}.\end{array} \tag{23}\]
Next, we define the matrices \(P_{ij}(t)\), \(i,j=1,2\) sequentially on the intervals \((\tau_{k},\tau_{k+1}]\). Let \(\widetilde{\tau}_{k}=\tau_{k}-\left[\frac{\tau_{k}}{\theta}\right]\theta\in[0,\theta)\), \(l_{k}:=\left[\frac{\widetilde{\tau}_{k}}{h}\right]+1\), \(d_{k}:=\left[\frac{\tau_{k}}{h}\right]+1\), \(\varkappa_{k}=\left[\frac{\tau_{k+1}}{h}\right]\). It is easy to show that
\[(\tau_{k},\tau_{k+1}]=(\tau_{k},d_{k}h]\cup\bigcup_{l=d_{k}}^{\varkappa_{k}-1 }(lh,(l+1)h]\cup(\varkappa_{k}h,\tau_{k+1}].\]
Let \(P_{ij}(t)=P_{ij}^{(0)}\) for \(t\in(\tau_{k},d_{k}h]\). On each interval \(((m+d_{k})h,(m+1+d_{k})h]\), \(m=0,\ldots,\varkappa_{k}-d_{k}-1\), we put
\[\begin{array}{c}P_{11}(t)=e^{-A_{11}^{\rm T}(t-(m+d_{k})h)}(P_{11}^{(m,l_{k} )}\\ -\int\limits_{(m+d_{k})h}^{t}(P_{12}^{(m,l_{k})}A_{21}(s)+A_{21}^{\rm T}(s)P_{2 1}^{(m,l_{k})})\,ds)e^{-A_{11}(t-(m+d_{k})h)},\end{array} \tag{24}\]
\[\begin{array}{c}P_{22}(t)=e^{-A_{22}^{\rm T}(t-(m+d_{k})h)}(P_{22}^{(m,l_{k} )}\\ -\int\limits_{(m+d_{k})h}^{t}(A_{12}^{\rm T}(s)P_{12}^{(m,l_{k})}+P_{21}^{(m,l _{k})}A_{12}(s))\,ds)e^{-A_{22}(t-(m+d_{k})h)},\end{array} \tag{25}\]
\[\begin{array}{c}P_{12}(t)=e^{-A_{11}^{\rm T}(t-(m+d_{k})h)}(P_{12}^{(m,l_{k} )}\\ -\int\limits_{(m+d_{k})h}^{t}(P_{11}^{(m,l_{k})}A_{12}(s)+A_{21}^{\rm T}(s)P_{ 22}^{(m,l_{k})})\,ds)e^{-A_{22}(t-(m+d_{k})h)}.\end{array} \tag{26}\]
For \(t\in(\varkappa_{k}h,\tau_{k+1}]\), let \(P_{ij}(t)=P_{ij}^{(\varkappa_{k}-d_{k},l_{k})}\).
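The index bookkeeping in this construction is elementary but easy to get wrong; the following plain-Python sketch (the function name is ours) computes \(\widetilde{\tau}_{k}\), \(l_{k}\), \(d_{k}\), \(\varkappa_{k}\) and thereby the decomposition of \((\tau_{k},\tau_{k+1}]\) displayed above:

```python
import math

def grid_indices(tau_k, tau_k1, theta, N):
    """Indices of Section 5 for the interval (tau_k, tau_{k+1}]."""
    h = theta / N
    tilde_tau = tau_k - math.floor(tau_k / theta) * theta   # tilde{tau}_k in [0, theta)
    l_k = math.floor(tilde_tau / h) + 1
    d_k = math.floor(tau_k / h) + 1                         # d_k h: first node right of tau_k
    kappa_k = math.floor(tau_k1 / h)                        # kappa_k h: last node left of tau_{k+1}
    # (tau_k, tau_{k+1}] = (tau_k, d_k h]  u  U_{l=d_k}^{kappa_k - 1} (l h, (l+1) h]
    #                      u  (kappa_k h, tau_{k+1}]
    return h, tilde_tau, l_k, d_k, kappa_k
```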
_Remark._ Note that \(P(t)\) is continuous on \((\tau_{k},\tau_{k+1})\) and left-continuous at \(t=\tau_{k+1}\). Indeed, due to the \(\theta\)-periodicity of \(A_{ij}(t)\), the matrices \(P_{ij}(t)\) satisfy
\[P_{ij}((m+d_{k})h+0)=P_{ij}^{(m,l_{k})},\quad m=0,\ldots,\varkappa_{k}-d_{k}-1, \tag{27}\]
\[P_{ij}((m+d_{k})h-0)=P_{ij}^{(m,l_{k})},\quad m=0,\ldots,\varkappa_{k}-d_{k}-1. \tag{28}\]
Let us prove (27) and (28). We restrict ourselves to the case \((i,j)=(1,1)\), since the other cases can be treated in a similar way. For \(m=0\) the formulas
(27) and (28) are obvious. From the equality (24) it is easy to show that (27) is true for all \(m=1,\ldots,\varkappa_{k}-d_{k}-1\).
Next we prove (28). From (24) we have
\[\begin{array}{c}P_{11}((m+d_{k})h-0)=e^{-A_{11}^{\rm T}h}(P_{11}^{(m-1,l_{k}) }\\ -\int\limits_{(m+d_{k}-1)h}^{(m+d_{k})h}(P_{12}^{(m-1,l_{k})}A_{21}(s)+A_{21}^{ \rm T}(s)P_{21}^{(m-1,l_{k})})\,ds)e^{-A_{11}h},\end{array} \tag{29}\]
In the integral in (29) we make the change of variables \(\widetilde{s}:=s+(l_{k}-d_{k})h\); then
\[\begin{array}{c}P_{11}((m+d_{k})h-0)=e^{-A_{11}^{\rm T}h}(P_{11}^{(m-1,l_{k} )}\\ -\int\limits_{(m+l_{k}-1)h}^{(m+l_{k})h}(P_{12}^{(m-1,l_{k})}A_{21}( \widetilde{s}+(d_{k}-l_{k})h)\\ +A_{21}^{\rm T}(\widetilde{s}+(d_{k}-l_{k})h)P_{21}^{(m-1,l_{k})})\,d \widetilde{s})e^{-A_{11}h}.\end{array} \tag{30}\]
Note that \((d_{k}-l_{k})h\in\theta\mathbb{Z}\). Indeed, the definition of the function \(x\mapsto[x]\) implies the inequalities
\[\Big{[}\frac{\tau_{k}}{h}\Big{]}\leq\frac{\tau_{k}}{h}<\Big{[}\frac{\tau_{k}} {h}\Big{]}+1,\quad\Big{[}\frac{\widetilde{\tau}_{k}}{h}\Big{]}\leq\frac{ \widetilde{\tau}_{k}}{h}<\Big{[}\frac{\widetilde{\tau}_{k}}{h}\Big{]}+1. \tag{31}\]
Therefore, from (31) it follows that
\[\frac{\tau_{k}-\widetilde{\tau}_{k}}{h}-1<\Big{[}\frac{\tau_{k}}{h}\Big{]}- \Big{[}\frac{\widetilde{\tau}_{k}}{h}\Big{]}<\frac{\tau_{k}-\widetilde{\tau} _{k}}{h}+1.\]
From the definition of \(\widetilde{\tau}_{k}\) follows \(\frac{\tau_{k}-\widetilde{\tau}_{k}}{h}=[\frac{\tau_{k}}{\theta}]\frac{\theta }{h}=[\frac{\tau_{k}}{\theta}]N\in\mathbb{Z}\), hence
\[\Big{[}\frac{\tau_{k}}{\theta}\Big{]}N-1<\Big{[}\frac{\tau_{k}}{h}\Big{]}- \Big{[}\frac{\widetilde{\tau}_{k}}{h}\Big{]}<\Big{[}\frac{\tau_{k}}{\theta} \Big{]}N+1. \tag{32}\]
Since \([\frac{\tau_{k}}{h}]-[\frac{\widetilde{\tau}_{k}}{h}]\in\mathbb{Z}\), it follows that \([\frac{\tau_{k}}{h}]-[\frac{\widetilde{\tau}_{k}}{h}]=[\frac{\tau_{k}}{\theta}]N\) and \(h([\frac{\tau_{k}}{h}]-[\frac{\widetilde{\tau}_{k}}{h}])=[\frac{\tau_{k}}{ \theta}]Nh=[\frac{\tau_{k}}{\theta}]\theta\in\theta\mathbb{Z}\), which means \((d_{k}-l_{k})h\in\theta\mathbb{Z}\). In turn, \(A_{12}(s)\) and \(A_{21}(s)\) are periodic functions, therefore \(A_{12}(s+(d_{k}-l_{k})h)=A_{12}(s)\) and \(A_{21}(s+(d_{k}-l_{k})h)=A_{21}(s)\); hence, from (30) and (21), we obtain \(P_{11}((m+d_{k})h-0)=P_{11}^{(m,l_{k})}\).
We choose the elements \(\mathfrak{V}(t,x)\) in the form \(v_{ij}(t,x_{i},x_{j})=x_{i}^{\rm T}\,P_{ij}(t)x_{j}\), \(i,j=1,2\).
The following lemma uses Assumption 5.1 and is needed to estimate the derivative of the Lyapunov function \(v(t,x_{1},x_{2})\) on \((\tau_{k},d_{k}h]\) and \((\varkappa_{k}h,\tau_{k+1}]\), where \(P_{ij}(t)\) is constant.
**Lemma 5.1.** Let \(\mathfrak{l}_{m}:=\sqrt{(l_{12}^{(m)})^{2}+(l_{21}^{(m)})^{2}}\), then
\[\dot{v}(t,x(t))\leq\lambda_{\max}(P_{0}^{-1}(A_{l_{k}-1}^{\mathrm{T}}P_{0}+P_{ 0}A_{l_{k}-1}+2h\mathfrak{l}_{l_{k}-1}\|P_{0}\|I_{n}))v(t,x(t)),\quad t\in( \tau_{k},d_{k}h],\]
\[\dot{v}(t,x(t))\leq\lambda_{\max}((P_{\varkappa_{k}-d_{k}}^{(l_{k})})^{-1}(A_{ l_{k}+\varkappa_{k}-d_{k}}^{\mathrm{T}}P_{\varkappa_{k}-d_{k}}^{(l_{k})}+P_{ \varkappa_{k}-d_{k}}^{(l_{k})}A_{l_{k}+\varkappa_{k}-d_{k}}\]
\[+2h\mathfrak{l}_{l_{k}+\varkappa_{k}-d_{k}}\|P_{\varkappa_{k}-d_{k}}^{(l_{k})} \|I_{n}))v(t,x(t)),\quad t\in(\varkappa_{k}h,\tau_{k+1}].\]
_Proof._ Let \(t\in(\tau_{k},d_{k}h]\), then, taking into account that \((l_{k}-d_{k})h\in\theta\mathbb{Z}\)
\[\dot{v}(t,x(t))=x^{\mathrm{T}}\left(t\right)(A^{\mathrm{T}}\left(t\right)P(t )+P(t)A(t))x(t)=x^{\mathrm{T}}\left(t\right)(A^{\mathrm{T}}\left(t\right)P_{ 0}+P_{0}A(t))x(t)\]
\[=x^{\mathrm{T}}\left(t\right)(A^{\mathrm{T}}\left((d_{k}-1)h\right)P_{0}+P_{ 0}A((d_{k}-1)h))x(t)\]
\[+x^{\mathrm{T}}\left(t\right)((A(t)-A((d_{k}-1)h))^{\mathrm{T}}\,P_{0}+P_{0}( A(t)-A((d_{k}-1)h)))x(t)\]
\[\leq x^{\mathrm{T}}\left(t\right)(A^{\mathrm{T}}\left((l_{k}-1)h\right)P_{0}+P_{ 0}A((l_{k}-1)h))x(t)+2h\mathfrak{l}_{l_{k}-1}\|P_{0}\|x^{\mathrm{T}}\left(t \right)x(t)\]
\[\leq(P_{0}^{1/2}x(t))^{\mathrm{T}}\,P_{0}^{-1/2}(A^{\mathrm{T}}\left((l_{k}-1 )h\right)P_{0}+P_{0}A((l_{k}-1)h)+2h\mathfrak{l}_{l_{k}-1}\|P_{0}\|I_{n})P_{0} ^{-1/2}(P_{0}^{1/2}x(t))\]
\[\leq\lambda_{\max}(P_{0}^{-1}(A_{l_{k}-1}^{\mathrm{T}}P_{0}+P_{0}A_{l_{k}-1}+ 2h\mathfrak{l}_{l_{k}-1}\|P_{0}\|I_{n}))v(t,x(t)).\]
The second part of the lemma can be proved similarly. The lemma is proved.
Lemma 5.2 is similar to Lemma 4.1 and reduces the verification of the positive definiteness of the Lyapunov function to a finite number of inequalities.
**Lemma 5.2.** If \(z_{im}=e^{-A_{ii}(t-mh)}x_{i}\) for \(t\in(mh,(m+1)h]\), \(i=1,2\), then
\[\lambda_{\min}(\Pi_{m}^{(l)})\|z_{m}\|^{2}\leq v(t,x_{1},x_{2}) \leq\lambda_{\max}(\Xi_{m}^{(l)})\|z_{m}\|^{2}, \tag{34}\] \[\mbox{for all}\quad t\in(mh,(m+1)h],\quad m=0,\ldots,N-1,\]
where \(z_{m}=(z_{1m}^{\mathrm{T}},z_{2m}^{\mathrm{T}})^{\mathrm{T}}\), \(\|z_{m}\|^{2}=\|z_{1m}\|^{2}+\|z_{2m}\|^{2}\), \(\Pi_{m}^{(l)}=(\pi_{ij}^{(m,l)})_{i,j=1,2}\), \(\Xi_{m}^{(l)}=(\xi_{ij}^{(m,l)})_{i,j=1,2}\) are block matrices with the elements
\[\pi_{11}^{(m,l)}=P_{11}^{(m,l)}-h(2\gamma_{21}^{(m+l)}\|P_{12}^{(m,l)}\|+( \gamma_{12}^{(m+l)}\|P_{11}^{(m,l)}\|+\gamma_{21}^{(m+l)}\|P_{22}^{(m,l)}\|)) I_{n_{1}},\]
\[\pi_{22}^{(m,l)}=P_{22}^{(m,l)}-h(2\gamma_{12}^{(m+l)}\|P_{12}^{(m,l)}\|+( \gamma_{12}^{(m+l)}\|P_{11}^{(m,l)}\|+\gamma_{21}^{(m+l)}\|P_{22}^{(m,l)}\|)) I_{n_{2}},\]

\[\pi_{12}^{(m,l)}=P_{12}^{(m,l)},\quad\pi_{21}^{(m,l)}=P_{21}^{(m,l)} \tag{35}\]
\[\xi_{11}^{(m,l)}=P_{11}^{(m,l)}+h(2\gamma_{21}^{(m+l)}\|P_{12}^{(m,l)}\|+(\gamma_{1 2}^{(m+l)}\|P_{11}^{(m,l)}\|+\gamma_{21}^{(m+l)}\|P_{22}^{(m,l)}\|))I_{n_{1}},\]
\[\xi_{22}^{(m,l)}=P_{22}^{(m,l)}+h(2\gamma_{12}^{(m+l)}\|P_{12}^{(m,l)}\|+(\gamma_ {12}^{(m+l)}\|P_{11}^{(m,l)}\|+\gamma_{21}^{(m+l)}\|P_{22}^{(m,l)}\|))I_{n_{2}},\]
\[\xi_{12}^{(m,l)}=P_{12}^{(m,l)},\quad\xi_{21}^{(m,l)}=P_{21}^{(m,l)}.\]
The proof is similar to the proof of Lemma 4.1.
We denote
\[\eta_{11}^{(m,l)}:=\sqrt{\|P_{11}^{(m,l)}\|^{2}+\|P_{12}^{(m,l)}\| ^{2}}+\|P_{12}^{(m,l)}\|,\] \[\eta_{22}^{(m,l)}:=\sqrt{\|P_{22}^{(m,l)}\|^{2}+\|P_{12}^{(m,l)}\| ^{2}}+\|P_{12}^{(m,l)}\|\]
\[\eta_{12}^{(m,l)}:=\frac{1}{2}(\|P_{11}^{(m,l)}\|\gamma_{12}^{(m+l)}+\|P_{22}^ {(m,l)}\|\gamma_{21}^{(m+l)}\]
\[+\sqrt{(\|P_{11}^{(m,l)}\|\gamma_{12}^{(m+l)}+\|P_{22}^{(m,l)}\|\gamma_{21}^{( m+l)})^{2}+16(\gamma_{21}^{(m+l)})^{2}\|P_{12}^{(m,l)}\|^{2}}).\]
\[\eta_{21}^{(m,l)}:=\frac{1}{2}(\|P_{11}^{(m,l)}\|\gamma_{12}^{(m+l)}+\|P_{22}^ {(m,l)}\|\gamma_{21}^{(m+l)}\]
\[+\sqrt{(\|P_{11}^{(m,l)}\|\gamma_{12}^{(m+l)}+\|P_{22}^{(m,l)}\|\gamma_{21}^{ (m+l)})^{2}+16(\gamma_{12}^{(m+l)})^{2}\|P_{12}^{(m,l)}\|^{2}}).\]
\[\alpha_{11}^{(m,l)}:=\gamma_{12}^{(m+l)}\|A_{22}\|N_{1}M_{2}\eta_{11}^{(m,l)}, \quad\alpha_{12}^{(m,l)}:=\gamma_{12}^{(m+l)}\|A_{11}\|N_{1}\eta_{11}^{(m,l)},\]
\[\alpha_{21}^{(m,l)}:=\gamma_{21}^{(m+l)}\|A_{11}\|N_{2}M_{1}\eta_{22}^{(m,l)}, \quad\alpha_{22}^{(m,l)}:=\gamma_{21}^{(m+l)}\|A_{22}\|N_{2}\eta_{22}^{(m,l)},\]
Let
\[\Theta_{m}^{(l)}(h):=\frac{\alpha_{11}^{(m,l)}}{\mu_{2}}\Big{(} \frac{e^{(\mu_{2}+\delta_{1})h}-1}{\mu_{2}+\delta_{1}}-\frac{e^{\delta_{1}h}- 1}{\delta_{1}}\Big{)}+\frac{\alpha_{12}^{(m,l)}}{\delta_{1}}\Big{(}\frac{e^{h \delta_{1}}-1}{\delta_{1}}-h\Big{)} \tag{36}\] \[+\frac{\alpha_{21}^{(m,l)}}{\mu_{1}}\Big{(}\frac{e^{(\mu_{1}+ \delta_{2})h}-1}{\mu_{1}+\delta_{2}}-\frac{e^{\delta_{2}h}-1}{\delta_{2}}\Big{)} +\frac{\alpha_{22}^{(m,l)}}{\delta_{2}}\Big{(}\frac{e^{h\delta_{2}}-1}{\delta_ {2}}-h\Big{)}+\] \[+2\Big{(}\gamma_{12}^{(m+l)}\eta_{12}^{(m,l)}N_{1}M_{2}\Big{(} \frac{he^{(\mu_{2}+\delta_{1})h}}{\mu_{2}+\delta_{1}}-\frac{e^{(\mu_{2}+\delta _{1})h}-1}{(\mu_{2}+\delta_{1})^{2}}\Big{)}\] \[+\gamma_{21}^{(m+l)}\eta_{21}^{(m,l)}N_{2}M_{1}\Big{(}\frac{he^{( \mu_{1}+\delta_{2})h}}{\mu_{1}+\delta_{2}}-\frac{e^{(\mu_{1}+\delta_{2})h}-1}{ (\mu_{1}+\delta_{2})^{2}}\Big{)}\Big{)}.\]
Lemma 5.3 is similar to Lemma 4.2 and allows us to estimate the variation of the Lyapunov function \(v(t,x_{1},x_{2})\) on each discretization interval \(((m+d_{k})h,(m+d_{k}+1)h)\).
**Lemma 5.3.** Let \(\Pi_{m}^{(l)}\) be positive definite for \(m=0,\ldots,\varkappa_{k}-d_{k}-1\), \(l=0,\ldots,N-1\). Then for \(t\in((m+d_{k})h,(m+d_{k}+1)h)\), we have
\[v((m+d_{k}+1)h,x_{1}((m+d_{k}+1)h),x_{2}((m+d_{k}+1)h)) \tag{37}\] \[\leq e^{\frac{\Theta_{m}^{(l)}(h)}{\lambda_{\min}(\Pi_{m}^{(l)})}}v ((m+d_{k})h+0,x_{1}((m+d_{k})h+0),x_{2}((m+d_{k})h+0))\]
The proof of Lemma 5.3 is similar to the proof of Lemma 4.2.
### Conditions for the asymptotic stability
The Lyapunov function \(v(t,x_{1},x_{2})\) constructed in the previous subsection and Lemmas 5.1-5.3 allow us to formulate sufficient conditions for the asymptotic stability of (1) when \(\theta_{1}\neq\theta_{2}\).
**Theorem 5.1.** Let \(N\in\mathbb{N}\) and \(P_{0}\succ 0\) be such that for \(m=0,\ldots,N_{4}\), \(l=0,\ldots,N-1\), the matrices \(\Pi_{m}^{(l)}\) defined by (35) are positive definite, and for all \(M\in\mathbb{N}\) such that \(N_{3}\leq M\leq N_{4}-1\) the following inequality holds
\[h\lambda_{\max}^{+}(P_{0}^{-1}(A_{l-1}^{\rm T}P_{0}+P_{0}A_{l-1}+2hl_{l-1}\|P_{0}\|I_{n})) \tag{38}\] \[+h\lambda_{\max}^{+}((P_{M}^{(l)})^{-1}(A_{l+M}^{\rm T}P_{M}^{(l )}+P_{M}^{(l)}A_{l+M}+2hl_{l+M}\|P_{M}^{(l)}\|I_{n}))\] \[\qquad+\sum_{m=0}^{M-1}\frac{\Theta_{m}^{(l)}(h)}{\lambda_{\min}( \Pi_{m}^{(l)})}+\ln\lambda_{\max}((P_{M}^{(l)})^{-1}B^{\rm T}\,P_{0}B)<0,\]
where \(h=\frac{\theta}{N}\), \(P_{ij}^{(m,l)}\) are defined in (21)-(23) and \(\Theta_{m}^{(l)}(h)\) in (36). Then (1) is asymptotically stable if \(T_{k}\in[\theta_{1},\theta_{2}]\).
_Proof._ Consider the variation of the Lyapunov function on the interval \((\tau_{k},\tau_{k+1}]\). As a consequence of Lemma 5.1, we have
\[v(hd_{k},x(hd_{k})) \tag{39}\] \[\leq\exp\Big{(}\lambda_{\max}(P_{0}^{-1}(A_{l_{k}-1}^{\rm T}P_{0} +P_{0}A_{l_{k}-1}+2hl_{l_{k}-1}\|P_{0}\|I_{n}))(hd_{k}-\tau_{k})\Big{)}v(\tau_{ k}+0,x(\tau_{k}+0))\] \[\qquad\leq\exp\Big{(}\lambda_{\max}^{+}(P_{0}^{-1}(A_{l_{k}-1}^{ \rm T}P_{0}+P_{0}A_{l_{k}-1}+2hl_{l_{k}-1}\|P_{0}\|I_{n}))h\Big{)}v(\tau_{k}+0,x (\tau_{k}+0)),\]
\[v(\tau_{k+1},x(\tau_{k+1}))\] \[\leq\exp\Big{(}\lambda_{\max}((P^{(l_{k})}_{\varkappa_{k}-d_{k}})^{-1}(A^{\rm T}_{l_{k}+\varkappa_{k}-d_{k}}P^{(l_{k})}_{\varkappa_{k}-d_{k}}+P^{(l_{k})}_{\varkappa_{k}-d_{k}}A_{l_{k}+\varkappa_{k}-d_{k}}\] \[\quad+2hl_{l_{k}+\varkappa_{k}-d_{k}}\|P^{(l_{k})}_{\varkappa_{k}-d_{k}}\|I_{n}))(\tau_{k+1}-h\varkappa_{k})\Big{)}v(\varkappa_{k}h+0,x(\varkappa_{k}h+0)) \tag{40}\] \[\leq\exp\Big{(}\lambda^{+}_{\max}((P^{(l_{k})}_{\varkappa_{k}-d_{k}})^{-1}(A^{\rm T}_{l_{k}+\varkappa_{k}-d_{k}}P^{(l_{k})}_{\varkappa_{k}-d_{k}}+P^{(l_{k})}_{\varkappa_{k}-d_{k}}A_{l_{k}+\varkappa_{k}-d_{k}}\] \[\qquad+2hl_{l_{k}+\varkappa_{k}-d_{k}}\|P^{(l_{k})}_{\varkappa_{k}-d_{k}}\|I_{n}))h\Big{)}v(\varkappa_{k}h+0,x(\varkappa_{k}h+0)).\]
Lemma 5.3 implies the inequalities
\[\begin{array}{c}v((m+1+d_{k})h,x((m+1+d_{k})h))\\ \\ \leq\exp\Big{(}\frac{\Theta^{(l_{k})}_{m}(h)}{\lambda_{\min}(\Pi^{(l_{k})}_{m})} \Big{)}v((m+d_{k})h+0,x((m+d_{k})h+0))\end{array} \tag{41}\]
for all \(m=0,\ldots,\varkappa_{k}-d_{k}-1\).
At the moment of impulse action \(t=\tau_{k+1}\), we get
\[\begin{array}{c}v(\tau_{k+1}+0,x(\tau_{k+1}+0))\\ \\ =x^{\rm T}\,(\tau_{k+1}+0)P_{0}x(\tau_{k+1}+0)=x^{\rm T}\,(\tau_{k+1})B^{\rm T }\,P_{0}Bx(\tau_{k+1})\\ \\ =((P^{(l_{k})}_{\varkappa_{k}-d_{k}})^{1/2}x(\tau_{k+1}))^{\rm T}\,(P^{(l_{k} )}_{\varkappa_{k}-d_{k}})^{-1/2}B^{\rm T}\,P_{0}B(P^{(l_{k})}_{\varkappa_{k}- d_{k}})^{-1/2}(P^{(l_{k})}_{\varkappa_{k}-d_{k}})^{1/2}x(\tau_{k+1})\\ \\ \leq\lambda_{\max}((P^{(l_{k})}_{\varkappa_{k}-d_{k}})^{-1}B^{\rm T}\,P_{0}B)v( \tau_{k+1},x(\tau_{k+1}))\end{array} \tag{42}\]
Comparing the inequalities (39)--(42), we obtain
\[v(\tau_{k+1}+0,x(\tau_{k+1}+0))\leq e^{Q^{(l_{k})}_{\varkappa_{k}-d_{k}}}v( \tau_{k}+0,x(\tau_{k}+0)), \tag{43}\]
where
\[Q^{(l)}_{s}=h\lambda^{+}_{\max}(P^{-1}_{0}(A^{\rm T}_{l-1}P_{0}+P_{0}A_{l-1}+2hl_{l-1}\|P_{0}\|I_{n}))\]
\[+h\lambda^{+}_{\max}((P^{(l)}_{s})^{-1}(A^{\rm T}_{l+s}P^{(l)}_{s}+P^{(l)}_{s}A_{l+s}+2hl_{l+s}\|P^{(l)}_{s}\|I_{n}))\]
\[+\sum_{m=0}^{s-1}\frac{\Theta^{(l)}_{m}(h)}{\lambda_{\min}(\Pi^{(l)}_{m})}+\ln\lambda_{\max}((P^{(l)}_{s})^{-1}B^{\rm T}\,P_{0}B).\]
Note that from the condition \(\theta_{1}\leq\theta_{2}\) it follows that \(N_{3}\leq N_{4}-1\). By assumption about dwell-times, \(\theta_{1}\leq T_{k}\leq\theta_{2}\). Therefore,
\[h(\varkappa_{k}-d_{k})<T_{k}\leq\theta_{2},\]
and since \(\varkappa_{k}-d_{k}\in\mathbb{Z}_{+}\), we get \(\varkappa_{k}-d_{k}\leq N_{4}-1\). On the other hand,
\[\theta_{1}\leq T_{k}=\tau_{k+1}-\varkappa_{k}h+h(\varkappa_{k}-d_{k})+d_{k}h- \tau_{k}\leq h(\varkappa_{k}-d_{k})+2h\]
Consequently, \(\varkappa_{k}-d_{k}\geq\frac{\theta_{1}}{h}-2\), and since \(\varkappa_{k}-d_{k}\in\mathbb{Z}_{+}\), we have \(\varkappa_{k}-d_{k}\geq N_{3}\). From the conditions of the theorem, it follows that
\[Q_{\max}=\max_{l=0,\ldots,N-1,N_{3}\leq s\leq N_{4}-1}Q_{s}^{(l)}<0.\]
It follows from the inequality (43) that for all \(k\in\mathbb{Z}_{+}\),
\[v(\tau_{k+1}+0,x_{1}(\tau_{k+1}+0),x_{2}(\tau_{k+1}+0))\leq e^{Q_{\max}}v(\tau _{k}+0,x_{1}(\tau_{k}+0),x_{2}(\tau_{k}+0)). \tag{44}\]
From the inequalities (44), we find
\[v(\tau_{k}+0,x(\tau_{k}+0))\leq e^{kQ_{\max}}v(\tau_{0}+0,x(\tau_{0}+0)). \tag{45}\]
Hence,
\[\lambda_{\min}(P_{0})\|x(\tau_{k}+0)\|^{2}\leq v(\tau_{k}+0,x(\tau_{k}+0))\]
\[\leq e^{kQ_{\max}}v(\tau_{0}+0,x(\tau_{0}+0))\leq e^{kQ_{\max}}\lambda_{\max}( P_{0})\|x(\tau_{0}+0)\|^{2},\]
and
\[\|x(\tau_{k}+0)\|\leq e^{kQ_{\max}/2}\sqrt{\frac{\lambda_{\max}(P_{0})}{ \lambda_{\min}(P_{0})}}\|x(\tau_{0}+0)\|.\]
Without loss of generality, we assume that \(t_{0}<\tau_{0}\). Using the Gronwall-Bellman inequality, one can show that there exists a positive constant \(C_{t_{0}}\) such that \(\|x(\tau_{0}+0)\|\leq C_{t_{0}}\|x_{0}\|\). It follows from the inequality \(T_{k}<\theta_{2}\) that there exists a positive constant \(C_{2}\) such that for all \(t\in(\tau_{k},\tau_{k+1}]\), the inequality \(\|x(t)\|\leq C_{2}\|x(\tau_{k}+0)\|\) holds. Let \(t\in(\tau_{k},\tau_{k+1}]\). Then, the inequality
\[t-t_{0}=t-\tau_{k}+\sum_{s=1}^{k}T_{s}+\tau_{0}-t_{0}\leq k\theta_{2}+\theta_ {2}+\tau_{0}-t_{0}\]
implies that
\[kQ_{\max}\leq\frac{Q_{\max}}{\theta_{2}}(t-t_{0})-\frac{(\theta_{2}+\tau_{0}- t_{0})Q_{\max}}{\theta_{2}}.\]
Thus,
\[\|x(t)\|\leq Ce^{-\beta(t-t_{0})}\|x_{0}\|,\quad t\geq t_{0}, \tag{46}\]
where
\[C=C_{t_{0}}C_{2}\exp\Big{(}-\frac{(\theta_{2}+\tau_{0}-t_{0})Q_{\max}}{2\theta_{2} }\Big{)}\sqrt{\frac{\lambda_{\max}(P_{0})}{\lambda_{\min}(P_{0})}},\quad\beta=- \frac{Q_{\max}}{2\theta_{2}}>0.\]
The estimate (46) implies the exponential stability of (1). The theorem is proved.
To verify the conditions of Theorem 5.1, it is necessary to calculate \(\Theta_{m}^{(l)}(h)\). In addition to the constants \(\gamma_{ij}^{(m+l)}\), \(m=0,\ldots,N-1\), \(l=0,\ldots,N-1\), entering \(\Theta_{m}^{(l)}(h)\), one must calculate the matrices \(P_{ij}^{(m,l)}\), \(i,j=1,2\), using the recurrent formulas (6)-(8), and then check the finite number \(N_{4}-N_{3}\) of inequalities (38).
## 6 Comparison of results
Here we compare the results obtained in Section 4 with the known stability conditions for coupled time-variant systems. We restrict ourselves to a special case of high-frequency periodic functions, assuming that the dwell-times \(T_{k}=\theta\) are constant and \(\theta\) is a sufficiently small parameter. We obtain the value of this parameter from Theorem 4.1 applied to the case \(N=1\). We will compare our results with small-gain conditions obtained on the basis of the ISS theory and the Lyapunov vector function. We will also consider the case when one of the independent subsystems is not stable, and the known approaches from the theory of stability of coupled systems are not applicable.
Consider (1) with \(\widehat{A}_{ij}=\frac{1}{\theta}\int_{0}^{\theta}A_{ij}(t)\,dt\) for \(i\neq j\). Let \(B=(B_{ij})_{i,j=1,2}\), \(\widehat{A}=\begin{pmatrix}0&\widehat{A}_{12}\\ \widehat{A}_{21}&0\end{pmatrix}\) be block matrices. Consider the system of linear matrix inequalities
\[\operatorname{diag}\,\{e^{\theta A_{11}^{\mathrm{T}}},e^{\theta A_{22}^{ \mathrm{T}}}\}B^{\mathrm{T}}\,P_{0}B\operatorname{diag}\,\{e^{\theta A_{11}}, e^{\theta A_{22}}\}\prec P_{0}-\theta(\widehat{A}^{\mathrm{T}}\,P_{0}+P_{0} \widehat{A}). \tag{47}\]
Suppose it has a solution \(P_{0}=(P_{ij}^{(0)})_{i,j=1,2}\) as a symmetric positive-definite
matrix, i.e. \(P_{ij}^{(0)}=(P_{ji}^{(0)})^{\rm T}\). Let us define matrices
\[\pi_{11}^{(0)}=P_{11}^{(0)}-\theta(2\gamma_{21}^{(0)}\|P_{12}^{(0)} \|+(\gamma_{12}^{(0)}\|P_{11}^{(0)}\|+\gamma_{21}^{(0)}\|P_{22}^{(0)}\|))I_{n_{1}},\] \[\pi_{22}^{(0)}=P_{22}^{(0)}-\theta(2\gamma_{12}^{(0)}\|P_{12}^{(0 )}\|+(\gamma_{12}^{(0)}\|P_{11}^{(0)}\|+\gamma_{21}^{(0)}\|P_{22}^{(0)}\|))I_{n_ {2}},\] \[\pi_{12}^{(0)}=P_{12}^{(0)},\quad\pi_{21}^{(0)}=P_{21}^{(0)}\]
and block matrix \(P_{1}=(P_{ij}^{(1)})_{i,j=1,2}\), \((P_{ij}^{(1)})^{\rm T}=P_{ji}^{(1)}\) with the blocks
\[P_{11}^{(1)}=e^{-A_{11}^{\rm T}\theta}(P_{11}^{(0)}-\theta(P_{12}^{(0)} \widehat{A}_{21}+\widehat{A}_{21}^{\rm T}P_{21}^{(0)}))e^{-A_{11}\theta}, \tag{48}\]
\[P_{22}^{(1)}=e^{-A_{22}^{\rm T}\theta}(P_{22}^{(0)}-\theta(\widehat{A}_{12}^{ \rm T}P_{12}^{(0)}+P_{21}^{(0)}\widehat{A}_{12}))e^{-A_{22}\theta} \tag{49}\]
\[P_{12}^{(1)}=e^{-A_{11}^{\rm T}\theta}(P_{12}^{(0)}-\theta(P_{11}^{(0)} \widehat{A}_{12}+\widehat{A}_{21}^{\rm T}P_{22}^{(0)}))e^{-A_{22}\theta}. \tag{50}\]
The following proposition is a direct consequence of Theorem 4.1.
**Proposition 6.1.** If \(\Pi_{0}=(\pi_{ij}^{(0)})_{i,j=1,2}\) is positive definite and \(Q:=\frac{\Theta_{0}(\theta)}{\lambda_{\min}(\Pi_{0})}+\ln\lambda_{\max}(P_{1} ^{-1}B^{\rm T}\,P_{0}B)<0\), then (1) is asymptotically stable.
_Example._
\[\begin{array}{l}\dot{x}_{1}(t)=0.01x_{1}(t)-0.2\sin^{2}\frac{2\pi t}{\theta }x_{2}(t),\\ \dot{x}_{2}(t)=-0.1x_{2}(t)+0.2\cos^{2}\frac{2\pi t}{\theta}x_{1}(t)\end{array} \tag{51}\]
Choose \(\theta=0.09\), \(P_{11}^{(0)}=18\), \(P_{12}^{(0)}=P_{21}^{(0)}=-7\), \(P_{22}^{(0)}=12\); then \(\mu_{1}=0.01\), \(\mu_{2}=-0.1\), \(\delta_{1}=-0.01\), \(\delta_{2}=0.1\), \(\gamma_{12}^{(0)}=\gamma_{21}^{(0)}=0.2\). By direct calculation we obtain that
\[\Pi_{0}=\begin{pmatrix}17.208&-7\\ -7&11.208\end{pmatrix},\quad P_{1}=\begin{pmatrix}18.0934&-7.0025\\ -7.0024&12.0897\end{pmatrix}\]
and \(\eta_{11}^{(0)}=26.3132\), \(\eta_{22}^{(0)}=20.8924\), \(\eta_{12}^{(0)}=7.1037\), \(\eta_{21}^{(0)}=7.1036\), \(\alpha_{11}^{(0)}=0.5263\), \(\alpha_{12}^{(0)}=0.0526\), \(\alpha_{21}^{(0)}=0.0418\), \(\alpha_{22}^{(0)}=0.4178\), \(Q=-0.00004279<0\). Therefore, the system is asymptotically stable. At the same time, the first subsystem is unstable, which makes it impossible to apply a small-gain theorem.
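These numbers are easy to reproduce numerically. The sketch below is an addition to this version, not part of the original text: it implements the scalar reductions of (35), (48)-(50) and (36) for example (51), taking \(B=I\) (the example has no impulsive action) and assuming \(M_{1}=M_{2}=N_{1}=N_{2}=1\), as in the later example.

```python
# Minimal numerical check of Proposition 6.1 for the scalar example (51).
# Assumptions (flagged above): B = I and M1 = M2 = N1 = N2 = 1.
import numpy as np

theta = 0.09
A11, A22 = 0.01, -0.1                        # scalar "matrices" of (51)
P11, P12, P22 = 18.0, -7.0, 12.0
mu1, mu2, d1, d2 = 0.01, -0.1, -0.01, 0.1    # d_i stands for delta_i
g12 = g21 = 0.2                              # gamma_ij^{(0)}
Ah12, Ah21 = -0.1, 0.1                       # period averages of the couplings

# Pi_0 from (35); matrix norms reduce to absolute values in the scalar case
rho = g12 * abs(P11) + g21 * abs(P22)
Pi0 = np.array([[P11 - theta * (2 * g21 * abs(P12) + rho), P12],
                [P12, P22 - theta * (2 * g12 * abs(P12) + rho)]])

# P_1 from (48)-(50)
P1_11 = np.exp(-2 * A11 * theta) * (P11 - 2 * theta * P12 * Ah21)
P1_22 = np.exp(-2 * A22 * theta) * (P22 - 2 * theta * Ah12 * P12)
P1_12 = np.exp(-(A11 + A22) * theta) * (P12 - theta * (P11 * Ah12 + Ah21 * P22))
P1 = np.array([[P1_11, P1_12], [P1_12, P1_22]])

# eta's and alpha's, then Theta_0(theta) from (36); f(s) = (e^{s h} - 1)/s
e11 = np.hypot(P11, P12) + abs(P12)
e22 = np.hypot(P22, P12) + abs(P12)
e12 = e21 = 0.5 * (rho + np.sqrt(rho ** 2 + 16 * g21 ** 2 * P12 ** 2))
a11, a12 = g12 * abs(A22) * e11, g12 * abs(A11) * e11
a21, a22 = g21 * abs(A11) * e22, g21 * abs(A22) * e22
f = lambda s: (np.exp(s * theta) - 1) / s
Theta0 = (a11 / mu2 * (f(mu2 + d1) - f(d1)) + a12 / d1 * (f(d1) - theta)
          + a21 / mu1 * (f(mu1 + d2) - f(d2)) + a22 / d2 * (f(d2) - theta)
          + 2 * (g12 * e12 * (theta * np.exp((mu2 + d1) * theta) / (mu2 + d1)
                              - f(mu2 + d1) / (mu2 + d1))
                 + g21 * e21 * (theta * np.exp((mu1 + d2) * theta) / (mu1 + d2)
                                - f(mu1 + d2) / (mu1 + d2))))

P0 = np.array([[P11, P12], [P12, P22]])
Q = (Theta0 / np.linalg.eigvalsh(Pi0).min()
     + np.log(np.linalg.eigvals(np.linalg.solve(P1, P0)).real.max()))
print(Pi0, P1, Q, sep="\n")   # reproduces Pi_0, P_1 and the small Q < 0
```

For these inputs the script returns the matrices printed above and \(Q\approx-4\cdot 10^{-5}\), in agreement with the quoted value up to the rounding of the printed constants.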
### Comparison with the small-gain conditions
Consider (1) with \(\int_{0}^{\theta}A_{ij}(t)\,dt=0\) for \(i\neq j\) and constant dwell-time \(T_{k}=\theta\). Since the Lyapunov vector functions or small-gain results are only applicable
when the independent subsystems are asymptotically stable, we assume that \(r_{\sigma}(e^{\theta A_{ii}}B_{ii})<1\) for \(i=1,2\). Now (47) reduces to two inequalities
\[e^{\theta A_{ii}^{\rm T}}B_{ii}^{\rm T}\,P_{ii}B_{ii}e^{\theta A_{ii}}\prec P_{ ii},\quad i=1,2. \tag{52}\]
To apply Theorem 4.1, we assume \(P_{ii}^{(0)}=P_{ii}\), \(i=1,2\), \(P_{12}^{(0)}=0\), then from (48)-(50) we obtain
\[P_{11}^{(1)}=e^{-\theta A_{11}^{\rm T}}P_{11}e^{-\theta A_{11}},\quad P_{22}^ {(1)}=e^{-\theta A_{22}^{\rm T}}P_{22}e^{-\theta A_{22}},\quad P_{12}^{(1)}=0.\]
Let us define
\[\Phi=\begin{pmatrix}e^{\theta A_{11}}P_{11}^{-1}e^{\theta A_{11}^{\rm T}}&0 \\ 0&e^{\theta A_{22}}P_{22}^{-1}e^{\theta A_{22}^{\rm T}}\end{pmatrix}\begin{pmatrix} B_{11}^{\rm T}&B_{21}^{\rm T}\\ B_{12}^{\rm T}&B_{22}^{\rm T}\end{pmatrix}\begin{pmatrix}P_{11}&0\\ 0&P_{22}\end{pmatrix}\begin{pmatrix}B_{11}&B_{12}\\ B_{21}&B_{22}\end{pmatrix}\]
A consequence of Theorem 4.1 is the following
**Proposition 6.2.** Suppose that \(\int_{0}^{\theta}A_{ij}(t)\,dt=0\) for \(i\neq j\), \(r_{\sigma}(\Phi)<1\), the conditions of Assumption 4.1 hold, and the following inequalities are satisfied
\[\begin{split}\theta<\min\Big{\{}\frac{\lambda_{\min}(P_{11})}{ \varrho},\frac{\lambda_{\min}(P_{22})}{\varrho}\Big{\}},\\ \frac{\Theta_{0}(\theta)}{\min\{\lambda_{\min}(P_{11})-\theta \varrho,\lambda_{\min}(P_{22})-\theta\varrho\}}<-\ln\lambda_{\max}(\Phi), \end{split} \tag{53}\]
where \(\varrho=\gamma_{12}\|P_{11}\|+\gamma_{21}\|P_{22}\|\). Then (1) is asymptotically stable.
To compare the obtained results with the results known in the literature, obtained on the basis of the ISS approach or Lyapunov vector function (small-gain conditions), we consider (1) without impulsive action, i.e. \(B_{ii}=I\), \(B_{ij}=0\) for \(i,j=1,2\), \(i\neq j\) and such that \(\int_{0}^{\theta}A_{ij}(t)\,dt=0\) for \(i\neq j\).
Since the Lyapunov vector function or small-gain results are only applicable when the independent subsystems are asymptotically stable, we assume that \(\max\{{\rm Re}\;\lambda\,|\,\lambda\in\sigma(A_{ii})\}<0\) for \(i=1,2\). For given symmetric and positive-definite matrices \(Q_{i}\), \(i=1,2\), consider the linear algebraic Lyapunov equations
\[A_{ii}^{\rm T}P_{ii}+P_{ii}A_{ii}=-Q_{i}. \tag{54}\]
It is known that under our assumptions for \(A_{ii}\), \(i=1,2\) these equations have unique solutions in the form of symmetric positive-definite matrices \(P_{ii}\).
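In practice, (54) can be solved with standard software. The following short sketch (an addition, not from the paper) uses SciPy's `solve_continuous_lyapunov`, which solves \(ax+xa^{\rm H}=q\); we therefore pass \(a=A_{ii}^{\rm T}\) and \(q=-Q_{i}\). The matrix chosen here is a hypothetical example.

```python
# Solve A_ii^T P + P A_ii = -Q_i numerically for one Hurwitz matrix.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A11 = np.array([[-1.0, 0.0], [0.0, -1.0]])   # any Hurwitz A_ii works here
Q1 = np.eye(2)
P11 = solve_continuous_lyapunov(A11.T, -Q1)
assert np.allclose(A11.T @ P11 + P11 @ A11, -Q1)
print(P11)   # symmetric positive definite; equals 0.5 * I in this case
```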
_Remark 6.1._ Solutions of matrix algebraic equations (54) satisfy linear matrix inequalities (52), up to \(O(\theta^{2})\).
In this case \(\Phi=\operatorname{diag}\{e^{\theta A_{11}}P_{11}^{-1}e^{\theta A_{11}^{\rm T}}P _{11},e^{\theta A_{22}}P_{22}^{-1}e^{\theta A_{22}^{\rm T}}P_{22}\}\). Since we assume that \(\theta\) is a sufficiently small positive number, we note that
\[\Phi=I-\theta\operatorname{diag}\{P_{11}^{-1}Q_{1},P_{22}^{-1}Q_{2}\}+O( \theta^{2}),\]
therefore
\[-\ln\lambda_{\max}(\Phi)=\theta\min\{\lambda_{\min}(P_{11}^{-1}Q_{1}),\lambda_ {\min}(P_{22}^{-1}Q_{2})\}+O(\theta^{2})\]
is positive for sufficiently small \(\theta>0\). On the other hand, it is easy to show that \(\frac{\Theta_{0}(\theta)}{\min\{\lambda_{\min}(P_{11})-\theta\varrho,\lambda_ {\min}(P_{22})-\theta\varrho\}}=O(\theta^{2})\), hence it follows that there exists \(\theta^{*}>0\) such that for all \(\theta\in(0,\theta^{*})\) the conditions of Proposition 6.2 are satisfied. Thus, we come to an important corollary.
**Corollary 6.1.** System (1) without impulsive action and \(\int_{0}^{\theta}A_{ij}(t)\,dt=0\) is asymptotically stable for \(\theta\in(0,\theta^{*})\) if \(\theta^{*}>0\) is small enough.
Note that \(\theta^{*}\) is determined from the conditions (53).
We apply the same Lyapunov functions \(V_{1}(x_{1})=x_{1}^{\rm T}\,P_{11}x_{1}\), \(V_{2}(x_{2})=x_{2}^{\rm T}\,P_{22}x_{2}\) to study the stability of the coupled system (1) without impulsive action, using the small-gain theorem in [27]. Consider estimates of the derivatives \(\dot{V}_{i}\) along solutions of (1). Taking into account Assumption 4.1 and using the Cauchy-Bunyakovsky inequality, we obtain
\[\dot{V}_{i}(x_{i})=-x_{i}^{\rm T}\,Q_{i}x_{i}+2x_{i}^{\rm T}\,P_{ii}A_{ij}(t)x_ {j}\]
\[=-(P_{ii}^{1/2}x_{i})^{\rm T}\,P_{ii}^{-1/2}Q_{i}P_{ii}^{-1/2}P_{ii}^{1/2}x_{i} +2(P_{ii}^{1/2}x_{i})^{\rm T}\,P_{ii}^{1/2}A_{ij}(t)P_{jj}^{-1/2}P_{jj}^{1/2}x _{j}\]
\[\leq-\lambda_{\min}(P_{ii}^{-1/2}Q_{i}P_{ii}^{-1/2})V_{i}(x_{i})+2\|P_{ii}\|^ {1/2}\|P_{jj}\|^{-1/2}\gamma_{ij}V_{i}^{1/2}(x_{i})V_{j}^{1/2}(x_{j})\]
\[=-\lambda_{\min}(P_{ii}^{-1}Q_{i})V_{i}(x_{i})+2\|P_{ii}\|^{1/2}\|P_{jj}\|^{-1 /2}\gamma_{ij}V_{i}^{1/2}(x_{i})V_{j}^{1/2}(x_{j}). \tag{55}\]
Here \(i\neq j\), \(i,j=1,2\). To check the conditions of the small-gain theorem from [27] (Theorem 4), we choose
\[\chi_{i}(r)=\Big{(}\frac{2\gamma_{ij}\|P_{ii}\|^{1/2}\|P_{jj}\|^{-1/2}}{ \lambda_{\min}(P_{ii}^{-1}Q_{i})}+\varepsilon\Big{)}^{2}r.\]
Then Theorem 4 from [27] leads us to the following sufficient condition for the asymptotic stability of (1) (small-gain condition)
\[\gamma_{12}\gamma_{21}<\frac{1}{4}\lambda_{\min}(P_{11}^{-1}Q_{1})\lambda_{\min} (P_{22}^{-1}Q_{2}) \tag{56}\]
_Remark 6.2_. The method of Lyapunov vector functions leads us to the same stability condition. Indeed, (55) in the new variables \(y_{i}(t)=V_{i}^{1/2}(x_{i}(t))\) leads to a linear system of differential inequalities
\[\dot{y}_{i}(t)\leq-\frac{1}{2}\lambda_{\min}(P_{ii}^{-1}Q_{i})y_{i}(t)+\|P_{ii} \|^{1/2}\|P_{jj}\|^{-1/2}\gamma_{ij}y_{j}(t).\]
Application of the comparison principle again leads to (56). From (56) it follows that the small-gain conditions do not depend on \(\theta\). Therefore, it is possible to choose the parameters of the system (1) such that (56) is not satisfied; however, based on Corollary 6.1, this system is asymptotically stable for sufficiently small \(\theta\). Thus, we conclude that our approach leads to less conservative stability conditions than the known small-gain conditions.
_Example._ Consider a second-order linear system
\[\begin{split}&\dot{x}_{1}(t)=-0.2x_{1}(t)+0.15a_{12}(t)x_{2}(t), \\ &\dot{x}_{2}(t)=-0.1x_{2}(t)+0.2a_{21}(t)x_{1}(t)\end{split} \tag{57}\]
where \(a_{ij}\in C(\mathbb{R})\), \(\|a_{ij}\|_{C[0,\theta]}\leq 1\), \(\int\limits_{0}^{\theta}a_{ij}(t)\,dt=0\). Then \(\mu_{1}=-0.2\), \(\mu_{2}=-0.1\), \(\delta_{1}=0.2\), \(\delta_{2}=0.1\), and the small-gain condition \(0.03\|a_{12}\|_{C[0,\theta]}\|a_{21}\|_{C[0,\theta]}<0.02\) is not satisfied.
Choose \(\theta=0.5\), \(P_{11}^{(0)}=2.5\), \(P_{22}^{(0)}=5\), \(P_{12}^{(0)}=0\), \(M_{1}=M_{2}=N_{1}=N_{2}=1\). By a direct calculation \(\gamma_{12}^{(0)}=0.15\), \(\gamma_{21}^{(0)}=0.2\),
\[\Pi_{0}=\begin{pmatrix}1.8125&0\\ 0&4.3125\end{pmatrix},\quad P_{1}=\begin{pmatrix}3.0535&0\\ 0&5.5259\end{pmatrix}\]
Since \(Q=-0.0050178428<0\), the considered system is asymptotically stable.
Note that in our example (57), the functions \(a_{ij}\) are actually unknown, we know only restrictions on their norm and mean value. Thus, Theorem 4.1 provides conditions for a whole class of systems (57). In this sense, Theorem 4.1 is similar to the well-known small-gain theorems for coupled systems.
## 7 Numerical examples
Here we consider some examples of application of the main results from Sections 4 and 5 to fourth order systems. Example 1 is an impulsive system with one unstable subsystem; in this case, small-gain conditions are not applicable. Example 2 illustrates the possibility of applying Theorem 4.1 in the case when the continuous and discrete dynamics are both unstable at constant dwell-times, and Example 3 treats the general case of non-constant dwell-times.
_Example 1._ Consider a linear fourth-order impulsive system (1) with the matrices
\[A_{11}=\begin{pmatrix}0.1&0.05\\ 0.05&0.1\end{pmatrix},\quad A_{22}=\begin{pmatrix}-1&0.01\\ 0.01&-1\end{pmatrix}\] \[A_{12}(t)=-2\sin^{2}(\omega t)I,\quad A_{21}(t)=2\sin^{2}(\omega t )I;\]
\[B_{11}=\begin{pmatrix}0.98&0\\ 0&0.98\end{pmatrix},\quad B_{22}=\begin{pmatrix}1.02&0\\ -0.01&1.01\end{pmatrix}\] \[B_{12}=\begin{pmatrix}-0.01&0\\ 0.02&0\end{pmatrix},\quad B_{21}=\begin{pmatrix}0&0.01\\ -0.05&0\end{pmatrix}.\]
Here, \(\tau_{k+1}-\tau_{k}=\theta=0.2\), \(\omega=\frac{2\pi}{\theta}\). To check the asymptotic stability conditions of Theorem 4.1, we choose \(N=50\), \(P_{11}^{(0)}=18I\), \(P_{12}^{(0)}=-7I\), \(P_{22}^{(0)}=12I\). Then \(\min_{k}\lambda_{\min}(\Pi_{k})=7.0322\), \(Q=-0.0464<0\). Hence (1) is asymptotically stable. Note that the independent subsystem
\[\begin{split}&\dot{x}_{1}(t)=A_{11}x_{1}(t),\quad t\neq\tau_{k}, \\ & x_{1}(t^{+})=B_{11}x_{1}(t),\quad t=\tau_{k},\end{split} \tag{58}\]
is not stable, due to the fact that \(r_{\sigma}(e^{A_{11}\theta}B_{11})>1\).
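A quick numerical check (an added computation, not part of the original text) of the spectral radii \(r_{\sigma}(e^{A_{ii}\theta}B_{ii})\) of the two subsystem monodromy maps confirms that it is the \(x_{1}\)-subsystem that is unstable:

```python
# Spectral radii of the subsystem monodromy maps e^{A_ii theta} B_ii.
import numpy as np
from scipy.linalg import expm

theta = 0.2
A11 = np.array([[0.1, 0.05], [0.05, 0.1]])
A22 = np.array([[-1.0, 0.01], [0.01, -1.0]])
B11 = 0.98 * np.eye(2)
B22 = np.array([[1.02, 0.0], [-0.01, 1.01]])

for name, A, B in (("x1", A11, B11), ("x2", A22, B22)):
    r = max(abs(np.linalg.eigvals(expm(A * theta) @ B)))
    print(name, r)   # ~1.01 > 1 for x1 (unstable), ~0.83 < 1 for x2
```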
_Example 2._ Consider a linear impulsive system (1) with the matrices
\[A_{11}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},\quad A_{22}=\begin{pmatrix}0.1&0\\ 0&0.1\end{pmatrix}\]
\[A_{12}(t)=\begin{pmatrix}0.2\cos(\omega t)&-0.2\sin(\omega t)\\ 0.2\sin(\omega t)&0.2\cos(\omega t)\end{pmatrix},\] \[A_{21}(t)=\begin{pmatrix}0.1\cos(\omega t)&-0.1\sin(\omega t)\\ 0.1\sin(\omega t)&0.1\cos(\omega t)\end{pmatrix}\] \[B_{11}=\begin{pmatrix}1.2&0.1\\ -0.1&1.5\end{pmatrix},\quad B_{22}=\begin{pmatrix}0.5&0.05\\ -0.05&-0.5\end{pmatrix}\] \[B_{12}=\begin{pmatrix}0.04&0.1\\ 0.1&0.04\end{pmatrix},\quad B_{21}=\begin{pmatrix}0.05&0.1\\ 0.2&0.05\end{pmatrix}\]
Here, \(\tau_{k+1}-\tau_{k}=\theta=0.5\), \(\omega=\frac{2\pi}{\theta}\). To check the asymptotic stability conditions obtained in Theorem 4.1, we choose \(N=3\), \(P_{11}^{(0)}=I\), \(P_{12}^{(0)}=0\), \(P_{22}^{(0)}=I\). Then,
\[P_{3}=\begin{pmatrix}2.7172&0&-0.0008&-0.0207\\ 0&2.7172&0.0207&-0.0008\\ -0.0008&0.0207&0.902&0\\ -0.0207&-0.0008&0&0.902\end{pmatrix},\]
\(\min_{k}\lambda_{\min}(\Pi_{k})=0.7793\), \(Q=-0.0383<0\). Therefore, the linear impulsive system (1) is asymptotically stable. Consider separately the continuous dynamics of the linear impulsive system, which is described by the linear time-variant ODE
\[\dot{x}_{1}(t)=A_{11}x_{1}(t)+A_{12}(t)x_{2}(t), \tag{59}\] \[\dot{x}_{2}(t)=A_{21}(t)x_{1}(t)+A_{22}x_{2}(t),\]
We denote
\[U(\omega t)=\begin{pmatrix}\cos(\omega t)&-\sin(\omega t)\\ \sin(\omega t)&\cos(\omega t)\end{pmatrix}\]
and rewrite (59) as
\[\dot{x}_{1}(t)=-x_{1}(t)+0.2U(\omega t)x_{2}(t), \tag{60}\] \[\dot{x}_{2}(t)=0.1U(\omega t)x_{1}(t)+0.1x_{2}(t),\]
Consider the Lyapunov-Chetaev function \(v(x_{1},x_{2})=2\|x_{2}\|^{2}-\|x_{1}\|^{2}\), the total derivative of which is
\[\dot{v}(x_{1},x_{2})=2(0.2\|x_{2}\|^{2}+\|x_{1}\|^{2}+0.4\sin(\omega t)x_{1}^{ \mathrm{T}}\,Jx_{2})\]
\[\geq 2(0.2\|x_{2}\|^{2}+\|x_{1}\|^{2}-0.4\|x_{1}\|\|x_{2}\|)>0,\quad\text{for all }(x_{1},x_{2})\neq(0,0).\]
Here, \(J=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\) is the symplectic unit. Hence, the ODE (60) is unstable. The discrete dynamics is unstable since \(r_{\sigma}(B)=1.4722\).
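The discrete-dynamics claim is also easy to verify by assembling the full jump matrix (again an added check, not from the original text):

```python
# Spectral radius of the full jump matrix B of Example 2.
import numpy as np

B11 = np.array([[1.2, 0.1], [-0.1, 1.5]])
B22 = np.array([[0.5, 0.05], [-0.05, -0.5]])
B12 = np.array([[0.04, 0.1], [0.1, 0.04]])
B21 = np.array([[0.05, 0.1], [0.2, 0.05]])
B = np.block([[B11, B12], [B21, B22]])
print(max(abs(np.linalg.eigvals(B))))   # ~1.47 > 1, as quoted in the text
```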
_Example 3._ Consider an example which illustrates the application of Theorem 5.1 to the case of a non-periodic impulsive system. Let
\[A_{11}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},\quad A_{22}=\begin{pmatrix}0.1&0\\ 0&0.1\end{pmatrix}\]
\[A_{12}(t)=\begin{pmatrix}0.05\cos(\omega t)&-0.05\sin(\omega t)\\ 0.05\sin(\omega t)&0.05\cos(\omega t)\end{pmatrix},\]
\[A_{21}(t)=\begin{pmatrix}0.05\cos(\omega t)&-0.05\sin(\omega t)\\ 0.05\sin(\omega t)&0.05\cos(\omega t)\end{pmatrix}\]
\[B_{11}=\begin{pmatrix}1.2&0.1\\ -0.1&1.5\end{pmatrix},\quad B_{22}=\begin{pmatrix}0.25&0.05\\ -0.05&-0.25\end{pmatrix}\]
\[B_{12}=\begin{pmatrix}0.04&0.1\\ 0.1&0.04\end{pmatrix},\quad B_{21}=\begin{pmatrix}0.05&0.1\\ 0.2&0.05\end{pmatrix}\]
For \(\theta=1\), \(\theta_{1}=0.8\), \(\theta_{2}=1.2\), we choose \(N=7\), \(P_{11}^{(0)}=I\), \(P_{12}^{(0)}=0\), \(P_{22}^{(0)}=I\). Then, \(N_{3}=4\), \(N_{4}=8\), \(\min_{m=\overline{0,11}}\lambda_{\min}(\Pi_{m}^{(l)})=0.7585>0\), \(\max_{l=\overline{0,9},m=\overline{6,12}}Q_{m}^{(l)}=-0.1399<0\). Thus, by Theorem 5.1, the linear system (1) is asymptotically stable provided the dwell-times \(T_{k}=\tau_{k+1}-\tau_{k}\) belong to the interval \([\theta_{1},\theta_{2}]\). We note that in this example both the continuous and the discrete dynamics are unstable. The instability of the continuous dynamics is proved using the Lyapunov-Chetaev function \(v(x_{1},x_{2})=2\|x_{2}\|^{2}-\|x_{1}\|^{2}\) as done above. The instability of the discrete dynamics follows from the fact that \(r_{\sigma}(B)=1.4746\).
## 8 Discussion
Theorems 4.1 and 5.1 are the main results of the paper and establish sufficient conditions for the exponential stability of the linear time-variant impulsive system (1). The proposed approach to the study of a coupled impulsive system significantly expands the capabilities of the method of Lyapunov vector
functions and allows one to study the asymptotic stability of a linear impulsive system with unstable subsystems. The examples given in Section 6 show that our results significantly expand the known methods for studying impulsive systems developed in [5; 9; 14; 15]. In addition, the obtained results apply in the case when the continuous and discrete dynamics of an impulsive system are both unstable. When the dwell-times are not constant, the classical Floquet theory turns out to be inapplicable to the study of the stability of a linear impulsive system. These results significantly expand the possibilities of the direct Lyapunov method in the context of theorems from [1].
In this case, Theorem 5.1 allows us to deduce Lyapunov asymptotic stability. It is of interest to generalize the proposed approaches for the construction of Lyapunov functions to nonlinear coupled systems, as well as to a larger number of coupled subsystems.
## 9 Appendix
_Proof_ of Lemma 4.1. Using the expressions for \(P_{ij}^{(m)}\) we obtain
\[v(t,x_{1},x_{2})=z_{1m}^{\rm T}\Big{(}P_{11}^{(m)}-\int\limits_{mh}^{t}(P_{12 }^{(m)}A_{21}(s)+A_{21}^{\rm T}(s)P_{21}^{(m)})\,ds\Big{)}z_{1m}\]
\[+2z_{1m}^{\rm T}\Big{(}P_{12}^{(m)}-\int\limits_{mh}^{t}(P_{11}^{(m)}A_{12}(s) +A_{21}^{\rm T}(s)P_{22}^{(m)})\,ds\Big{)}z_{2m}\]
\[+z_{2m}^{\rm T}\Big{(}P_{22}^{(m)}-\int\limits_{mh}^{t}(A_{12}^{\rm T}(s)P_{12 }^{(m)}+P_{21}^{(m)}A_{12}(s))\,ds\Big{)}z_{2m}.\]
Applying the Cauchy-Bunyakovsky inequality, we obtain
\[\Big{|}z_{1m}^{\rm T}\int\limits_{mh}^{t}(P_{12}^{(m)}A_{21}(s)+A_{21}^{\rm T}( s)P_{21}^{(m)})\,dsz_{1m}\Big{|}\leq 2\|P_{12}^{(m)}\|\gamma_{21}^{(m)}\|z_{1m}\| ^{2},\]
\[\Big{|}z_{1m}^{\rm T}\int\limits_{mh}^{t}(P_{11}^{(m)}A_{12}(s)+A_{21}^{\rm T} (s)P_{22}^{(m)})\,dsz_{2m}\Big{|}\leq(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{ 22}^{(m)}\|\gamma_{21}^{(m)})\|z_{1m}\|\|z_{2m}\|,\]
\[\left|z_{2m}^{\rm T}\int\limits_{mh}^{t}(A_{12}^{\rm T}(s)P_{12}^{(m)}+P_{21}^{(m) }A_{12}(s))\,dsz_{2m}\right|\leq 2\|P_{12}^{(m)}\|\gamma_{12}^{(m)}\|z_{2m}\|^{2}\]
Thus,
\[v(t,x_{1},x_{2})\geq z_{1m}^{\rm T}P_{11}^{(m)}z_{1m}+2z_{1m}^{\rm T}P_{12}^{(m )}z_{2m}+z_{2m}^{\rm T}P_{22}^{(m)}z_{2m}-2\|P_{12}^{(m)}\|\gamma_{21}^{(m)}\|z _{1m}\|^{2}\]
\[-2(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{22}^{(m)}\|\gamma_{21}^{(m)})\|z_{1m} \|\|z_{2m}\|-2\|P_{12}^{(m)}\|\gamma_{12}^{(m)}\|z_{2m}\|^{2}.\]
Applying the Cauchy inequality, we get
\[v(t,x_{1},x_{2})\geq z_{1m}^{\rm T}P_{11}^{(m)}z_{1m}+2z_{1m}^{\rm T}P_{12}^{( m)}z_{2m}+z_{2m}^{\rm T}P_{22}^{(m)}z_{2m}\]
\[-(2\|P_{12}^{(m)}\|\gamma_{21}^{(m)}+(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{2 2}^{(m)}\|\gamma_{21}^{(m)}))\|z_{1m}\|^{2}\]
\[-(2\|P_{12}^{(m)}\|\gamma_{12}^{(m)}+(\|P_{11}^{(m)}\|\gamma_{12}^{(m)}+\|P_{2 2}^{(m)}\|\gamma_{21}^{(m)}))\|z_{2m}\|^{2}\geq\lambda_{\min}(\Pi_{m})\|z_{m} \|^{2}.\]
The upper bound for the Lyapunov function \(v(t,x_{1},x_{2})\) is proved similarly.
|
2301.06637 | How many acres of potatoes does a society need? | One of the main difficulties in a class on Sources of Energy and Social
Policy is the wide variety of units used by different technologists (BTU's,
Barrels of oil, Quads, kWh, etc). As every student eats, I think some of this
confusion can be resolved by starting and grounding the class with a discussion
of food and food production. A general outline for this introduction is
provided and two interesting historical cultural examples, Tenochtitlan and the
Irish Potato Famine, are provided. Science and Social Policy classes are full
of bespoke units and involve many different contexts. Starting the class with a
discussion of food energy is a nice way for everyone to start with the same
context. In addition, discussion of Food Energy can lead to interesting
historical claims. | Nathan T. Moore | 2023-01-16T23:41:12Z | http://arxiv.org/abs/2301.06637v2 | # How many acres of potatoes does a society need?
###### Abstract
One of the main difficulties in a class on Sources of Energy and Social Policy is the wide variety of units used by different technologists (BTU's, Barrels of oil, Quads, kWh, etc). As every student eats, I think some of this confusion can be resolved by starting and grounding the class with a discussion of food and food production. A general outline for this introduction is provided and two interesting historical cultural examples, Tenochtitlan and the Irish Potato Famine, are provided. Science and Social Policy classes are full of bespoke units and involve many different contexts. Starting the class with a discussion of food energy is a nice way for everyone to start with the same context. In addition, discussion of Food Energy can lead to interesting historical claims.
_Keywords_: Energy, Social Policy, kcals, Tenochtitlan, Irish Potato Famine, History, self-reliance
## 1 Introduction
When the United States entered World War One, one of the problems it faced was logistics. How much food do you need to ship overseas to Europe to feed a million soldiers? That early work in nutrition led to the 3000 Calorie diet many people remember from secondary Health Education class. A bit about "Calorie" (uppercase) vs "calorie" (lowercase) units you might remember: \(1\)\(Calorie=1\)\(kilocalorie\) (\(kcal\)), and a dietitian might build a \(3000kcal\) diet for a 20 year old basketball player. A _calorie_ is the amount of energy it takes to heat a gram of water by a degree Celsius. There are about 4.2 Joules in a single calorie, and a Joule occurs all over introductory physics. If you need to buy a new home furnace, the sales brochure might advertise that it is capable of delivering \(100,000\) BTU's of heat each hour. What's a BTU? Heat a pound of water by \(1^{\circ}F\). Of course Heat Pumps are far more efficient than simply burning methane or propane, but they consume kilo-watt-hours (kWh) of electricity, not BTU's. What's a kWh? Run a 1000 Watt toaster for an hour and you'll have pulled one kWh off the grid; it will cost you about $0.13 in Minnesota. If you decide to put solar panels in your backyard, they will probably collect about 10% of the \(3.5kWh\) that the sun delivers to each square meter of your lawn (in Minnesota) each day.
As the previous paragraph illustrates, there are a frustratingly large number of different units in an "Energy" class. At Winona State, this 3 credit class fulfills a "Science and Social Policy" general education requirement and is taken by students from across the university. Lots of college majors don't require a math class beyond algebra or introductory statistics and the population is largely math-averse. You could jokingly say that one of the main things students learn in the class is unit conversion, and it isn't far off. Nearly every field finds energy a useful representation, and every profession has its own set of units and terminology best suited for quick calculation. Would a medical lab scientist talk about the fractional acre-foot of urine needed to test kidney function? No, but someone in the central valley of California would certainly care about the acre-feet of water necessary to grow almonds! Does a gas station price their gasoline in dollars per kWh? Given the growing electrification of cars, they might soon.
Everyone eats, maybe not \(3000kcals\) per day, but at least something every day. When I teach our energy class, [1, 2], I spend a few weeks talking about food energy before all other types. While food production is not central to climate change and wars over oil, food is essential in a way that diesel and gasoline are not. Vehicle fuel makes modern life possible, but we could live, unpleasantly, without it. We can't live without fats and protein.
## 2 Food Energy
To introduce Food Energy, I ask the students to work through a few questions:
### Converting food into body heat
Planning to save money, one college student decides to go to an all-you-can-eat buffet each day at 11am, e.g. figure 1. If he brings homework and stretches the meal out for a few hours he can get all \(3000kcals\) with only one bill. Food is fuel for the human body - could too much fuel make his body feel sick? If his body burned all this food at once, how much warmer would he get? Useful information: the student has a mass of \(80kg\) and is made mostly of water. A Calorie heats \(1kg\) of water \(1^{\circ}C\).
Here's a possible answer: equate food energy with calorimetric heating and assume human bodies have the same heat capacity as water, about \(1\frac{kcal}{kg\cdot^{o}C}\). This allows us to calculate the body's temperature increase.
\[3000kcals = 80kg\cdot 1\frac{kcal}{kg\cdot^{o}C}\cdot\Delta T\] \[\Delta T \approx +37.5^{\circ}C\]
Students are normally quite surprised at this number. The estimate is wildly unrealistic (a rise of \(\Delta T\approx+6^{\circ}C\) is typically fatal), but there is a related phenomenon of diet-induced thermogenesis[3] known informally as "the meat sweats". Some students connect this calculation to feeling quite hungry after a cold swim in the pool (a similar effect). On
Figure 1: A proto-college-student at Winona’s China King Buffet, dreaming about visiting the steam tables every day.
a larger scale, discussing what's wrong with this estimate is useful. The main storage mechanism for food energy is fat tissue, which the calculation completely ignores. Infants are generally born with little fat, and an infant sleeping through the night often coincides with the baby developing enough fat tissue to store sufficient kcals to make it through a night without waking up ravenously hungry. A related follow-up is that if a person is stranded in the wilderness, they should immediately start walking downstream (i.e., towards civilization) as they likely won't be able to harvest an amount of kcals equivalent to what they already have stored on their hips and abdomen.[4] The contrast of bear hibernation [5] and songbirds constantly eating through the winter are related connections to investigate.
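For readers who want to play with the numbers, here is the buffet estimate as a few lines of Python (a sketch added to this version, using only the values stated above):

```python
# Treat the 80 kg student as water (1 kcal/(kg C)) burning 3000 kcal at once.
mass_kg, heat_capacity, food_kcal = 80.0, 1.0, 3000.0
delta_T = food_kcal / (mass_kg * heat_capacity)
print(f"temperature rise: {delta_T:.1f} C")   # 37.5 C -- wildly unrealistic
```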
### Biophysical Power
A more realistic question to follow up with relates to the average _power_ given off by a person over a day. Again, assume \(3000kcal\) is burned over \(24hours\). Useful information: \(1kcal\approx 4200J\) and \(1J/s=1W\).
\[\frac{3000kcal}{24hours}\cdot\frac{4200J}{1kcal}\cdot\frac{1hour}{3600sec} \approx 145W \tag{1}\]
Most students still remember \(75Watt\) lightbulbs, but given the spread of LED lighting, "A person's body heat is two \(75W\) light bulbs" will probably only make sense for a few more years. Desert or cold-weather camping, alone versus with friends, and survival swimming are also examples for students to make sense of this answer. If you can take advantage of other people's waste body heat, you'll sleep more pleasantly and survive longer in cold water.
Another application to discuss is that of "brown fat," a sort of biological space heater that humans and other mammals develop in response to cold weather. This tissue's mitochondria can burn lipids and carbohydrates in a useless proton pumping scheme, which produces metabolic heat [6, 7, 8, 9]. Most common in rodents and infants, this mechanism can be stimulated by extended exposure to cold temperatures - the original work was done on lumberjacks in Finland [10]. The idea of a biological space heater that takes a month to turn on and a month to turn off matches the lived experience of college students in Minnesota, who wear down jackets in \(4^{\circ}C\) weather in November, and beachwear in the same \(4^{\circ}C\) weather in March. Additionally, transplants to northern climates often take a few years to "get used to" the colder weather up north. It seems just as easy to say that transplants' bodies take a few years to develop the brown fat cells which allow them to be comfortable in cold weather.
One other distinction to emphasize is the difference between power and energy. A graph of a human body's "kcal content" over the course of a day can be a useful illustration. When sedentary, this graph probably has the slope of \(-150W\approx-125\frac{kcals}{hour}\). If the \(3000kcal\) meal at the buffet takes an hour, this period corresponds to an energy-time slope of \(+3000\frac{kcal}{hour}\approx+3500W\).
In medicine, these slopes are effectively equivalent to "Metabolic Equivalent of Task" (METS), a common measure in cardiology and exercise physiology. METS is
power normalized by mass, \(1METS=1\frac{kcal}{kg\cdot hour}\), and METS levels are available for many different physical activities. [11]
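The same arithmetic in code form (an added sketch; the 80 kg mass is the student from the earlier example):

```python
# Average metabolic power from equation (1), plus its METS equivalent.
KCAL_TO_J = 4200.0
watts = 3000.0 * KCAL_TO_J / (24 * 3600)   # ~145 W of resting body heat
mets = 3000.0 / (80.0 * 24)                # kcal/(kg h): ~1.6 METS at rest
print(f"{watts:.0f} W, {mets:.2f} METS")
```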
### Burning off food energy
Imagine that after eating a \(600kcal\) bacon-maple long-john (donut), you decide to go for a hike to "work off" the Calories. Winona State is in a river valley bounded by \(200m\) tall bluffs. How high up the bluff would you have to hike to burn off the donut? Useful information: human muscle is about \(1/3\) efficient, and on Earth's surface, gravitational energy has a slope of about \(10\ \frac{Joules}{kg\cdot m}\).
One way to approach this problem is by using Energy Bar Charts [12] to illustrate how the energy held in food changes form as it is used. An approximation for this question is shown in figure 2. In this story, the "system" is taken to be the earth, food, and hiker. The hiker's body is assumed to be \(1/3\) efficient, which means one of the food energy blocks of energy is transformed into gravitational energy (elevation) at the end of the hike. The other 2 blocks of energy are transformed into heat and leave the hiker's body, most likely by mechanisms of respiration and sweat evaporation. The purpose of a bar chart like this is to provide a pictorial and mathematical representation of the energy conservation equation given in 2.
Figure 2: An Energy Bar Chart to illustrate the \(1/3\) efficient student hiking up a bluff to burn off the morning’s donut. The initial state (left) is the hiker at the bottom of the hill, with donut in stomach. The final state (right) is the hiker at the top of the bluff with \(2/3\) of the energy removed to the atmosphere by sweat and exhalation of warm air. \(1/3\) of the donut’s energy is stored in elevation. The system for this diagram includes the earth, the hiker, and the donut. The system does not include the atmosphere around the hiker.
\[\frac{1}{3}\cdot 600kcal\cdot\frac{4200J}{1kcal} = 80kg\cdot 10\frac{Joules}{kg\cdot m}\cdot height \tag{2}\] \[height \approx 1000m \tag{3}\]
This estimate is again surprising to students. Five trips up the bluff to burn off $2 of saturated fat, sugar, and flour! A nice followup calculation is to imagine a car that can burn a \(100kcal\) piece of toast in the engine: from rest, what speed will the toast propel it to? If (again) the engine converts \(1/3\) of the energy into motion (kinetic energy), a \(1300kg\) Honda Civic will reach a speed of about \(15\frac{m}{s}\approx 33mph\)!
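Both estimates fit in a few lines (a sketch added here, assuming nothing beyond the constants already stated: 1/3 efficiency, 4200 J/kcal, a gravity slope of 10 J/(kg m), an 80 kg hiker and a 1300 kg car):

```python
# The 1/3-efficient donut hike of equation (2) and the toast-powered Civic.
EFF, KCAL_TO_J, G = 1.0 / 3.0, 4200.0, 10.0   # g taken as 10 J/(kg m)

height_m = EFF * 600 * KCAL_TO_J / (80.0 * G)            # ~1000 m of climb
speed_ms = (2 * EFF * 100 * KCAL_TO_J / 1300.0) ** 0.5   # ~15 m/s (~33 mph)
print(f"{height_m:.0f} m, {speed_ms:.1f} m/s")
```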
The point of these energy calculations is not to give students an eating disorder. Rather, the numbers show food's amazing power. A single slice of toast will bring a car up to the residential speed limit! A day's food, \(3000kcal\), will power you up a \(5000m\) mountain peak! The body-work food allows us to do is astonishing, and increases in food production have made modern comforts, unimaginable 150 years ago, possible to the point of being taken for granted.
### Where does food energy come from?
One feature of the aughts' "homesteading" culture [13] is the idea that a person should probably be able to move to the country, eat a lot of peaches, and grow all their own food. Learning that farming labor is _skilled_ labor can be brutal and disheartening. Eating \(3000kcals\) each day means planting, weeding, harvesting, and storing more than a million kcals each year [14]. Where will those Calories come from? Is your backyard enough to homestead in the suburbs [15]?
At some point between 1920 and 1950, US chemical manufacturers realized that in the post-war period, they could repurpose processes developed for manufacturing munitions and chemical warfare agents, to produce chemicals that would kill insects and increase the nitrogen levels in the soil. As figures 3 and 4 show, the epoch of "Better Living Through Chemistry" produced a dramatic increase in per-acre yields across all commodity food crops, particularly corn and potatoes.
However, if you're discussing backyard Calorie production it isn't reasonable to
Figure 3: USDA per acre Corn and Potato production figures, plotted over time. Data is given in harvest units, \(56lbs\) bushels per acre for field corn and hundred-weight (CWT) for potatoes. By mass, corn is about 4.5 times more calorie dense than potato which results in a nearly equal \(kcal/acre\) values for both crops in figure 4. Details on the data source and conversions are given in Appendix A.
use modern yield estimates for planning. "Roundup Ready" Corn, Soybean, and Sugar Beet seeds are not available to the public, nobody wants to put on a respirator to apply Atrazine ten feet from the back door, and the edge effects from deer and insects are much smaller on a 600 acre field than they are in an community garden allotment. As mentioned in the introduction, in 1917 the USDA published a pamphlet [16] giving detailed Calorie estimates a farmer might expect from a given acre of a crop. A table from this pamphlet is shown in Figure 5. The pamphlet data came from pre-war, pre-chemical agriculture, and the yields cited were produced with horses, manure, lime, and large families full of children. If you want to be self sufficient, these yield numbers are probably a good upper bound on what's realistically possible by a dedicated Luddite.
So, another question using this data. If you want to feed your family of four people potatoes, how much land will you need to cultivate? Here's an estimate: a family of 4 requires \(3000kcal/person\) each day[19]. If we over-estimate and produce food for the
Figure 4: USDA per acre crop production figures, plotted over time. Production data is scaled by estimated dietary kcal content to show that, over all crops, there has been a dramatic increase in kcal production since about 1940. Details of the data source and conversions are given in Appendix A. The idea for this plot came from an online blog, [17]. It would be interesting to know if there are patterns of scaling among vegetable families (grains, legumes, tubers, etc) in the same way that there are family classifications for the minimal energy required for transport [18].
entire year, the family will need about 4.4 million kcals.
\[4\ people\cdot\frac{3000kcal}{person\cdot day}\cdot\frac{365\ days}{year}\approx 4.4Mkcal \tag{4}\]
A brief aside for those bored by the simplistic unit conversion: when I ask students to solve problems like these, one undercurrent of conversation is "Should I divide by 365 or multiply?" Particularly with online homework systems, checking your answer for reasonability isn't typically graded. Asking the students to reason proportionally with units is a skill that can give meaning to numbers.
From figure 5 we can estimate \(1.9\ million\ kcals\) per acre of potato production. Again the students might ask, should I multiply 4.4 and 1.9 or should I divide them? It can be useful in a class discussion to have the students discuss and vote on which of the following two forms will give the meaningful answer.
\[\frac{4.4Mkcal}{family}\cdot\frac{1acre}{1.9Mkcal}\ \ \mbox{or}\ \ \frac{4.4Mkcal}{family}\cdot\frac{1.9Mkcal}{1acre} \tag{5}\]
The choice of operation is difficult to make without seeing the units present, which is again a learning opportunity for the students.
What does the answer of 2.3 acres mean? The university's \(91m\times 49m\) football field has an area of about 1.1 acres, so you could say that a football field planted in potatoes will probably feed a family through the winter [20]. Can a person enjoy the benefits of urban living and grow all their own food? The population density of New Jersey is \(1,263\ people/mile^{2}\approx 1.97\ people/acre\) and our 4 person family needs 2.3 acres for their potatoes. Unless the social model is one of a country Dacha or an endless suburb with no duplexes or apartment buildings, urban living and food self-sufficiency seem mutually exclusive.
More emotionally charged conversations can be had about converting the United States to all organic agriculture, which, for corn, typically has a yield penalty of about \(20-40bu/acre\) when compared to conventional production. The 1917 data isn't directly applicable, but it relates. At \(180bu/acre\) conventional corn requires \(\approx 24\ million\ acres\) (half of Wisconsin, or all of Indiana) to feed the US population (350 million people) corn for a year. The remainder of the corn belt can be devoted to animal feed, ethanol, and export. If the corn belt was devoted to producing organic corn at lower yield [21], we probably wouldn't starve, but cheap meat and ethanol vehicle fuel would likely disappear.
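For completeness, here are the same homestead numbers as a script (a sketch added to this version; the potato yield comes from figure 5 and the \(1594\ kcal\) per pound of corn from the 1917 data discussed in the next section):

```python
# The family potato plot from (4)-(5) and the US corn-belt estimate.
KCAL_PER_ACRE_POTATO = 1.908e6     # 1917 USDA potato yield, figure 5
KCAL_PER_BU_CORN = 56 * 1594       # 56 lbs per bushel at ~1594 kcal/lbs

family_kcal = 4 * 3000 * 365                      # ~4.4 Mkcal per year
print(family_kcal / KCAL_PER_ACRE_POTATO)         # ~2.3 acres of potatoes

us_kcal = 350e6 * 3000 * 365                      # the whole US, corn only
print(us_kcal / (180 * KCAL_PER_BU_CORN) / 1e6)   # ~24 million acres
```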
Figure 5: A table from USDA Farmers' Bulletin 877 [16] giving 1917 yields for various farm products.
## 3 Example: How big could Tenochtitlan have been?
The questions described thus far have largely been centered within a physics context. The paper closes with two more examples that leverage this food energy picture to make historical claims. The first example relates to the pre-Columbian capital of the Aztec Empire, Tenochtitlan, now known as Mexico City. Tenochtitlan was built on and around an endorheic lake, Texcoco. Crops were grown in shallow parts of the lake via chinampas [22], floating patches of decaying vegetation and soil. Given the proximity to water and decaying vegetation, these fields were very fertile [23, 24] and some continue to be used in the present day [25].
Estimates of Tenochtitlan's population in 1500CE vary widely, from 40,000 [26] to more than 400,000 [27] inhabitants, comparable in size to Paris at that time. These estimates come from oral and written records and estimates of archaeological building density and land area. While cannibalism was part of Aztec religious ritual and practice [28], the staple Calorie sources for the Aztecs were corn and beans.
Few if any Native American cultures made use of draft animals for food or power before the Columbian Exchange. This means that the food that fed Tenochtitlan must have been brought to the city center by foot or canoe. How much land must have been devoted to chinampas to feed the population, or conversely, how many people could be supported by the land within walking or paddling distance from the city center?
A 1964 paper in Scientific American [24] gives a general outline of the chinampas in the area of Tenochtitlan in 1500CE. This map seems to be the basis for the similar figure in Wikipedia [29]. Descriptions of chinampas agriculture indicate that as many as 7 successive crops could be grown and harvested from the same plot of soil each year, two of which could be maize (corn). This is truly amazing productivity, given that in the midwest United States corn is normally grown, at most, every other year because of its extreme nutrient demands on the soil.
There are many ways to approach this estimation problem. We could assume a Tenochtitlan population of \(100,000\) people has a \(3000kcal/day\) diet that comes completely from corn. Assuming that corn's density and nutritional content haven't changed in the 4 centuries preceding the 1917 data in figure 5, we could assume \(1lbs\) of corn contains \(\approx 1594kcal\) of food energy. Looking at the map with ImageJ [30], it seems like the recorded area devoted to chinampas might be about \(16,000\)\(acres\) - details are given in Appendix B. With these assumptions, we could equate the corn energy production from chinampas with the population's yearly food need. Note, in this version of the story, the corn productivity, \(P\frac{bu}{acre}\), is treated as an unknown variable.
\[Food\ production\qquad=16,000acres\cdot\frac{2\ corn\ crops}{year} \cdot P\frac{bu\ of\ corn}{acre}\] \[Population\ requires=100,000\ people\cdot\frac{3000kcal}{person\cdot day} \cdot\frac{365days}{year}\cdot\frac{1lbs\ corn}{1594kcal}\cdot\frac{1bu}{56 lbs}\] \[P\approx 38\frac{bu}{acre} \tag{6}\]
This crop productivity is in remarkable agreement with the 1917 USDA yields, \(35bu/acre\), which seems to validate the assumed \(100,000\) person population of Tenochtitlan. Some references [24] describe an extensive tribute system that the Aztec government required of its subjects, which certainly would have been necessary to support populations on the upper end of historical estimates [27].
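Equation (6) rearranges into a very short script (a sketch added here, under the stated assumptions of 100,000 people, 16,000 acres and two corn crops per year):

```python
# Implied chinampa corn yield P for 100,000 people and 16,000 acres.
need_bu = 100_000 * 3000 * 365 / 1594 / 56   # yearly corn demand in bushels
P = need_bu / (16_000 * 2)                   # two corn crops per year
print(f"P = {P:.0f} bu/acre")                # ~38, close to the 1917 yield
```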
## 4 Example: Was the Irish Potato Famine a Natural Disaster?
In contrast to native cultures of the Americas, Ireland's population boomed with the Columbian Exchange and the introduction of the potato. [31, 32]. Figure 6 shows that from about 1700 onward there was a dramatic growth in the island's population. There's never just one reason for historical events, but unlike grains, potatoes thrived in Ireland's cool damp climate and potatoes, kale, and milk form a nutritionally complete diet that greatly reduced hunger-related mortality among the poor working-class in Ireland. If you look closely at the data in figure 6 you might believe that there were _two_ weather and potato related famines, the most obvious 1845-49 and the second, with much smaller effect on population in 1740-1. Both famines were precipitated by poor weather, but an important difference is that in 1740, Ireland was a sovereign state but by 1845 the island was effectively an economic colony of the British Empire [32].
As the story goes, the two main commodity crops in Ireland were potatoes (for humans) and oats, which, as horse feed, were something like gasoline in today's economy. A sovereign government can halt the export of food to feed English horses, which is what happened in 1741 (and 1782). The grain was diverted back as relief to starving people in Ireland, reducing the famine's mortality. However, by 1845 most of Irish farmland was economically controlled by foreign (English) markets, and grain traders typically refused to divert oats (horse feed) as famine relief for the sake of their investment income.
This inflammatory claim, which is certainly a simplified version of history, serves as a useful evaluation example for students. Specifically, in years that the potato crop failed because of weather or late blight, could the amount of oats produced (and exported) have fed the Irish population? More broadly, was the Great Famine due to weather and disease, natural causes "we can't do anything about," or was the depth of the tragedy a result of political choices?
Some estimates follow: Ireland's population in 1845 was about 8.5 million people. The island has an area of about \(84,400km^{2}\)[33] and you might estimate that 64% of the land (\(54,000km^{2}\)) is arable for agriculture [37]. It seems reasonable to use the 1917 productivity, figure 5, to make calculations for Ireland in 1845. Reminder, in 1917, potatoes produced \(1.908\times 10^{6}kcal/acre\) and oats \(1.254\times 10^{6}kcal/acre\). With students, evaluation of the claim could be approached as a series of questions:
How much food does the island need?
\[food\;needed\;per\;year = 8.5\times 10^{6}\;people\cdot\frac{3000}{person\cdot day}\cdot \frac{365days}{year}\] \[\approx 9.3\times 10^{12}kcals\]
How much land area, sown in potatoes, would produce this food?
\[9.3\times 10^{12}kcals/\left(1.908\times 10^{6}\frac{kcal}{acre} \right) = 4.87\times 10^{6}acres\] \[\approx 19,700km^{2}\]
How much land area, sown in oats, would produce this food?
\[9.3\times 10^{12}kcals/\left(1.254\times 10^{6}\frac{kcal}{acre}\right) = 7.41\times 10^{6}acres\] \[\approx 30,000km^{2}\]
Summed, \(49,700km^{2}\), these two areas devoted to oats and potatoes are roughly equivalent to the amount of arable land estimated above for Ireland, \(54,000km^{2}\)[37]. What do the numbers mean? Did there have to be a famine? If all of the potato crop failed because of late blight, there would likely have been enough oats to feed the population a \(2000kcal\) ration of oats with some to spare. Like the Holodomor or the Great Leap Forward, the numbers suggest that the large-scale suffering wasn't a natural disaster, but rather a human disaster resulting from poor government policy insensitive to the value of human life.
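The three estimates chain together naturally in code (a sketch added here; it assumes only the conversion \(1\ acre\approx 0.004047km^{2}\)):

```python
# Land needed to feed 1845 Ireland from potatoes or from oats alone.
ACRE_TO_KM2 = 0.0040469
need_kcal = 8.5e6 * 3000 * 365   # ~9.3e12 kcal per year for the island
for crop, kcal_per_acre in (("potatoes", 1.908e6), ("oats", 1.254e6)):
    acres = need_kcal / kcal_per_acre
    print(crop, f"{acres * ACRE_TO_KM2:,.0f} km^2")
# ~19,700 + ~30,000 km^2, roughly Ireland's 54,000 km^2 of arable land
```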
Figure 6: The population of Ireland over time, file from Wikipedia [35], data sources [36]. The humble potato, kale, and milk were part of an amazing population boom. Note that there were two weather-related "potato" famines in Ireland, in about 1740 and 1850. Government policy response to the famines could explain the drastic difference in subsequent population following each of the two famines. The population of Ireland finally re-reached its 1851 peak in 2021 [34].
## 5 Conclusion
A class about Energy and Social Policy and the author hasn't mentioned climate change, coal, or solar panels even once! What is he thinking?
How many tons of carbon does your car release in a year? How many shiploads of iron oxide will we have to dump into the ocean for phytoplankton to eat up the equivalent amount of carbon? Every question in a class like this is, to at least some extent, informed by numerical calculation and it's pretty arrogant to assume that "those students" don't need to (or can't) do the math. If you're going to have success talking about numerical calculations, you might as well start with examples that everyone can relate to, and everyone eats! Along the way you might find fascinating historical questions to investigate.
The work was influenced and improved by discussions with Diane Dahle-Koch, Larry Moore, John Deming, Carl Ferkinhoff, and Sarah Taber.
## Appendix A Creating the historical kcal/acre figure from USDA data
The United States Department of Agriculture (USDA) provides historical crop information via the National Agricultural Statistics Service [38]. Data was downloaded in spreadsheet csv format and then combined and plotted via a Python Jupyter notebook.
Each crop has its own bespoke units, for example potatoes are sold by hundredweight (CWT) but sugar beets are measured by the ton. Every imaginable agricultural product seems to be tracked in the NASS site, for example Maple Syrup production is tracked and given in gallons of syrup per tap! Conversion factors used are summarized in Table 1. Calorie (kcal) density for each crop was taken from the USDA's Food Data Central [39]. Within this database, foods are identified by an FDC ID.
An example calculation (implemented in the Jupyter notebook) follows for Corn. In 2022 the USDA reported an average production of 172.3 bushels of corn per acre of farmland.
\[172.3\frac{bu}{acre}\cdot\frac{56lbs\ corn}{bu}\cdot\frac{453.6\ grams}{lbs} \cdot\frac{365\ kcal}{100\ grams}=15,974,657\frac{kcal}{acre} \tag{1}\]
Obviously the result is only reasonable to two significant figures!
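A re-implementation of the conversion (A1) is sketched below. This snippet is illustrative and is not the notebook itself; the potato density (about 77 kcal per 100 grams of raw potato) is added here for illustration. Note that using \(453.592\ grams/lbs\) instead of the rounded \(453.6\) printed in the formula reproduces the quoted figure exactly.

```python
# kcal per acre from a USDA yield in harvest units, following (A1).
GRAMS_PER_LB = 453.592
KCAL_PER_GRAM = {"corn": 3.65, "potatoes": 0.77}   # USDA FDC kcal densities
LBS_PER_UNIT = {"corn": 56.0, "potatoes": 100.0}   # lbs per bushel / per CWT

def kcal_per_acre(crop, yield_harvest_units):
    return (yield_harvest_units * LBS_PER_UNIT[crop]
            * GRAMS_PER_LB * KCAL_PER_GRAM[crop])

print(f"{kcal_per_acre('corn', 172.3):,.0f}")   # ~15,974,657 kcal/acre
```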
Raw data from the USDA NASS is plotted in figure 1. The scaling described in equation 1 produces figure 4 earlier in the paper.
Figure A1: Average USDA per acre yields for a number of commodity crops over time. This “raw” data (in bespoke harvest units) was scaled to produce the data in figure 4 earlier in the paper.
## Appendix B Estimating land area devoted to chinampas with ImageJ
ImageJ is a free software program developed by the National Institutes of Health for photo analysis, [30]. I used the program to measure a calibration scale in a map and I also used the program to measure the area of two polygons that I drew on the map. The length and both areas are shown in figure 1.
Specifically, to find the area of the two large chinampas areas near Tenochtitlan, I took a screenshot from the 1964 paper, [24], and saved it in jpg format. Then, I opened the image in the Windows-Java edition of ImageJ [30]. The length of the 10 mile distance scale was 213 pixels. The long chinampas area at the south end of the lake was measured with a Polygon selection via the Measure tool to have an area of \(9940\ pixel^{2}\approx 21.9miles^{2}\). The smaller region near Chalco had an area of about \(1439\ pixel^{2}\approx 3.2miles^{2}\). While there were certainly other regions devoted to chinampas agriculture, the portion visible near the Aztec capital seems to be about \(25.1miles^{2}\) or \(16,000acres\).
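The pixel-to-area arithmetic is simple enough to script (an added sketch of the conversions just described):

```python
# 213 px spans 10 miles, so one px^2 is (10/213)^2 mi^2; 640 acres per mi^2.
MILES_PER_PX = 10.0 / 213.0
areas_px2 = {"southern chinampas": 9940.0, "Chalco": 1439.0}
for name, px2 in areas_px2.items():
    print(name, round(px2 * MILES_PER_PX ** 2, 1), "mi^2")
total_mi2 = sum(px2 * MILES_PER_PX ** 2 for px2 in areas_px2.values())
print(f"total: {total_mi2:.1f} mi^2 = {total_mi2 * 640:,.0f} acres")
```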
Figure 1: Three screen captures showing chinampa areas and the calibration stick used to convert pixel-squared area into \(miles^{2}\). The image being analyzed is available in [24].

# Orientation of Finite Reynolds Number Anisotropic Particles Settling in Turbulence
###### Abstract
We present experimental and computational results for the orientation distributions of slender fibers and ramified particles settling in an isotropic turbulent flow. The rotational dynamics of the particles is modeled using a slender-body theory that includes the inertial torque due to sedimentation that tends to rotate the particles toward a broadside orientation. The particles are assumed to rotate due to viscous forces associated with the turbulent velocity gradients occurring on the particle length scale. In the simulations, the turbulence is obtained from a stochastic model of the velocity gradient in a Lagrangian reference frame. In the experiments, the turbulence is generated by active jets in a vertical water tunnel. It is well known that axisymmetric particles rotate according to Jeffery's solution for the rotation of a spheroidal particle if one adopts an appropriate effective aspect ratio. We show that the same result applies to a ramified particle consisting of three coplanar fibers connected with equal angles at a central point which rotates like a thin oblate spheroid. The orientation statistics can be quantified with a single non-dimensional parameter, the settling factor \(S_{F}\), defined as the ratio of rotations due to sedimentation and turbulent shear. For low values of \(S_{F}\), we observe nearly isotropically oriented particles, whereas particles become strongly aligned near the horizontal plane for high values of \(S_{F}\). The variance of the angle away from horizontal scales as \(S_{F}^{-2}\) for \(S_{F}\gg 1\), but the orientation distribution is non-Gaussian due to turbulent intermittency in this limit.
## 1 Introduction
Sedimentation of non-spherical particles in turbulent flows occurs in many natural situations and has important consequences for a wide range of engineering applications. Mixed-phase cloud systems, such as cirrus clouds, consist of sedimenting ice crystals whose orientation distributions critically affect global climate models. Recently lidar polarization measurements have focused on distinguishing droplet laden clouds from icy clouds, especially clouds that are dominated by horizontally oriented crystals [50; 68]. A vertically pointed Doppler lidar observes 'mirror-like' specular reflections from horizontally aligned ice crystals and thus special care needs to be taken to distinguish them from water clouds as both produce low values of the depolarization ratio [26]. Particle shape also plays an important role in pneumatic conveying and fluidized bed risers and current models do not account for the behavior of highly non-spherical particles such as mica flakes [24]. In this paper we probe the competition between inertial torques due to sedimentation that align particles and randomizing torques due to turbulent shear in determining the orientation distribution and translational motion of high aspect ratio particles settling in homogeneous, isotropic turbulence. Experimental observations of ramified particles settling in a vertical turbulent water column are complemented by theory and stochastic simulations based on a slender-body description of the particles.
Since the orientations of small particles sedimenting in turbulence are only affected by the nearly universal inertial and dissipation range of the flow and not by the large scales, the dynamics of these particles is nearly the same in many different turbulent environments. In the simplified case of spheroids sedimenting in homogeneous, isotropic turbulence, there are five non-dimensional parameters necessary to specify the problem. The turbulence can be characterized by its Taylor Reynolds number, \(\mbox{\it Re}_{\lambda}=\sqrt{15}\left(\mathcal{L}/\eta\right)^{4/3}\) where \(\mathcal{L}\) is the integral scale, \(\eta=\left(\nu^{3}/\epsilon\right)^{1/4}\) is the Kolmogorov length, \(\nu\) is the kinematic viscosity, and \(\epsilon\) is the energy dissipation rate per unit mass. A spheroid is an axisymmetric ellipsoid that can be characterized by three non-dimensional parameters: its maximum dimension compared
with the Kolmogorov length, \(L/\eta\), an aspect ratio \(\kappa\) of the symmetry axis length to a perpendicular length, and the relative density of the particle to the fluid, \(\rho_{p}/\rho_{f}\). The fifth parameter characterizes the importance of gravity. This can be quantified by the ratio of terminal particle velocity \(W\) to the Kolmogorov velocity \(u_{\eta}=(\nu\epsilon)^{1/4}\), the so-called settling parameter \(\mathit{Sv}=W/u_{\eta}\) [4; 60; 27]. We will see that the alignment of non-spherical particles is determined by a settling factor \(S_{F}\) defined as the ratio of the rate of change of particle orientation due to sedimentation induced inertia to that caused by turbulent velocity gradients. For small fibers in the slender body limit, \(S_{F}\) is proportional to \(\mathit{Sv}^{2}\) and has a weak logarithmic dependence on the particle aspect ratio.
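For concreteness, the sketch below evaluates the Kolmogorov scales and the settling parameter from their definitions; the input values of \(\nu\), \(\epsilon\) and \(W\) are placeholders chosen for illustration, not measurements from this study.

```python
import numpy as np

nu = 1.0e-6    # kinematic viscosity of water, m^2/s (placeholder)
eps = 1.0e-5   # turbulent energy dissipation rate, m^2/s^3 (placeholder)
W = 0.01       # particle terminal settling velocity, m/s (placeholder)

eta = (nu**3 / eps) ** 0.25     # Kolmogorov length scale
tau_eta = np.sqrt(nu / eps)     # Kolmogorov time scale
u_eta = (nu * eps) ** 0.25      # Kolmogorov velocity scale
Sv = W / u_eta                  # settling parameter

print(f"eta = {eta*1e3:.2f} mm, tau_eta = {tau_eta:.2f} s, "
      f"u_eta = {u_eta*1e3:.2f} mm/s, Sv = {Sv:.1f}")
```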
The most accessible case of small and neutrally buoyant, non-spherical particles in turbulence has been studied extensively in simulations [57; 69; 54] and experiments [52; 43; 37; 23] and a review is given by Voth and Soldati [65]. The particle dynamics depend only on particle shape and possibly Reynolds number, and preferential alignment with the local velocity gradients results in reduced tumbling rates compared to randomly oriented particles. This is also true for particles with lengths in the inertial range, however, the alignment and tumbling rates of large particles depend on the coarse grained velocity gradient at the scale of the particle [57; 53]. This observation from experiments and direct-numerical simulations motivates our assumption that the rotation of thin particles caused by turbulent shearing motions can be modeled using Jeffery's solution [29] for particle rotation provided that a velocity gradient on the scale of the particle is used.
The larger the particle size, the more pronounced the effects of inertia when particles and fluid are not density matched. This can alter the particle dynamics drastically, even when gravity is neglected. Direct numerical simulations of small, heavy ellipsoidal particles [47; 74; 42; 8] in turbulent channel flows have shown that the preferential orientation changes non-trivially with increasing particle inertia, especially in the near-wall regions, which can have important consequences for particle deposition. Sabban et al. [56] measured the translational and rotational motion of fibers in homogeneous, isotropic turbulent air under conditions of small particle Reynolds number and finite particle Stokes number. Most of these studies have focused on small particles and ignored external forces.
In most physical situations, gravity cannot be ignored and heavy particles will sediment. This again leads to very different particle dynamics and alignment, depending on the environment, whether it is quiescent or turbulent.
Simulations [22; 72; 63] and experiments [5; 48] in quiescent fluids have revealed great insight into the drag force and inertial torques on sedimenting, non-spherical particles. There has been significant interest in studying settling dynamics of ice crystals and phytoplankton, two canonical examples of non-spherical particles. Experimental measurements of fall velocities and trajectories for rimed [75] and unrimed [30] ice crystals have found complex falling trajectories depending on the Reynolds number and moment of inertia. Ardekani et al. [2] analyzed the clustering and preferential sampling of sedimenting phytoplankton, modelled as inertia-less prolate spheroids, in homogeneous isotropic turbulence. The majority of studies on non-spherical particles in turbulence in the fluid dynamics community have ignored the role of the inertial torque [38; 61; 8; 73; 20]. The inertial torque determines the horizontal alignment of ice crystals settling in a turbulent background flow, a fact that has been acknowledged in the atmospheric science literature [9; 34; 49; 25; 51]. Inertial torques cause slender bodies to sediment with a preferential orientation, where the long axis is perpendicular to gravity. This is a stable orientation at low and intermediate particle Reynolds numbers, but will eventually become unstable at large Reynolds numbers and the particle motion becomes complex (Ern et al. [16]).
The complexity of the problem becomes clear when, in addition to the underlying turbulence, the particle orientation has to be considered. Therefore, it is no surprise that many studies focused on neutrally buoyant particles and neglected the effects of particle inertia. In many situations, however, particles are not neutrally buoyant, there exists a density difference between them and the fluid, and this can drastically change the dynamics and interactions of these particles. For small particles, inertia can still be neglected, but it becomes increasingly important with increasing particle size. Compared to spherical particles, where inertia only affects transport and causes enhanced sedimentation, the so-called sweeping effect ([66]), non-spherical particles can show very different orientation distributions and inertia can alter their preferential alignment, as the channel-flow simulations cited above demonstrate.
In addition to turbulence and particle inertia, external forces such as gravity can have a pronounced effect on particle motion. Particles with a larger
density than the fluid will sediment under the influence of gravity. DNS of the flow field around fibers and disk-like particles ([22]) have revealed great insight into the vast parameter space, but due to the inherent complexity of the problem, many studies have been forced to ignore turbulence and focus on the slightly easier task of understanding sedimentation in quiescent fluid first. Experimental observations of free falling, non-spherical particles have shown that they do not fall straight and do not have random orientation distributions, but the motion depends on the particle Reynolds number and shape [70], [75], [18] and [30]. The torque induced on thin cylinders or prolate spheroids in this Reynolds number regime causes the body to rotate into a stable position with its symmetry axis aligned horizontally. This effect has been studied theoretically by [10; 39; 31], and experimentally by [28; 5]. Only during the last decades has it been possible to study the influence of turbulence on sedimenting non-spherical particles ([49; 48; 38]). This is of special interest to the atmospheric research community, where prolate and oblate ellipsoids are used as archetypes for column and plate like ice crystals in clouds ([61]). The in-cloud turbulence is often not able to destroy the strong alignment of these particles ([9]), which is in agreement with the orientation model of [34]. As a result, their orientation statistics and sedimentation velocities can have important consequences for remote sensing applications like polarization LIDAR, which is a key component of climate-research programs to characterize the properties of mixed-phase cloud systems, such as cirrus clouds ([50],[68],[26]). On a side note, the strong alignment of ice-crystals in the atmosphere can also be observed by eye since it causes optical phenomena by scattering light, the origin of the Perry Arc ([67]).
The present study analyses the orientation dynamics of anisotropic particles settling in a turbulent flow using a combination of analytical, numerical and experimental approaches. Recently Kramel [36], Menon et al. [45], Menon [46] have proposed a "rapid-settling theory" for studying orientation dynamics, a regime wherein the decorrelation time for a Kolmogorov eddy is much larger than the time a particle takes to settle through a Kolmogorov eddy. Gustavsson et al. [21] and Anand et al. [1] have also explored the orientation dynamics of spheroids in the rapid-settling regime using numerical simulations, where they confirm the transition from random to horizontally aligned orientation distributions with increasing settling speeds. Gustavsson et al. [21] modelled the turbulent background flow as a sum of Fourier modes, the kinetic simulation model. They obtained a normal probability distribution function (PDF) for the component of the orientation
vector along gravity with a variance that scaled as \(S_{v}^{-4}\), in agreement with our earlier studies [36; 45; 46]. Anand et al. [1] analyzed the sedimentation of spheroids in an ambient homogeneous isotropic turbulent field, where the background flow was obtained using direct numerical simulations (DNS). Their calculation of variance agrees with the previous investigations. Using analyses of the higher moments, they showed that the orientation PDFs are non-Gaussian due to the non-Gaussian nature of the turbulent velocity gradient stemming from the dissipation range intermittency.
In this paper, we propose a new way of investigating non-spherical, inertial particles which enables us to observe the transition from strongly aligned particles (\(S_{F}\gg 1\)) to almost randomly oriented particles (\(S_{F}\ll 1\)). Instead of ellipsoidal shaped particles, whose full solid-body rotation is difficult to measure, we introduce ramified particles. A ramified particle consists of any number of individual but connected fibers and can be used to model a wide variety of shapes by adjusting the number and length of the fibers. A triad, three coplanar fibers connected with equal angles at their ends, is a crude approximation of a disk-like particle, whereas a jack, three orthogonal fibers connected at their center, is an approximation for a spherical particle. Adjusting the length of each fiber gives us control over the effective aspect ratio of the ramified particle and in the limit of many fibers, the ramified particle will approach its ellipsoidal counterpart. The advantage of using ramified particles over ellipsoidal shaped particles is that we can measure the orientation very precisely. Moreover, ignoring the interactions between individual fibers of a ramified particle and using the well developed slender body theory for single fiber motion is a good starting point for more accurate models.
Section 2 of this paper is a presentation of the theory and stochastic simulations describing the orientation of thin settling particles in isotropic turbulence. Starting with a treatment of fibers that are smaller than the Kolmogorov length scale and settle with small but non-zero Reynolds number, we progress to consider disk-like particles and triads formed by connecting three fibers and finally describe modifications of the theory to account for larger particle sizes and larger particle Reynolds numbers. Section 3 describes the complementary experimental investigation. First, we discuss the experimental methods including the turbulent channel apparatus and characterization of the turbulence as well as the synthesis and characterization of the particles. We then present results for the orientational behavior of particles settling in quiescent fluids and in turbulent flows and compare the latter results with the theoretical predictions. Section 4 is a conclusion and
summary of the study.
## 2 Theory
In this section we present theoretical models for the orientation of sedimenting fibers and triads in isotropic turbulence. In section 2.1, we first consider fibers smaller than the Kolmogorov scale that settle with small but finite \(\mbox{{Re}}_{\ell}\). We then outline a method to extend this theory to larger particle lengths by defining an empirical settling factor using input from the settling of large particles in a quiescent fluid. The results in section 2.1 are based on stochastic simulations of the fiber dynamics in a Lagrangian reference frame. However, fully analytical results are obtained for fibers settling rapidly through the Kolmogorov scale eddies corresponding to the limit \(S_{F}\gg 1\) in section 2.2. In section 2.3 we study the rotational dynamics of rapidly settling disks motivated by the similarity in symmetry of triads and disks. Building upon our understanding of fibers and disks, a model for the dynamics of small triads is presented in section 2.4. In section 2.5 we outline approximate modifications of the equations for the translational and rotational motion of fibers and triads for larger particle Reynolds numbers.
### Sedimentation of Small Fibers
In this subsection we present equations governing the orientation and settling velocity of small, slender fibers. It is assumed that the fiber Reynolds number \(\mbox{{Re}}_{\ell}=W_{min}l/\nu\) is small so that we include inertial effects only when
Figure 1: Sedimenting Triad. The particle is leaving a trail of dye behind it.
they break the degeneracy of Stokes flow behavior. Here, \(W_{min}\) is the settling velocity of the particle in a broadside orientation and \(l=L/2\) is the half-length of the particle. The fiber length is much less than the Kolmogorov length scale \(L\ll\eta\), so that the turbulent velocity field can be approximated as a local linear flow field. In the absence of particle inertia, fibers experience no net force or torque. In a quiescent fluid, their orientations, described by a unit vector \(\mathbf{p}\), are independent of time and determined by the initial conditions. Fluid inertia will break this degeneracy and so we include the first effect of fluid inertia on the hydrodynamic torque experienced by a settling fiber even though \(\mbox{{Re}}_{\ell}\) is small.
Since a settling particle of length \(L=2\ell\) disturbs a fluid volume of \(\mbox{{O}}(L^{3})\), the particle mass is small compared with the fluid mass disturbed, if
\[\frac{\rho_{p}}{\rho_{f}}\ll\left(\frac{L}{D}\right)^{2}=\kappa^{2} \tag{1}\]
From this relation we can see that for high aspect ratio particles, whenever the particle and fluid densities are comparable, the particle inertia is negligible compared to that of the fluid as is the case in the experimental study. We will use this observation to justify the neglect of particle inertia so that particles experience no net force or torque.
Any external force is balanced by drag and lift forces. Batchelor [3] derived analytical expressions for the drag and lift force valid for \(\mbox{{Re}}_{\ell}\ll 1\), \(\mbox{{Re}}_{D}\ll 1\) and \(\kappa\gg 1\). The balance of forces expressed to leading order in aspect ratio at low \(\mbox{{Re}}_{\ell}\) is given by
\[-\frac{4\pi\mu L}{\ln(2\kappa)}\left(\mathbb{1}-\frac{1}{2}\mathbf{p}\mathbf{p}\right) \cdot\mathbf{W}+m\mathbf{g}=0, \tag{2}\]
where \(\mathbb{1}\) is the identity matrix, \(m=(\rho_{p}-\rho_{f})\pi LD^{2}/4\) is the mass difference between a cylindrical fiber and the displaced fluid, \(\mu\) is the dynamic fluid viscosity and \(\mathbf{g}\) is the gravitational acceleration. A fiber will therefore translate with a quasi-steady state velocity \(\mathbf{W}\) relative to the local fluid velocity. Equation 2 yields a well-known result for the transverse and longitudinal settling velocities of a fiber
\[W^{f}_{max}=2W^{f}_{min} \tag{3}\]
where \(W^{f}_{max}=|\mathbf{W}|_{\theta=0}\) and \(W^{f}_{min}=|\mathbf{W}|_{\theta=\pi/2}\), respectively. Here, \(\theta\) is the angle between \(\mathbf{p}\) and \(\mathbf{g}\).
While one can neglect inertial effects on the settling velocity of small fibers, fluid inertia will break the degeneracy of particle orientation when a particle settles in a quiescent fluid. With the inclusion of fluid inertia, fibers experience inertial torques \(\mathbf{G}_{sed}\) that rotate the particle to an equilibrium orientation where \(\mathbf{p}\) is perpendicular to \(\mathbf{W}\). Khayat and Cox [31] derived expressions for the torque experienced by a translating fiber \(\mathbf{G}_{sed}\) (see their Eq. 6.12), which becomes in the low Reynolds number limit (\(\mbox{\it Re}_{\ell}\ll 1\)) and to leading order in small aspect ratio
\[\mathbf{G}_{sed}=\frac{5\pi\rho_{f}L^{3}}{24(\ln 2\kappa)^{2}}(\mathbf{W}\cdot\mathbf{p})( \mathbf{W}\times\mathbf{p}) \tag{4}\]
The particle also experiences a rotational resistance \(\mathbf{G}_{rel}\) to its relative rotation [3]:
\[\mathbf{G}_{rel}=-\frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbb{1}-\mathbf{p}\mathbf{p}). \mathbf{\Omega}_{rel} \tag{5}\]
Here, \(\mathbf{\Omega}_{rel}\) is the rotation of the particle relative to the local fluid rotation. In addition, fibers experience torques due to the fluid strain rate \(\mathbf{S}=\frac{1}{2}(\mathbf{\Gamma}+\mathbf{\Gamma}^{T})\):
\[\mathbf{G}_{strain}=\frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbf{p}\times(\mathbf{S}\cdot \mathbf{p})) \tag{6}\]
Here, \(\Gamma_{ij}=\partial u_{i}/\partial x_{j}\) is the turbulent velocity gradient. For a symmetric fiber sedimenting in turbulence, in the absence of particle inertia, a torque balance reads:
\[\underbrace{\frac{5\pi\rho_{f}L^{3}}{24(\ln 2\kappa)^{2}}(\mathbf{W}\cdot\mathbf{p})( \mathbf{W}\times\mathbf{p})}_{\mbox{\scriptsize inertial sedimentation}}-\underbrace{ \frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbb{1}-\mathbf{p}\mathbf{p}).\mathbf{\Omega}_{ rel}}_{\mbox{\scriptsize relative rotation}}+\underbrace{\frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbf{p}\times(\mathbf{S}\cdot\mathbf{p}))}_{ \mbox{\scriptsize turbulent strain}}=0. \tag{7}\]
The torque balance yields the following equation for the time rate of change \(\mathbf{\dot{p}}\) of the fiber orientation
\[\mathbf{\dot{p}}=\mathbf{\Gamma}.\mathbf{p}-\mathbf{p}\left(\mathbf{p}.\mathbf{S}.\mathbf{p}\right)+ \frac{5}{8\nu\ln\left(2\kappa\right)}(\mathbf{W}\cdot\mathbf{p})\ \mathbf{W}\cdot(\mathbf{p}\mathbf{p}-\mathbb{1}) \tag{8}\]
where the first two terms correspond to Jeffery rotation in the local linear flow field [29] and the last term is the rotation due to the inertial torque
caused by the particle's sedimentation. Without a background flow, the inertial torque acts to orient a sedimenting spheroidal particle broadside-on to gravity. However, in the presence of an additional torque, such as a gravitational torque for an axisymmetric particle with mass asymmetry, the inertial torque can compete to create an oblique settling orientation [55]. In the current study, the torque due to turbulent shear competes with the inertial torque.
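A literal transcription of Eq. 8 is short; the following Python sketch (an illustration of the torque balance, not the production code behind the simulations reported below) evaluates \(\dot{\mathbf{p}}\) for a given orientation, slip velocity and velocity gradient. Note that Eq. 8 preserves \(|\mathbf{p}|=1\) analytically, so a time integrator only needs to renormalize \(\mathbf{p}\) to control discretization error. The numerical values in the usage example are illustrative placeholders.

```python
import numpy as np

def p_dot(p, W, Gamma, nu, kappa):
    """Rate of change of fiber orientation, a transcription of Eq. 8.

    p     : unit orientation vector, shape (3,)
    W     : settling velocity relative to the local fluid, shape (3,)
    Gamma : velocity gradient tensor Gamma[i, j] = du_i/dx_j, shape (3, 3)
    """
    S = 0.5 * (Gamma + Gamma.T)                     # strain rate tensor
    jeffery = Gamma @ p - p * (p @ S @ p)           # slender-fiber Jeffery rotation
    c_in = 5.0 / (8.0 * nu * np.log(2.0 * kappa))   # inertial torque coefficient
    inertial = c_in * (W @ p) * (np.outer(p, p) - np.eye(3)) @ W
    return jeffery + inertial

# Example: quiescent fluid, fiber at 45 degrees, settling at 1 cm/s downward.
p0 = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])
print(p_dot(p0, W=np.array([0.0, 0.0, -0.01]),
            Gamma=np.zeros((3, 3)), nu=1e-6, kappa=20.0))
```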
A time scale \(\tau_{sed}\) of rotation due to sedimentation of a fiber at small \(\mbox{{Re}}_{\ell}\) may be defined as the inverse rotation rate of the particle in quiescent fluid at \(\theta=45^{\circ}\), where \(\theta\) is the angle between \(\boldsymbol{p}\) and \(\boldsymbol{g}\). When we generalize this definition to disk and triad particles, \(\boldsymbol{p}\) will be the normal vector to the plane of the particle. Upon solving Eq. 2 and evaluating the component of the tumbling rate due to the inertial torque (from Eq. 8) at \(\theta=45^{\circ}\), we get,
\[\tau_{sed}=\frac{8\nu\ln{(2\kappa)}}{5W_{min}^{2}} \tag{9}\]
where we have used Eq. 2 to find \(W_{min}\), the minimum settling velocity of a fiber which is achieved at \(\theta=90^{\circ}\). The typical timescale of response of small fibers due to turbulence \(\tau_{turb}\) is the Kolmogorov timescale,
\[\tau_{\eta}=\frac{\eta}{u_{\eta}}=\sqrt{\frac{\nu}{\epsilon}} \tag{10}\]
The orientation distribution of fibers may now be understood by comparing the two time scales \(\tau_{\eta}\) and \(\tau_{sed}\). We define the settling factor as the ratio of these two time scales to be,
\[S_{F}=\frac{\tau_{\eta}}{\tau_{sed}} \tag{11}\]
When \(S_{F}\gg 1\), the rotation due to turbulence is weak compared to that due to the inertial torque and leads to a small deviation from the horizontal orientation, making \(\boldsymbol{W}\) align parallel to gravity and perpendicular to \(\boldsymbol{p}\). On the other hand at \(S_{F}\ll 1\), turbulence is relatively stronger leading to an isotropic orientation distribution. For small fibers, (\(L\ll\eta\)), and \(\mbox{{Re}}_{\ell}\ll 1\) the expression for \(S_{F}^{f}\) reduces to,
\[S_{F}^{f}=\frac{5W_{min}^{2}\tau_{\eta}}{8\nu\ln{(2\kappa)}}=\frac{5}{8\ln{(2 \kappa)}}\left(\frac{W_{min}}{u_{\eta}}\right)^{2} \tag{12}\]
The superscript \(f\) in Eq. 12 indicates the settling factor for small fibers. We will continue to use the general definition Eq. 11 to define settling factors for small triads and disks in the subsequent developments. In the cloud microphysics literature, the parameter Sv denotes the ratio of the Kolmogorov eddy turnover time to the time a particle takes to settle across an eddy, which can also be written as \(S_{v}=W/u_{\eta}\)[12]. Thus, we can see that the settling factor at small Reynolds number is proportional to the square of this non-dimensional settling velocity, i.e., \(S_{F}^{f}\propto S_{v}^{2}\).
To determine the fiber orientation, we must solve Eq. 2 and Eq. 8 in a reference frame translating with the particle. For slowly settling fibers \(S_{F}\ll 1\), the fiber follows a Lagrangian path. For rapidly settling fibers \(S_{F}\gg 1\), it will be shown in the next subsection that the fiber orientation arises from a quasi-steady balance of turbulent shear and inertial rotation so that the particle path does not change the orientation. One might then expect to obtain a reasonable estimate of the fiber orientation by simulating fibers experiencing the turbulence on a Lagrangian path for all \(S_{F}\). For this purpose we employ a stochastic model. Meneveau [44] reviews stochastic models to describe the fluid velocity gradient along a Lagrangian path. We have employed the model developed by Girimaji and Pope [19] that captures the log-normal distribution of the pseudo-dissipation, the time scale for relaxation of the strain rate tensor on a Lagrangian path, and the tendency of the nonlinear inertial terms to align the vorticity with the strain axes. Shin and Koch [57] showed that this model predicts the rotational velocity variance of neutrally buoyant particles computed in direct-numerical simulations (DNS) with much greater accuracy than a simple Gaussian velocity gradient model (Brunk et al. [7]). Girimaji and Pope obtained favorable comparisons of many of the tensor invariants of turbulence with the DNS of Yeung and Pope [71]. The inputs to the model which include the correlation times for the pseudo-dissipation and the components of the strain rate and the variance of the logarithm of the pseudo-dissipation can be obtained from Yeung and Pope for several values of \(Re_{\lambda}\), and we use \(Re_{\lambda}=38\) and \(93\) for our simulations. Recently, in another problem of particles in a turbulent flow, we have used the Lagrangian velocity gradient model of Girimaji and Pope [19] to explore the role of non-Gaussian statistics in collisions of hydrodynamically interacting particles settling in a turbulent flow [13].
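While the Girimaji and Pope [19] model itself is elaborate, the structure of such a simulation is simple. The sketch below illustrates the time stepping with a crude Ornstein-Uhlenbeck surrogate for the Lagrangian velocity gradient; this Gaussian surrogate reproduces neither the log-normal pseudo-dissipation nor the vorticity-strain alignment of the actual model, so it is meant only to show the procedure. The orientation equation is Eq. 8 in Kolmogorov units, with the inertial term written for a fixed vertical slip velocity so that its rotation rate is set directly by the settling factor.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 200_000   # time step and duration in units of tau_eta
S_F = 5.0                     # settling factor; inertial rotation rate ~ S_F

e3 = np.array([0.0, 0.0, 1.0])                  # settling (gravity) direction
p = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])  # initial orientation, 45 degrees
Gamma = np.zeros((3, 3))
cos2 = []
for n in range(n_steps):
    # OU surrogate: each component decorrelates on tau_eta = 1 with O(1) variance.
    Gamma += -Gamma * dt + np.sqrt(2.0 * dt) * rng.standard_normal((3, 3))
    Gamma_tf = Gamma - np.trace(Gamma) / 3.0 * np.eye(3)  # enforce zero divergence
    S = 0.5 * (Gamma_tf + Gamma_tf.T)
    pdot = (Gamma_tf @ p - p * (p @ S @ p)
            - S_F * p[2] * (np.eye(3) - np.outer(p, p)) @ e3)
    p = p + pdot * dt
    p /= np.linalg.norm(p)          # Eq. 8 conserves |p|; fix round-off error
    if n > n_steps // 10:           # discard the initial transient
        cos2.append(p[2] ** 2)

print(f"<cos^2 theta> = {np.mean(cos2):.4f}")   # small when S_F >> 1
```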
In Figure 2, the orientational variance for a fiber settling in a turbulent flow with \(Re_{\lambda}=38\) is shown as a function of the settling factor. It is seen that \(\langle\cos^{2}(\theta)\rangle\), where \(\theta\) is the angle between \(\mathbf{p}\) and \(\mathbf{g}\), shows a smooth
transition from an isotropic orientation distribution \(\langle\cos^{2}(\theta)\rangle=\frac{1}{3}\) to nearly aligned distribution in which orientational dispersion about the horizontal plane (\(\cos\theta=0\)) decays as a power law in the settling factor. As will be derived analytically in section 2.2, the decay in the orientational variance about the equilibrium orientation \(\theta=90^{\circ}\) follows,
\[\langle\cos^{2}\theta\rangle=0.022\ S_{F}^{f\,-2} \tag{13}\]
The scaling of the orientational variance with \(S_{F}^{f}\) is in agreement with the findings of Gustavsson et al. [21] (\(\langle\cos^{2}\theta\rangle\propto\)Sv\({}^{-4}\)) and Anand et al. [1]. Anand et al. [1] evaluate the variance in terms of a Froude number, Fr\({}_{\eta}\), that is identical in definition to that of Sv.
The theory can be extended in an approximate way to larger particles by replacing the time scale of rotation due to sedimentation \(\tau_{sed}\) with \(T_{sed}\) extracted from experimental observations of a particle settling in a quiescent fluid
\[T_{sed}=\frac{1}{\left|\dot{\mathbf{p}}\right|_{\theta=45^{\circ}}} \tag{14}\]
Moreover, the rotations of particles larger than the Kolmogorov length scale are dominated by turbulent eddies close to their size. Therefore, the appropriate turbulent time scale for rotations is the eddy turn over time at the scale of the particle (Parsa and Voth [53])
\[\tau_{L}=\frac{L}{u_{L}}=\sqrt{\frac{4}{15}}\frac{L}{u_{L}^{T}}, \tag{15}\]
where \(u_{L}^{T}=\sqrt{\langle(\Delta\mathbf{u}\cdot(\mathbb{1}-\widehat{\mathbf{r}}\widehat{\mathbf{r}}))^{2}\rangle}\) is the root-mean-square of the transverse components of the fluid velocity difference \(\Delta\mathbf{u}=\mathbf{u}(\mathbf{x}+L\widehat{\mathbf{r}})-\mathbf{u}(\mathbf{x})\) at the scale of the particle and the factor of \(\sqrt{4/15}\) is chosen such that \(\tau_{L}\rightarrow\tau_{\eta}\) in the limit \(L\ll\eta\). The ratio of these two time scales allows us to define a settling factor
\[S_{F}=\frac{\tau_{L}}{T_{sed}} \tag{16}\]
that will be used to characterize the relative rates of sedimentation-induced and turbulence-induced rotation in the experiments.
### Rapid Settling of Small Fibers
In this subsection, we will derive an analytical prediction for the variance of the orientation in the rapid settling limit, \(S_{F}=\tau_{\eta}/\tau_{sed}\gg 1\). This indicates that the relaxation of fiber orientation toward its equilibrium horizontal
Figure 2: Orientation variance of small fibers as a function of the settling factor, \(S_{F}\). The squares correspond to simulations and the lines are asymptotes derived in the low and high \(S_{F}\) limits. The solid line at high \(S_{F}\) is the rapid settling limit in a particle reference frame for which the particle orientation and velocity gradient are uncorrelated. The dashed line is the asymptote for a Lagrangian frame which captures the correlation of transverse particle orientation with velocity gradient observed in the simulation.
alignment occurs much more rapidly than the Kolmogorov time scale. From Eq. 12, it can be seen that this limit corresponds to one in which the time \(\tau_{samp}=\eta/W\) for a fiber to sample a Kolmogorov scale eddy is also much smaller than \(\tau_{\eta}\). In particular, \(\tau_{\eta}/\tau_{samp}=W/u_{\eta}={S_{F}^{f}}^{1/2}[\ln(2\kappa)]^{1/2}\gg 1\). From these results it can be seen that the relaxation of fiber orientation is much faster than the sampling time
\[\frac{\tau_{sed}}{\tau_{samp}}=\frac{[\ln(2\kappa)]^{1/2}}{{S_{F}^{f}}^{1/2}}\ll 1 \tag{17}\]
Thus, despite the rapid translational motion of the particle through the eddies, the fiber responds to changes in the local shear rate and achieves a new orientation sufficiently rapidly so that one may obtain the orientation from a quasi-steady balance of the rotation due to sedimentation and turbulent shear.
We will determine the rotation rate of fibers whose orientations exhibit small deviations from the horizontal plane, so that \(\langle{p_{3}}^{2}\rangle\ll 1\) where the 3 axis is parallel to gravity. We begin with an alternate mobility form of Eq. 2 written using Einstein notation as
\[W_{i}=\frac{\ln 2\kappa}{8\pi\mu l}\left(\delta_{ij}+p_{i}p_{j}\right)F\delta_{ j3} \tag{18}\]
where \(F=mg\). For \(\langle{p_{3}}^{2}\rangle\ll 1\), it can be seen that
\[W_{i}p_{i}=2W_{3}p_{3}=\frac{\ln 2\kappa}{8\pi\mu l}Fp_{3} \tag{19}\]
Substituting this result into Eq. 8 yields,
\[\dot{p}_{i}=\frac{10Re_{\ell}p_{3}}{8\ell\ln 2\kappa}2W_{3}p_{3}p_{i}-\frac{10 Re_{\ell}p_{3}}{8\ell\ln 2\kappa}W_{i}+\Gamma_{ij}p_{j}-p_{i}S_{jl}p_{j}p_{l} \tag{20}\]
Since the bodies will remain horizontal on average in this limit, as expected from theory and shown in simulation, we have \(p_{3}\ll p_{1,2}\). Thus,
\[\dot{p}_{i}^{t}=\Gamma_{ij}p_{j}^{t}-p_{i}^{t}S_{jl}p_{j}^{t}p_{l}^{t} \tag{21}\]
where \(p_{i}^{t}\) denotes the transverse component (\(i=1,2\)) of the orientation vector, indicating that the fiber orientation samples the plane normal to gravity by turbulent shearing motions. This will lead to an isotropic distribution of
orientation within the 1-2 plane. Since \(\tau_{sed}\ll\tau_{\eta}\), the motion within the 1-2 plane will be slow compared with the equilibration of the 3 component of the fiber orientation with the current turbulent shear flow. In the settling direction (3) there will exist a quasi-static equilibrium because \(\tau_{sed}\ll\tau_{samp}\). Thus,
\[\dot{p}_{3}=\frac{10Re_{\ell}p_{3}}{8\ell\ln 2\kappa}2W_{3}p_{3}p_{3}-\frac{10Re_{\ell}p_{3}}{8\ell\ln 2\kappa}W_{3}+\delta_{i3}\Gamma_{ij}p_{j}^{t}-p_{3}S_{jl}p_{j}^{t}p_{l}^{t}=0 \tag{22}\]
Since \(p_{3}\ll 1\) we can further simplify the above equation by balancing the second and third terms to obtain,
\[p_{3}\sim\frac{8\ell\ln 2\kappa}{10Re_{\ell}W_{3}}\delta_{i3}\Gamma_{ij}p_{j}^{t} \tag{23}\]
Thus, the variance characterizing the "wiggle" out of the horizontal plane is
\[\langle p_{3}^{2}\rangle=\left(\frac{8\ell\ln 2\kappa}{10Re_{\ell}W_{3}}\right)^{2}\delta_{i3}\delta_{m3}\langle p_{j}^{t}p_{n}^{t}\rangle\langle\Gamma_{ij}\Gamma_{mn}\rangle=\left(\frac{8\ell\ln 2\kappa}{10Re_{\ell}W_{3}}\right)^{2}\delta_{i3}\delta_{m3}\langle p_{j}^{t}p_{n}^{t}\rangle\left[\langle S_{ij}S_{mn}\rangle+\langle R_{ij}R_{mn}\rangle\right] \tag{24}\]
where the \(\langle\rangle\) denotes ensemble averages and \(R_{ij}=1/2(\Gamma_{ij}-\Gamma_{ji})\) is the antisymmetric part of the velocity gradient. Cross terms such as \(\langle S_{ij}R_{mn}\rangle\) are zero due to isotropy of the turbulent field. In the above expression, we have assumed that \(\langle p_{j}^{t}p_{n}^{t}\Gamma_{3j}\Gamma_{3n}\rangle\) factorizes into the product of the velocity gradient \(\langle\Gamma_{3j}\Gamma_{3n}\rangle\) and orientation \(\langle p_{j}^{t}p_{n}^{t}\rangle\) moments. This is a valid assumption in the rapid settling limit because \(\tau_{sed}\ll\tau_{\eta}\) and, as a result, \(\mathbf{p}^{t}\) changes on a much larger time scale than the velocity gradient. \(\langle S_{ij}S_{mn}\rangle\) and \(\langle R_{ij}R_{mn}\rangle\) are fourth order isotropic tensors whose form can be deduced using the properties of symmetry and continuity, as shown in Brunk et al. [7] to obtain
\[\langle S_{ij}S_{mn}\rangle=\frac{S^{2}}{10}\left[\delta_{im}\delta_{jn}+ \delta_{in}\delta_{jm}-\frac{2}{3}\delta_{ij}\delta_{mn}\right] \tag{25a}\] \[\langle R_{ij}R_{mn}\rangle=\frac{R^{2}}{6}\left[\delta_{im}\delta_{jn}- \delta_{in}\delta_{jm}\right] \tag{25b}\]
where \(S^{2}=\langle S_{ij}S_{ij}\rangle\) and \(R^{2}=\langle R_{ij}R_{ij}\rangle\). Substituting Eq. 25 in Eq. 24 we
have
\[\langle p_{3}^{2}\rangle =\left(\frac{8\ell\ln 2\kappa}{10Re_{\ell}W_{3}}\right)^{2}\delta_{i3} \delta_{m3}\left[\frac{S^{2}}{10}\left(\delta_{im}+\frac{\langle p_{i}p_{m} \rangle}{3}\right)+\frac{R^{2}}{6}\left(\delta_{im}-\langle p_{i}p_{m}\rangle \right)\right]\] \[=\left(\frac{8\ell\ln 2\kappa}{10Re_{\ell}W_{3}}\right)^{2}\left[ \frac{S^{2}}{10}+\frac{R^{2}}{6}\right]\] \[\langle p_{3}^{2}\rangle=\left(\frac{8\ell\ln 2\kappa}{5Re_{\ell}W_{3 }}\right)^{2}\frac{\Gamma_{\eta}^{2}}{30}=\frac{1}{30}{S_{F}^{f}}^{-2} \tag{26}\]
where we have used the relations \(S^{2}=R^{2}=\Gamma_{\eta}^{2}/2\) for homogeneous isotropic turbulence and \(S_{F}^{f}\) from Eq. 12. Thus, we have for the rapid settling limit the following relation characterizing the departure of orientation from the horizontal plane due to turbulence,
\[\langle\cos^{2}\theta\rangle=\frac{1}{30}{S_{F}^{f}}^{-2}\approx 0.033~{}{S_{F}^ {f}}^{-2} \tag{27}\]
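The coefficient 1/30 is easy to verify by direct Monte Carlo sampling. The sketch below draws isotropic Gaussian strain and rotation tensors with the covariances of Eq. 25 (taking \(S^{2}=R^{2}=\Gamma_{\eta}^{2}/2\) and \(\Gamma_{\eta}\tau_{\eta}=1\)), applies the quasi-static balance Eq. 23 in the form \(p_{3}=(\tau_{sed}/2)\,\Gamma_{3j}p_{j}^{t}\), and recovers \(\langle p_{3}^{2}\rangle S_{F}^{2}\approx 1/30\). A Gaussian velocity gradient is assumed, so the correlated Lagrangian value 0.022 of Eq. 28 is, by construction, not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
S2 = R2 = 0.5        # S^2 = R^2 = Gamma_eta^2 / 2 with Gamma_eta = 1

p3sq = np.empty(N)
for n in range(N):
    # Traceless symmetric Gaussian strain with the covariance of Eq. 25a.
    A = rng.standard_normal((3, 3))
    S0 = 0.5 * (A + A.T) - np.trace(A) / 3.0 * np.eye(3)
    S = np.sqrt(S2 / 5.0) * S0
    # Antisymmetric rotation tensor with the covariance of Eq. 25b.
    w = np.sqrt(2.0 * R2 / 3.0) * rng.standard_normal(3)
    R = 0.5 * np.array([[0.0, -w[2], w[1]],
                        [w[2], 0.0, -w[0]],
                        [-w[1], w[0], 0.0]])
    Gamma = S + R
    # Transverse orientation distributed uniformly in the horizontal plane.
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # Quasi-static balance, Eq. 23, with tau_sed = 1/S_F scaled out.
    p3sq[n] = (0.5 * (Gamma[2, 0] * np.cos(phi) + Gamma[2, 1] * np.sin(phi))) ** 2

print(f"<p3^2> S_F^2 = {p3sq.mean():.4f}  (theory: 1/30 = {1/30:.4f})")
```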
In our simulations we use a Lagrangian model of the velocity gradient instead of a particle frame model of turbulence. In the large \(S_{F}^{f}\) limit of the simulations, while the strong inertial torque tries to maintain a horizontal orientation, the fiber orientation continues to change on the Kolmogorov time scale in the 1-2 plane. This leads to a correlation of the transverse fiber orientation with the velocity gradient. As a result, our simulations are different from Eq. 27 by a factor corresponding to \(\langle p_{i}^{t}p_{j}^{t}\Gamma_{3i}\Gamma_{3j}\rangle/\left(2\Gamma_{\eta}^{2}/15\right)\). In our simulations, this factor is observed to be around 0.66, making the asymptote
\[\langle\cos^{2}\theta\rangle=\frac{0.66}{30}{S_{F}^{f}}^{-2}\approx 0.022{S_{F}^ {f}}^{-2} \tag{28}\]
### Rapid Settling of Disks
In this subsection, we will derive an analytical prediction for the variance of the orientation of rapidly settling disks before studying the rotational dynamics of triads in 2.4. Disks and triads are geometrically similar - triads have 3-fold rotational symmetry while disks have circular symmetry. This hints at possible similarities of the resistance tensors of the two objects. (See Brenner (1983) for a general discussion on the equality of resistance tensors for objects based on symmetry considerations.)
An oblate spheroid of semi-major axis length \(l\) spinning with relative angular velocity \(\mathbf{\Omega}_{rel}\) with respect to the local fluid rotation in a linear flow
experiences a net torque
\[{\bf G}_{rel+strain}=-8\pi\mu l^{3}\left[X^{C}{\bf pp}+Y^{C}(\mathbb{1}-{\bf pp })\right].{\bf\Omega}_{rel}+8\pi\mu l^{3}Y^{H}({\bf p}\times({\bf S}.{\bf p})). \tag{29}\]
Here \(X^{C},Y^{C}\) and \(Y^{H}\) are the scalar resistance functions associated with the dynamics of a spheroidal particle in Stokes flow [32]. For thin disks they take the values
\[X^{C}\sim\frac{4}{3\pi},\ Y^{C}\sim\frac{4}{3\pi},\ Y^{H}\sim- \frac{4}{3\pi} \tag{30}\]
With the inclusion of fluid inertia, disks, like fibers, experience inertial torques that rotate them toward an equilibrium orientation with the large dimensions of the particle perpendicular to the velocity. In the case of disks, this corresponds to \({\bf p}\) aligned with \({\bf W}\). Dabade et al. [11] derived expressions for the torque experienced by a sedimenting spheroidal particle, assuming fluid inertia to be weak. For oblate spheroids, they derive the torque on a thin disk of semi-major axis length \(l\)
\[{\bf G}_{sed}=-\left\{\frac{38}{9}-\frac{17216}{945\pi^{2}} \right\}\frac{l^{3}\rho_{f}}{8}({\bf W}.{\bf p})({\bf W}\times{\bf p}) \tag{31}\]
For a thin disk sedimenting in turbulence, in the absence of particle inertia, a torque balance reads:
\[-\underbrace{\left\{\frac{38}{9}-\frac{17216}{945\pi^{2}} \right\}\frac{l^{3}\rho_{f}}{8}({\bf W}.{\bf p})({\bf W}\times{\bf p})}_{ inertial\ sedimentation}-\underbrace{\frac{32\mu l^{3}}{3}{\bf\Omega}_{rel}}_{ relative\ rotation}-\underbrace{\frac{32\mu l^{3}}{3}({\bf p}\times({\bf S}.{\bf p}))}_{ turbulent\ strain}=0. \tag{32}\]
The zero torque balance yields the following equation for the time rate of change \(\dot{\bf p}\) of the disk orientation
\[\dot{\bf p}={\bf R}.{\bf p}-{\bf S}.{\bf p}+{\bf p}({\bf p}.{ \bf S}.{\bf p})-\frac{c}{\nu}({\bf W}.{\bf p}){\bf W}.({\bf pp}-\mathbb{1}) \tag{33}\]
where \(c=(19/32-269/(105\pi^{2}))/12\approx 0.028\). The first three terms on the right-hand side of Equation 33 are the Jeffery rotation rate [29] for a particle with \(\kappa\ll 1\) and the last term is the rotation due to the settling-induced inertial torque.
We will now determine the rotation rate of rapidly settling disks that remain nearly aligned with the 3 axis with small fluctuations in the horizontal
plane, so that \(\langle p_{3}^{2}\rangle\sim 1\). We begin with the mobility expression giving the sedimentation velocity
\[W_{i}=\frac{3}{32\mu l}(\delta_{ij}-\frac{1}{3}p_{i}p_{j})F\delta_{j3} \tag{34}\]
where \(F=mg\). Equation 34 yields the following well-known result for the transverse and longitudinal settling velocities of a thin disk
\[W_{max}^{d}=1.5W_{min}^{d}=3F/(32\mu l) \tag{35}\]
where \(W_{max}^{d}=|{\bf W}|_{\theta=\pi/2}\) and \(W_{min}^{d}=|{\bf W}|_{\theta=0}\), respectively. For \(\langle p_{3}^{2}\rangle\sim 1\)
\[W_{i}p_{i}=W_{3}p_{3}=\frac{Fp_{3}}{16\mu l} \tag{36}\]
Substituting this result into Eq. 33 yields
\[\dot{p}_{i}=R_{ij}p_{j}-S_{ij}p_{j}+p_{i}p_{k}S_{kl}p_{l}+\frac{3cW_{3}^{2}}{2 \nu}(\delta_{ij}-p_{i}p_{j})\left(\delta_{j3}-\frac{1}{3}p_{j}p_{3}\right)p_{3} \tag{37}\]
Since the disk remains nearly horizontal, we can write \(p_{i}=\delta_{i3}+p_{i}^{t}\) where \(p_{i}^{t}\ll 1\). The rotation rate of the transverse component of the orientation vector, \(p_{i}^{t}\) (\(i=1,2\)), then assumes the following simplified form
\[\dot{p}_{i}^{t}\approx R_{i3}-S_{i3}-\frac{3cW_{3}^{2}}{2\nu}p_{i}^{t}=0 \tag{38}\] \[\Rightarrow p_{i}^{t}\approx\frac{2\nu}{3cW_{3}^{2}}(R_{i3}-S_{i3}). \tag{39}\]
Thus the variance characterizing the "wiggle" out of the vertical axis is
\[\left\langle p_{i}^{t\,2}\right\rangle = \left(\frac{2\nu}{3cW_{3}^{2}}\right)^{2}(\left\langle R_{i3}R_{i3}\right\rangle+\left\langle S_{i3}S_{i3}\right\rangle)=\left(\frac{2\nu}{3cW_{3}^{2}}\right)^{2}\left(\frac{\Gamma_{\eta}^{2}}{6}+\frac{\Gamma_{\eta}^{2}}{10}\right) \tag{40}\] \[= \left(\frac{2\nu}{3cW_{3}^{2}}\right)^{2}\frac{4\Gamma_{\eta}^{2}}{15}=\left(\frac{2\nu}{3cW_{min}^{d\,2}}\right)^{2}\frac{4\Gamma_{\eta}^{2}}{15}\]
Similar to fibers we can define a settling factor for disks as
\[S_{F}^{d}=\frac{\tau_{\eta}}{\tau_{sed}}=\frac{3cW_{min}^{d\,2}}{4\nu\Gamma_{\eta}} \tag{41}\]
where \(\tau_{sed}=4\nu/(3cW_{min}^{d\,2})\) is the time scale of rotation due to sedimentation for a disk whose axis is at a \(45^{\circ}\) angle to gravity. The small orientation variance in the high \(S_{F}^{d}\) limit is
\[1-\left\langle p_{3}^{2}\right\rangle=\left\langle\sin^{2}\theta\right\rangle= \left\langle p_{i}^{t\,2}\right\rangle=\frac{1}{15}S_{F}^{d\,-2} \tag{42}\]
To compare orientational behavior of disks with fibers, we define the average deviation of a disk away from horizontal, similar to the \(\left\langle\cos^{2}\theta\right\rangle\) for fibers,
\[0.50\left(1-\left\langle p_{3}^{2}\right\rangle\right)=\frac{1}{30}{S_{F}^{d\, -2}} \tag{43}\]
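For a numerical feel for these disk expressions, the sketch below evaluates \(S_{F}^{d}\) and the predicted variance from Eqs. 41 and 42; the particle and flow values are placeholders for illustration, not the experimental ones.

```python
import numpy as np

c = (19.0 / 32.0 - 269.0 / (105.0 * np.pi**2)) / 12.0   # ~0.028, from Eq. 33
nu = 1.0e-6       # kinematic viscosity, m^2/s (placeholder)
tau_eta = 0.3     # Kolmogorov time scale, s (placeholder)
W_min = 0.02      # broadside settling speed of the disk, m/s (placeholder)

Gamma_eta = 1.0 / tau_eta
tau_sed = 4.0 * nu / (3.0 * c * W_min**2)   # inertial rotation time, Eq. 41
S_F_d = tau_eta / tau_sed
var = S_F_d**-2 / 15.0                      # <sin^2 theta>, Eq. 42

print(f"c = {c:.4f}, S_F^d = {S_F_d:.2f}, <sin^2 theta> = {var:.2e}")
```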
### Triads Settling in Turbulence
In this subsection, the theory and simulation are extended to triads - three armed ramified particles where all three fiber arms lie in the same plane at equal separation. We model ramified particles as hydrodynamically independent fibers connected together to translate and rotate as a single rigid body. Thus, the force and torque balances on the triad are:
\[\sum_{n=1}^{3}[\mathbf{F}_{drag}^{n}+\mathbf{F}_{gravity}^{n}]=\sum_{n=1}^{3}\left[- \frac{4\pi\mu L}{\ln(2\kappa)}\left(\mathbb{1}-\frac{1}{2}\mathbf{p}_{n}^{\prime} \mathbf{p}_{n}^{\prime}\right)\cdot\mathbf{W}_{n}+m_{n}\mathbf{g}\right]=0 \tag{44a}\] \[\sum_{n=1}^{3} \left[\underbrace{\frac{5\pi\rho_{f}L^{3}}{24(\ln 2\kappa)^{2}}(\mathbf{W}_{n} \cdot\mathbf{p}_{n}^{\prime})(\mathbf{W}_{n}\times\mathbf{p}_{n}^{\prime})}_{inertial\, sedimentation}-\underbrace{\frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbb{1}-\mathbf{p}_{n}^{ \prime}\mathbf{p}_{n}^{\prime})\cdot\mathbf{\Omega}_{rel}}_{relative\,rotation}\right.\] (44b) \[+\underbrace{\frac{\pi\mu L^{3}}{3\ln(2\kappa)}(\mathbf{p}_{n}^{ \prime}\times(\mathbf{S}\cdot\mathbf{p}_{n}^{\prime}))}_{turbulent\,strain}+ \underbrace{\ell\mathbf{p}_{n}^{\prime}\times\mathbf{F}_{drag}^{n}}_{drag\,on\,arms} \right]=0\]
where \(L=2\ell\) is the arm length, \(\mathbf{W}^{c}\) is the relative velocity of the triad center of mass with the fluid and \(\mathbf{W}_{n}=\mathbf{W}^{c}+\ell\mathbf{\Omega}^{c}\times\mathbf{p}_{n}^{\prime}-\ell\mathbf{p}_{n}^{\prime}.\mathbf{\Gamma}\) is the relative velocity of the \(n\)th arm with the fluid. The orientation of each arm is defined by \(\mathbf{p}_{n}^{\prime}\) and the orientation of a ramified particle is defined by \(\mathbf{p}\), which is perpendicular
to the plane spanned by the arms. From the symmetry of the particle, the gravitational torque sums to zero. As in the case of disks, the minimum velocity occurs when \(\mathbf{p}\) is parallel to gravity, so that \(W^{t}_{max}=|\mathbf{W}|_{\theta=\pi/2}\) and \(W^{t}_{min}=|\mathbf{W}|_{\theta=0}\).
This model neglects hydrodynamic interactions among the rods. The influence of hydrodynamic interactions on the drag and the torques due to relative rotation and straining motions are of higher order in the small parameter \(1/\ln(2\kappa)\). However, hydrodynamic interactions would influence the inertial torque at the same order of magnitude as the terms retained and the present model that includes only the torque on each arm acting independently is likely an underestimate of the triads' inertial torque. When comparing with experimental measurements of orientation in turbulent flows, the experimentally observed inertial rotation rate of a large triad is used to correct for this discrepancy.
It is important to note that our theory applies to a case where the Reynolds number is small \(Re_{\ell}\ll 1\). In this limit, the rotation of the triad toward horizontal orientations is slow. The competition between turbulent shear and inertial rotation leads to intermediate orientation distributions between isotropic and full alignment when the turbulence is weak \(G=\ell\Gamma_{\eta}/W^{f}_{min}\ll 1\) and the Reynolds number is small \(Re_{\ell}\ll 1\), but the settling parameter \(S^{f}_{F}=\left(\frac{5}{16}\right)\frac{Re_{\ell}}{G}=O(1)\). In this limit the velocity of the triad center of mass \(\mathbf{W}^{c}\) is much larger than the relative velocity of the arms with respect to the triad center of mass, so that the inertial torque due to the translational motion of the particle dominates that due to the triad's rotation. In order to ensure these conditions in our simulations, especially at higher settling rates, we scale our force and torque balance equations. We scale Eq. 44a using \(\mu W^{f}_{min}\ell^{2}\) and Eq. 44b using \(\mu\ell^{3}\Gamma_{\eta}\), and express Eq. 44 with \(\mathbf{W}_{n}=\mathbf{W}^{c}+\mathbf{w}_{n}\), where \(\mathbf{W}^{c}\) is the velocity of triad center of mass and \(\mathbf{w}_{n}\propto\ell\Gamma_{\eta}\) is the disturbance velocity experienced by arms \(n=1,2,3\). In the limit of \(Re_{\ell}\ll 1\) and \(G=\ell\Gamma_{\eta}/W^{f}_{min}\ll 1\), the triad equations reduce to
\[\sum_{n=1}^{3}\left[\left(\mathbb{1}-\frac{1}{2}\mathbf{p}^{\prime}_{n}\mathbf{p}^{ \prime}_{n}\right)\cdot\bar{\mathbf{W}}^{c}-\hat{\mathbf{e}}_{g}\right]=0 \tag{45a}\] \[\sum_{n=1}^{3}\left[S^{f}_{F}\left(\bar{\mathbf{W}}^{c}\!\cdot\!\mathbf{p }^{\prime}_{n}\right)\left(\bar{\mathbf{W}}^{c}\!\times\!\mathbf{p}^{\prime}_{n} \right)-4(\mathbb{1}\!-\!\mathbf{p}^{\prime}_{n}\mathbf{p}^{\prime}_{n})\!\cdot\! \bar{\mathbf{\Omega}}^{c}\right.\] \[+\left.4(\mathbf{p}^{\prime}_{n}\!\times\!(\bar{\mathbf{\Gamma}}\cdot\bm {p}^{\prime}_{n}))\right]=0 \tag{45b}\]
In Eq. 45a, \(W_{min}^{f}=\frac{mg\ln 2\kappa}{4\pi\mu L}\) is the transverse settling velocity of a fiber, \(\bar{\mathbf{W}}^{c}=\frac{\mathbf{W}^{c}}{W_{min}^{f}}\), and \(\hat{\mathbf{e}}_{g}\) is the gravitational unit vector. In Eq. 45b, the settling factor is defined using the definition for fibers in Eq. 12, \(\bar{\mathbf{\Omega}}^{c}=\frac{\mathbf{\Omega}^{c}}{\Gamma_{\eta}}\), and \(\bar{\mathbf{\Gamma}}=\frac{\mathbf{\Gamma}}{\Gamma_{\eta}}\). The symmetry of the triad leads to \(\sum_{n=1}^{3}\mathbf{p}_{n}^{\prime}\mathbf{p}_{n}^{\prime}=3/2(\mathbb{1}-\mathbf{p}\mathbf{p})\), the signature of a body whose hydrodynamic response is transversely isotropic. The triad equations (Eq. 45a-45b) then assume the following simplified forms:
\[\bar{\mathbf{W}}^{c}=\frac{4}{3}\left(\mathbb{1}-\frac{1}{4}\mathbf{p}\bm {p}\right).\hat{\mathbf{e}}_{g} \tag{46}\] \[-S_{F}^{f}\left(\bar{\mathbf{W}}^{c}\cdot\mathbf{p}\right)\left(\bar{\bm {W}}^{c}\times\mathbf{p}\right)-4(\mathbb{1}-\mathbf{p}\mathbf{p})\cdot\bar{\mathbf{\Omega}}^ {c}+4(\mathbf{p}\times(\bar{\mathbf{\Gamma}}\cdot\mathbf{p}))=0 \tag{47}\]
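The transverse-isotropy identity \(\sum_{n=1}^{3}\mathbf{p}_{n}^{\prime}\mathbf{p}_{n}^{\prime}=3/2(\mathbb{1}-\mathbf{p}\mathbf{p})\) used in this reduction can be checked numerically for an arbitrary triad orientation; a minimal sketch follows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random unit normal p and an orthonormal basis (e1, e2) of the triad plane.
p = rng.standard_normal(3)
p /= np.linalg.norm(p)
e1 = np.cross(p, [1.0, 0.0, 0.0])
e1 /= np.linalg.norm(e1)
e2 = np.cross(p, e1)

# Three coplanar unit arms separated by 120 degrees.
arms = [np.cos(a) * e1 + np.sin(a) * e2
        for a in (0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0)]
lhs = sum(np.outer(q, q) for q in arms)
rhs = 1.5 * (np.eye(3) - np.outer(p, p))
print(np.allclose(lhs, rhs))   # True for any orientation of p
```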
In the low Reynolds number limit, this ramified particle model (Eq. 46) predicts a different ratio of maximum and minimum sedimentation velocities for triads than for disks. In place of Eq. 35, we have
\[W_{max}^{t}=\frac{4}{3}W_{min}^{t} \tag{48}\]
In the rapid settling limit the triad is approximated to be in a quasi-steady nearly horizontal orientation, allowing the angular velocity that would rotate the triad out of the 1-2 plane to be neglected. Eq. (47) then simplifies to
\[S_{F}^{f}(-\delta_{i1}p_{2}+\delta_{i2}p_{1})\approx 3\epsilon_{imk}\Gamma_{ km}-3\epsilon_{ijk}\Gamma_{km}p_{j}p_{m}. \tag{49}\]
Similar to disks, the variance characterizing the "wiggle" of a triad out of the vertical axis is
\[\langle p_{i}^{t\,2}\rangle=\frac{9}{{S_{F}^{f}}^{2}}\left[\langle\Gamma_{31}^{2}\rangle+\langle\Gamma_{32}^{2}\rangle\right] \tag{50}\]
where \(p_{i}^{t}\) (\(i=1,2\)) is the transverse component of the orientation vector. The asymptotic expression for a triad is qualitatively similar to that for a disk, with a power law dependence.
\[1-\langle p_{3}^{2}\rangle=\langle\sin^{2}\theta\rangle=\frac{12}{5}{S_{F}^{f}}^{-2}. \tag{51}\]
However, we see a difference in the coefficient compared to Eq. (42) for the disk orientational moment. This reflects differences in the inertial rotation rate of triads and disks as well as the fact that we have used \(S_{F}^{f}\) for the settling
factor in the preceding development. To define a settling factor for triads, we use Eq. (47) to obtain an expression for \(\dot{\mathbf{p}}\) in a quiescent fluid and find the rotational time scale for a triad oriented at a \(45^{\circ}\) angle to gravity to be \(\tau_{sed}=6/(S_{F}^{f}\Gamma_{\eta})\). Thus, the triad settling factor is
\[S_{F}^{t}=\frac{\tau_{\eta}}{\tau_{sed}}=\frac{S_{F}^{f}}{6}. \tag{52}\]
Using Eq. 52, we rewrite Eq. 51 to obtain the average deviation of arms away from the horizontal, a definition similar to \(\langle\cos^{2}\theta\rangle\) for fibers, as
\[0.50\;\left(1-\langle p_{3}^{2}\rangle\right)=\frac{1}{30}{S_{F}^{t}}^{-2} \tag{53}\]
Comparing Eqs. 27, 43, and 53, it is seen that the mean-square deviation of the orientation from horizontal is the same for fibers, disks and triads when defined in terms of settling factors based on the inertial rotation of the respective particles in a quiescent fluid at a \(45^{\circ}\) angle to gravity.
Fig. 3 presents simulation results for the orientational variance of settling triads obtained by solving the triad velocity, rotation rate and orientational dynamics using Eqs. 45a and 45b in the Lagrangian stochastic fluid velocity gradient model. The simulation results are in good agreement with the rapid settling theory (Eq. 53) for \(S_{F}^{t}\gg 1\). Unlike in the case of fibers, correlations between the transverse orientation and the velocity gradient do not affect the high \(S_{F}\) limit for triads and disk-like particles, because the transverse orientation for disk-like particles is very small. Thus, there is no difference between Lagrangian and rapidly settling particle frame models for the high \(S_{F}\) behavior of disk-like particles.
Fig. 4 compares stochastic simulation results for triads and fibers at different \(Re_{\lambda}\). It may be noted that the general definition of the settling factor \(S_{F}\) based on each particle's inertial rotation rate nearly collapses the results for different particle shapes. The results are also nearly independent of \(Re_{\lambda}\).
### Corrections for Finite Particle Reynolds Number
The theory outlined above can be extended to finite \(Re_{\ell}\) as long as \(Re_{D}\ll 1\) by using the full expressions derived by Khayat and Cox [31]. However, the resulting non-linear Reynolds number dependency couples with particle orientation in a non-trivial way when including terms of order \(\mathcal{O}(\ln(\kappa)^{-2})\).
Figure 3: Mean-squared orientation of triads as a function of settling factor \(S_{F}^{t}\). The squares correspond to simulations, while the lines are asymptotes derived in the low and high \(S_{F}^{t}\) limits. Note that \(\theta\) is the angle made by the normal to the triad plane with gravity and hence, \(\langle\cos^{2}\theta\rangle=1\) in the absence of turbulence. The upper symbols indicate \(\langle\cos^{2}\theta\rangle\) and the lower symbols are \(\langle 0.5(1-\cos^{2}\theta)\rangle\), which is the average variance of deviation of the triad arms away from the horizontal plane.
Figure 4: Orientational variance of triads and fibers as a function of \(S_{F}\) at different \(Re_{\lambda}\). For fibers the orientational variance is \(\langle\cos^{2}\theta\rangle\), while for triads we plot \(\langle 0.5(1-\cos^{2}\theta)\rangle\), which is the average variance of deviation of the triad arms away from the horizontal plane. The transition from isotropic to preferential alignment happens around the same range of \(S_{F}\) as one might expect. There is a slight difference of the asymptotes, while they share the \(S_{F}^{-2}\) behavior, due to the correlation of the velocity gradient and orientation vector for fibers in the rapid settling limits in a Lagrangian stochastic model.
Solving the force and torque balance equations is therefore computationally very expensive. Lopez and Guazzelli [40] suggested a convenient way to handle this case while at the same time keeping computational cost at a minimum. Since the first-order term in aspect ratio nicely decouples drag and lift, we can define two constants, \(C_{\perp}\) and \(C_{R}\), that account for finite Reynolds number and aspect ratio effects:
\[\frac{4\pi\mu LC_{\perp}}{\ln(2\kappa)}\left(\mathbb{1}-C_{R}\mathbf{p}\mathbf{p} \right)\cdot\mathbf{W}-m\mathbf{g}=0 \tag{54}\]
Here, \(C_{\perp}\) accounts for the overall change in drag on a particle sedimenting at non-zero Reynolds number and \(C_{R}\) accounts for the change of the drag ratio between a particle sedimenting with \(\theta=0\) and \(\theta=\pi/2\). In the low Reynolds number limit, \(C_{\perp}=1\) and \(C_{R}=1/2\) (see Eq. 3). The expressions for \(C_{\perp}\) and \(C_{R}\) include the full analytical expressions given by Khayat and Cox [31] and the only approximation comes from interpolating the angular dependence at intermediate orientations. They are defined as:
\[C_{\perp}=\frac{\ln(2\kappa)}{\ln(\kappa)}\left(1+\frac{\mathcal{F}_{\perp}}{ \ln(\kappa)}\right) \tag{55}\]
\[C_{R}=\left(1-\frac{1}{2}\frac{(1-\mathcal{F}_{\perp}/\ln(\kappa))}{(1- \mathcal{F}_{\parallel}/\ln(\kappa))}\right) \tag{56}\]
where \(\mathcal{F}_{\parallel}=\mathcal{F}_{D}(\mbox{\it Re}_{\ell},\theta=0)\) and \(\mathcal{F}_{\perp}=\mathcal{F}_{D}(\mbox{\it Re}_{\ell},\theta=\pi/2)\). The above expression has two small parameters present - \((\ln(2\kappa))^{-1}\) and \((\ln(\kappa))^{-1}\). This is due to the difference in the choice of perturbation parameters in the slender body theories of [3] and Khayat and Cox [31]. The expressions for \(\mathcal{F}_{D}(\mbox{\it Re}_{\ell},\theta)\) are given by Khayat and Cox [31]. The two constants \(C_{\perp}\) and \(C_{R}\) are plotted as functions of Reynolds number in Fig. 5 for different aspect ratios. The unexpected behavior of these functions at low Reynolds numbers shows that the theory is very sensitive to the high aspect ratio requirement and that the Stokes flow limit can only be recovered when \(\kappa\rightarrow\infty\) as \(\mbox{\it Re}_{\ell}\to 0\). Solving Eq. 54 for the velocity of the fiber \(\mathbf{W}\) yields the same expression as derived by Lopez and Guazzelli [40], who have approached this problem in terms of the mobility matrix.
For finite Reynolds numbers, the experimental measurements of the sedimentation rate of horizontal fibers in quiescent fluid \(W_{\mathit{min}}=|\mathbf{W}|_{\theta=\pi/2}\) described in section 3 enable us to determine \(C_{\perp}\). This also includes the uncertainties in particle dimensions and density. With
\[C_{\perp}=\frac{\ln(2\kappa)mg}{4\pi\mu LW_{min}}, \tag{57}\]
we measure \(C_{\perp}\approx 3.5\) and \(3.0\) for our small fibers and triads, and \(C_{\perp}\approx 7.8\) and \(7.6\) for our large fibers and triads, respectively. In order to determine \(C_{R}\), one has to measure the velocity of vertical fibers \(W_{max}=|\mathbf{W}|_{\theta=0}\). It is well known that \(W_{max}=2\ W_{min}\) for slender fibers in the Stokes flow limit. Finite aspect ratio effects lower this ratio, but it increases again slowly when considering finite Reynolds number effects. Our particles fall into the range where this ratio is approximately \(2\). As seen in Fig. 5, the theoretical predictions for \(C_{\perp}\) and \(C_{R}\) (from Eqs. 55 and 56) are less than our experimental measurements. This may be due to the finite value of \(\mbox{{Re}}_{D}\) in the experiments. Thus measurements of \(C_{\perp}\) and \(C_{R}\) from quiescent fluid experiments are used for the analysis.
Similar to the finite Reynolds number force corrections (\(C_{\perp}\) and \(C_{R}\)), we introduce a constant \(C_{G}\) to correct the inertial torque for large particles. Here, \(C_{G}\) is defined as the ratio of inertial torque from the full expression \(\mathcal{F}_{G}(\mbox{{Re}}_{\ell},\theta)\) (Eq. 6.13) given by [31] to the low Reynolds number expression \(\mathcal{F}_{G}(\mbox{{Re}}_{\ell}\ll 1,\theta)\) (Eq. 6.22). The behavior of \(C_{G}\) is plotted in Fig. 6 and shows that the inertial torques decrease quickly with increasing Reynolds number.
Figure 5: (a) \(C_{\perp}\) as a function of Reynolds number \(\mbox{{Re}}_{\ell}\) for three different aspect ratios, \(\kappa=20,100\) and \(10^{6}\). The inset shows the \(\kappa=20\) results for larger Reynolds number for comparison with experiments. (b) \(C_{R}\) as a function of Reynolds number \(\mbox{{Re}}_{\ell}\) for the same three aspect ratios as in (a). The inset again shows the \(\kappa=20\) results for larger Reynolds number for comparison with experiments.
The value of \(C_{G}\) for the experiments can be determined by comparing the time scales from the low Reynolds number model Eq. 9 and the empirically determined time scale Eq. 14:
\[C_{G}=\frac{\tau_{sed}}{T_{sed}} \tag{58}\]
and we find \(C_{G}=0.007\) for small fibers and triads and \(C_{G}=0.002\) for large fibers and triads. These values are larger than the theoretical value of \(C_{G}\) at the corresponding particle Reynolds numbers and \(\theta=45^{\circ}\), where \(C_{G}=0.002\) for small fibers and \(C_{G}=0.0002\) for large fibers.
With the force and torque correction factors \(C_{\perp},C_{R}\) and \(C_{G}\), the force and torque balance equations for large triads can be written as:
\[\sum_{n=1}^{3}\left[-C_{\perp}\left(\mathbb{1}-C_{R}\mathbf{p}_{n}^{\prime}\mathbf{p}_{ n}^{\prime}\right)\cdot\bar{\mathbf{W}}^{c}+\hat{\mathbf{e}}_{g}\right]=0 \tag{59a}\] \[\sum_{n=1}^{3}\left[C_{G}S_{F}^{f}\left(\bar{\mathbf{W}}^{c}\cdot\mathbf{p }_{n}^{\prime}\right)\left(\bar{\mathbf{W}}^{c}\times\mathbf{p}_{n}^{\prime}\right)-4 (\mathbb{1}-\mathbf{p}_{n}^{\prime}\mathbf{p}_{n}^{\prime})\cdot\bar{\mathbf{\Omega}}^{c}\right.\] \[\left.+\ 4(\mathbf{p}_{n}^{\prime}\times(\bar{\mathbf{\Gamma}}\cdot\mathbf{p}_{ n}^{\prime}))\right]=0 \tag{59b}\]
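Since Eq. 59a is linear in \(\bar{\mathbf{W}}^{c}\), the translational balance reduces to a \(3\times 3\) linear solve once the arm directions \(\mathbf{p}_{n}^{\prime}\) are fixed; Eq. 59b then determines the rotation rate. A nondimensional sketch of the translational part is given below; the seed vector used to construct the in-plane basis is an arbitrary implementation choice.

```python
import numpy as np

def triad_arms(p):
    """Three unit arm vectors with 120-degree spacing in the plane normal to p."""
    p = np.asarray(p, dtype=float)
    p /= np.linalg.norm(p)
    seed = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(p, seed)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(p, e1)
    return [np.cos(a) * e1 + np.sin(a) * e2
            for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

def triad_velocity(p, C_perp, C_R):
    """Solve Eq. 59a: C_perp * sum_n (1 - C_R p'_n p'_n) . W = 3 e_g."""
    A = C_perp * (3 * np.eye(3) - C_R * sum(np.outer(q, q) for q in triad_arms(p)))
    e_g = np.array([0.0, 0.0, -1.0])
    return np.linalg.solve(A, 3 * e_g)

# e.g. triad_velocity([0, 0, 1], 1.0, 0.5) gives broadside settling along -z.
```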
## 3 Experiments
In this section, we describe experiments in which we measure the orientation of particles as they sediment in nearly homogeneous, isotropic turbulence and compare them with the theoretical predictions from section 2. We also measure particle motion in quiescent fluid to allow the comparison.
### Experimental Methods
First we describe the vertical water tunnel which allows control of both the turbulence intensity and the through flow so that sedimentation can be balanced with advection to keep particles suspended in the test section for measurement. Then we describe the imaging and particle fabrication methods.
We identified the non-dimensional parameters, \(\mathit{Re}_{\lambda}\), \(L/\eta\), \(\kappa\), \(\rho_{p}/\rho_{f}\), and \(S_{F}\) that determine the sedimentation statistics of non-spherical particles in turbulence. With that in mind, we constructed a 4.2 m tall, vertical water tunnel (see Fig. 7(a)) that gives us good control over each parameter and allows us to explore a large range of values. It keeps heavy particles suspended and we can simultaneously control \(S_{F}\) by adjusting the amount of turbulence they experience. The particles are suspended by a mean through flow, which allows us to record long, individual trajectories. The density ratio and particle dimensions, \(\rho_{p}/\rho_{f}\), \(L\) and \(\kappa\), were chosen to yield particle sedimentation rates that could be supported by the through flow. To ensure a flat velocity profile of the through flow, the fluid first passes through a pressure plate (2% open area), a 20 cm tall honeycomb flow straightener and a contraction zone (area ratio 4:1) before entering the test section. The exit conditions downstream of the test section are kept symmetric to the inlet conditions. One of the challenges of the experiments is keeping the particles suspended without clogging filters, valves or the pump. This is particularly difficult for fibers, which is the reason why they have been excluded from the turbulent flow experiments in this paper.
Turbulence is generated and controlled with a jet-array issuing from a 3D-printed grid with grid spacing \(M=6\) cm and 30% solidity. Using Nylon as material and selective laser sintering we were able to fabricate the jet array with internal channels and 40 nozzles (see Fig. 7(b)). Each nozzle can be triggered independently through a solenoid valve to eject a jet into
Figure 6: \(C_{G}\) as function of Reynolds number \(\mbox{\it Re}_{\ell}\) for three different angles, \(\theta=0^{\circ},45^{\circ}\) and \(90^{\circ}\) (blue, red and yellow lines, respectively).
Figure 7: a) Autocad (perspective) rendering of the vertical water tunnel. From bottom to top (color online): inlet chamber for through flow (orange), pressure plate and honeycomb (green), contraction zone (blue), manifolds (yellow) for jet array (purple), test section (clear), expansion zone (blue), flow exit chamber (orange). b) 3D-model of the jet array. c) Side view of a slice through the jet array shows internal channels, turning vanes and nozzles.
the surrounding flow. In the minimum turbulence configuration, all valves are closed and the jet-array becomes a passive grid for the through flow. The streamwise mean fluid velocity \(\langle U_{f}\rangle_{z}\) has a small fluctuating component \(u^{\prime}_{z}\), resulting in turbulence intensities \(u^{\prime}_{z}/\langle U_{f}\rangle_{z}\) as small as 7% (see Tab. 1), typical for passive grid configurations. Here, \(u^{\prime}_{(x,y,z)}\), are the components of the root-mean-square (rms) fluctuating velocity. The system can also be driven solely through the jet-array, similar to the random jet-array used by Variano and Cowen [64], achieving much larger turbulence intensities. The intermediate turbulence regimes can be reached by either adjusting the number of jets, the duration each jet is firing or the jet velocity. For the experiments presented here, the turbulence intensity was controlled through the jet velocity, keeping the average number of jets (8 jets, 20% of the total number of jets) and the average duration of each jet (1 s \(\pm\)0.25 s) constant. Jets were chosen randomly with some nearest neighbor restrictions. The generated turbulence is essentially isotropic in the horizontal plane (span-wise, wall-normal), where \(u^{\prime}_{x}\approx u^{\prime}_{y}\), but has an rms fluctuating velocity in the vertical direction \(u^{\prime}_{z}\) (stream-wise) that is about 20% higher than in the horizontal directions as shown in Tab. 2. Both through flow and jet-array are powered by a 3 hp, variable speed pump that can produce a through flow of up to 10 cm/s in the test section and an estimated jet velocity of up to 4 m/s at the nozzles.
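For intuition, the firing protocol can be emulated with a toy scheduler. The \(5\times 8\) grid layout and the precise nearest-neighbor rule below are our assumptions for illustration only; here a nozzle may not fire while one of its 4-neighbors is active.

```python
import numpy as np

def jet_schedule(t_end=60.0, n_jets=40, n_active=8, grid=(5, 8), seed=0):
    """Toy random-jet protocol: keep ~n_active of n_jets firing, each for
    a duration drawn uniformly from 1 s +/- 0.25 s. Returns (time, jet) events."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    pos = [(i // cols, i % cols) for i in range(n_jets)]
    off_time = np.zeros(n_jets)
    active, events, t = set(), [], 0.0
    while t < t_end:
        active = {j for j in active if off_time[j] > t}
        # block nozzles within Manhattan distance 1 of an active jet
        blocked = {n for j in active for n in range(n_jets)
                   if abs(pos[n][0] - pos[j][0]) + abs(pos[n][1] - pos[j][1]) <= 1}
        candidates = [j for j in range(n_jets) if j not in blocked]
        while len(active) < n_active and candidates:
            j = int(rng.choice(candidates))
            off_time[j] = t + rng.uniform(0.75, 1.25)
            active.add(j)
            events.append((round(t, 2), j))
            candidates = [c for c in candidates
                          if abs(pos[c][0] - pos[j][0]) + abs(pos[c][1] - pos[j][1]) > 1]
        t += 0.05                                  # scheduler tick
    return events
```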
Two coarse meshes confine the particles in a clear, tall, 30 x 30 x 150 cm\({}^{3}\) test section, where four high-speed cameras image a 12 x 12 x 10 cm\({}^{3}\) detection volume in the center region, 10\(M\) downstream of the jet-array. Two high-powered, pulsed and monochromatic LED lights (SmartVisionLights ODMOBL series) create a uniform background illumination. The cameras and the lights are triggered synchronously at 450 Hz, ensuring a single pulse illumination during each exposure. A real-time image compression system, which allows continuous data acquisition for several days, is used to collect large data sets. It is essential for these experiments, because there is not always a particle in view on all four cameras, which is required to determine an accurate particle orientation.
The particles used in the experiments are 3D-printed (using VeroBlack material and fused deposition modeling, \(\rho_{p}=1.13-1.15\) g cm\({}^{-3}\)) and consist of several slender fibers, connected at the center (we call them ramified particles). In this case, three fibers of equal length \(L=2\ell\) and radius \(r\) with aspect ratio \(\kappa\)=\(\ell/r\)=20, oriented in planar symmetry and with a 120\({}^{\circ}\) angle
\begin{table}
\begin{tabular}{c c c c c c c c}
 & Turb. & Pump Speed & Thru Flow & Total Jet Flow & \(u^{\prime}_{z}\) & \(\langle U_{f}\rangle_{z}\) & \(\langle U_{p}\rangle_{z}\) \\
 & Intensity & [rpm] & [l/s] & [l/s] & [mm/s] & [mm/s] & [mm/s] \\ \hline
 & 0.07 & 700 & 1.7 & 0 & 1.32 & 19.78 & -2.87 \\
 & 0.21 & 800 & 1.3 & 0.4 & 5.01 & 23.38 & 0.22 \\
Small & 0.39 & 1200 & 1.4 & 1.0 & 11.64 & 29.98 & 5.48 \\
Triads & 0.62 & 1550 & 0.9 & 1.5 & 16.68 & 26.91 & 2.26 \\
 & 0.91 & 2000 & 0.2 & 2.1 & 19.98 & 21.96 & -3.98 \\
 & 1.06 & 2100 & 0 & 2.2 & 21.28 & 20.08 & -4.12 \\ \hline
 & 0.07 & 1150 & 2.9 & 0 & 2.39 & 34.14 & -1.92 \\
 & 0.10 & 900 & 2.1 & 0.3 & 3.10 & 30.82 & -5.68 \\
Large & 0.29 & 1100 & 1.6 & 0.8 & 9.19 & 31.68 & -7.07 \\
Triads & 0.28 & 1700 & 3.2 & 1.0 & 16.24 & 58.00 & 15.88 \\
 & 0.37 & 1700 & 2.5 & 1.3 & 18.61 & 50.28 & 8.88 \\
 & 0.95 & 2500 & 0.6 & 2.5 & 28.56 & 30.05 & -4.35 \\
\end{tabular}
\end{table}
Table 1: Experimental parameters: Volumetric through flow and total flow emitted by the jet array (measured with two magnetic flow meters); \(u^{\prime}_{z}\), stream-wise rms fluctuating velocity; \(\langle\mathbf{U}_{f}\rangle\), mean fluid velocity in the detection volume (measured with tracers). \(\langle\mathbf{U}_{p}\rangle\), mean particle velocity.
between them, form a triad (see Fig. 1). The triad is a model for a very small aspect ratio disk-like particle. The advantage of using ramified particles as models is that the orientations can be measured very accurately and even rotations around the symmetry axis can be resolved. The rotations of any ellipsoid can be approximated by a corresponding ramified particle (exact for small particles) and therefore even the rotations of spherical particles can be tracked accurately, overcoming one measurement limitation inherent to ellipsoidal particles.
We fabricated 150 triads with smallest dimension \(r=(225\pm 5)\)\(\mu\)m (referred to as small particles, but not small enough to be in the Stokes flow regime) and with \(r=(450\pm 5)\)\(\mu\)m (large particles), both with \(\kappa=20\). The particle Reynolds number \(\mbox{\it Re}_{D}=wD/\nu\) based on the fiber diameter ranges from \(\mbox{\it Re}_{D}\approx 10\) for small triads to \(\mbox{\it Re}_{D}\approx 40\) for large triads (see Table 3), where \(w\) is the velocity of the particle relative to the fluid. The fluid viscosity is \(\nu=(0.9131\pm 0.02)\times 10^{-6}\) m\({}^{2}\) s\({}^{-1}\), with the uncertainty coming from temperature fluctuations (\(T=24.0\pm 1\,^{\circ}\)C). Fibers used for some parts of the experiments were manually cut from Nylon fishing line with a very similar density of \(\rho_{p}=1.13-1.15\) g cm\({}^{-3}\), but a much smoother surface than the 3D-printed triads. The fiber radius and aspect ratio were close to those of the arms that make up the triads. Grey, neutrally buoyant micro spheres with a diameter of 250 \(\mu\)m were used as tracer particles to measure the fluid velocity, calculate structure functions and extract mean energy dissipation rates \(\epsilon\). For the experiments with small triads, the local fluid velocity at the particle position was measured using tracers within a sphere of radius \(3L\) around the particle. The local mean fluid velocity \(\langle\mathbf{u}_{f}\rangle\) did not depend very strongly on the size of the tracer-sphere and a relatively large radius was chosen to minimize measurement noise. This was not successful for the experiments with the large triads due to an insufficient number of tracers and so the overall mean fluid velocity \(\langle\mathbf{U}_{f}\rangle\) was used to calculate a relative particle velocity. In our experiments, the non-dimensional concentration is \(nL^{3}=O(10^{-3})\), where \(n\) is the number density, and thus the effect of hydrodynamic interactions is negligible.
The particles have to be suspended near the center of the test section in order to take continuous data and gather enough statistics. Depending on particle size, the through flow can be adjusted to match the particles' sedimentation rate in quiescent fluid. As we increase the turbulence intensity, triads start to rotate around their equilibrium sedimentation orientation which, in turn, increases their sedimentation rate. The through flow was adjusted to keep as many triads suspended as possible. For the largest turbulence intensities, triads were almost entirely suspended by strong jets from the jet array, whereas for intermediate turbulence intensities, the through flow had to be increased to lift particles up into the detection volume. Triads exposed to the lowest turbulence intensities show the same orientation and sedimentation statistics as triads in quiescent fluid. The volumetric mean flow and the total volumetric flow emitted by the jet array were measured with two separate magnetic flow meters (Toshiba GF630 series), see Table 1.
### Sedimentation in Quiescent Fluid
Before presenting experimental results on the orientation of particles sedimenting in turbulent flows, we will first document the translational and orientational motion of the particles in quiescent fluid. The Reynolds numbers based on both rod diameter and arm length are larger than the range for which the theoretical calculations were performed. Thus, we will use these measurements to quantify the rate of rotation of the triads in the experiments and define the settling parameter \(S_{F}\). The experiments will also allow an assessment of the extent to which the orientation dependence of the settling velocity and rotation rate resembles that of low Reynolds number particles so that a theory based on adjusted values of \(S_{F}\) might capture the orientation of particles in turbulent flows.
The experiments in quiescent fluid include two kinds of particles, fibers and triads, with two different sizes. Fibers were chosen to match the non-dimensional parameters of triads as closely as possible. In quiescent fluid, both particle types assume a stable sedimentation orientation with their longest axis perpendicular to the sedimentation direction. In the lab reference frame, \(\mathbf{\hat{z}}\) is upwards and gravity \(\mathbf{g}\) is downwards; this means the stable orientation of fibers is \(p_{z}=0\) (\(\theta=90^{\circ}\)) and \(p_{z}=1\) (\(\theta=0\)) for triads (\(p_{z}=p_{3}\)).
The measurements of \(W_{min}\) together with Eq. 57 determine \(C_{\perp}\) of the particles in their stable orientation. Moreover, the orientation distributions and variances of particles, sedimenting in quiescent fluid, contain valuable information about particle inhomogeneities and fabrication defects.
To gain insight into the sedimentation statistics in quiescent fluid, at orientations other than their equilibrium orientation, we disturb the particles by letting them hit a thin nylon string and recording the resulting trajectories. The sedimentation rate is measured for all orientations along the particle's trajectory and therefore includes effects of the particle's history. In other words, the sedimentation rate is measured during a transient phase, whereas the ramified particle model assumes a quasi-steady state sedimentation velocity.
In their equilibrium orientation, our small fibers and triads sediment at a mean rate of \(W_{min}=19.8\) mm/s and \(23.2\) mm/s, respectively, and our large fibers and triads at \(W_{min}=35.9\) mm/s and \(36.8\) mm/s, respectively.
Figure 8 (a) shows the components of the relative particle velocity \(\mathbf{W}\) of fibers and triads as functions of particle orientation. The top curves show the mean vertical component \(W_{z}\). Fibers of both sizes follow the predictions of the theoretical model from section 2 very well, even though they are far outside the range of Reynolds numbers where this model is valid. Their sedimentation rate in the vertical orientation is about twice their sedimentation rate in the horizontal orientation, \(\langle W_{z}|_{p_{z}=1}\rangle\approx 2\ \langle W_{z}|_{p_{z}=0}\rangle\). Triads on
Figure 8: a) Components of the relative particle velocity \(\mathbf{W}\) as function of particle orientation, \(p_{z}\), measured in quiescent fluid. The top curves show the vertical component \(W_{z}\) for small fibers and triads (small and large circles and triangles, respectively). The lower curves show the horizontal component \(W_{h}\), where \(\mathbf{\hat{h}}=\mathbf{\hat{z}}\times(\mathbf{p}\times\mathbf{\hat{z}})\). Both are normalized by the sedimentation velocity of a broadside settling particle, \(W_{min}\). Error bars are showing the standard deviation between individual trajectories. The dashed lines show the predictions of the simple ramified particle model for infinitely long fibers (thin disks) with \(M_{\perp}\)=\(2M_{\parallel}\). b) The long axis of a fiber with \(p_{z}\)=0.5 is oriented at \(30^{\circ}\) with respect to the horizontal. The inset shows a photograph of the smooth surface of the nylon fibers. c) The plane of a triad with \(p_{z}\)=0.5 is oriented at \(60^{\circ}\) with respect to the horizontal. The inset shows a photograph of a 3D printed triad with a rougher surface and features up to \(0.1r\), for \(r=0.225\) mm.
the other hand are not quite reaching the ratio of sedimentation rates (1.5) predicted by the ramified particle model, but \(\langle W_{z}|_{p_{z}=0}\rangle\approx 1.4\)\(\langle W_{z}|_{p_{z}=1}\rangle\). There are multiple reasons that could explain this lower ratio of sedimentation rates. For our particles, the low fiber-diameter Reynolds number and high aspect ratio approximations are not valid (\(\mbox{\it Re}_{D}>10\) and \(\kappa=20\)) and both effects are known to change that ratio [31]. As described before, one can model these effects in the simple ramified particle model by adjusting the constant \(C_{R}\). However, if this was the dominant reason for a lower ratio of sedimentation rates, we would expect to see a similar effect for fibers. The most likely reason why we see a discrepancy between the model prediction and the measurements of triads is that the model neglects the interactions between arms of a ramified particle. We also refer the interested reader to Chapter 4 of Kramel [36] for a comparison of the angular dependence of the relative velocity of the triad and fluid obtained from theory and experiments.
In addition to vertical sedimentation, fibers and triads have a non-zero horizontal settling velocity depending on particle orientation. The bottom curves in figure 8 (a) show the mean horizontal component, \(W_{h}=\mathbf{W}\cdot\mathbf{\hat{h}}\), where \(\mathbf{h}=(\mathbb{1}-\mathbf{\hat{g}}\mathbf{\hat{g}})\cdot\mathbf{p}\) is the projection of \(\mathbf{p}\) in the plane perpendicular to \(\mathbf{g}\) (horizontal). Based on the model, we expect this component to be maximized independently of shape when \(\theta=45^{\circ}\), or \(p_{z}=1/\sqrt{2}\). The experiments show that both fibers and triads reach the largest horizontal velocity at a different angle, when \(p_{z}\approx 0.5\). One has to keep in mind that \(\mathbf{p}\) points along the symmetry axis of the fiber, but is perpendicular to the plane of the triad and therefore \(p_{z}=0.5\) for fibers means the long axis makes an angle of \(30^{\circ}\) with respect to the horizontal, while for triads the long axis makes an angle of \(60^{\circ}\) with respect to the horizontal, see Fig. 8 (b) and (c). The experiments show that fibers have roughly twice the horizontal velocity of triads for all orientations, which is in agreement with the ramified particle model.
The model for inertial torques, Eq. 4, is valid at low particle Reynolds numbers. Shin et al. [58] have shown in simulations, that at \(\mbox{\it Re}_{\ell}\sim 10\), the inertial torque passes through a maximum and begins decreasing with increasing Reynolds number. We can therefore expect that the time scale \(\tau_{\mbox{\it sed},45}\), predicted by the low Reynolds number model (Eq. 9) is too short for the particles in the experiments. Figure 9 (a) shows the angle between gravity \(g\) and the particle orientation \(p\) as function of time, normalized by \(\tau_{\mbox{\it sed},45}\). The trajectories shown are averaged trajectories of many individual fibers and triads, with \(t=0\) chosen when each particle is at \(\theta=45^{\circ}\). The error bars are showing the standard deviation between individual trajectories,
which is zero at \(t=0\) because we choose \(\theta=45^{\circ}\) to define time zero. We also show the simulation data from Shin et al. [58] of fibers with \(Re_{D}\ll 1\). For our particles, the low Reynolds number model clearly over-estimates the strength of the inertial torques. Furthermore, the low Reynolds number time scale for particle rotation, \(\tau_{sed,45}\), does not collapse the experimental curves with one another nor the experiments with the simulations. For that reason, we cannot use the definition of \(\tau_{sed,45}\) from the low Reynolds number model to calculate \(S_{F}\) for the experimental particles with \(\mbox{\it Re}_{D}=10\).
To gain information about the strength of the inertial torques when the particle Reynolds number is no longer small, we extract the time it takes a particle to come to alignment with its equilibrium orientation. We measure the rotation rate of the particles when \(\theta=45^{\circ}\) and use it to define an empirical time scale of the inertial torques as \(T_{sed,45}=1/|\mathbf{\dot{p}}|_{\theta=45^{\circ}}\). Here, we determine the rotation rate when \(\theta=45^{\circ}\) by fitting a straight line to the measurements of \(p_{z}\) over the range \(t=0\) to \(0.05T_{sed,45}\). Figure 9 (b) uses this definition to collapse the curves. It is notable that the dependence of angle on \(t/T_{sed}\) is similar in the experiments and the theory, suggesting that using the measured \(T_{sed}\) the theory may predict the particle dynamics well.
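In code, this fitting procedure might look as follows; the fixed-point iteration on the fitting window is our implementation detail, needed because the window length \(0.05\,T_{sed,45}\) depends on the answer.

```python
import numpy as np

def t_sed_45(t, p_z, window_frac=0.05, n_iter=20):
    """Empirical inertial-torque time scale T_sed,45 from a trajectory:
    fit a straight line to p_z(t) over t = 0 .. window_frac * T and set
    T = 1/|slope|, with t = 0 defined where theta = 45 degrees."""
    t, p_z = np.asarray(t, dtype=float), np.asarray(p_z, dtype=float)
    T = t[-1]                       # crude initial guess for the time scale
    for _ in range(n_iter):
        mask = (t >= 0) & (t <= window_frac * T)
        if mask.sum() < 2:          # window too short to fit a line
            break
        slope = np.polyfit(t[mask], p_z[mask], 1)[0]
        T = 1.0 / abs(slope)
    return T
```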
Figure 9: Angle between the force of gravity and particle orientation as function of time. Fibers (blue circles) approach their stable orientation where \(p\) is perpendicular to \(\hat{g}\), whereas triads (red triangles) are stable when \(p\) is parallel to \(\hat{g}\). The black circles show the theoretical prediction for small fibers from the simulation results of Shin et al. [58]. a) Normalized by the inverse of the predicted rotation-rate at 45 degrees using the tumbling rate due to inertial torque (Eq. 8). b) Normalized by the empirical time scale \(T_{sed,45}\) measured in quiescent fluid.
Interestingly, \(T_{\mathit{sed,45}}\) is roughly the same for all the particle sizes and shapes used in our experiments with \(T_{\mathit{sed,45}}=(1.8\pm 0.1)\) s for small fibers and \(T_{\mathit{sed,45}}=(1.9\pm 0.1)\) s for large fibers and \(T_{\mathit{sed,45}}=(1.7\pm 0.1)\) s for small triads and \(T_{\mathit{sed,45}}=(1.9\pm 0.1)\) s for large triads. One notable difference between the predicted and observed orientational dynamics is that triads often significantly overshoot the \(\theta=0\) point (not shown) and return at slightly different rates. Averaging this over different trajectories makes it appear that the equilibrium angle is at \(\theta\sim 10^{\circ}\) in the shown time range. In the long time limit \(\theta\) will approach \(0^{\circ}\). We will use this time scale of the inertial torques, \(T_{\mathit{sed,45}}\), to calculate an empirical value of the settling factor \(S_{F}\) for the turbulence experiments.
### Sedimentation in Turbulence
The sedimentation of ramified particles under different turbulence intensities gives insight into the competition between turbulence, which tends to randomize particle orientations, and inertial torques, which align particles with their stable sedimentation orientation. We present orientation distributions of triads as the variance of \(p_{z}\) for various values of the turbulence intensity and particle size. We also compare the experiments to simulations and theoretical predictions based on slender body theory.
The relative particle orientation can be defined using two angles, \(\theta\) and \(\psi\). We define these two angles as
\[\cos(\theta) =|\mathbf{p}\cdot\mathbf{\hat{g}}| \tag{60}\] \[\cos(\psi) =\left|\frac{\left[(\mathbb{1}-\mathbf{pp})\cdot\mathbf{\hat{g}}\right] \cdot\mathbf{p^{\prime}}}{|\left(\mathbb{1}-\mathbf{pp}\right)\cdot\mathbf{\hat{g}}|}\right| \tag{61}\]
where \(\theta\) is the angle between \(\mathbf{p}\) and gravity and \(\psi\) the angle between an arm \(\mathbf{p}^{\prime}\) and the projection of gravity into the plane of the particle \((\mathbb{1}-\mathbf{pp})\cdot\mathbf{\hat{g}}\). In isotropic turbulence, the third Euler angle is a rotation around \(\mathbf{\hat{z}}\) which is randomly distributed and does not encode any relevant statistics.
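A direct transcription of Eqs. 60 and 61 (taking \(\hat{\mathbf{g}}=-\hat{\mathbf{z}}\), which is immaterial because of the absolute values; \(\psi\) is undefined when \(\mathbf{p}\) is exactly parallel to gravity):

```python
import numpy as np

def orientation_angles(p, p_arm, g_hat=(0.0, 0.0, -1.0)):
    """theta: angle between the triad normal p and gravity (Eq. 60).
    psi: angle between an arm p' and the projection of gravity onto the
    particle plane (Eq. 61). p is normalized here; p_arm is assumed unit."""
    p, p_arm, g_hat = (np.asarray(v, dtype=float) for v in (p, p_arm, g_hat))
    p /= np.linalg.norm(p)
    cos_theta = abs(np.dot(p, g_hat))
    g_proj = g_hat - np.dot(p, g_hat) * p          # (1 - pp) . g_hat
    cos_psi = abs(np.dot(g_proj, p_arm)) / np.linalg.norm(g_proj)
    return (np.arccos(np.clip(cos_theta, 0.0, 1.0)),
            np.arccos(np.clip(cos_psi, 0.0, 1.0)))
```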
The orientation of an arbitrary rigid body would require the specification of the three Euler angles, \(\theta,\phi,\psi\), and thus we would define a PDF \(\Pi(\theta,\phi,\psi)\). For an isotropic flow, the PDF would be independent of \(\phi\): \(\Pi(\theta,\phi,\psi)=\mathcal{P}(\theta,\psi)/2\pi\). Further, for fore-aft symmetric particles, a measure of the distribution of the normal to the plane of the particle can be defined as \(P(p_{z})\):
\[P(p_{z}=\cos\theta)=2\int_{0}^{2\pi}\mathcal{P}(\theta,\psi)d\psi. \tag{62}\]
Thus, as per our definition, for an isotropic distribution, \(P(p_{z})=1\). The current theoretical approach does not depend on \(\psi\), but the experimental results display \(\psi\) dependence. This behavior is likely due to inertial effects at larger Reynolds numbers that are not included in the theory and possibly some gravitational torques due to differences in the arms. To observe the \(\psi\) dependence in experiments, we define a PDF
\[\Psi(\psi)=\int_{0}^{\pi}\mathcal{P}(\theta,\psi)\sin\theta d\theta. \tag{63}\]
Figure 10 shows how \(P(p_{z})\) changes with turbulence intensity. For low turbulence intensities, small and large triads are strongly aligned with the direction of gravity, within a few degrees, reflected in the sharp peak near \(p_{z}=1\). This alignment becomes weaker as the turbulence intensity increases. The orientation PDFs become more uniform. Even for high turbulence intensities, particle orientations are not fully randomized. The corresponding settling factors are summarized in Tab. 4.
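For reference, the convention that randomly oriented particles give a uniform PDF at 1 corresponds to the histogram estimator sketched below, assuming the \(p_{z}\) samples are folded onto \([0,1]\) by the fore-aft symmetry (for an isotropic unit vector, \(|p_{z}|\) is uniform on \([0,1]\)).

```python
import numpy as np

def pdf_pz(p_z_samples, bins=20):
    """Estimate P(p_z) normalized so that an isotropic ensemble gives P = 1."""
    hist, edges = np.histogram(np.abs(p_z_samples), bins=bins,
                               range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```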
Triads also show preferential alignment within the plane of the particle. Figure 11 shows the PDF \(\Psi(\psi)\). It is surprising that for low turbulence intensities, small triads show a strong peak at \(\psi=\pi/3\), meaning any one of
Figure 11: PDF \(\Psi(\psi)\) of the particle orientation within the plane spanned by the arms of the particle at different turbulence intensities from low to high (colormap cold to hot). \(\psi=0\) means an arm of the particle is aligned parallel with the direction of gravity (\(120^{\circ}\) symmetry), \(\psi=\pi/3\) mean an arm of the particle is anti-parallel to the direction of gravity. (a) Experiments with small triads. (b) Experiments with large triads.
Figure 10: PDF \(P(p_{z})\) of the particle orientation at different turbulence intensities from low to high (colormap cold to hot). \(|p_{z}|=1\) means horizontal alignment, \(|p_{z}|=0\) vertical alignment and the PDF of randomly oriented particles is uniform at 1. (a) Experiments with small triads. (b) Experiments with large triads.
the three arms is pointing slightly upward. We assume that particle defects or fabrication inhomogeneities cause this alignment, e.g., one arm might experience slightly larger drag. We do not see such behavior for large triads, where these effects would have a smaller impact. With increasing turbulence intensity, the particles get kicked out of their equilibrium orientation and interactions between the arms cause one arm to preferentially align parallel to the direction of gravity, pointing downward. Large triads show that this effect is strongest for intermediate turbulence intensities, where turbulent fluctuations are strong enough to kick a particle far enough out of its equilibrium orientation so that interactions become relevant, but do not yet fully randomize the particle's orientation. The theoretical model, which assumes symmetric arms and neglects hydrodynamic interactions among the arms, predicts a uniform distribution of \(\psi\).
Figure 12 shows spherical histograms of particle orientations for small and large triads for different values of \(S_{F}\). The histogram depicts the components of a unit vector defined in spherical coordinates by setting the polar angle equal to \(\theta\), and the azimuthal angle equal to \(\psi\). The large probability at the poles (see Fig. 12 (a) small triads and Fig. 12 (d) large triads at the lowest turbulence intensity) indicates that \(\mathbf{p}\) is strongly aligned with the direction of gravity. The symmetric, but not random probability distribution around the equator (\(120^{\circ}\) symmetry) shows the preferential alignment of one arm parallel to the direction of gravity. This preferential alignment can only be seen when the particles are significantly kicked out of their equilibrium orientation, as mentioned before. In fact, for small triads at the lowest turbulence intensity, the probability distribution shows small peaks, indicating opposite alignment with one arm upward.
To characterize the alignment of the normal vector to the plane of the particle relative to gravity, we present the variance of the angle of the triad arms, \(\langle p_{z}^{\prime 2}\rangle=0.5\left(1-\langle\cos^{2}(\theta)\rangle\right)\), as a function of the settling factor \(S_{F}\) in Figure 13. It can be seen that simulations and the experiments with both triad sizes exhibit transitions from a nearly isotropic distribution corresponding to \(\langle p_{z}^{\prime 2}\rangle=0.5\left(1-\langle\cos^{2}(\theta)\rangle\right)=1/3\) at small \(S_{F}\) to a nearly horizontal orientation \(\langle p_{z}^{\prime 2}\rangle=0.5\left(1-\langle\cos^{2}(\theta)\rangle\right)\to 0\) at large \(S_{F}\). The transition occurs over approximately the same range \(S_{F}=0.2\) to \(2\) of settling factors for the two particle sizes and for the theory. This suggests that accounting for the measured rotation rate of the particles in quiescent fluid and using a filtered turbulent velocity gradient based on the particle size in defining \(S_{F}\) captures the gross features of particle alignment successfully. At intermediate
Figure 12: Orientation Probability Distribution Function \(\mathcal{P}(\theta,\psi)\) for small triads at (a) \(S_{F}=1.95\), (b) \(S_{F}=0.18\), and (c) \(S_{F}=0.14\) and large triads at (d) \(S_{F}=1.30\), (e) \(S_{F}=0.25\), and (f) \(S_{F}=0.15\).
Figure 13: Mean-square particle orientation as a function of the settling factor \(S_{F}\). Experimental results for large and small triads are shown as large and small diamonds with dashed lines, respectively, while the simulations are circles. The horizontal line indicates isotropic orientation. Note that the definition of \(S_{F}\) for the experimental results uses the experimentally measured inertial rotation rate of a triad at a \(45^{\circ}\) angle to gravity in a quiescent fluid and the eddy turnover time for eddies of the particle size.
\(S_{F}\) values between about 0.1 and 0.5, the experimental results for the deviation from perfect alignment are generally somewhat lower than the simulation results. One possible cause of this difference is that the simulations use a stochastic turbulent velocity gradient in a Lagrangian reference frame, whereas the velocity gradients seen by particles are decorrelated by particle settling as well as turbulence evolution. For \(S_{F}\gg 1\), the rapid settling theory predicts that \(\langle p_{z}^{\prime 2}\rangle=0.5\left(1-\langle\cos^{2}(\theta)\rangle\right)\) is proportional to \(S_{F}^{-2}\) and the simulations follow this scaling. The steep decline of the experimental orientation variance from \(S_{F}=0.5\) to 2 is consistent with evolution toward this limiting behavior. However, the orientation variance for the highest \(S_{F}\) measurement for the small triads is clearly trending above this limit. This is likely due to imperfections in the particle fabrication resulting in a slight deviation from horizontal alignment that was seen even in the quiescent fluid experiments.
#### 3.3.1 A model for orientation PDF - non-Gaussian effects
The variation of the orientation PDF with the turbulence intensity (figure 10) highlights the non-Gaussian nature of the distributions. Anand et al. [1] calculated the higher moments of the orientation PDFs from their DNS calculations and showed that the orientation PDFs are non-Gaussian due to the non-Gaussian nature of the turbulent velocity gradient. Here we consider a model problem to analyze the non-Gaussian orientation PDFs of anisotropic particles settling in a turbulent flow. In our model problem, we will consider the rapid settling of thin disks in vertical simple shear flows (flow axis aligned
\begin{table}
\begin{tabular}{c c c c c c} \(S_{F}\) & \(\langle\cos^{2}(\theta)\rangle\) & \(\frac{1}{2}(1-\langle\cos^{2}(\theta)\rangle)\) & \(S_{F}\) & \(\langle\cos^{2}(\theta)\rangle\) & \(\frac{1}{2}(1-\langle\cos^{2}(\theta)\rangle)\) \\
1.95 & 0.970 & 0.015 & 1.30 & 0.982 & 0.009 \\
0.66 & 0.933 & 0.034 & 1.17 & 0.981 & 0.010 \\
0.26 & 0.757 & 0.122 & 0.44 & 0.853 & 0.074 \\
0.18 & 0.606 & 0.197 & 0.25 & 0.559 & 0.221 \\
0.15 & 0.569 & 0.216 & 0.22 & 0.534 & 0.233 \\
0.14 & 0.554 & 0.223 & 0.15 & 0.431 & 0.285 \\ \end{tabular}
\end{table}
Table 4: Sedimentation parameters. Settling number defined empirically for triads, \(S_{F}\); \(\langle\cos^{2}(\theta)\rangle\) and \(0.5(1-\langle\cos^{2}(\theta)\rangle)\), orientation variance of \(\mathbf{p}\) and one of the arms \(\mathbf{p}^{\prime}\).
with gravity) as shown in figure 14. We have seen earlier that the orientation dynamics of disks closely resemble that of triads. Our objective here is to obtain the dependence of the equilibrium orientation of the disks on the shear rate, which would then help us derive the orientation PDF of the disks based on the assumed form of PDF for the shear rates.
For a vertical simple shear flow (\(\Gamma_{ij}=\gamma\delta_{3i}\delta_{1j}\)), the evolution equation for orientation (see equation 37) reduces to
\[\dot{\theta}=-\gamma\cos^{2}\theta-\frac{3cW_{3}^{2}}{2\nu}\sin\theta\cos\theta \tag{64}\]
The above system has two fixed points - an unstable one at \(\theta=\pi/2\) and a stable one at \(\theta=-\tan^{-1}\left(2\gamma\nu/(3cW_{3}^{2})\right)\). Thus at equilibrium, we have \(\tan\theta=-1/S_{F}^{\rm SS}\). Motivated by our earlier calculations, we have introduced a settling parameter for a thin disk settling in a vertical simple shear flow, \(S_{F}^{\rm SS}=3cW_{3}^{2}/(2\gamma\nu)\). The relationship between the equilibrium angle and the shear rate allows us to construct the PDF for \(\theta\) or \(p_{z}\) with a known PDF for \(\gamma\).
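A quick numerical check of this fixed-point structure, lumping the prefactor \(3cW_{3}^{2}/(2\nu)\) into a single illustrative constant called `torque` (so that \(S_{F}^{\rm SS}=\texttt{torque}/\gamma\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

def theta_equilibrium(S_F_ss):
    """Stable fixed point of Eq. 64: tan(theta) = -1/S_F^SS."""
    return -np.arctan(1.0 / S_F_ss)

def settle_in_shear(gamma, torque=1.0, theta0=1.0, t_end=100.0):
    """Integrate Eq. 64 from theta0; the trajectory should approach
    theta_equilibrium(torque / gamma)."""
    rhs = lambda t, th: [-gamma * np.cos(th[0]) ** 2
                         - torque * np.sin(th[0]) * np.cos(th[0])]
    return solve_ivp(rhs, (0.0, t_end), [theta0]).y[0, -1]

# e.g. settle_in_shear(0.2) -> approx -0.197 = theta_equilibrium(5.0)
```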
In our earlier calculation of the Lagrangian model for velocity gradient, we used the Girimaji and Pope [19] model. Their model incorporates the log-normal distribution for pseudodissipation. To construct a PDF for the shear
Figure 14: A thin disk settling in a vertical shear flow
rate, \(P_{\gamma}\), we propose that \(\gamma=h\bar{\phi}^{1/2}\), where \(h\) is a normalized shear rate that obeys a Gaussian distribution with unit variance (\(P_{h}(h)\)) and \(\bar{\phi}\) is proportional to the pseudodissipation and is lognormally distributed. The variance of the logarithm of \(\bar{\phi}\), \(\sigma_{\log\bar{\phi}}^{2}\), depends on \(Re_{\lambda}\) and is obtained from expressions provided by Koch and Pope [35]. Using the known PDFs, \(P_{h}\) and \(P_{\bar{\phi}}\), we obtain
\[P_{\gamma}\left(|\gamma|\right)=4k|\gamma|\exp\left[-\frac{2(\log|\gamma|)^{2}}{\sigma_{\log\bar{\phi}}}\right]\int_{0}^{\infty}P_{h}(h)\exp\left[-\frac{2(\log|h|)^{2}}{\sigma_{\log\bar{\phi}}}\right]h^{\frac{4\log|\gamma|}{\sigma_{\log\bar{\phi}}}-2}dh, \tag{65}\]
where \(k=e^{-\sigma_{\log\bar{\phi}}/2}/\sqrt{2\pi\sigma_{\log\bar{\phi}}}\). We next use the relation between the equilibrium angle and shear rate to obtain the PDF \(P(p_{z})\) from \(P_{\gamma}\). To compare this model for the orientation distribution with experiments, we choose a value \(S_{F}^{\rm SS}\) of the settling parameter that gives the same orientational variance as the experimental measurements
\[\langle\cos^{2}\theta\rangle_{\rm model}=\langle\cos^{2}\theta \rangle_{\rm expts.}. \tag{66}\]
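The model PDF can equivalently be sampled by Monte Carlo, which is a convenient route to enforcing Eq. 66. The sketch below assumes a unit-median \(\bar{\phi}\) (a normalization the text leaves implicit) and applies the equilibrium condition per sample as \(\tan\theta=-\gamma/S_{F}^{\rm SS}\), with \(\gamma\) the normalized shear rate.

```python
import numpy as np

def model_pz_samples(S_F_ss, sigma2_log_phi, n=200_000, seed=0):
    """Sample p_z from the model: gamma = h * phi^(1/2), h ~ N(0, 1),
    log(phi) ~ N(0, sigma2); each sample is mapped to its equilibrium
    tilt tan(theta) = -gamma / S_F^SS and p_z = cos(theta) is returned."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(n)
    phi = np.exp(rng.normal(0.0, np.sqrt(sigma2_log_phi), n))
    gamma = h * np.sqrt(phi)
    theta = np.arctan(np.abs(gamma) / S_F_ss)
    return np.cos(theta)

def match_variance(target_cos2, sigma2_log_phi):
    """Bisect on S_F^SS so that <cos^2 theta> matches experiments (Eq. 66);
    <cos^2 theta> increases monotonically with S_F^SS."""
    lo, hi = 1e-2, 1e2
    for _ in range(40):
        mid = np.sqrt(lo * hi)
        if np.mean(model_pz_samples(mid, sigma2_log_phi) ** 2) < target_cos2:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```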
In figure 15, we show comparisons of PDF for \(p_{z}\) from our model problem with that obtained from experiments. The orientation PDFs obtained from the model problem compare well with those from experiments, with the comparisons getting better for higher \(S_{F}^{\rm SS}\) scenarios. We also compare the higher moments, \(\langle(1-p_{3}^{2})^{2}\rangle\), and as visible in figure 15, they agree reasonably with those computed from the experimental data. The model also allows exploration of the dependence of the orientation PDF on \(Re_{\lambda}\), which modulates the non-Gaussian nature of the velocity gradient statistics. The rapid settling analysis outlined above will also allow for the evaluation of particle orientation tensors, \(p_{i}p_{j}\) and \(p_{i}p_{j}p_{k}p_{l}\), that would be required to find the stresslet. Although the particle orientation is highly correlated with the local flow, the orientational moments obtained here are averaged over all flows. The particle orientation tensors can be written as,
\[\langle p_{i}p_{j}\rangle=\delta_{ij}+\lambda\left(\delta_{i3}\delta_{j3}-\frac{1}{3}\delta_{ij}\right), \tag{67}\]
\[\langle p_{i}p_{j}p_{k}p_{l}\rangle=\frac{1}{5}\left(1-\bar{\lambda}-\frac{\lambda}{3}\right)\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+\left(\lambda-7\bar{\lambda}\right)\delta_{i3}\delta_{j3}\delta_{k3}\delta_{l3}+\bar{\lambda}\left(\delta_{i3}\delta_{j3}\delta_{kl}+\delta_{i3}\delta_{k3}\delta_{jl}+\delta_{i3}\delta_{l3}\delta_{jk}+\delta_{j3}\delta_{k3}\delta_{il}+\delta_{j3}\delta_{l3}\delta_{ik}+\delta_{k3}\delta_{l3}\delta_{ij}\right), \tag{68}\]
Figure 15: Comparison of PDFs \(P(p_{z})\) between experiments and model for small triads. Red symbols and line correspond to \(S_{F}=0.66\) and \(S_{F}^{\rm SS}=6.76\) respectively. Blue symbols and line correspond to \(S_{F}=0.26\) and \(S_{F}^{\rm SS}=2.59\) respectively. \(Re_{\lambda}\) is the same in both experiments and model - \(Re_{\lambda}=(91,141)\). Comparison of higher moment - From the experiments \(\langle(1-p_{3}^{2})^{2}\rangle\approx(0.0119,0.1593)\) and from the model \(\langle(1-p_{3}^{2})^{2}\rangle=(0.0159,0.1215)\).
Figure 16: Variation of the orientation moments calculated from the model \(P(p_{z})\) compared with their large \(S_{F}^{\rm SS}\) asymptotic forms (equation 71). The inset of figure (b) shows the variation of the kurtosis with \(Re_{\lambda}\) in the \(S_{F}^{\rm SS}\gg 1\) limit.
where,
\[\lambda = \frac{3}{2}\left(\langle p_{3}^{2}\rangle-1\right)=\langle\mathrm{P }_{2}\left(p_{3}\right)\rangle-1, \tag{69}\] \[\bar{\lambda} = \frac{6\langle p_{3}^{2}\rangle-5\langle p_{3}^{4}\rangle-3}{8}= \frac{\langle\mathrm{P}_{2}\left(p_{3}\right)\rangle-\langle\mathrm{P}_{4} \left(p_{3}\right)\rangle}{7}-\frac{1}{4}, \tag{70}\]
where \(\mathrm{P}_{n}\left(p_{3}\right)\) is the Legendre polynomial of order \(n\). Rapid settling theory developed in the current study provides us with the expression for \(\langle p_{3}^{2}\rangle\) (equation 42). \(\langle p_{3}^{4}\rangle\) requires information regarding the fourth moment of the turbulent velocity gradient tensor. The fourth moment, \(\langle\Gamma_{ij}\Gamma_{kl}\Gamma_{mn}\Gamma_{pq}\rangle\), is a measure of the non-Gaussian statistics. It involves 105 isotropic tensors constrained by four invariants [62] that are obtained from DNS or experiments (see Fang et al. [17] for further details). \(\langle p_{3}^{2}\rangle\) and \(\langle p_{3}^{4}\rangle\) can be obtained from the experiments. Table 4 lists the values of \(\langle p_{3}^{2}\rangle\) from the current experiments. The model \(P(p_{z})\) developed in this section (from equation 65) also allows us to calculate the values of the orientation moments and the scalar constants, \(\lambda\) and \(\bar{\lambda}\), for the particle orientation tensor. Besides evaluating the moments for arbitrary values of \(S_{F}^{\mathrm{SS}}\) we can also find the asymptotic forms in the rapid settling limit,
\[\langle(1-p_{3}^{2})^{n}\rangle\sim\frac{\mathcal{C}_{n}(Re_{\lambda})}{(S_{F} ^{\mathrm{SS}})^{2n}}, \tag{71}\]
where \(\mathcal{C}_{n}(Re_{\lambda})=\int_{0}^{\infty}\gamma^{2n}P_{\gamma}\left( \gamma\right)d\gamma\) is a constant that can be found numerically. Figure 16 shows the variation of these statistical quantities with the settling parameter (\(S_{F}^{\mathrm{SS}}\)) for two values of \(Re_{\lambda}\), compared with the large \(S_{F}^{\mathrm{SS}}\) asymptotes. In the \(S_{F}^{\mathrm{SS}}\gg 1\) limit, we can also calculate the kurtosis
\[\lim_{S_{F}^{\mathrm{SS}}\rightarrow\infty}\frac{\langle(1-p_{3}^{2})^{4} \rangle}{\langle(1-p_{3}^{2})^{2}\rangle^{2}}=\frac{\mathcal{C}_{4}(Re_{ \lambda})}{\mathcal{C}_{2}(Re_{\lambda})^{2}} \tag{72}\]
to characterize the degree of non-Gaussianity in the PDF. From the inset of figure 16(b), we can observe that the kurtosis monotonically increases with \(Re_{\lambda}\). Thus, the model PDF that we have developed in this section, based on orientation dynamics in an ensemble of simple shear flows, captures the non-Gaussian features observed in the experiments and appears promising in exploring the suspension rheology of heavy anisotropic particles in turbulence.
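Under the same unit-median assumption for \(\bar{\phi}\), the constants \(\mathcal{C}_{n}=\langle\gamma^{2n}\rangle\) even have a closed form, since \(\langle h^{2n}\rangle=(2n-1)!!\) for a unit Gaussian and \(\langle\bar{\phi}^{n}\rangle=e^{n^{2}\sigma_{\log\bar{\phi}}^{2}/2}\) for a lognormal. A short check of Eq. 72:

```python
import numpy as np
from scipy.special import factorial2

def C_n(n, sigma2_log_phi):
    """C_n = <gamma^(2n)> = <h^(2n)> <phi^n> for gamma = h * sqrt(phi)."""
    return factorial2(2 * n - 1) * np.exp(n ** 2 * sigma2_log_phi / 2.0)

def kurtosis_limit(sigma2_log_phi):
    """Eq. 72: C_4 / C_2^2 = (105/9) * exp(4 * sigma2), which grows
    monotonically with sigma2 and hence with Re_lambda."""
    return C_n(4, sigma2_log_phi) / C_n(2, sigma2_log_phi) ** 2
```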
## 4 Conclusions
The model and the results presented in this paper are relevant in several engineering and environmental scenarios involving sedimenting anisotropic particles in turbulent flows. The dynamics of plankton in the oceans and marine snow [14], fiber suspensions [41], pollen dispersion and particle or cell aggregates in stirred tank reactors [15; 33] often involve competition between the effects of gravitational settling and turbulence. The current study focuses on the orientation dynamics of settling heavy particles in homogeneous isotropic turbulence, accounting for fluid inertia due to sedimentation and extending beyond single fiber models to more complex ramified particles. The particles are assumed to be small enough that angular acceleration is negligible. This scenario is significant in atmospheric research when studying the orientation distribution of ice crystals in cold cirrus clouds (\(T=-25^{\circ}\)C to \(-50^{\circ}\)C). Heymsfield and Iaquinta [25] present an elaborate bullet rosette model (similar to ramified particles) for different ice crystal shapes with maximum lengths ranging from \(L=100\)\(\mu\)m to \(L=1\) mm. The corresponding terminal velocities for ice crystals with aspect ratio \(\kappa=20\) range from 2 cm/s (\(L=100\)\(\mu\)m) to 80 cm/s (\(L=1\) mm). This results in a particle Reynolds number of \(\mbox{{Re}}_{\ell}=0.09\) up to \(\mbox{{Re}}_{\ell}=35\).
Estimates for turbulence intensities and energy dissipation rates for warm cumulus clouds (\(T=0^{\circ}\)C to \(10^{\circ}\)C) can be found in Siebert et al. [59] and Siebert et al. [60], with mean energy dissipation rates ranging from \(\langle\epsilon\rangle\sim 10^{-3}\) m\({}^{2}\)/s\({}^{3}\) to \(\sim 10^{-2}\) m\({}^{2}\)/s\({}^{3}\), respectively, and with a fluid viscosity of about \(\nu=1.4\times 10^{-5}\) m\({}^{2}\)/s. This yields Kolmogorov lengths of \(\eta=1.1\) mm and \(\eta=0.4\) mm, which puts the ice crystals in the small particle limit, even with \(L=2.5\eta\) (Parsa and Voth [53]). The corresponding Kolmogorov velocities are \(u_{\eta}=13\) mm/s and \(u_{\eta}=32\) mm/s.
The ratio of ice crystal terminal velocities and Kolmogorov velocities enables us to estimate the range of settling parameters \(S_{F}\) for these atmospheric conditions using Eq. 12. For the smallest ice crystal size, \(S_{F}=0.4\) (Siebert et al. [59]) and \(S_{F}=0.06\) (Siebert et al. [60]), and one could expect no strong preferential alignment. However, the turbulence statistics are taken from warm cumulus clouds, which are known to be more turbulent than cirrus clouds. The settling parameter increases with particle size (terminal velocity) to about \(S_{F}\approx 600\) (Siebert et al. [59]) and \(S_{F}\approx 100\) (Siebert et al. [60]) for the largest ice crystals. For all sizes except the smallest ice crystals, \(S_{F}\gg 1\) and we expect a strong alignment of ice crystals with their preferential sedimentation orientation. For large ice crystals, \(S_{F}\) will be very large, and the orientation distributions are most likely affected by particle asymmetries that prevent perfect alignment rather than turbulence.
## Acknowledgments
This work was supported by the Army Research Office under grant W911NF-15-1-0205. A.R. would like to acknowledge the support from Laboratory for Atmospheric and Climate Sciences, Indian Institute of Technology Madras.
|
2309.02273 | Computing Hive Plots: A Combinatorial Framework | Hive plots are a graph visualization style placing vertices on a set of
radial axes emanating from a common center and drawing edges as smooth curves
connecting their respective endpoints. In previous work on hive plots,
assignment to an axis and vertex positions on each axis were determined based
on selected vertex attributes and the order of axes was prespecified. Here, we
present a new framework focusing on combinatorial aspects of these drawings to
extend the original hive plot idea and optimize visual properties such as the
total edge length and the number of edge crossings in the resulting hive plots.
Our framework comprises three steps: (1) partition the vertices into multiple
groups, each corresponding to an axis of the hive plot; (2) optimize the cyclic
axis order to bring more strongly connected groups near each other; (3)
optimize the vertex ordering on each axis to minimize edge crossings. Each of
the three steps is related to a well-studied, but NP-complete computational
problem. We combine and adapt suitable algorithmic approaches, implement them
as an instantiation of our framework and show in a case study how it can be
applied in a practical setting. Furthermore, we conduct computational
experiments to gain further insights regarding algorithmic choices of the
framework. The code of the implementation and a prototype web application can
be found on OSF. | Martin Nöllenburg, Markus Wallinger | 2023-09-05T14:37:59Z | http://arxiv.org/abs/2309.02273v1 | # Computing Hive Plots: A Combinatorial Framework
###### Abstract
Hive plots are a graph visualization style placing vertices on a set of radial axes emanating from a common center and drawing edges as smooth curves connecting their respective endpoints. In previous work on hive plots, assignment to an axis and vertex positions on each axis were determined based on selected vertex attributes and the order of axes was prespecified. Here, we present a new framework focusing on combinatorial aspects of these drawings to extend the original hive plot idea and optimize visual properties such as the total edge length and the number of edge crossings in the resulting hive plots. Our framework comprises three steps: (1) partition the vertices into multiple groups, each corresponding to an axis of the hive plot; (2) optimize the cyclic axis order to bring more strongly connected groups near each other; (3) optimize the vertex ordering on each axis to minimize edge crossings. Each of the three steps is related to a well-studied, but NP-complete computational problem. We combine and adapt suitable algorithmic approaches, implement them as an instantiation of our framework and show in a case study how it can be applied in a practical setting. Furthermore, we conduct computational experiments to gain further insights regarding algorithmic choices of the framework. The code of the implementation and a prototype web application can be found on OSF1.
Footnote 1: [https://osf.io/6zqx9/](https://osf.io/6zqx9/) (10.17605/OSF.IO/6ZQX9)
Keywords: hive plots, graph clustering, circular arrangement, layered crossing minimization.
## 1 Introduction
Hive plots [16] are a visualization style for network data, where vertices of a graph are mapped to positions on a predefined number of radially emanating axes. Mapping and positioning is usually done based on vertex attributes and not with the intention to optimize layout aesthetics. Due to this strict, rule-based definition, hive plots are a deterministic network visualization style; see Fig. 1 for an example. Similarly to parallel coordinate plots [27], the idea behind hive plots is to quantitatively understand and compare network structures, a task that can quickly get very difficult with force-based layouts due to the 'hairball' effect for large and dense graphs and their often unpredictable behaviour when optimizing for conflicting aesthetic criteria.
Usually, edges are drawn as Bezier curves connecting their respective end points while being restricted to three axes to avoid the problem of routing longer edges around axes; this is considered beneficial for visual clarity. In case of edges between vertices on the same axis, the axis and its associated vertices are cloned and positioned closely such that edges are either drawn twice (symmetrically) or once (asymmetrically). The latter case reduces visual complexity but increases ambiguity as an edge is only explicitly connected to one copy of each vertex; see Fig. 2 for a sketch of the different concepts.
Multiple hive plots can also be arranged in a matrix, called a hive panel [16], where columns and rows represent different axis mapping functions. Differential hive plots visualize networks changing over time [17]. Since their inception a decade ago, hive plots have been utilized in various applications and use-cases, e.g., cyber security [15], machine learning of visual patterns [24], life sciences [23], biological data [29], or sports data [22]. Although various use-cases exist, hive plots have not yet been investigated from a formal graph drawing perspective.
This is a rather unexpected observation, especially as hive plots have some inherent properties that make them an interesting layout style. For example, by placing vertices on axes the layout is predictable and usually has a good aspect ratio. Similarly, edges can be routed in a predictable manner. Thus, edges overlapping unrelated vertices are not an issue in hive plot layouts, which increases the overall faithfulness of the drawing. Furthermore, it is also relatively straightforward to position labels and avoid label-edge and label-vertex overlaps. Lastly,
Figure 1: A hive plot created with jhive [16]. Vertices are mapped to axis according to their degree. The position on each axis is determined by vertex attributes such as degree (axis \(a_{1}\)), vertex betweenness (axis \(a_{2}\)), and reachability (axis \(a_{3}\)).
edges between vertices on the same axis can be hidden or shown on demand, thus reducing unnecessary information and decreasing the cognitive load.
Contributions and Related Work.In this paper, we present a formal model of hive plots and identify their associated computational optimization problems from a combinatorial point of view. Based on this model, we investigate several unused degrees of freedom that can be utilized for optimizing hive plot layouts for arbitrary undirected graphs.
First, in our investigation we take a new angle on assigning vertices to axes. Basically, the idea is to partition the graph into some number \(k\) of densely connected clusters, where each cluster is assigned to exactly one axis. In terms of visual design this allows us to show or hide intra-cluster edges on demand and focus on representing the sparse connectivity between clusters. We find such clusters by applying techniques from the area of community detection in networks [11]. Even though a similar assignment strategy is presented in the original hive plot publication [16], the focus there is on visually clustering vertices according to their community membership and assigning vertex clusters to segments on subdivided axes.
Second, we are free to assign any cyclic order over the \(k\) different axes. Here we optimize the total length of inter-axis edges by placing axes with many edges between them close to each other. This is essentially the circular arrangement problem. In the circular arrangement problem vertices are positioned evenly spaced on a circle such that the total weighted length of edges is minimized. Finding the minimum circular arrangement of undirected and directed graphs is NP-complete [12, 18]. However, a polynomial-time \(O(\log n)\)-approximation for undirected graphs exists [18]. Similarly, the problem of minimizing the crossings in a circular arrangement of a graph is NP-complete [19]. The concept of circular arrangements has been applied to circular drawings [13] where a subset of edges is drawn outside of the circle to reduce edge crossings.
Lastly, once the order of axes is fixed we want to minimize the number of inter-axis edge crossings. Here, the problem is similar to multi-layer crossing minimization which has been studied in the context of the Sugiyama framework [25, 26] for hierarchical level drawings of directed graphs. In this type of drawing vertices are assigned to horizontal layers with edges either drawn in upward or downward direction. In case of cycles in the graph, some edges need to be reversed in the drawing. Cyclic level drawings have already been mentioned by Sugiyama et al. as an alternative to reversing edges, and they have been thoroughly investigated in more recent years [1, 2, 3, 13].
Crossing minimization in cyclic level drawings and layered drawings is done by repeatedly performing a 2-layer crossing minimization step. The 2-layer crossing minimization problem is already NP-hard [8], even if one layer is fixed. Heuristics [9, 21] have been proposed and adapted for cyclic level drawings [1]. We adapt the barycenter algorithm [9] to efficiently reduce the number of crossings while adding novel constraints to force long edges to not cross over axes. Adding constraints to 2-layer crossing minimization heuristics has been applied previously, e.g., for fixing the relative positions of a subset of vertex pairs [10].
We combined algorithms for the three above-mentioned problems and implemented them as a 3-step framework. Finally, we show in a case study how hive plots generated by our framework can be applied in the practical context of co-authorship networks, and we conduct a small-scale computational experiment on the crossing minimization aspect of our framework.
## 2 Formal Model
A _hive plot layout_ \(H(G)=(A,\alpha,\phi,\Pi)\) of a graph \(G=(V,E)\) is a tuple consisting of a set \(A=\{a_{0},\ldots,a_{k-1}\}\) of \(k\) axes, a surjective function \(\alpha:V\to A\) mapping vertices to axes, a bijective function \(\phi:A\rightarrow\{0,\ldots,|A|-1\}\) representing a cyclic ordering of axes, and a set \(\Pi=\{\pi_{0},\ldots,\pi_{|A|-1}\}\) of orderings over the vertices assigned to each axis. Each vertex is assigned to exactly one axis \(a_{i}\in A\), imposing a disjoint grouping \(V_{i}:=\alpha^{-1}(a_{i})\) such that \(V_{i}\cap V_{j}=\emptyset\) for each \(i\neq j\) with \(i,j\in\{0,\ldots,|A|-1\}\). Each \(\pi_{i}\) is a bijective function \(\pi_{i}:V_{i}\rightarrow\{0,\ldots,|V_{i}|-1\}\).
We use the shorthand notation \(\phi(u)=\phi(\alpha(u))\) whenever the order of the axis \(\alpha(u)\) of a vertex \(u\) is needed. The _span_ of two axes \(a_{i},a_{j}\) or two vertices \(u,v\) is defined as \(\mathrm{span}(a_{i},a_{j})=\min\{\phi(a_{i})-\phi(a_{j})\pmod{k},\phi(a_{j})-\phi(a_{i})\pmod{k}\}\) or \(\mathrm{span}(u,v)=\mathrm{span}(\alpha(u),\alpha(v))\). Based on the span we can classify edges into three different categories. An edge \(e=(u,v)\) is called _proper_ if \(\mathrm{span}(u,v)=1\). Otherwise, an edge is considered _long_ if \(\mathrm{span}(u,v)>1\) or _intra-axis_ if \(\mathrm{span}(u,v)=0\). _Inter-axis_ edges are all edges that are either proper or long. A long edge \((u,v)\) can be subdivided and replaced by \(\mathrm{span}(u,v)-1\) dummy vertices assigned to the appropriate axes between \(\alpha(u)\) and \(\alpha(v)\). A long edge in a hive plot layout needs to _bypass_ axes to connect source and target vertices without creating axis-edge overlaps. Combinatorially this can be realized by enforcing dummy vertices to appear at certain positions in each axis order. In our model, a hive plot layout can have up to \(g\) gaps per axis, see Fig. 6 for examples. If \(g=1\) then all dummy vertices have to be at the end of each order. If \(g=2\) then dummy vertices have to be at either the beginning or end of each order. In cases where \(g>2\) all dummy vertices form a partition into up to \(g\) groups, where they must appear consecutively within each group.

Figure 2: Schematized hive plot with three axes showing different concepts. Axis \(a_{2}\) is collapsed. Axes \(a_{1}\) and \(a_{3}\) are expanded with edges in \(a_{1}\) being drawn symmetrically. A long edge between \(v_{2}\) and \(v_{8}\) is bypassing \(a_{2}\).
To consider intra-axis edges during optimization an adaptation is necessary. Basically, all axes and their associated vertices are duplicated such that each axis \(a_{i}\) has two copies \(a_{i}^{+}\) and \(a_{i}^{-}\) with corresponding vertex sets \(V_{i}^{+}\) and \(V_{i}^{-}\); see Fig. 2. The vertex order on duplicate axes remains the same, i.e., \(\pi_{i}^{+}=\pi_{i}^{-}=\pi_{i}\).
We consider two inter-axis edges \((u,v)\) and \((x,y)\) to be _crossing_ if \(u,x\in V_{i}\) and \(v,y\in V_{j}\) such that \(\pi_{i}(u)<\pi_{i}(x)\) and \(\pi_{j}(y)<\pi_{j}(v)\). Similarly, if the end points of two long edges \((u,v)\) and \((x,y)\) are on four different axes such that \(\phi(u)<\phi(x)<\phi(v)<\phi(y)\pmod{k}\), or on three different axes such that, w.l.o.g., \(\phi(u)=\phi(x)=i\), \(\pi_{i}(u)<\pi_{i}(x)\), and \(\phi(x)<\phi(y)<\phi(v)\pmod{k}\), a crossing is unavoidable. The _neighborhood_ of a vertex \(u\) is defined as \(N(u)=\{v\mid(u,v)\in E,\operatorname{span}(u,v)=1\}\).
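These definitions translate directly into code. The following is a minimal Python sketch of the span computation and the inter-axis crossing test; the function and argument names are illustrative and not taken from the implementation on OSF.

```python
# Minimal sketch of the span and crossing definitions above.
def span(phi_u: int, phi_v: int, k: int) -> int:
    """Cyclic distance between the axis positions phi(u) and phi(v) over k axes."""
    d = (phi_u - phi_v) % k
    return min(d, k - d)

def edges_cross(pi_i_u: int, pi_i_x: int, pi_j_v: int, pi_j_y: int) -> bool:
    """Inter-axis edges (u, v) and (x, y), with u, x on axis i and v, y on
    axis j, cross iff the endpoint orders are inverted between the two axes."""
    return pi_i_u < pi_i_x and pi_j_y < pi_j_v
```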
## 3 Framework for Computing Hive Plots
Next we present our framework for creating a hive plot layout \(H(G)=(A,\alpha,\phi,\Pi)\) of an undirected simple graph \(G=(V,E)\). The framework itself is modeled as a pipeline consisting of three stages. In stage (1) we partition the vertices into multiple groups, each corresponding to an axis of the hive plot. Next, we (2) optimize the cyclic axis order to bring strongly connected groups near each other. Finally, we (3) optimize the vertex ordering on each axis to minimize edge crossings. Furthermore, edge crossing minimization is performed under the constraint that long edges need to be routed through gaps in the axes.
### Vertex Partitioning
In the first stage we partition the vertex set \(V\) into subsets \(\{V_{0},\ldots,V_{k-1}\}\) such that each subset maps to exactly one axis \(a_{i}\) in the hive plot. The core idea is that the subsets of the partition represent dense induced subgraphs. In our implementation we use three different strategies to compute a partition. First, if we consider the parameter \(k\) as an additional input we use the Clauset-Newman-Moore greedy modularity maximization [5] to compute a partition of size \(k\). Second, if \(k\) is not specified we apply the Louvain [4] community detection algorithm instead. Here, the size of the partition is determined by how many communities are detected. Lastly, this step of the framework is not necessary if a partition is given in the input. Note that any other algorithm to partition the graph into meaningful groups can be used.
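As a rough sketch, both built-in strategies are available in networkx; the keyword arguments below (notably `cutoff`/`best_n` for the greedy variant and `louvain_communities`) assume a recent networkx version and are meant as an illustration rather than our production code.

```python
import networkx as nx
from networkx.algorithms import community

def partition_vertices(G: nx.Graph, k: int | None = None):
    """Stage 1: partition V into one vertex set per axis (sketch)."""
    if k is not None:
        # Clauset-Newman-Moore greedy modularity, forced to exactly k groups.
        parts = community.greedy_modularity_communities(G, cutoff=k, best_n=k)
    else:
        # Louvain: the number of axes follows from the detected communities.
        parts = community.louvain_communities(G, seed=0)
    return [set(p) for p in parts]
```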
### Axis Ordering
The second stage orders the axes such that the total _span_ of edges is minimized. Our approach assumes that edges are always drawn along the shortest path around the circle between endpoints. Basically, we want to maximize the number of proper edges while minimizing the number and length of long edges. We do not consider the individual position of vertices on their respective axes, but rather look at the aggregated edges incident to the subsets of the axis partition.
Let \(w_{ij}\) be the number of edges between \(V_{i}\) and \(V_{j}\) for \(i<j\). The cost function of an axis order \(\phi\) is defined as follows:
\[\text{cost}(\phi)=\sum_{i=0}^{k-1}\sum_{j=i+1}^{k-1}w_{ij}\;\;\text{span}(a_{i},a_{j})\]
We can afford using an exact brute-force approach for instances with \(k\leq 8\) to minimize \(\text{cost}(\phi)\), as it takes less than 0.5s on our reference machine (see Section 6); otherwise we use simulated annealing.
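A compact sketch of the exact variant is given below; it fixes one axis to factor out rotations of the circle, and all names are illustrative.

```python
from itertools import permutations

def arrangement_cost(order, w):
    """cost(phi) for a given cyclic order of axes; w is the k x k matrix
    of edge counts w_ij between the vertex sets V_i and V_j."""
    k = len(order)
    pos = {a: p for p, a in enumerate(order)}
    total = 0
    for i in range(k):
        for j in range(i + 1, k):
            d = (pos[i] - pos[j]) % k
            total += w[i][j] * min(d, k - d)  # w_ij * span(a_i, a_j)
    return total

def best_axis_order(w):
    """Exact brute force over (k-1)! cyclic orders, feasible for k <= 8."""
    k = len(w)
    return min(((0, *p) for p in permutations(range(1, k))),
               key=lambda order: arrangement_cost(order, w))
```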
### Crossing Minimization
In the third stage of our framework we are concerned with minimizing edge crossings under the assumption that assignment to axes and the cyclic axis order are already fixed. On each axis, the vertices are initially in random order. Here, we employ a two-step approach, where first crossings of long edges and then intra-axis edge crossings are minimized. Additionally, we assume that a global parameter \(g\geq 1\), which represents the maximal number of gaps per axis, is given. If \(g=1\) we assume that edges are routed on the outside. In case of \(g=2\) we assume that gaps are on the outside and inside of each axis. If \(g>2\) we evenly distribute the gaps along each axis.
First, we process all long edges to turn them into sequences of proper edges. Each edge \(e=(u,v)\) with \(\text{span}(u,v)>1\) is subdivided by inserting \(\text{span}(u,v)-1\) dummy vertices assigned to the appropriate axes between \(\alpha(u)\) and \(\alpha(v)\). Vertices that are not an endpoint of at least one inter-axis edge are effectively isolated in this step and can be ignored in the first phase of the crossing minimization.
Next, we use the barycenter heuristic [26] for crossing minimization. Our approach works by iterating several times in clockwise or counter-clockwise order over all axes while performing a layer-by-layer crossing minimization sweep. At each iteration we process all vertices of an axis by computing a new barycenter position as follows:
\[\text{pos}(u)=\frac{1}{|N(u)|}\sum_{v\in N(u)}\frac{\pi_{\alpha(v)}(v)}{|\pi_{ \alpha(v)}|}.\]
To avoid cases where axes of different sizes are imbalanced, we normalize positions by the axis size and consider the neighbourhood \(N(u)\) of vertex \(u\) on both the next and the previous axis. The reason for considering both axes is that considering only the previous axis may introduce crossings that are worse overall when viewed from the reverse direction.
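A minimal sketch of this barycenter step is shown below, assuming per-axis orders are stored in dictionaries; dummy-vertex and gap handling (described next) is omitted, and all names are illustrative.

```python
def barycenter_positions(axis_vertices, neighbors, pi, axis_size):
    """Normalized barycenter pos(u) over N(u) on the previous and next axes.
    pi[v] is the position of v on its axis; axis_size[v] is |pi| there."""
    pos = {}
    for u in axis_vertices:
        nbrs = neighbors(u)  # N(u): neighbours v with span(u, v) == 1
        if nbrs:
            pos[u] = sum(pi[v] / axis_size[v] for v in nbrs) / len(nbrs)
        else:
            pos[u] = pi[u] / axis_size[u]  # no inter-axis neighbours: stay put
    return pos
```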
Once barycenter positions are calculated we sort all vertices \(v\in V_{i}\) of axis \(a_{i}\) by their positions \(\text{pos}(v)\). Now, to consider gaps we have to apply a case distinction on \(g\). If \(g=1\) we simply want dummy vertices on the outside to route the long edges around axes. We constrain the sorting algorithm to put all normal vertices before all dummy vertices. For \(g>1\) we perform the following procedure. We create a list of \(g\) empty lists representing the gaps and a list of \(g-1\) empty lists representing the segments of the axis between gaps. We initialize a counter \(j=0\) that represents the index of the current list. Next, we iteratively process vertices according to the previously computed order and distinguish between normal and dummy vertices. Whenever we encounter a normal vertex we append it to the list of axis segments at position \(j\). If the list contains more than \(\frac{|V_{i}|}{g-1}\) vertices we increase \(j\) by one. Here, \(|V_{i}|\) represents the number of normal vertices on axis \(a_{i}\). The main idea behind this is that normal vertices are evenly assigned to axis segments to increase symmetry. If we encounter a dummy vertex we have to decide whether to assign it to the gap to the left or the right of the current axis segment. The decision can be made by looking at all vertices in the same axis segment that are to the left and computing the crossings induced if we put the dummy vertex in the gap to the left. Similarly, we repeat the process for the right-hand side and choose the gap which induces fewer crossings, appending the dummy vertex to the list at index \(j\) or \(j+1\) accordingly. Figure 3 illustrates how the gap assignment works. Finally, we assemble a new list by adding all vertices, alternating between gap and axis-segment lists. The new position of a vertex is determined by its respective index in the list.
We terminate the overall process after either no change is detected for one cycle or some iteration threshold is reached.
In the second phase of the crossing minimization we aim to further reduce intra-axis crossings by applying the barycenter heuristic. As the focus of the layout is on edges between different axes, we introduce the additional constraint that the relative order of vertices incident to inter-axis edges is not allowed to change in this phase any more. Basically, this is again a classic 2-layer crossing minimization for a vertex subset, however both layers have the same order. Moreover, we apply the same procedure as described above to constrain dummy vertices to gap positions. We process each axis individually and terminate processing of an axis once no change is detected or the iteration threshold is reached.

Figure 3: An axis with \(g=3\) gaps. Gaps are indicated by dashed lines while axis segments are solid lines. Vertices colored red are dummy vertices. After computing new positions we move dummy vertices into either a gap to the left or right. The side is determined by counting the crossings and greedily picking the better option. The order of dummy vertices remains unchanged after the procedure.
## 4 Implementation and Hive Plot Rendering
In this section we will briefly explain the design decisions regarding the visualization; see Fig. 4 or Fig. 8 for examples. The implementation code and a prototype web application can be found on OSF. Axes are drawn as straight lines emanating from a common center with equal angular distribution. Axes can be expanded or collapsed on demand.
When an axis is expanded the background is colored in a light grey color with low opacity. When expanded, the available space is distributed \(40:60\) between intra-axis and inter-axis edges. Vertices are drawn as small circles and their positions on their respective axis \(a_{i}\) are computed based on their position in \(\pi_{i}\). Labels are placed horizontally next to an axis in clockwise direction. If a vertex's assigned axis differs by at most \(\pm 25^{\circ}\) from the horizontal reference direction, the label is rotated by \(45^{\circ}\). Edges are drawn as Bézier curves. For edges between neighbouring axes control points are set perpendicular to the axes. If long edges are routed around axes, the positions of their dummy vertices are converted to control points. The color of vertices is computed by mapping the angle of the assigned axis to a radial color map [6]. For edges, we assign the color of the first endpoint in counter-clockwise direction. The ideas behind coloring edges are that it becomes easier to follow individual edges and that it is, for half of the vertices, immediately clear to which axis the edge connects.
Interactivity. Figure 4(d) shows an example of how interactivity was realized in our visualization. First, when hovering a vertex, the vertex itself, all neighbours and incident edges are highlighted by a color contrasting the color scheme. Initially, each axis in the visualization is collapsed. By clicking on a single axis it is expanded to show more details on demand. Furthermore, it is also possible to expand all axes with a button click. Naturally, it is also possible to collapse individual axes again. Similarly, by clicking a button in the interface vertices can be scaled to represent their respective degree; see Fig. 4(b). Lastly, labeling can be toggled on or off.
## 5 Case Study
We evaluate our framework by a case study using the _citation_ dataset [7] from the creative contest at the 2017 Graph Drawing conference. This dataset contains all papers published at GD from 1994 to 2015. We created co-authorship graphs for different years by extracting researchers from papers and connecting them with edges whenever they co-authored a paper.
In Fig. 4 we show a hive plot of the co-author network of 2015 and three alternative renderings computed with our framework in less than 10ms. We used the rendering style described in Section 4. We did not specify the number of axes \(k\) in the input. We specified the number of gaps as \(g=1\). The network has a total of 75 vertices, which are partitioned into 7 groups. A total of 190 edges is split into 172 intra-axis edges, 12 proper edges and 6 long edges.
Authors mapped to individual axes seem to represent mainly clusters of geographic proximity of researchers' institutions or established close collaborations. Inter-axis edges are emphasized in our hive plots and indicate collaborations between clusters in the form of researchers bridging institutions and forming new connections, e.g., via papers originating from research visits or recent changes in affiliation. Another possible interpretation can be seen when vertices are scaled by degree. Researchers with connections to other axes are often also highly connected inside their own cluster. This could mean that they are well connected and prolific and use existing connections to start new collaborations.
In contrast to a force-based layout, see Fig. 5a, several observations can be made. While cliques are very prominent in the force-based layout, the macro structure of the graph is less clear. The hive plot layout, on the other hand, focuses more on the macro structure of the graph with intra-axis structure only shown on demand. However, with two copies per vertex, cliques are harder to identify. Still, the hive plot layout requires no additional cue, such as color in this case, to highlight the community structure. Furthermore, in the hive plot layout individual vertices are easier to identify, labels are more uniform and edges are routed in a predictable manner similar to schematic diagrams. Due to the possibility of expanding axes on demand in the hive plot layout, individual communities can be easily explored without being overwhelming. While it is possible to represent communities in the force-based layout by meta vertices, it is not straightforward to encode the relationship of each single vertex to the rest of the network. Both layouts show some label-edge overlap. Finally, the hive plot layout has a more balanced space utilization.
Furthermore, we also compared against a hierarchical layout; see Fig. 5b. Here, we assumed edge directions from the hive plot layout by directing edges clockwise. Naturally, the hierarchical layout emphasizes the imposed hierarchy while the communities are dispersed over the layout. Still, the communities are visible, although it is questionable whether they would remain recognizable without the use of coloring. This gives the layer assignment a different meaning than our approach of axis assignment. The orthogonal layout of edges initially simplifies the process of following an edge, but it becomes progressively more challenging with an increase in bends and crossings. In the hive plot layout this is less of an issue, especially for edges between communities. Lastly, the label placement in the hierarchical layout is optimized and avoids label-edge and label-vertex overlaps. However, this optimization comes at the cost of requiring more space. In contrast, the hive plot layout has a few label-edge overlaps but utilizes space more efficiently.
Figure 4: Variations of the co-author graph of GD 2015. In (a) some axes of interest are expanded while (c) shows all axes expanded. In (b) vertices are scaled by degree and all axes are collapsed. In (d) interactive highlighting is obtained by hovering the vertex “M. Nöllenburg”.
Figure 5: Force-based layout (a) of the co-authorship graph of Section 5 created with yEd [28]. The smart organic layout functionality was used. The preferred edge length was set to 100 while the minimal vertex distance was set to 20. The option to avoid vertex/edge overlaps was active with a value of 0.8. Also, the labeling was optimized and the graph colored according to the partitions from the case study. (b) shows a hierarchical layout created with yEd. Similar layout optimization steps were applied.
## 6 Computational Experiments
We conducted a small-scale computational experiment which focused on the implications of using different numbers of gaps while minimizing edge crossings. The datasets and the evaluation code can again be found on OSF.
Dataset and Setup. First, we created a dataset of synthetic graphs that tries to capture variations of input sizes similar to the use-case of visualizing communities in a small to medium size graph. Examples can be seen in Fig. 6. To generate the graphs we used the random partition graph [11] implementation of NetworkX. We varied the number \(n\) of vertices from 60 to 510 with a step size of 30 with a fixed number of six partitions \(\{V_{1},\dots,V_{6}\}\) where each partition was of size \(|V_{i}|=\frac{n}{6}\). Furthermore, we had to specify the probability of edges between vertices of the same and different partitions. For edges between vertices of the same partition we set the probability to \(p_{in}=\frac{6}{|V_{i}|}\), which connects a vertex to six other vertices of the same partition on expected average. For edges between vertices of different partitions we set \(p_{out}=\frac{2}{n-|V_{i}|}\), which connects a vertex to two other vertices outside of its own partition on expected average. For each step size we generated five graphs. In contrast to using real-world data, the above procedure gives us a predictable size of the graph that has a decent number of long and proper edges.
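For reproducibility, a sketch of this generation procedure following the parameters above is given below; the function name is illustrative, while `random_partition_graph` is the NetworkX generator we used.

```python
import networkx as nx

def make_instance(n: int, k: int = 6, seed: int = 0) -> nx.Graph:
    """One synthetic benchmark graph: k equal partitions with expected
    intra-partition degree ~6 and expected inter-partition degree ~2."""
    size = n // k                 # |V_i| = n / 6
    p_in = 6 / size               # P(edge) inside a partition
    p_out = 2 / (n - size)        # P(edge) between partitions
    return nx.random_partition_graph([size] * k, p_in, p_out, seed=seed)

# n varied from 60 to 510 in steps of 30; five graphs per step size.
graphs = [make_instance(n, seed=s) for n in range(60, 511, 30) for s in range(5)]
```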
The implementation to compute a hive plot is written in Python 3.11. All experiments were run on a machine with Ubuntu 22.04, 16GiB of RAM and an Intel i7-9700 CPU at 3.00 GHz.
Experiment. In our experiment we evaluated the effect of varying the number of gaps in the input. We computed a hive plot layout for all graphs in the dataset with fixed \(k=6\) but varied \(g\in\{1,2,3\}\).
Figure 6: One example graph from the synthetic datasets used in the computational experiment. (a) shows one gap on the outside, (b) shows an additional second gap on the inside and (c) allows for an additional third gap in the middle of each axis.
We counted the number of crossings of intra-axis edges and the number of crossings of inter-axis edges. We did not consider crossings of inter-axis edges with intra-axis edges as these can only be observed if \(g\geq 3\). The resulting plots can be seen in Fig. 7a and Fig. 7b. Interestingly, the number of intra-axis crossings does not substantially differ between the three variants. On the other hand, using two or three gaps drastically decreases the number of inter-axis crossings. While a difference between two and three gaps is visible in the plots, it is not as strong as between one and two, or one and three, gaps. Thus, it is questionable whether the slight further reduction in inter-axis edge crossings is worth sacrificing the guarantee, obtained with at most two gaps, of zero crossings between intra-axis and long inter-axis edges.
The runtime plot can be seen in Fig. 7c. The runtime for one gap stays consistently below 0.4s while the runtime for two and three gaps increases much more steeply. Interestingly, the runtime for three gaps is lower than for two gaps. This can be explained by the procedure that moves dummy vertices to gap positions. For two gaps we have to inspect all vertices in the axis to the left and right, while for three gaps we have to inspect at most half of the vertices, as we can stop once we inspect a vertex that is assigned to a different segment of an axis.
## 7 Discussion and Conclusion
We have introduced a framework for drawing hive plots. Our edge routing guarantees that vertices are never occluded by edges, which is generally not the case in frequently used algorithms such as force-based layouts. The focus of our approach lies on showing inter-axis connections, i.e., the long edges, which reduces the visual complexity, but at the same time emphasizes the weak ties in networks, which are highly important in network analysis [14]. Nonetheless, interactivity can be used to show intra-axis edges on demand and thus give a more detailed view on the dense parts of the network. Since the aspect ratio of hive plots is fixed, the layout can easily be integrated into a multi-view visual analytics system.
Obviously, there are also some limitations to our approach. The original hive plots [16] are deterministic renderings of networks based on vertex attributes, whereas the current implementation of our approach partially uses non-deterministic algorithms, e.g., for the clustering step, which may lead to different hive plots for the same data or data with small changes. In the clustering step there are other approaches that were not considered in our prototype implementation that could potentially improve the layout. Even for an optimal axis order the presence of many long edges quickly decreases the readability. The angular resolution for more than 8-12 axes becomes too small to precisely show connectivity details, especially for vertices closer to the origin. Therefore, our framework has limited visual scalability and is recommended mostly for small to medium graphs with less than 500 vertices and no more than 8-12 clusters. However, it is possible to hide some visual complexity by collapsing individual axes. Lastly, the currently implemented crossing minimization heuristic is based on the barycenter algorithm. Here, more sophisticated approaches, such as sifting [20], could potentially be adapted to incorporate gaps.
In terms of future work, several questions arise. While our choice of algorithms in the framework was mostly guided by best practice from existing literature, a more thorough investigation regarding exact solutions or bounds on typical quality criteria could lead to interesting insights. Also, instead of using a multi-stage framework it could be possible to optimize for multiple criteria simultaneously. Similarly, we only looked at aggregates of axes and the cyclic length when optimizing the order, which does not consider the actual Euclidean length of edges. Potentially, we can improve scalability in the number of axes if we arrange them on an ellipse to increase the space between axes. From a human-computer interaction perspective it would be interesting to see how our hive plot framework compares to other layouts for visualizing small to medium sized graphs in a formal human-subject study. Finally, adding more interactivity and integrating our framework as an alternative view for data exploration into a visual analytics platform could provide additional insights.

Figure 7: Number of crossings and runtime for the synthetic datasets. The x-axis shows an increase in the number of vertices, which correlates with the number of edges. The y-axes in (a) and (b) show the total number of crossings. In (c) the runtime for our hive plot framework is shown.
|
2303.03136 | Monitoring Fluid Saturation in Reservoirs Using Time-Lapse Full-Waveform
Inversion | Monitoring the rock-physics properties of the subsurface is of great
importance for reservoir management. For either oil and gas applications or CO2
storage, seismic data are a valuable source of information for tracking changes
in elastic properties which can be related to fluids saturation and pressure
changes within the reservoir. Changes in elastic properties can be estimated
with time-lapse full-waveform inversion. Monitoring rock-physics properties,
such as saturation, with time-lapse full-waveform inversion is usually a
two-step process: first, elastic properties are estimated with full-waveform
inversion, then, the rock-physics properties are estimated with rock-physics
inversion. However, multiparameter time-lapse full-waveform inversion is prone
to crosstalk between parameter classes across different vintages. This leads to
leakage from one parameter class to another, which, in turn, can introduce
large errors in the estimated rock-physics parameters. To avoid inaccuracies
caused by crosstalk and the two-step inversion strategy, we reformulate
time-lapse full-waveform inversion to estimate directly the changes in the
rock-physics properties. Using Gassmann's model, we adopt a new
parameterization containing porosity, clay content, and water saturation. In
the context of reservoir monitoring, changes are assumed to be induced by fluid
substitution only. The porosity and clay content can thus be kept constant
during time-lapse inversion. We compare this parameterization with the usual
density-velocity parameterization for different benchmark models. Results
indicate that the proposed parameterization eliminates crosstalk between
parameters of different vintages, leading to more accurate estimation of
saturation changes. We also show that using the parameterization based on
porosity, clay content, and water saturation, the elastic changes can be
monitored more accurately. | Amir Mardan, Bernard Giroux, Gabriel Fabien-Ouellet, Mohammad Reza Saberi | 2023-03-06T13:54:27Z | http://arxiv.org/abs/2303.03136v3 | # Monitoring fluid saturation in reservoirs using time-lapse full-waveform inversion
###### Abstract
Monitoring the rock-physics properties of the subsurface is of great importance for reservoir management. For either oil and gas applications or CO\({}_{2}\) storage, seismic data are a valuable source of information for tracking changes in elastic properties which can be related to fluids saturation and pressure changes within the reservoir. Changes in elastic properties can be estimated with time-lapse full-waveform inversion. Monitoring rock-physics properties, such as saturation, with time-lapse full-waveform inversion is usually a two-step process: first, elastic properties are estimated with full-waveform inversion, then, the rock-physics properties are estimated with rock-physics inversion. However, multiparameter time-lapse full-waveform inversion is prone to crosstalk between parameter classes across different vintages. This leads to leakage from one parameter class to another, which, in turn, can introduce large errors in the estimated rock-physics parameters. To avoid inaccuracies caused by crosstalk and the two-step inversion strategy, we reformulate time-lapse full-waveform inversion to estimate directly the changes in the rock-physics properties. Using Gassmann's model, we adopt a new parameterization containing porosity, clay content, and water saturation. In the context of reservoir monitoring, changes are assumed to be induced by fluid substitution only. The porosity and clay content can thus be kept constant during time-lapse inversion. We compare this parameterization with the usual density-velocity parameterization for different benchmark models. Results indicate that the proposed parameterization eliminates crosstalk between parameters of different vintages, leading to more accurate estimation of saturation changes. We also show that using the parameterization based on porosity, clay content, and water saturation, the elastic changes can be monitored more accurately.
full-waveform inversion, CO\({}_{2}\) monitoring, time-lapse inversion, reservoir monitoring, rock-physics monitoring
## Introduction
Either for oil and gas applications or CO\({}_{2}\) sequestration, the ability to monitor reservoirs is highly important as it allows for safe and profitable operations. Time-lapse seismic monitoring is a powerful tool that allows investigating large-scale changes in a reservoir (Lumley, 2001; Landrø et al., 2003; Dupuy et al., 2016; Maharramov et al., 2016; Fabien-Ouellet et al., 2017; Lang and Grana, 2019; Zhou and Lumley, 2021; Mardan et al., 2022b,d,a). Seismic monitoring is sensitive to changes in elastic parameters, which can be related to reservoir parameters, such as saturation, through a rock-physics model.
Numerous studies have been conducted to monitor saturation changes in reservoirs. Landrø et al. (2003) used \(PP\) and \(PS\) seismic data to better estimate the time-lapse changes. Buland and El Ouair (2006) proposed a Bayesian time-lapse inversion method and applied it to monitor the Norne Field offshore Norway with seismic data acquired from 2001 to 2003. Vedanti and Sen (2009) used prestack seismic data inversion to monitor the in situ combustion in the Balol oil field. Lang and Grana (2019) studied the potential of the Johansen field in Norway by simulating CO\({}_{2}\) injection for 10 years and monitoring the reservoir. All of these studies are based on linearized amplitude variation with offset (AVO) and the convolutional model, whose accuracy decreases with increasing incidence angle (Mallick, 2007). Although AVO has been very successful and is commonly used in industry, it has some other limitations. AVO inversion only uses the amplitude of the reflected waves while for better imaging, all types of waves should be involved during the inversion (Virieux et al., 2017). In addition, AVO relies on the assumption that reflectors are flat, which is a restrictive hypothesis in many cases (Queißer and Singh, 2013). Last but not least, AVO suffers from the uncertainties in data preprocessing such as velocity model errors (Naeini et al., 2017; Hu et al., 2021). These assumptions and the incomplete use of waveforms can lead to suboptimal use of the resources invested in monitoring.
A variety of studies have used time-lapse full-waveform inversion (TL-FWI) to monitor reservoirs. For example, Watanabe et al. (2005) used time-lapse crosswell seismic data to monitor the changes during a gas hydrate thermal production test at the Mallik site. Raknes et al. (2013) monitored the leakage of one producing well over a field in the North Sea by studying the seismic data of two vintages acquired two years apart. Hicks et al. (2016) and Lescoffit et al. (2016) showed the interest of TL-FWI when studying seismic data and changes in \(P\)-wave velocity (\(V_{P}\)) at the Grane field due to gas-oil replacement. Maharramov et al. (2016) showed the potential of TL-FWI to image the changes in the Genesis field in the Green Canyon area of the central Gulf of Mexico. In these studies, TL-FWI is carried out to map the acoustic changes in the subsurface using a monoparameter acoustic formulation (analyzing only \(V_{P}\)). To obtain the changes in terms of rock-physics properties, full-waveform inversion (FWI) is employed to estimate \(V_{P}\) and, in the next step, the estimated \(V_{P}\) is inverted using rock-physics inversion. For example, this two-step procedure has been used to estimate the changes in the saturation of CO\({}_{2}\) at the Sleipner field in the North Sea (Queißer and Singh, 2013; Dupuy et al., 2016; Yan et al., 2019; Dupuy et al., 2021; Romdhane et al., 2022).
To approach the true properties of the subsurface, an elastic formulation is preferred over an acoustic one as it allows taking into account the different seismic phases in the measured data. An elastic medium requires multiparameter TL-FWI. In this case, model parameters can have coupled effects on the seismic data, which is referred to as crosstalk between parameters (Operto et al., 2013; Yang et al., 2018). Ma et al. (2016) studied the Marmousi model for multiparameter acoustic TL-FWI and showed that the crosstalk problem is more severe in TL-FWI. Although Ma et al. (2016) recovered good images of the subsurface using FWI, the estimated _time-lapse_ images were inaccurate due to the crosstalk problem (Figures 2 and 3 of Ma et al. (2016)). This problem can be alleviated by using different parameterizations (Tarantola, 1986; Operto et al., 2013; Ma et al., 2016; Hu et al., 2021). A comprehensive discussion is provided by Pan et al. (2019), where the authors analyzed the efficiency of different parameterizations such as velocity-density (DV) (\(P\)-wave velocity, \(S\)-wave velocity, and density), modulus-density (DM) (bulk modulus, shear modulus, and density), impedance-density (D-IP) (\(P\)-wave impedance, \(S\)-wave impedance, and density), velocity-impedance-\(I\) (V-IP-\(I\)) (\(P\)-wave velocity, \(S\)-wave velocity, and \(P\)-wave impedance), and velocity-impedance-\(II\) (V-IP-\(II\)) (\(P\)-wave velocity, \(S\)-wave velocity, and \(S\)-wave impedance) for performing elastic FWI.
The goal of this study is to recover the changes in water saturation (\(S_{w}\)) by considering an elastic earth. To minimize the effects of crosstalk that can occur due to the time-lapse nature of the problem, the TL-FWI is formulated in terms of porosity (\(\phi\)), clay content (\(C\)), and water saturation, as proposed by Hu et al. (2021) in the context of FWI. This parameterization (called PCS) is chosen to facilitate the time-lapse inversion as we can assume that porosity and clay content are constant with time in the reservoir. To perform this technique, a connection between the elastic and rock-physical properties is required. In this work, we use Gassmann's equation for this purpose. Using the PCS parameterization, we combine the rock-physics inversion and FWI in a single inversion scheme. In this way, saturation can be directly inverted from the raw seismic data (shot gather). By performing TL-FWI with the PCS parameterization, we can also easily obtain the time-lapse images of elastic properties using the underlying rock-physics model.
This paper is organized as follows: we first present FWI and the formulation required to include Gassmann's model for estimating the porosity, clay content, and water saturation. This section is followed by a brief discussion on TL-FWI. Then, we study and discuss the efficiency of this method using numerical models.
## Theory
### Full-waveform inversion
Full-waveform inversion is a local optimization process that minimizes the residuals between the observed and estimated wavefields at the receiver locations for different source positions (Virieux and Operto, 2009). The objective function can be written as
\[\chi(\mathbf{m})=\frac{1}{2}\|\mathbf{W}_{d}\left(\mathbf{R}\mathbf{u}(\mathbf{ m})-\mathbf{d}\right)\|_{2}^{2}+\chi_{REG}, \tag{1}\]
where \(\mathbf{R}\) maps the wavefield (\(\mathbf{u}\)) to the receiver locations. Vectors \(\mathbf{m}\) and \(\mathbf{d}\) are the model parameters and observed data, respectively. \(\mathbf{W}_{d}\) is a weighting operator on the data misfit and \(\chi_{REG}\) is a regularization term. FWI is highly ill-posed, with an infinite number of models matching the data. A variety of regularization methods such as the Tikhonov, total variation, prior information, and parameter-relation techniques have been proposed to condition the FWI problem and reduce the ill-posedness (Virieux and Operto, 2009; Asnaashari et al., 2015). In this study, we use the Tikhonov regularization. This regularization method is presented in Appendix A in addition to the total variation, prior information, and parameter-relation techniques. In equation 1, the wavefield is the solution of a partial differential equation (PDE) which is discretized to perform the forward modeling. For the 2D isotropic elastic case in the time domain, this PDE can be written as
\[\begin{cases}\rho\dot{v}_{x}=\partial_{x}\tau_{xx}+\partial_{z}\tau_{xz},\\ \rho\dot{v}_{z}=\partial_{z}\tau_{zz}+\partial_{x}\tau_{xz},\\ \dot{\tau}_{xx}=(\lambda+2G)\partial_{x}v_{x}+\lambda\partial_{z}v_{z}+s,\\ \dot{\tau}_{zz}=\lambda\partial_{x}v_{x}+(\lambda+2G)\partial_{z}v_{z}+s,\\ \dot{\tau}_{xz}=G(\partial_{z}v_{x}+\partial_{x}v_{z}),\end{cases} \tag{2}\]
where \(\lambda\) and \(G\) denote the Lamé parameters and where \(\rho\) and \(s\) are the density and the source function. In equation 2, the particle velocities in the \(x\)- and \(z\)-directions (\(v_{x}\) and \(v_{z}\)) as well as the normal stresses (\(\tau_{xx}\) and \(\tau_{zz}\)) and shear stress (\(\tau_{xz}\)) constitute the vector \(\mathbf{u}\). Spatial partial derivative operators are indicated by \(\partial_{x}\) and \(\partial_{z}\) for the \(x\)- and \(z\)-directions and the temporal differentiation is denoted by an overhead dot (\(\overset{\cdot}{\square}\)).
In the case of equation 2, the vector of model parameters, \(\mathbf{m}\) in equation 1, is
\[\mathbf{m}=[\lambda,G,\rho]^{T}, \tag{3}\]
where \({}^{T}\) is the transpose operator. Although the forward modeling is performed using the Lamé parameters and density, FWI can be performed using different parameterizations. For example, the DV parameterization is a common formulation. Switching between parameters can be carried out using the chain rule, as shown at the end of this section.
### Optimization
To minimize the cost function (equation 1), a variety of optimization methods such as the steepest-descent, conjugate-gradient, \(\ell\)-BFGS, and Newton algorithms can be employed (Virieux and Operto, 2009). These algorithms minimize the cost function in the vicinity of a starting model, \(\mathbf{m}_{0}\), as
\[\mathbf{m}=\mathbf{m}_{0}+\alpha\mathbf{\Delta}\mathbf{m}, \tag{4}\]
where \(\alpha\) is step size and the search direction, \(\mathbf{\Delta}\mathbf{m}\), for gradient descent is
\[\mathbf{\Delta}\mathbf{m}=-\left[\begin{array}{c}\nabla_{\mathbf{m}^{(1)}} \chi\\ \nabla_{\mathbf{m}^{(2)}}\chi\\ \nabla_{\mathbf{m}^{(3)}}\chi\end{array}\right], \tag{5}\]
where \(\nabla_{\mathbf{m}^{(n)}}\chi\) is the gradient of the cost function with respect to model parameter \(\mathbf{m}^{(n)}\).
One of the most problematic challenges of multiparameter FWI is crosstalk between parameters. Crosstalk is due to the fact that different parameters can have a coupled effect on the seismic wavefield and different combinations of changes in these parameters can cause similar seismic responses (Operto et al., 2013; Yang et al., 2018). To address this problem, different strategies have been deployed (Métivier et al., 2013; Operto et al., 2013; Lavoué et al., 2014; Fabien-Ouellet et al., 2017; Keating and Innanen, 2019). Operto et al. (2013) listed four main strategies: the choice of an appropriate parameterization, the use of the Hessian as it can measure the level of coupling between parameters, a data-driven strategy, i.e., using multi-component data, and a sequential inversion where the dominant parameters are estimated before the secondary parameters. To further reduce the crosstalk problem, it is also proposed to scale the gradient (Kamei and Pratt, 2013; Lavoué et al., 2014). Hence equation 5 can be rewritten as
\[\mathbf{\Delta m}=-\left[\begin{array}{c}\nabla_{\mathbf{m}^{(1)}}\chi\\ \xi\nabla_{\mathbf{m}^{(2)}}\chi\\ \beta\nabla_{\mathbf{m}^{(3)}}\chi\end{array}\right], \tag{6}\]
where \(\xi\) and \(\beta\) scale the gradient of different parameter classes.
By using a quasi-Newton method, \(\ell\)-BFGS, we incorporate an approximation of the Hessian into the solution (Nocedal and Wright, 2006), which helps reducing crosstalk. However, the most important point of this study is the chosen parameterization. Considering the fact that the time-lapse changes in \(P\)-wave velocity due to fluid substitution (more affected than \(S\)-wave velocity and density) are approximately \(10\%\) of the baseline model (Zhou and Lumley, 2021), the challenges of FWI are more pronounced in time-lapse studies than in conventional FWI. Ma et al. (2016) studied the possibility of deploying TL-FWI for multiparameter acoustic studies. Although they estimated the model parameters very well, crosstalk prevented them from obtaining an acceptable time-lapse image of the subsurface for either of the two time-variable parameters. Therefore, an efficient parameterization is required to minimize the crosstalk in multiparameter TL-FWI. Inspired by Hu et al. (2021), we study the feasibility of performing TL-FWI using the PCS parameterization. Recalling that the porosity and clay content can be reasonably considered constant with time, water saturation is the only time-variable parameter.
In the next section, we provide the required equations to adapt the FWI problem to estimate the PCS parameters using Gassmann's equation. In this case, the vector of model parameters can be written as
\[\mathbf{m}=[\phi,C,S_{w}]^{T}. \tag{7}\]
Using the aforementioned assumption that porosity and clay content remain constant over time for reservoirs, the change in water saturation becomes the only quantity to be sought in the time-lapse inversion.
It is worth recalling that the chain rule is used to change the parameters for inversion. As an example, the \(\mathbf{q}\) parameterization, \(\mathbf{q}=[q_{1},q_{2},q_{3}]^{T}\), can be changed to a new parameterization, \(\mathbf{p}=[p_{1},p_{2},p_{3}]^{T}\), as
\[\frac{\partial\chi}{\partial q_{i}}=\frac{\partial\chi}{\partial p_{1}}\frac{ \partial p_{1}}{\partial q_{i}}+\frac{\partial\chi}{\partial p_{2}}\frac{ \partial p_{2}}{\partial q_{i}}+\frac{\partial\chi}{\partial p_{3}}\frac{ \partial p_{3}}{\partial q_{i}}, \tag{8}\]
for \(i\in\{1,2,3\}\). FWI can be performed in any parameterization while the forward modeling is implemented using equation 2.
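In matrix form, equation 8 amounts to multiplying the gradient by the transposed Jacobian of the change of variables. A minimal sketch (with illustrative names) for a model of \(N\) grid cells and three parameter classes:

```python
import numpy as np

def change_gradient_parameterization(grads_p: np.ndarray, jac: np.ndarray) -> np.ndarray:
    """Equation 8 as grad_q = J^T grad_p per grid cell, where
    jac[n, j, i] = d p_j / d q_i at cell n and grads_p has shape (N, 3)."""
    # Sum over j: grads_q[n, i] = sum_j jac[n, j, i] * grads_p[n, j]
    return np.einsum('nji,nj->ni', jac, grads_p)
```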
This study is carried out using PyFWI (Mardan et al., 2023), which is an open-source package that implements FWI in terms of the DV parameterization. In the next section, we present the methodology to transfer the earth model from the PCS parameterization to the DV parameterization for forward modeling. In Appendix B, we present the required equations to obtain the gradient of the cost function in terms of the PCS parameterization for performing the inversion.
#### Porous media homogenization
To perform TL-FWI with a rock-physics parameterization, a rock-physics model is needed to map the rock-physics properties to the elastic properties. Although there are numerous empirical rock-physics models that satisfy this requirement, we employ Gassmann's equation (Gassmann, 1951), which is valid for most consolidated rocks (Dupuy et al., 2016). Gassmann's relation is used to compute the variation of the bulk modulus during the fluid substitution. In practice, Gassmann's model requires some information on the fluid and mineral properties. In the context of TL-FWI, this information should be available as time-lapse FWI is usually performed in well-studied fields.
The effective properties of the fluid (bulk modulus, \(K\), and density) can be calculated using weighted arithmetic averages (Voigt, 1889),
\[K_{f} =S_{w}K_{w}+(1-S_{w})K_{h}, \tag{9}\] \[\rho_{f} =S_{w}\rho_{w}+(1-S_{w})\rho_{h},\]
where the subscripts \(f\), \(w\), and \(h\) respectively denote the effective property of the fluid, water, and hydrocarbon.
The density of the solid frame of the rock can be calculated using a weighted average of the density of the grains as
\[\rho_{s}= C\rho_{c}+(1-C)\rho_{q}, \tag{10}\]
where subscripts \(s\), \(c\), and \(q\) denote the solid medium, clay, and quartz, respectively. The bulk and shear moduli of the solid can be estimated using the Voigt-Reuss-Hill method
\[\begin{split} K_{s}=&\frac{1}{2}\left[(CK_{c}+(1-C)K_{q })\right.\\ +&\left.\left(\frac{1}{C/K_{c}+(1-C)/K_{q}}\right) \right],\\ G_{s}=&\frac{1}{2}\left[(CG_{c}+(1-C)G_{q})\right.\\ &\left.+\left(\frac{1}{C/G_{c}+(1-C)/G_{q}}\right)\right].\end{split} \tag{11}\]
The effective dry moduli of the rock, \(K_{D}\) and \(G_{D}\), can be computed using various effective medium theories (Berryman, 1995; Mavko et al., 2020). Following Pride (2005), we estimate the dry moduli with
\[\begin{split} K_{D}&=K_{s}\frac{1-\phi}{1+cs\phi}, \\ G_{D}&=G_{s}\frac{1-\phi}{1+\frac{3}{2}cs\phi}, \end{split} \tag{12}\]
where \(cs\) is the general consolidation parameter which defines the degree of consolidation among grains. This parameter can be considered as a free parameter in predicting velocities (Lee, 2005). For further details on \(cs\), readers are referred to Lee (2005), where the author discusses the importance of this parameter and how it can be calculated.
Finally, the effective properties of the porous medium are calculated with the following expressions (Gassmann, 1951; Voigt, 1889),
\[\begin{split} K&=\frac{\phi K_{D}+\left(1-(1+\phi) K_{D}/K_{s}\right)K_{f}}{\phi(1+\Delta)},\\ G&=G_{D},\\ \rho&=(1-\phi)\rho_{s}+\phi\rho_{f},\end{split} \tag{13}\]
where
\[\Delta=\frac{1-\phi}{\phi}\frac{K_{f}}{K_{s}}\left(1-\frac{K_{D}}{(1-\phi)K_{ s}}\right)=\frac{1-\phi}{\phi}\frac{K_{f}}{K_{s}}\left(1-\frac{1}{1+cs\phi} \right). \tag{14}\]
The elastic properties can be calculated using equation 13 as
\[\begin{split} V_{P}&=\sqrt{\frac{K+\frac{4}{3}G}{ \rho}},\\ V_{S}&=\sqrt{\frac{G}{\rho}},\\ \rho&=\rho.\end{split} \tag{15}\]
For a detailed discussion on Gassmann's model and the provenance of equations 9-15 see Dupuy et al. (2016).
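To make the mapping concrete, the following is a minimal Python sketch of equations 9-15, from \((\phi,C,S_{w})\) to \((V_{P},V_{S},\rho)\); the constants follow Table 1 with \(cs=20\), the function name is illustrative, and this is a sketch rather than the PyFWI implementation. With moduli in GPa and densities in g/cm\({}^{3}\), the velocities come out in km/s.

```python
import numpy as np

# Mineral and fluid constants from Table 1 (GPa, g/cm^3); cs = 20.
K_q, G_q, rho_q = 37.0, 44.0, 2.65   # quartz
K_c, G_c, rho_c = 21.0, 10.0, 2.55   # clay
K_w, rho_w = 2.25, 1.0               # water
K_h, rho_h = 0.04, 0.1               # gas (hydrocarbon)
CS = 20.0

def pcs_to_dv(phi, C, Sw):
    """Forward rock-physics model: (phi, C, Sw) -> (Vp, Vs, rho).
    Requires phi > 0 because of the 1/phi term in equation 14."""
    # Effective fluid (equation 9, Voigt averages)
    K_f = Sw * K_w + (1 - Sw) * K_h
    rho_f = Sw * rho_w + (1 - Sw) * rho_h
    # Solid grains (equations 10-11, Voigt-Reuss-Hill)
    rho_s = C * rho_c + (1 - C) * rho_q
    K_s = 0.5 * ((C * K_c + (1 - C) * K_q) + 1.0 / (C / K_c + (1 - C) / K_q))
    G_s = 0.5 * ((C * G_c + (1 - C) * G_q) + 1.0 / (C / G_c + (1 - C) / G_q))
    # Dry frame (equation 12, Pride's model)
    K_D = K_s * (1 - phi) / (1 + CS * phi)
    G_D = G_s * (1 - phi) / (1 + 1.5 * CS * phi)
    # Saturated rock (equations 13-14, Gassmann)
    delta = (1 - phi) / phi * K_f / K_s * (1 - 1 / (1 + CS * phi))
    K = (phi * K_D + (1 - (1 + phi) * K_D / K_s) * K_f) / (phi * (1 + delta))
    G = G_D
    rho = (1 - phi) * rho_s + phi * rho_f
    # Elastic properties (equation 15); GPa and g/cm^3 give km/s
    Vp = np.sqrt((K + 4 * G / 3) / rho)
    Vs = np.sqrt(G / rho)
    return Vp, Vs, rho
```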
#### Time-lapse full-waveform inversion
Time-lapse FWI can be performed by considering different combinations of FWI runs. These combinations can be performed sequentially such as for the cases of the cascaded, cross-updating, and weighted-average methods (Watanabe et al., 2005; Maharramov et al., 2016; Mardan et al., 2022, 2023b), or in parallel as in the case of the independent and central-difference methods (Plessix et al., 2010; Zhou and Lumley, 2021). In addition to these methods, Maharramov et al. (2016) proposed a joint inversion of seismic data from different vintages by introducing the cost function as
\[\begin{split}\chi_{tl}(\mathbf{m}_{b},\mathbf{m}_{m})=& \|\mathbf{Ru}(\mathbf{m}_{b})-\mathbf{d}_{b}\|_{2}^{2}+\|\mathbf{Ru}( \mathbf{m}_{m})-\mathbf{d}_{m}\|_{2}^{2}\\ &+\delta\|\mathbf{m}_{m}-\mathbf{m}_{b}\|_{2}^{2},\end{split} \tag{16}\]
where subscripts \(b\) and \(m\) denote the baseline and monitor models. The difference between the estimated baseline and monitor models is used as a regularization term, and the weight of this term in the cost function is controlled by the parameter \(\delta\), which is constant during the inversion. This parameter, \(\delta\), can vary for different surveys and should be picked by trial and error. More details on this parameter can be found in Maharramov et al. (2016). The flowchart of this method (hereafter called simultaneous TL-FWI) is presented in Figure 1. Simultaneous TL-FWI gets an initial model and observed seismic data from two vintages. In the first step, the baseline data are inverted and the estimated baseline is used as the initial model for time-lapse FWI. TL-FWI minimizes the residuals between the estimated and observed data from the baseline and monitor models, regularized by the difference between the estimated models.
Simultaneous TL-FWI is one of the most accurate strategies for time-lapse studies (Mardan et al., 2023b). Incorporating a regularization term that minimizes the difference between the two estimates (baseline and monitor models) helps this method produce fewer artifacts.
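As a minimal illustration, the joint cost of equation 16 can be evaluated as follows, where `res_b` and `res_m` stand for the data residuals \(\mathbf{Ru}(\mathbf{m}_{b})-\mathbf{d}_{b}\) and \(\mathbf{Ru}(\mathbf{m}_{m})-\mathbf{d}_{m}\); the names are illustrative.

```python
import numpy as np

def tl_cost(res_b, res_m, m_b, m_m, delta):
    """Equation 16: data misfits of both vintages plus the model-difference
    regularizer weighted by the (trial-and-error) constant delta."""
    return (np.sum(res_b ** 2) + np.sum(res_m ** 2)
            + delta * np.sum((m_m - m_b) ** 2))
```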
The methodology we propose is as follows. We first estimate the baseline model using standard FWI in the PCS parameterization. Then, we perform a single-parameter inversion of \(S_{w}\) by jointly inverting the baseline and monitor data (equation 16). We justify the last step by making the hypothesis that porosity and clay content remain constant after fluid injection. In the case of inversion with the DV parameterization, the TL-FWI is a multi-parameter problem. Furthermore, it requires the additional step of inverting the elastic properties to obtain \(S_{w}\). In this work, we use \(\ell\)-BFGS to do so, which minimizes the following cost function
\[\chi_{rp}=\frac{1}{2}\|F(\phi,C,S_{w})-\mathbf{m}_{DV}\|^{2}, \tag{17}\]
where \(F\) is a function that calculates \(V_{P}\), \(V_{S}\), and \(\rho\) based on equations 9-15 and \(\mathbf{m}_{DV}\) is the vector of elastic properties estimated using the DV parameterization.
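A cell-by-cell sketch of this rock-physics inversion with SciPy's bound-constrained L-BFGS is given below; it reuses the `pcs_to_dv` sketch from the previous section, and the bounds are illustrative (the small lower bound on \(\phi\) avoids the division by \(\phi\) in equation 14).

```python
import numpy as np
from scipy.optimize import minimize

def invert_rock_physics(m_dv, x0=(0.2, 0.3, 0.5)):
    """Equation 17 for one cell: fit (phi, C, Sw) to the elastic estimates
    m_dv = (Vp, Vs, rho) obtained with the DV parameterization."""
    def misfit(x):
        phi, C, Sw = x
        return 0.5 * np.sum((np.array(pcs_to_dv(phi, C, Sw)) - np.asarray(m_dv)) ** 2)
    bounds = [(0.01, 0.45), (0.0, 1.0), (0.0, 1.0)]  # phi, C, Sw (illustrative)
    return minimize(misfit, x0, method='L-BFGS-B', bounds=bounds).x
```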
## Numerical analysis
In this section we assess the feasibility of the proposed strategy with two synthetic models. For both analyses, FWI is performed to estimate all model properties (elastic or rock-physics properties) simultaneously. In this regard, the multi-scale strategy (Bunks et al., 1995) is employed to avoid cycle-skipping. We filter the observed and estimated data using a lowpass Butterworth filter with cut-off frequencies of 10, 20, 25, 35, 40, and 55 Hz. The source for forward modeling is a Ricker wavelet with a central frequency of 30 Hz. Perfectly matched layers (PML) are used at the boundaries of the models to minimize waves reflected from the boundaries (Berenger, 1994). We use the Tikhonov regularization (equation 18) in the \(x\)-direction (\(\mathbf{B}_{z}=0\)) to decrease the ill-posedness of the FWI problem and make the estimated models smoother in the \(x\)-direction.
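As a sketch (assuming SciPy is available; the names are illustrative), the source wavelet and the per-stage lowpass filtering can be written as:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def ricker(f0: float, dt: float, nt: int) -> np.ndarray:
    """Ricker wavelet with central frequency f0 (here f0 = 30 Hz)."""
    t = (np.arange(nt) - nt // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def lowpass(data: np.ndarray, fc: float, dt: float, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth lowpass for one multi-scale stage,
    with fc taken from {10, 20, 25, 35, 40, 55} Hz."""
    sos = butter(order, fc, btype='low', fs=1.0 / dt, output='sos')
    return sosfiltfilt(sos, data, axis=-1)
```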
Figure 1: Flowchart of simultaneous TL-FWI. After performing FWI on baseline data, a joint time-lapse FWI inverts baseline and monitor data where the inversion process is regularized by the difference between estimated baseline and monitor models.
For the numerical studies, we use two synthetic models. For the first experiment, a simple layered model is used. Then, TL-FWI is performed using the Marmousi model, which is a more realistic case. During the inversion, the gradient is scaled using equation 6. The scaling weights \(\xi\) and \(\beta\) are obtained by trial and error, by performing FWI runs on the simple layered model. Then we use the same weights for the Marmousi model. The rock-physics properties used in this study are shown in Table 1.
### Layered model
To investigate the performance of the methodology, a simple model with three flat reflectors is first used (Figure 2). Figure 2a-2c shows the true models for porosity, clay content and water saturation for the baseline and Figure 2d-2f shows the true monitor model. The initial model of the inversion is shown in Figure 2g-2i. Finally, the true time-lapse model is shown in Figure 2j-2l. The elastic properties of this model are shown in Figure 3. Comparing Figure 2j-2l with Figure 3j-3l shows that with a \(25\%\) increment of water saturation, the \(P\)-wave velocity and density increase by \(3\%\) and \(0.8\%\), respectively, while the \(S\)-wave velocity decreases by \(0.4\%\).
To perform this study, seven isotropic sources are used on the surface for forward modeling. Receivers are located on the surface and in two imaginary wells at both sides of the model. Noise-free pressure data are inverted to recover the time-lapse anomaly. The rock-physics properties estimated by using Gassmann's equation are presented in Figure 4a-4c. The parameters in this model are used to obtain an indirect estimate of \(P\)- and \(S\)-wave velocities as well as density, as shown in Figure 4d-4f. A 1D plot obtained at \(X=0.5\) km (dashed line in Figure 2l) is also shown in Figure 5 for both rock-physics properties, Figure 5a-5c, and elastic properties, Figure 5d-5f.
Results of the first step obtained with this simple synthetic model show that FWI and the proposed formulation have the potential to recover the rock-physics properties. However, reconstructing the clay content seems problematic and the interfaces in this parameter are not recovered sharply (Figures 4b and 5b). Although Hu and Innanen (2021) showed that the \(C\) model can be estimated better by considering a parameter-relation regularization, this type of regularization is not taken into account in this study. The parameter-relation regularization allows us to regularize the inversion based on the relation between two parameters in the field. So, in general, we can expect that the parameter-relation and prior-model regularizations (Asnaashari et al., 2013) should improve the accuracy of the estimated clay content. These two regularization methods are provided in Appendix A for interested readers.
After estimating the baseline model (Figure 4), TL-FWI can be performed using equation 16. We first compare the efficiency of the DV and PCS parameterizations to monitor the rock-physics properties. Using the DV parameterization, changes in terms of the rock-physics properties are estimated in two steps. We first estimate the elastic properties of the baseline and the monitor models using equation 16. In the second step, equation 17 is employed to estimate the rock-physics properties of the baseline and the monitor models, and the differences are presented in Figure 6a-6c. This is a naive method to perform multiparameter TL-FWI and it leads to an overestimation of porosity and inaccurate time-lapse estimates. We cannot get an accurate time-lapse image using this method due to the crosstalk between parameters. It is worth recalling that the crosstalk problem is more severe in time-lapse inversion, because the magnitude of changes in reservoirs is comparable to the generated artifacts. To improve the results, we follow another strategy. In this strategy, the rock-physics properties of the baseline are estimated using equation 17. Then, by assuming that porosity and clay content are not variable with time, the elastic properties of the monitor model are inverted to estimate the saturation. For this purpose, the estimated rock-physics model of the baseline is used as the initial model to estimate the rock-physics properties of the monitor model. In contrast to the inversion of the baseline model, we perform a single-parameter inversion for estimating the rock-physics properties of the monitor model, during which only \(S_{w}\) is updated. This strategy leads to the time-lapse estimate presented in Figure 6d-6f where a better time-lapse image of the saturation is obtained. This strategy is used hereafter in this study to compute the time-lapse rock-physics properties using the DV parameterization. By using the PCS parameterization and considering time-independent porosity and clay content, direct TL-FWI is performed to only monitor the water saturation. The estimated time-lapse models of porosity, clay content, and water saturation are presented in Figure 6g-6i. Comparing Figure 6 to Figure 2j-2l shows that the PCS parameterization can improve the accuracy of time-lapse studies for monitoring the saturation of the fluids in the subsurface.
| | Bulk modulus (GPa) | Shear modulus (GPa) | Density (g/cm\({}^{3}\)) |
| --- | --- | --- | --- |
| Quartz | 37.00 | 44.00 | 2.65 |
| Clay | 21.00 | 10.00 | 2.55 |
| Water | 2.25 | 0.00 | 1.00 |
| Gas | 0.04 | 0.00 | 0.10 |

Table 1: Elastic properties used in this study for minerals and fluids, where \(cs\) is set to 20 (Dupuy et al., 2016; Hu et al., 2021).
The accuracy of the results is measured using the root-mean-square error (RMSE), and direct monitoring of \(S_{w}\) using the PCS parameterization leads to a \(37.3\%\) higher accuracy in comparison to using the DV parameterization.
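The RMSE used for these comparisons is the standard one; as a sketch:

```python
import numpy as np

def rmse(estimated: np.ndarray, true: np.ndarray) -> float:
    """Root-mean-square error between an estimated and a true time-lapse model."""
    return float(np.sqrt(np.mean((estimated - true) ** 2)))
```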
In addition to the rock-physics properties, the PCS parameterization can be used to recover time-lapse images of the elastic properties. For this case, we use equation 16 to obtain the baseline and monitor models in terms of porosity, clay content and water saturation. The results are then used for rock-physics modeling (equations 9-15) to obtain the elastic properties of the baseline and monitor models. While the DV parameterization leads to the time-lapse images presented in Figure 7a-7c, the PCS parameterization improves the time-lapse estimates (Figure 7d-7f). The DV parameterization does not allow recovering the changes appropriately. This can be explained by considering that the \(25\%\) change in the saturation (Figure 2l) is now spread out between three elastic properties, causing only \(3\%\), \(0.4\%\), and \(0.8\%\) variations in \(V_{P}\), \(V_{S}\), and \(\rho\), respectively (Figure 3j-3l). Comparing the estimated time-lapse changes in Figure 7 shows that using the PCS parameterization, we can recover the elastic properties more accurately than using the DV parameterization.
The time-lapse elastic properties estimated using the PCS parameterization are \(53\%\) more accurate than with the DV parameterization.

Figure 2: Rock-physics properties of the layered model. (a-c) True baseline, (d-f) true monitor, and (g-i) initial models. (j-l) Time-lapse changes between porosity (a and d), clay content (b and e), and water saturation (c and f). The time-lapse model is presented with two color scales which show the changes in value and percentage. The dashed line in (l) is used for the 1D profiles in Figures 5 and 8.
Figure 8a-8c shows 1D profiles of the true and estimated changes in the rock-physics properties along the dashed line in Figure 2l. The estimated time-lapse elastic properties from the PCS and DV parameterizations are also compared in Figure 8d-8f. This comparison shows that the results obtained using the DV parameterization (dotted lines) are far from accurate. In this 1D analysis, the PCS parameterization yields \(43\%\) higher accuracy for the rock-physics properties (Figure 8a-8c) and \(57\%\) higher accuracy for the elastic properties (Figure 8d-8f).
Figure 3: Elastic properties of the layered model. (a-c) True baseline, (d-f) true monitor, and (g-i) initial models. (j-l) Time-lapse changes in \(V_{P}\) (a and d), \(V_{S}\) (b and e), and \(\rho\) (c and f). The time-lapse model is presented with two color scales showing the changes in value and in percentage.
Figure 4: The baseline model of (a) porosity, (b) clay content, (c) water saturation, (d) \(P\)-wave velocity, (e) \(S\)-wave velocity, and (f) density estimated using the PCS parameterization.
Figure 5: 1D assessment of the results for (a) porosity, (b) clay content, (c) water saturation, (d) \(P\)-wave velocity, (e) \(S\)-wave velocity, and (f) density at \(X=0.5\) km (center of the 2D section). The true value is shown with a solid line, the estimated model with a dashed line, and the initial model with a dotted line.
Figure 6: Estimated time-lapse model for \(\phi\), \(C\), and \(S_{w}\) with (a-c) the DV parameterization while all model parameters can vary with time, (d-f) the DV parameterization while \(S_{w}\) is the only time-dependent parameter, and (g-i) the PCS parameterization.
Figure 7: Estimated time-lapse model for \(V_{P}\), \(V_{S}\), and \(\rho\) with (a-c) the DV parameterization and (d-f) the PCS parameterization.
Figure 8: 1D assessment of the time-lapse estimates for (a) porosity, (b) clay content, (c) water saturation, (d) \(P\)-wave velocity, (e) \(S\)-wave velocity, and (f) density at \(X=0.5\) km. The true value is shown with a solid line, the model estimated with the PCS parameterization with a dashed line, and the model estimated with the DV parameterization with a dotted line.
### Marmousi model
In this section, we assess the methodology on a more realistic model. For this purpose, a portion of the Marmousi model is chosen (Figure 9). The true baseline and monitor models are shown in Figure 9a-9f. This model contains two sandstone reservoirs. The monitor model (Figure 9d-9f) is created by decreasing the water saturation in the reservoir indicated by line A\({}_{1}\) and increasing it in the reservoir indicated by line A\({}_{2}\) (Figure 9l).
The elastic properties of this model are shown in Figure 10. The density of the saturated rock increases when the hydrocarbon is replaced by water. Although the \(P\)-wave velocity varies inversely with density, the hydrocarbon-water replacement increases the \(P\)-wave velocity because of the increase of the saturated bulk modulus. The fluid shear modulus, however, is nil, so the \(S\)-wave velocity decreases due to the increase in density. Figure 9j-9l and Figure 10j-10l show the time-lapse changes in the rock-physics and elastic properties, respectively; the relative changes are shown in Figure 11a-11c and Figure 11d-11f. In this case, the maximum change in water saturation occurs in reservoir A\({}_{2}\), a \(40\%\) change with respect to the baseline, which leads to maximum changes of \(6.2\%\), \(2.31\%\), and \(4.46\%\) in the elastic properties \(V_{P}\), \(V_{S}\), and \(\rho\), respectively. Hence, we can expect the changes to be better tracked using the PCS parameterization.
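A quick numeric check of this argument: since the fluids carry no shear, replacing gas by water leaves the shear modulus untouched and only raises the density, so \(V_{S}=\sqrt{\mu/\rho}\) must drop. The values below are illustrative, not taken from the Marmousi model.

```python
import numpy as np

# Only density changes when gas is replaced by water; mu is unaffected.
mu = 3.0          # GPa, illustrative dry-frame shear modulus
phi = 0.25
rho_grain = 2.63  # g/cm^3, illustrative quartz-clay mixture
for sw, label in [(0.5, "before (gas-rich)"), (0.9, "after (water-rich)")]:
    rho = (1 - phi) * rho_grain + phi * (sw * 1.00 + (1 - sw) * 0.10)
    vs = np.sqrt(mu / rho)  # km/s with GPa and g/cm^3
    print(f"{label}: rho = {rho:.3f} g/cm^3, Vs = {vs:.3f} km/s")
```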
FWI of the baseline data is carried out using the PCS parameterization with values of 5 and 8 for \(\xi\) and \(\beta\), respectively, obtained from the layered model studied in the previous section. The initial model for this inversion is presented in Figure 9g-9i, and the results of the inversion of the baseline are presented in Figure 12. As the receivers are located only on the surface for this model, the quality of the model update decreases toward the edges. Finally, the results of TL-FWI are shown in Figure 13. As shown in Figure 13a-13c, the DV parameterization recovers changes only in reservoir A\({}_{1}\). In contrast, the time-lapse water saturation anomalies are detected in both reservoirs using the PCS parameterization, as shown in Figure 13d-13f.
Figure 13g-13l shows the changes in the elastic properties. In this case, the PCS parameterization (Figure 13j-13l) yields a significant improvement over the DV parameterization (Figure 13g-13i), amounting to \(11.7\%\) higher accuracy.
Figure 9: Rock-physics properties of the Marmousi model. (a-c) True baseline, (d-f) true monitor, and (g-i) initial models. (j-l) Time-lapse changes in porosity (a and d), clay content (b and e), and water saturation (c and f). Lines A\({}_{1}\) and A\({}_{2}\) are used for the 1D studies in Figure 14.
As can be seen in Figure 13g, the changes in \(P\)-wave velocity are not recovered in reservoir A\({}_{2}\). Although the time-lapse changes in \(S\)-wave velocity are detected, their value in reservoir A\({}_{1}\) is overestimated (Figure 13h). Last but not least, the time-lapse density anomalies are not detectable in Figure 13i. Despite these inaccuracies in the estimated time-lapse elastic properties, Figure 13j-13l shows that the PCS parameterization leads to a better estimation of the elastic properties.
Finally, a 1D assessment of the results is carried out along lines A\({}_{1}\) and A\({}_{2}\), shown in Figure 9l. The 1D profiles of the time-lapse changes (Figure 14) show that TL-FWI with the DV parameterization cannot recover the changes in the reservoirs appropriately, while the PCS parameterization provides accurate time-lapse images of both the elastic and rock-physics properties. In this 1D analysis, the PCS parameterization leads to \(24.51\%\) higher accuracy for estimating the elastic properties and \(19.57\%\) for estimating the rock-physics properties.
## Discussion
The potential of the PCS parameterization for estimating time-lapse changes in elastic and rock-physics properties is shown in this study. The direct estimation of changes in the rock-physics properties provides more accurate results than the indirect estimation. However, the estimates presented in Figure 6 clearly show that an effective inversion strategy and optimization method can improve the accuracy of the changes estimated by indirect inversion. As the computation time of the rock-physics modeling is negligible, global inversion methods such as Monte-Carlo methods might be a better choice for the second step of the inversion (Dupuy et al., 2016b). The direct estimation of the rock-physics properties, however, avoids the challenge of picking an appropriate optimization method and parameters for the rock-physics inversion.
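As an illustration of such a two-step scheme, the sketch below inverts \((\phi,C,S_{w})\) from the elastic properties by a plain Monte-Carlo (random) search. The linearized forward model is a placeholder standing in for equations 9-15, and the sampling ranges are assumptions.

```python
import numpy as np

def forward(phi, C, Sw):
    """Placeholder rock-physics forward model (stands in for equations 9-15)."""
    vp = 3.0 - 4.0 * phi + 0.5 * C + 0.3 * Sw
    vs = 1.8 - 2.5 * phi - 0.4 * C
    rho = 2.65 * (1 - phi) + phi * (Sw * 1.0 + (1 - Sw) * 0.1)
    return np.array([vp, vs, rho])

def monte_carlo_invert(d_obs, n_samples=100_000, seed=0):
    """Random search over (phi, C, Sw) minimizing the elastic misfit."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.05, 0.40, n_samples)
    C = rng.uniform(0.0, 0.6, n_samples)
    Sw = rng.uniform(0.0, 1.0, n_samples)
    preds = forward(phi, C, Sw)                    # shape (3, n_samples)
    misfit = np.sum((preds - d_obs[:, None]) ** 2, axis=0)
    i = np.argmin(misfit)
    return phi[i], C[i], Sw[i]

d_obs = forward(0.25, 0.20, 0.90)                  # synthetic "observed" data
print(monte_carlo_invert(d_obs))
```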
The estimated time-lapse changes for the two presented synthetic models show that the PCS parameterization can also enhance the accuracy of monitoring the elastic properties (Figures 7 and 13). This is because, with the PCS parameterization, the crosstalk between parameters of different vintages can be avoided. In this study, we assumed that fluid saturation is the only parameter that varies with time. However, pore pressure is another important parameter that should be considered.
Figure 10: Elastic properties of the Marmousi model. (a-c) True baseline, (d-f) true monitor, and (g-i) initial models. (j-l) Time-lapse changes in \(V_{P}\) (a and d), \(V_{S}\) (b and e), and \(\rho\) (c and f).
Although the changes in pore pressure can be negligible in some fields due to properties such as high permeability (Dupuy et al., 2021), further studies should be conducted to analyze the efficiency of simultaneous monitoring of saturation and pore pressure.
In this work, we have assumed a simple rock-physics model in which only the saturation effects on the elastic properties are modeled. In reality, this model will contain errors in describing the actual subsurface properties, whether from the calibration, the subsurface assumptions, or the model itself. However, time-lapse studies are carried out in well-studied fields, where the calibration of the rock-physics model should not pose much of a problem. Moreover, the geological information and well-log data of these fields can be employed to regularize the inversion problem and increase the accuracy of the FWI results.
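One common way to fold well-log information into the inversion is a prior-model penalty added to the FWI objective; the minimal sketch below shows such a Tikhonov-style term and its gradient contribution. It is a generic illustration, not the regularization used in this paper.

```python
import numpy as np

def regularized_objective(model, data_residual, model_prior, lam):
    """FWI misfit with a Tikhonov prior-model term anchored on well data.

    model, model_prior : flattened model vectors (e.g., phi, C, Sw)
    data_residual      : observed-minus-predicted data vector
    lam                : trade-off weight between data fit and prior fit
    """
    data_term = 0.5 * np.sum(data_residual ** 2)
    prior_term = 0.5 * lam * np.sum((model - model_prior) ** 2)
    return data_term + prior_term

def regularization_gradient(model, model_prior, lam):
    """Gradient of the prior term, to be added to the FWI gradient."""
    return lam * (model - model_prior)
```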
Figure 11: Absolute value of percent changes relative to the baseline in terms of studied (a-c) rock-physics and (d-f) elastic properties.
Figure 12: The models of (a) porosity, (b) clay content, (c) water saturation, (d) \(P\)-wave velocity, (e) \(S\)-wave velocity, and (f) density estimated using the PCS parameterization.
## Conclusions
Time-lapse seismic data have proven to be an important source of information for reservoir monitoring. Full-waveform inversion is a powerful tool that relies on seismic data to recover the subsurface properties. However, this method generally suffers from crosstalk between parameters in multiparameter studies. This problem becomes more severe in time-lapse studies, because the time-lapse changes can be caused by any combination of different parameters in different vintages.
In this study, we formulated time-lapse full-waveform inversion in terms of porosity, clay content, and water saturation using Gassmann's equation, to assess its performance for reducing crosstalk and improving the estimates of saturation changes. With this formulation, we can rely on the assumption that water saturation is the only parameter that changes with time and thereby avoid the crosstalk introduced by the time-lapse study. We showed that the PCS parameterization improves the time-lapse estimate. Using Gassmann's equation and the new parameterization, we also showed that the time-lapse estimate of the elastic properties can be obtained indirectly with higher accuracy (\(11.7\%\) higher in the case of the Marmousi model).
## Acknowledgment
We would like to thank Francois Lavoue for valuable discussions on multiparameter full-waveform inversion. This project was supported by an NSERC Discovery Grant to BG (RGPIN-2017-06215). A. Mardan was also partially supported by an SEG/Landmark Scholarship.
Figure 13: Estimated time-lapse model for \(\phi\), \(C\), and \(S_{w}\) with (a-c) the DV and (d-f) PCS parameterizations. Estimated time-lapse model for \(V_{P}\), \(V_{S}\), and \(\rho\) with (g-i) the DV and (j-l) the PCS parameterizations.
## Data availability statement
Data associated with this research are available and can be obtained by contacting the corresponding author.
Figure 14: 1D assessment of the time-lapse estimates for saturation along lines (a) A\({}_{1}\) and (b) A\({}_{2}\), and for the elastic properties along lines (c-e) A\({}_{1}\) and (f-h) A\({}_{2}\). The true value is shown with a solid line, the model estimated with the PCS parameterization with a dashed line, and the model estimated with the DV parameterization with a dotted line.